Quantum Computing Series, Part 1: IoT Challenges

In this series, I explore strategic opportunities in quantum computing and how it can give IoT a leap forward unlike anything seen before. I will expound on the possibilities, current limitations, and breakthroughs needed to reach a state that is practical for the industry at large. I will also present the implications for business models, computing, security, communication, networking, and more.

IoT Challenges

Pervasive obstacles and challenges in the Internet of Things are hampering broad-based adoption across industries.

According to the World Economic Forum, some of these obstacles include:

  • Security
  • Integration with legacy infrastructure
  • Privacy
  • Cost of investment
  • Perception of risks from the unknown

In addition, there is widespread disagreement and fragmentation regarding IoT standards and protocols among device manufacturers. This, in turn, prevents seamless interoperability between IoT devices. Further, managing IP addressing to identify devices and the need for higher computational power to handle the volume of data exacerbate the problems. Finally, the complexity of optimization problems adds even more challenges.

Next, I present each of these challenges hampering the growth of the Internet of Things in greater detail.

Cybersecurity

Security is undoubtedly the number one challenge for IoT. The recent spate of DDoS attacks is not taking advantage of amplification techniques, which have been the most prevalent type of DDoS attack in recent years. Instead, the attacks flood links with traffic generated directly from the sources, which are largely Internet of Things devices.

IoT manufacturers are not financially motivated to invest in highly secure IoT devices due to the costly R&D and manufacturing process involved. That means that most of the IoT devices and gadgets, whether enterprise or consumer, are highly vulnerable.

Every single device and sensor in the IoT represents a potential risk. How confident can an organization be that each device has the necessary controls to ensure confidentiality and integrity of data?

Researchers at Eurecom, a French technology institute, downloaded around 32,000 firmware images from potential IoT device manufacturers. The study revealed 38 vulnerabilities across 123 products, including poor encryption and backdoors that could allow unauthorized access. One weak link could compromise the security of the entire network.

Corporate systems are bound to be bombarded with data through connected sensors in the IoT world. But how sure can an organization be that the data has not been compromised or interfered with?

Consider the case of utility companies automatically collecting readings from smart meters. These meters, used widely in Spain, for example, can be hacked to under-report energy use: researchers were able to send spoofed messages from the meter to the utility company carrying false readings.

Consumers can buy anti-virus software off the shelf or download it. In the IoT, however, that security capability doesn't exist in many of the devices that will suddenly become connected. These devices and systems must have security built in to create trust in both the hardware and the integrity of the data.
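As an illustration of what built-in integrity protection can look like, here is a minimal sketch in which a meter signs each reading with a shared-key HMAC so that a spoofed or altered message fails verification on the utility's side. The key, field names, and helper functions are hypothetical; a real deployment would also need secure key provisioning and replay protection.

```python
import hashlib
import hmac
import json

# Hypothetical per-device secret provisioned at manufacture time.
DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"

def sign_reading(reading: dict) -> dict:
    """Attach an HMAC-SHA256 tag so the utility can verify the reading's integrity."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"reading": reading, "tag": tag}

def verify_reading(message: dict) -> bool:
    """Recompute the tag on the utility side; a spoofed or altered reading fails."""
    payload = json.dumps(message["reading"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

# A tampered (under-reported) reading no longer verifies.
signed = sign_reading({"meter_id": "M-1042", "kwh": 512.7})
signed["reading"]["kwh"] = 100.0   # attacker under-reports usage
print(verify_reading(signed))      # False
```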

Privacy Concerns

IoT aims to make our everyday lives easier while boosting the efficiency and productivity of businesses and individuals. The data collected will help us make smarter decisions. But this will also have an impact on privacy expectations. If data collected by connected devices is compromised, it will undermine trust in the IoT. We are already seeing consumers place higher expectations on businesses and governments to safeguard their personal information.

With IoT, the stakes go well beyond this. What about the security that protects critical national infrastructure (CNI), such as oil fields and air traffic control? With everything connected, the IoT smashes the separation between the CNI and the consumer world. Cybercriminals can potentially exploit everyday household items to gain access to the connected CNI.

Businesses need to start now to identify their current and future risk levels for exposure to the IoT. They must also consider the privacy and security implications associated with the volume and type of IoT data.

Trust is the foundation of the IoT and that needs to be underpinned by security and privacy. And it’s a conversation we all must start having now to reap the benefits of the connected world.

Connectivity

Connecting billions of devices will be one of the biggest challenges for the growth of IoT. The structure of current communication models and the underlying technologies is not capable of addressing that. At present, we rely on a centralized client-server paradigm to authenticate, authorize, and connect the different nodes in a network.
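Today's centralized model looks roughly like the sketch below: every device authenticates with, and funnels its data through, a single central endpoint, which becomes the scaling bottleneck. The URL, token, and field names are placeholders for illustration.

```python
import json
import urllib.request

def publish_reading(reading: dict) -> int:
    """Send one telemetry message to the central platform (placeholder endpoint)."""
    req = urllib.request.Request(
        "https://iot-platform.example.com/api/v1/telemetry",   # central server
        data=json.dumps(reading).encode(),
        headers={
            "Authorization": "Bearer <device-token>",           # central authentication
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Every one of billions of nodes would funnel through the same endpoint.
publish_reading({"device_id": "sensor-001", "temperature_c": 21.4})
```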

Interoperability: Standards and Protocols

Network protocols, communication protocols, and data-aggregation standards dictate how data from IoT sensors and devices is handled, processed, and stored.

Market fragmentation in standards and protocols mandates using additional hardware and software to interconnect the diverse IoT devices and systems. Interoperability issues range from non-unified cloud services to a lack of standardized M2M protocols to non-standard firmware and operating systems.

Another challenge is the lack of a standard for handling unstructured data. Unlike structured data, unstructured data is stored in different types of NoSQL databases without a standard querying approach.
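In practice, much of this fragmentation is papered over at the application layer: each vendor's payload is translated into one internal schema before it is stored or queried. A minimal sketch of that translation step, with vendor formats and field names invented for illustration:

```python
from datetime import datetime, timezone

# Two hypothetical vendors reporting the same measurement with different
# field names, units, and structure.
vendor_a = {"devId": "A-17", "tempF": 71.2, "ts": 1700000000}
vendor_b = {"device": {"id": "B-03"}, "temperature_c": 21.8, "time": "2023-11-14T22:13:20+00:00"}

def normalize(payload: dict) -> dict:
    """Translate a vendor-specific message into one internal schema."""
    if "tempF" in payload:                                   # vendor A format
        return {
            "device_id": payload["devId"],
            "temperature_c": round((payload["tempF"] - 32) * 5 / 9, 2),
            "timestamp": datetime.fromtimestamp(payload["ts"], tz=timezone.utc).isoformat(),
        }
    if "temperature_c" in payload:                           # vendor B format
        return {
            "device_id": payload["device"]["id"],
            "temperature_c": payload["temperature_c"],
            "timestamp": payload["time"],
        }
    raise ValueError("unknown payload format")

print(normalize(vendor_a))
print(normalize(vendor_b))
```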

Network Latency

Currently, the market associates the IoT with small-footprint M2M nodes (on LPWAN, LoRa, Sigfox, and NB-IoT networks exchanging tiny publish/subscribe packets). In the near future, however, the IoT will become data-communication intensive. Consider, for example, autonomous vehicles that will handle the auto-driving and safety decisions internally. But for city-wide traffic optimization, these independent vehicles would need to connect to and share data with each other. This is essential to orchestrate the movement of individual vehicles with public transportation, emergency services, and crowd management. In such situations, the systems of systems can't afford network latency. Optimizing a city's traffic mandates performing information exchange and data analysis in real time.
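A rough back-of-the-envelope calculation makes the latency budget concrete: at highway speed, a vehicle covers meters of road during a single cloud round trip. The speed and round-trip times below are illustrative assumptions, not measurements.

```python
# How far a vehicle travels while waiting on the network.
speed_kmh = 100                                   # assumed vehicle speed
speed_m_per_ms = speed_kmh * 1000 / 3_600_000     # ~0.028 m per millisecond

for round_trip_ms in (10, 50, 100, 200):          # assumed network round-trip times
    distance_m = speed_m_per_ms * round_trip_ms
    print(f"{round_trip_ms:>4} ms round trip -> {distance_m:.1f} m travelled blind")
```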

Computing Power

The expected volume of IoT data is set for explosive growth by 2025. The variety and types of data are also going to increase manifold. All this will result from devices, autonomous vehicles, robots, factories, buildings, and infrastructure transmitting and exchanging operational and environmental data.

IoT and the associated concept of a smart world give rise to many complex optimization problems. Efficient use of embedded, distributed, or hosted intelligence in IoT is fundamental to addressing the smart-world challenges. Examples include smart building management, smart logistics, and smart manufacturing, all of which lead to difficult combinatorial optimization problems.

Let's look at the example of smart logistics in greater detail. Logistics needs to factor in on-time delivery and costs, optimization of load and vehicle routing, real-time cancellations or changes to delivery orders, fleet management, vehicle faults, traffic jams, weather conditions, and so on. Today's computational power isn't always sufficient to support the highly complex, constantly changing, multi-variable optimization algorithms needed for optimized deliveries. On top of that, companies need to ensure quality of delivery service to remain competitive.
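The combinatorial difficulty shows up even in miniature. The brute-force sketch below enumerates every route over a handful of delivery stops; the number of candidate routes grows factorially with the number of stops, which is why real fleets need heuristics or far more computing power. The coordinates are invented for illustration.

```python
from itertools import permutations
from math import dist, factorial

depot = (0.0, 0.0)
stops = [(2.0, 3.0), (5.0, 1.0), (6.0, 4.0), (1.0, 7.0), (4.0, 6.0), (8.0, 2.0)]

def route_length(order):
    """Total distance of depot -> stops in the given order -> depot."""
    points = [depot, *order, depot]
    return sum(dist(a, b) for a, b in zip(points, points[1:]))

best = min(permutations(stops), key=route_length)
print(f"{factorial(len(stops))} candidate routes for just {len(stops)} stops")
print("best route length:", round(route_length(best), 2))
```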

Moreover, as these systems interlink with other systems, especially in a smart-city context, the computational horsepower has to constantly re-optimize. Clearly, managing optimization requirements across an orchestration of interdependent systems of systems is a huge challenge.

AI

IoT is all about data analytics and insights. To drive these insights, IoT applications rely on neural networks and deep learning to extract meaningful, actionable decision support.

Neural networks are computationally intensive, needing to constantly update millions of parameters to minimize error and produce an accurate model. These updates are basically large matrix multiplication operations.
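A minimal NumPy sketch makes this concrete: every neuron in a fully connected layer produces its output through one shared matrix multiplication, and training repeats operations like this across many layers and batches. The sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
batch = rng.standard_normal((64, 128))      # 64 input samples, 128 features each
weights = rng.standard_normal((128, 256))   # a layer of 256 neurons
bias = np.zeros(256)

# One matmul computes all 256 neuron outputs for all 64 samples at once.
activations = np.maximum(0, batch @ weights + bias)   # matmul + ReLU
print(activations.shape)                              # (64, 256)
```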

These neural networks also ingest a ton of data: the only way to train them on very large datasets is to continually feed in massive amounts of examples.

Specialty AI chips are being manufactured on a commercial scale to meet the demands of AI-intensive computation. These include GPUs and other custom chips such as Google's Tensor Processing Unit (TPU), which delivers 15–30X higher performance and 30–80X higher performance-per-watt than contemporary CPUs and GPUs. These advantages help Google run state-of-the-art neural networks at scale and at an affordable cost.

CPUs are designed for more general computing workloads. GPUs, in contrast, are less flexible, but they are designed to perform the same kind of computation in parallel. Neural networks (NN) are structured in a uniform manner such that at each layer of the network thousands of identical artificial neurons perform the same computation. Therefore, the structure of a NN fits well with the kinds of computation that a GPU can perform efficiently.

GPUs have additional advantages over CPUs. These include more computational units and higher memory bandwidth. Further, in applications requiring image processing (convolutional neural networks), the graphics-specific capabilities of GPUs can speed up calculations.

GPU Shortcomings

On the flip side, GPUs have lower memory capacities than CPU systems: the largest GPUs contain 24GB of RAM, whereas CPU servers can reach 1TB of RAM. Another weakness of GPUs is that they rely on a CPU to transfer data into the GPU card. This takes place over the PCIe connector, which is much slower than CPU or GPU memory. A final weakness is that GPU clock speeds are about one-third those of high-end CPUs, so a GPU performs comparatively poorly on sequential tasks.
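One way to see the transfer cost is to time the host-to-device copy against the matrix multiplication it enables. This is a sketch assuming PyTorch and a CUDA-capable GPU are available; absolute numbers will vary widely by hardware.

```python
import time
import torch

x = torch.randn(8192, 8192)       # ~256 MB of float32 data sitting in CPU memory

t0 = time.perf_counter()
x_gpu = x.to("cuda")              # travels over the PCIe link
torch.cuda.synchronize()
t1 = time.perf_counter()

y = x_gpu @ x_gpu                 # runs at full GPU memory bandwidth
torch.cuda.synchronize()
t2 = time.perf_counter()

print(f"host-to-device copy: {t1 - t0:.3f}s, on-GPU matmul: {t2 - t1:.3f}s")
```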

In summary, GPUs work well for NN computations because they offer many more parallel resources and faster memory bandwidth, and NN computations fit the GPU architecture well. Computational speed is extremely important because training a neural network can take days to weeks. In fact, many of the successes of deep learning would have been unlikely without GPUs.

Decentralized Networks and Edge Computing

A centralized model works reasonably well for tens, hundreds, or perhaps thousands of devices. The paradigm of a centralized system and network will, however, break down, especially as the Internet of Things scales to 20+ billion nodes by 2020. Centralized systems and network connectivity will become choke points. Moreover, using cloud infrastructure to handle the gargantuan volume of IoT data will be challenging for infrastructure providers and costly for customers.

The sheer magnitude of data emanating from these devices is driving adoption of edge computing, where connected devices and sensors transmit data to a local gateway device instead of sending it back to the cloud or a central data center. Edge computing is ideal for deploying IoT applications, because it allows for quicker data analytics and reduced network traffic. This is essential for applications that require localized, real-time data analysis for decision making. Examples include factory optimization, predictive maintenance, remote asset management, building automation, fleet management, and logistics.
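A minimal sketch of that pattern: the gateway analyzes a window of raw readings locally, acts immediately on an anomaly, and forwards only a compact summary upstream. The threshold, field names, and local-shutdown helper are placeholders for illustration.

```python
from statistics import mean

VIBRATION_ALARM = 12.0   # assumed threshold for a failing bearing (mm/s)

def trigger_local_shutdown():
    print("local actuator stopped; maintenance ticket raised")

def process_window(readings_mm_s):
    """Summarize one window of raw vibration samples at the edge gateway."""
    summary = {
        "count": len(readings_mm_s),
        "mean": round(mean(readings_mm_s), 2),
        "max": max(readings_mm_s),
    }
    if summary["max"] > VIBRATION_ALARM:
        trigger_local_shutdown()   # real-time decision stays at the edge
    return summary                 # only this small record goes to the cloud

# Hundreds of raw samples per second collapse to a three-field summary.
print(process_window([3.1, 3.4, 2.9, 3.3, 14.2, 3.0]))
```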

This is why the Linux Foundation recently announced the launch of EdgeX Foundry, an open source project to build a common open framework for IoT edge computing and an ecosystem of interoperable components that unifies the marketplace and accelerates enterprise and Industrial IoT. The initiative's goal is to simplify and standardize Industrial IoT edge computing, while still allowing the ecosystem to add value.

Many use cases exist where the edge is a more efficient and cost-effective place to run real-time analytics and computation. With the passage of time, the workload requirements will only increase.

Edge devices capable of running sophisticated neural networks and computation will require a different degree of computational power.

Storage

A hard drive today takes about 100,000 atoms to store a single bit of data. IoT and edge computing will, in contrast, necessitate smaller, compact devices with incredible and dynamically increasing storage requirements. Autonomous vehicles represent a prime case study where millions of LIDAR data points have to be stored and crunched locally in the car for immediacy. That means atomic-level storage for edge computing devices. This is especially critical to store information for data-hungry AI to run neural networks.
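A rough estimate shows how quickly the raw point cloud alone piles up; the point rate and bytes per point below are illustrative assumptions, not sensor specifications.

```python
# Back-of-the-envelope onboard LIDAR storage demand.
points_per_second = 1_300_000      # assumed single-return LIDAR point rate
bytes_per_point = 16               # x, y, z, intensity stored as four 4-byte floats

bytes_per_hour = points_per_second * bytes_per_point * 3600
print(f"~{bytes_per_hour / 1e9:.0f} GB of raw points per hour of driving")
# ~75 GB per hour, before cameras, radar, or any derived data are counted.
```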

Physical Infrastructure

GSMA's 5G rollout promises up to 10 Gbps, with early pilots scheduled for the Pyeongchang Winter Olympics and production deployments by 2019/2020. But will this solve all IoT data communication needs? And what are the infrastructure capital outlay requirements over the next decade?

Infrastructure investment is one thing, but will enterprises, governments, municipalities, industries, and consumers pay for the data costs? This goes back to the trend of shifting more of the computation and data analytics to the edge to lower mobile data costs.

Conclusion

IoT has great promise, but not without inherent challenges. As IoT matures, we will chip away at some of these challenges, but at some point the patchwork of fixes and short-term workarounds won't be able to keep pace with the growth of IoT. To future-proof IoT, we need to consider a major leap in innovation.

Later in the series, I will explore how quantum computing can solve some fundamental problems not addressed by current means.

 

Learn more: https://amyxinternetofthings.com/