Connected devices driven by 5G and the internet of things (IoT) are everywhere, from autonomous vehicles, smart homes, and healthcare to space exploration. Devices are becoming more intelligent, and massive amounts of data from multiple sources need to be processed quickly, securely, in real time, and with low latency. Cloud-based architectures may not fulfil the needs of such AI-based systems, which require intelligence at the edge and the ability to process sparse events. As research continues, neuromorphic processors will advance edge computing capabilities and bring AI closer to the edge.
The world is becoming increasingly connected, from autonomous vehicles, smart homes, and personal robotics to space exploration. These AI-powered applications rely on fast, autonomous, near real-time analysis of diverse data from multiple sources. They are pushing the boundaries of computing, taking it closer to the edge: the point where data is collected and analysed. Neuromorphic computing is expected to play an important role in advancing edge computing capabilities by mimicking the human brain and its cognitive functions, such as interpretation and autonomous adaptation. It is a high-performance, ultra-low-power alternative to the von Neumann architecture, which is based on traditional bus-connected CPU, memory, and peripherals. Because of the time and energy required to move information back and forth between memory and the CPU, von Neumann machines struggle to deliver the increasing computational power required by AI applications. At the same time, physical limits on the size of transistor-based processor circuits constrain energy efficiency.
Evolving neuromorphic processors, which are designed to replicate the human brain, eliminate the von Neumann bottleneck. Inspired by the brain's adaptability and support for parallel computation, neuromorphic devices integrate processing and memory, offering higher speed, greater complexity, and better energy efficiency. This is critical for enabling intelligence at the edge and processing sparse events. This whitepaper highlights the evolution and importance of neuromorphic processing for enabling Edge AI and showcases applications that can change the landscape of edge computing. It discusses how a spiking neural network (SNN) model, deployed on neuromorphic hardware, can learn from minimal data and offer real-time responses in an energy-efficient manner (estimated at roughly one-thousandth the energy of conventional computing), especially for perception-cognition tasks.
Connected devices rely on sensors that continuously gather data from the surrounding environment and infrastructure. This necessitates intelligent processing of data for tasks such as optimizing asset usage, monitoring health and safety, disaster management, surveillance, timely field inspection, and remote sensing. Intelligence needs to be embedded in systems closer to the sensors, i.e., on devices at the far edge of the network, such as drones, robots, wearables, small satellites (or nanosats), and autonomous/guided vehicle controllers. AI at the edge can enhance the performance of connected devices.
A 2020 study identified four major drivers for bringing AI to the edge: low latency, bandwidth cost, reliability for critical operations, and privacy/security of sensor data. For example, autonomous cars rely on processing large volumes of data from within the vehicle and from outside it, such as weather, road conditions, and other vehicles. To improve safety, enhance efficiency, and reduce accidents, this data needs to be processed securely in real time for immediate action and reaction. Autonomous cars could require offloading anywhere from 383 GB to 5.17 TB of data an hour.
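To put those volumes in perspective, the rough back-of-the-envelope conversion below (a Python sketch; the data volumes are the ones cited above, not new measurements) shows the sustained uplink bandwidth a vehicle would need if all of that data were offloaded to the cloud.

```python
# Back-of-the-envelope check (illustrative only): convert the cited
# per-hour offload volumes into the sustained uplink rate required
# if every byte left the vehicle.

def gb_per_hour_to_gbps(gb_per_hour: float) -> float:
    """Convert a data volume in GB/hour to an average rate in Gbit/s."""
    gigabits = gb_per_hour * 8   # gigabits generated per hour
    return gigabits / 3600       # spread over 3600 seconds

for volume_gb in (383, 5170):    # 383 GB/h and 5.17 TB/h from the text
    print(f"{volume_gb} GB/h ≈ {gb_per_hour_to_gbps(volume_gb):.2f} Gbit/s sustained uplink")
```

Even the lower figure works out to roughly 0.85 Gbit/s of continuous uplink per vehicle, which makes full cloud offload impractical at scale.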
Shifting from a cloud-based architecture to the edge will be vital to address latency issues and achieve the vision of a truly intelligent and autonomous vehicle. However, in-situ processing of sensor data within edge devices comes with its own challenges, primarily the reduction in battery life caused by the additional processing load. With neuromorphic computing, because processing is done locally, low-latency real-time operations can be performed with a significant reduction in energy cost.
Advances in neuromorphic computing promise to ease the energy concerns associated with the breakdown of Dennard scaling, the slowing of Moore's law, and the von Neumann bottleneck. The term 'neuromorphic computing', originally coined by Carver Mead in 1990, refers to very large-scale integration (VLSI) systems with analog components mimicking biological neurons. In recent years, however, the term has broadened to encompass an evolving genre of bio-inspired processors architected as networks of millions of neurons and synapses. These highly connected, parallel processors, coupled with the event-based spiking neural network (SNN) models that run on them, have shown promise in terms of energy consumption, real-time response, and the ability to learn from sparse data.
The fundamental idea behind neuromorphic computing is built on the sensory perception capabilities of mammalian brains, where an input from a sensory organ triggers a series of electro-chemical reactions along the neuronal path. This results in the flow of a spike train, a chain of electrical impulses, through the connected neurons. The propagation of the stimulus alters the synaptic bonds among the connected neurons. The neural network is said to learn or forget an event depending on whether the bond strengthens or weakens, which in Hebbian learning is popularly summarized as 'neurons that fire together, wire together.'
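As a purely illustrative sketch of that Hebbian idea, the toy Python fragment below strengthens a synapse whenever its pre- and post-synaptic neurons spike in the same time step and lets all weights decay slowly otherwise. The spike rates, learning rate, and decay constant are arbitrary choices for the sketch, not a model of any particular hardware or learning rule.

```python
import numpy as np

# Toy Hebbian update: synapses whose pre- and post-synaptic neurons spike
# in the same time step are strengthened; all weights decay slightly,
# capturing "learning" and "forgetting" in the simplest possible form.

rng = np.random.default_rng(0)
n_pre, n_post = 4, 3
weights = rng.uniform(0.0, 0.5, size=(n_post, n_pre))

learning_rate, decay = 0.05, 0.001
for _ in range(100):                        # 100 simulated time steps
    pre_spikes = rng.random(n_pre) < 0.2    # sparse binary spike vectors
    post_spikes = rng.random(n_post) < 0.2
    # "fire together, wire together": outer product marks coincident pairs
    weights += learning_rate * np.outer(post_spikes, pre_spikes)
    weights -= decay                        # slow forgetting
    weights = np.clip(weights, 0.0, 1.0)    # keep weights bounded

print(weights.round(3))
```

Practical SNNs replace this coincidence rule with timing-aware rules such as STDP, discussed next, but the principle of strengthening co-active connections is the same.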
As opposed to traditional artificial neural networks, SNNs use mathematical models of biologically plausible neurons, such as the Izhikevich or leaky integrate-and-fire (LIF) models. These, together with learning rules such as spike-timing-dependent plasticity (STDP) and spike-driven synaptic plasticity (SDSP), address different levels of application requirements. As shown in Figure 1, the inputs to an SNN are spikes instead of real-valued data, where a spike or event can be viewed as the simplest possible temporal message whose timing is critical for understanding the event. Typically, SNN models use few spikes with high information content, such as the output of a dynamic vision sensor (DVS camera), which asynchronously records only pixel-wise changes in luminosity, producing a series of sparse events that an SNN can process.
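The minimal Python sketch below illustrates the LIF neuron model named above, driven by a sparse, DVS-like event stream. The membrane time constant, threshold, and input weight are arbitrary illustrative values rather than parameters of any specific neuromorphic chip.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward zero, jumps on each input spike, and emits an output spike
# (then resets) whenever it crosses the threshold.

def lif_neuron(input_spikes, weight=0.6, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Return the output spike train of a single LIF neuron."""
    v = 0.0
    out = np.zeros_like(input_spikes, dtype=float)
    for t, s in enumerate(input_spikes):
        v += dt * (-v / tau) + weight * s   # leak plus weighted input spike
        if v >= v_thresh:                   # threshold crossing -> output spike
            out[t] = 1.0
            v = v_reset                     # reset membrane potential
    return out

rng = np.random.default_rng(1)
spikes_in = (rng.random(200) < 0.1).astype(float)   # sparse, DVS-like event stream
spikes_out = lif_neuron(spikes_in)
print(f"{int(spikes_in.sum())} input spikes -> {int(spikes_out.sum())} output spikes")
```

Because the neuron only does work when a spike arrives, computation scales with the number of events rather than with a fixed frame rate, which is the source of the sparsity advantage described above.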
Though neuromorphic platforms are still under development, research shows that SNN models possess computational capabilities comparable to those of regular artificial neural networks (ANNs) and tend to converge towards an appropriate solution faster, implying fewer computational steps. These factors, along with processing based on sparse data, result in the improved power efficiency of neuromorphic systems shown in Figures 2 and 3 (obtained from a benchmarking exercise). A keyword-spotting task was executed on different hardware, including a CPU, a GPU, an NVIDIA Jetson Nano, an Intel Movidius Neural Compute Stick (NCS), and Intel's Loihi neuromorphic processor. The graphs show the actual power consumption (measured in watts) and the comparative efficiency (measured as energy consumed in joules per inference) for the same task on each platform. The results clearly demonstrate the power advantage (energy consumed per inference operation) of the neuromorphic platform.
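The efficiency metric in Figures 2 and 3 can be derived from two quantities per platform: average power draw and inference throughput. The short sketch below shows the calculation; the platform numbers are hypothetical placeholders for illustration, not the measured values from the benchmarking exercise.

```python
# Illustrative calculation of the benchmark metric in Figures 2 and 3:
# energy per inference = average power draw / inferences per second.
# All numbers below are placeholders, not measured results.

measurements = {
    # platform: (average power in watts, inferences per second) -- hypothetical
    "CPU": (45.0, 900),
    "GPU": (70.0, 4000),
    "Jetson Nano": (5.0, 300),
    "Movidius NCS": (1.0, 200),
    "Loihi (neuromorphic)": (0.1, 150),
}

for platform, (watts, ips) in measurements.items():
    joules_per_inference = watts / ips   # J = W * s, time per inference = 1 / ips
    print(f"{platform:>22}: {joules_per_inference * 1000:.3f} mJ per inference")
```

The point of the metric is that a platform with modest raw throughput can still win decisively once energy per completed inference, rather than peak performance, is the figure of merit.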
The roadmap for neuromorphic processors is still evolving, and a snapshot of this landscape is shown in Figure 4.
As with most nascent technologies, the landscape is diverse: platforms such as SpiNNaker and BrainScaleS are large-scale systems that aim to enable high-end brain-inspired supercomputing. These are, however, not available for commercial use and fall outside the scope of intelligent edge platforms. Intel views the Loihi processor as a candidate both for adoption at the low-power edge and for server infrastructure within data centers. Other processors likely to emerge within the next couple of years, such as Zeroth and Akida, cater to edge applications.
The global neuromorphic computing market is expected to reach $8.58 billion by 2030, up from $0.26 billion in 2020, growing at a CAGR of 79% from 2021 to 2030. Target applications for neuromorphic computing are those where in-situ processing of sensed data is a prime requirement, especially under device constraints. For example, it can enable visual recognition of gestures, actions, and objects. Neuromorphic computing also supports simultaneous localization and mapping (SLAM), which mobile robots use to model their surroundings from various sensory inputs (image, video, micro-Doppler radar, sonar). Such functionality will be crucial in domains such as manufacturing, mining, energy extraction, disaster management, and elderly care, where the low-power neuromorphic approach reduces latency without compromising accuracy or real-time response.
In space technology, large monolithic satellites are being replaced by smaller, low-cost satellites for commercial low earth orbit (LEO) missions, which can provide high temporal resolution for earth observation. Such satellite constellations can be assigned to precision agriculture, weather monitoring, disaster monitoring, and similar tasks. In the conventional approach, data is sent back to the base station, which can lead to latency and communication issues. Orbital edge computing (OEC), which supports onboard processing of the data captured by satellites, is emerging as an alternative. However, these smaller satellites, popularly known as CubeSats, are heavily constrained in terms of energy. Neuromorphic processors are energy efficient enough to process data onboard and generate real-time alerts.
Prototype neuromorphic edge applications are also being used for perception sensing: action and gesture recognition, object identification (vision applications), and keyword spotting (audio applications). Time-series processing is another area where neuromorphic computing can prove to be a game changer, especially in healthcare. For wearables and implantables, apart from low-latency real-time requirements, in-situ processing is vital for preserving data privacy: it ensures that only encrypted alerts are sent over the open internet, instead of raw physiological signals. Moreover, the additional processing required by the conventional approach can drain device batteries quickly. An SNN-based approach that processes vital body signals on the device itself helps resolve this issue, as shown in Figure 5.
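As a rough illustration of how a wearable might feed an on-device SNN, the sketch below uses a simple delta (send-on-change) encoder to turn a sampled, heartbeat-like signal into sparse up/down events. The signal, threshold, and sampling rate are synthetic and chosen only for demonstration; real devices would use encoders matched to their sensors.

```python
import numpy as np

# Simple delta (send-on-change) encoder: emit a +1 or -1 event only when
# the signal has moved by more than `threshold` since the last event,
# turning a dense sample stream into sparse spikes an SNN can consume.

def delta_encode(signal, threshold=0.05):
    events, last = [], signal[0]
    for t, x in enumerate(signal[1:], start=1):
        if x - last >= threshold:
            events.append((t, +1))
            last = x
        elif last - x >= threshold:
            events.append((t, -1))
            last = x
    return events

t = np.linspace(0, 2, 500)                                   # 2 s of synthetic signal
noise = 0.02 * np.random.default_rng(2).standard_normal(500)
heartbeat_like = 0.5 * np.sin(2 * np.pi * 1.2 * t) + noise   # heartbeat-like waveform
spike_events = delta_encode(heartbeat_like)
print(f"{len(t)} samples compressed to {len(spike_events)} spike events")
```

Because only changes are transmitted downstream, the SNN and the radio both stay idle during quiet periods, which is exactly where the battery and privacy benefits described above come from.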
AI edge computing has immense potential to overcome cloud-related challenges and is likely to garner more interest as the 5G network footprint expands. The global AI edge computing industry is expected to generate $59.63 billion by 2030, growing at a CAGR of 21.2% between 2021 and 2030. However, challenges remain, especially around the fabrication mechanisms and materials needed for large-scale commercial deployment of neuromorphic hardware. Scientists across the world are experimenting with various materials, including phase-change and valence-change memory, resistive RAM, ferroelectric devices, spintronics, and memristors, to create commercially viable neuromorphic devices. Such materials also hold the key to achieving synaptic plasticity, the capability for lifelong learning (the formation of new neuronal circuits) that is unique to biological brains. Novel research in SNNs has also increased considerably, especially in complex areas such as adapting the back-propagation approach, optimally encoding real-valued data into spikes, and establishing new learning paradigms for newer applications. Such cutting-edge research in neuromorphic computing will pave the way for a variety of new and valuable services, applications, and use cases for edge AI.