It’s no secret that today’s increasingly powerful artificial neural networks (ANNs) bring with them increasingly large computational appetites. The OpenAI paper AI and Compute estimates that 2018’s AlphaGo Zero consumed some 300,000 times more compute than 2012’s AlexNet. Human brains, meanwhile, are far more efficient: Stanford Professor of Neurology and Neurosurgery Robert Sapolsky told ESPN that chess grandmasters can burn some 6,000 calories on a high-pressure competition day, which is only about three times the typical human daily requirement.
Power-efficient neuromorphic intelligence systems have been attracting attention in recent years as a possible way to close the versatility and efficiency gaps between ANNs and biological neural processing systems, and to open up the possibility of performing AI processing on small, low-power devices at the network edge.
In a new IEEE paper, a research team provides a comprehensive overview of the bottom-up and top-down design approaches toward neuromorphic intelligence, highlighting the different levels of granularity present in existing silicon implementations and assessing the benefits of the different circuit design styles of neural processing systems.
Neuromorphic engineering encompasses the study of bio-inspired systems that follow biological organization principles and information representations, and it represents a two-fold paradigm shift compared to conventional computer architectures. The first shift concerns organization: brain-inspired systems rely on distributed computation that co-locates processing and memory, sidestepping the von Neumann bottleneck in data communication between the two. The second shift concerns information representation: data is encoded both in space and time as all-or-none binary spike events, which supports sparse event-driven processing for reduced power consumption.
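To make the second shift concrete, here is a minimal illustrative sketch (not from the paper) of rate-based spike encoding: a continuous intensity value becomes a train of all-or-none binary events, with the information carried by when and how often spikes occur rather than by multi-bit values. The function name and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def poisson_spike_train(intensity, n_steps=100):
    """Encode a scalar intensity in [0, 1] as a binary spike train:
    at each time step, a spike (1) fires with probability `intensity`."""
    return (rng.random(n_steps) < intensity).astype(np.uint8)

# A stronger input yields a denser, but still all-or-none, spike train;
# a weak input yields a sparse train that costs little energy to process.
dim_spikes = poisson_spike_train(0.1)
bright_spikes = poisson_spike_train(0.8)
print(dim_spikes.sum(), bright_spikes.sum())
```

Sparsity is the point here: an event-driven circuit only expends energy when a spike actually arrives, so weak or static inputs translate directly into reduced power consumption.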
The granularity at which these paradigm shifts can be realized in actual neuromorphic hardware depends largely on implementation choices and design strategies. Broadly, there are two distinct design approaches: bottom-up and top-down. The former involves basic research toward understanding natural intelligence, backed by the design of experimentation platforms that optimize a versatility/efficiency tradeoff; the latter is applied research aimed at building AI applications, supported by the design of dedicated hardware accelerators that optimize an accuracy/efficiency tradeoff.
The researchers believe neuromorphic intelligence can form a unifying substrate for the design of low-power, bio-inspired neural processing systems. Their paper first reviews key design choices and implementation strategies, along with the tradeoffs introduced by time multiplexing and by novel devices.
The team then surveys bottom-up design approaches from their building blocks to their silicon implementations, and top-down design approaches from the algorithms to their silicon implementations. A detailed comparative analysis is also conducted for both design approaches.
The team notes that different circuit design styles can be adopted for both approaches, so the first key question is whether an analog or a digital circuit design style should be selected. The researchers conduct a principled analysis of this selection, where each style comes with its own tradeoffs. They first give a qualitative overview, then analyze the tradeoffs of analog, mixed-signal and digital design, as well as important aspects of memory and computation co-location, before highlighting the key drivers behind each circuit design style.
For bottom-up approaches, the team focuses on the building blocks — the neurons — which carry out nonlinear transformations of their inputs and whose processing can be divided into three stages: the dendrites, which act as an input stage; the core computation in the soma; and outputs transmitted along the axons.
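The three stages above map naturally onto the leaky integrate-and-fire (LIF) abstraction that many silicon neuron designs build on. The following is an illustrative sketch under that assumption; the function name, parameter values and reset behavior are simplified choices, and actual implementations surveyed in the paper vary considerably.

```python
import numpy as np

def lif_neuron(input_spikes, weights, tau=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron, a simplified stand-in for the
    dendrite/soma/axon structure of silicon neuron designs.

    input_spikes: (n_steps, n_inputs) binary array
    weights: (n_inputs,) synaptic weights
    """
    v = 0.0  # membrane potential
    output = []
    for spikes in input_spikes:
        # Dendrites: weight and sum the incoming spikes (input stage).
        current = np.dot(weights, spikes)
        # Soma: leaky integration of the membrane potential (core computation).
        v = tau * v + current
        # Axon: emit an all-or-none spike when the threshold is crossed.
        if v >= threshold:
            output.append(1)
            v = 0.0  # reset after spiking
        else:
            output.append(0)
    return np.array(output, dtype=np.uint8)

spikes_in = np.array([[1, 0], [1, 1], [0, 1], [1, 1]])
out = lif_neuron(spikes_in, weights=np.array([0.4, 0.3]))
```

Running this, the neuron spikes only at the step where the leaky integration of weighted inputs crosses the threshold, illustrating how the output stays sparse even under steady input.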
Small- to large-scale integrations in silico have already been achieved on the neuron, synapse, dendrite and axon building blocks. The team reviews these designs, first qualitatively to outline their applicative landscape, then quantitatively to assess the key versatility/efficiency tradeoffs that bottom-up designs aim at optimizing, and finally to highlight the challenges encountered by a purely bottom-up design approach when efficient scaling to real-world tasks is required.
Analysis of tradeoffs between accuracy, area and energy per classification on the MNIST dataset for SNNs, BNNs, ANNs and CNNs.
For top-down approaches, the researchers first cover the development of algorithms that allow efficient spike-based on-chip training, then move to silicon implementations. The algorithms discussed include backpropagation of error (BP), direct feedback alignment (DFA) and direct random target projection (DRTP), together with their adaptations to spiking neural networks (SNNs). For silicon implementations, the team reviews top-down designs qualitatively to illustrate their applicative landscape, and quantitatively to assess the key accuracy/efficiency tradeoffs that top-down designs optimize for their selected use cases.
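The appeal of DFA for on-chip training can be seen in a small sketch (illustrative, not the paper's implementation): where BP transports the output error backward through the transpose of the forward weights, DFA projects it through a fixed random matrix, removing the symmetric weight transport that is costly to realize in hardware. All network sizes and names below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny 2-layer network: 4 inputs -> 3 hidden -> 2 outputs.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(2, 3))
B = rng.normal(size=(3, 2))  # DFA: fixed random feedback matrix

x = rng.normal(size=4)
h = np.tanh(W1 @ x)           # hidden activation
y = W2 @ h                    # linear output
target = np.array([1.0, 0.0])
e = y - target                # output error

# Backpropagation (BP): error reaches the hidden layer through W2's transpose,
# which requires access to the forward weights during the backward pass.
delta_bp = (W2.T @ e) * (1 - h**2)

# Direct feedback alignment (DFA): the same error is instead projected through
# the fixed random matrix B, so no symmetric weight transport is needed,
# which is attractive for on-chip learning circuits.
delta_dfa = (B @ e) * (1 - h**2)

# Both yield a weight update of the same form for W1.
dW1_bp = np.outer(delta_bp, x)
dW1_dfa = np.outer(delta_dfa, x)
```

DRTP goes one step further than DFA by projecting the targets rather than the output error, removing even the backward dependence on the forward pass completing first.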
Finally, the team provides concluding remarks, outlining the key synergies between the two approaches and the perspectives toward on-chip neuromorphic intelligence. They show how on-chip learning can be a key feature enabling autonomous adaptation to users and environments, how keyword spotting involves temporal data at biological timescales and may soon become a key driver for neuromorphic smart sensors, and how biological signals are well suited to neuromorphic processing at the edge in wearables.
Overall, the study does a great job of comprehensively surveying bottom-up and top-down neural processing system design and provides many valuable insights for researchers and developers in this rapidly emerging field.
The paper Bottom-Up and Top-Down Neural Processing Systems Design: Neuromorphic Intelligence as the Convergence of Natural and Artificial Intelligence is on arXiv.
Author: Hecate He | Editor: Michael Sarazen, Chain Zhang