
The Future of Computing: Neuromorphic

Prologue


The human brain has fascinated and inspired researchers for many years. With neurons as its basic firing units, it supports our cognitive abilities on a remarkably small budget of biological energy. Inspired by the brain's low energy consumption and fast computational speed, neuromorphic chips are not a new topic in the computing world. However, as algorithms and architectures have grown more sophisticated, power dissipation has become a major challenge. As a result, neuromorphic computing might become the stepping stone for future explorations in exascale machines and in artificial intelligence applications such as self-driving cars.

Neuromorphic Chip — A Brain of Silicon


“Somehow, the human brain — our own biology — has figured out how to make the human brain one million times more efficient in terms of delivered AI ops than we can build into a traditional supercomputer. Neuromorphic is an opportunity to try to come up with a CMOS-based architecture that mimics the brain and maintains that energy efficiency and cost performance benefit you get from a model of a human brain.”

— Mark Seager, Intel Fellow and CTO for the HPC ecosystem in the Scalable Datacenter Solutions Group


 

The original idea of neuromorphic chips can be traced back to a paper written by Caltech professor Carver Mead in 1990. In it, Mead suggested that analog chips, whose outputs vary continuously (in contrast to the binary nature of digital chips), could mimic the electrical activity of neurons and synapses in the brain. The significance of mimicking brain activity is that we can learn from it: conventional chips keep every transmission at a fixed voltage, and with the complicated algorithms and architectures used in today's machine learning tasks, power dissipation has become one of the biggest challenges for the silicon industry, as Mead noted in a 2013 conversation.

In comparison, neuromorphic chips have low energy consumption thanks to their biologically inspired design. One reason the human brain is so efficient is that a neural spike charges only a small fraction of a neuron as it travels; the signal is passed on only when the accumulated charge exceeds a set threshold. Neuromorphic chips are therefore event-driven, operating only when they need to, which results in lower energy consumption.
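As a minimal sketch of this event-driven principle, the Python snippet below implements a leaky integrate-and-fire neuron: charge accumulates (and leaks) until a threshold is crossed, and only then is a spike emitted. The threshold, leak factor, and input values are illustrative, not parameters of any particular chip.

```python
import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.95, dt=1.0):
    """Leaky integrate-and-fire neuron: charge accumulates until the membrane
    potential crosses the threshold, and only then does a spike fire."""
    potential = 0.0
    spike_times = []
    for t, i_in in enumerate(input_current):
        potential = leak * potential + i_in * dt      # integrate input with leak
        if potential >= threshold:                    # event: threshold crossed
            spike_times.append(t)
            potential = 0.0                           # reset after firing
    return spike_times

# Weak input never crosses the threshold, so nothing fires (no downstream work);
# stronger input produces sparse, event-driven spikes.
rng = np.random.default_rng(0)
print(lif_neuron(rng.uniform(0.0, 0.05, size=100)))   # likely no spikes
print(lif_neuron(rng.uniform(0.0, 0.30, size=100)))   # a handful of spike times
```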

Photo credit to Matt Grob in “Brain-Inspired Computing, Presented by Qualcomm”

Several companies have invested in research on brain-inspired computing. Qualcomm, a wireless technology company, gave an impressive demonstration of a robot driven by a neuromorphic chip in 2014. The robot was able to perform tasks that would normally require a specially programmed computer, this time with only a smartphone chip and modified software. IBM's SyNAPSE chip, introduced in 2014, was also built on a brain-inspired computer architecture, with an incredibly low energy consumption of 70 mW during real-time operation. Recently, neuromorphic computing has once again attracted interest from companies such as IBM and Intel. In contrast to their earlier ambitions around 2013 and 2014 to manufacture marketable products, these companies now aim to explore the technology for research purposes.

In 2012, Intel proposed a design for a spin-CMOS hybrid ANN analogous to a biological neural network, one of the first prototypes of its kind. In this design, neuron magnets constitute the firing sites. Magnetic tunnel junctions (MTJs) are analogous to the cell body of the neuron, while domain wall magnets (DWMs) act as the synapses. The spin potential in the central region of the channel corresponds to the electrochemical potential in the cell body that controls the firing/non-firing state, and the CMOS detection and transmission unit can be compared to the axon of a biological neuron, which transmits the electrical signal to the recipient neuron (Figure 1).

Figure 1. A demonstration of Spin-CMOS imitation of biological neural network

Aside from the advantage of low power consumption, neuromorphic devices are better suited than supercomputers to tasks that rely on pattern matching, such as self-driving and real-time, sensor-fed neural networks. In other words, there are applications that require the imitation of brain-like thinking, or "cognitive computing," rather than simply a higher capacity for complex computation. As Mark Seager suggests, neuromorphic development should focus on architectures that have a great deal of floating-point vector units, a high degree of parallelism, and the ability to handle deep memory hierarchies in a fairly uniform way. More specifically, for neural networks, the focal point of research is how to parallelize a machine learning task over an interconnect, such as Intel's OmniPath, in order to solve larger and more complicated neural network problems and scale out across multiple nodes. Currently, scalability is limited to tens or hundreds of nodes, which restricts the potential of neuromorphic computing. However, it is reasonable to expect that as neural network algorithms and models advance, scalability could increase substantially, leaving more room for neuromorphic progress.
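To illustrate the scale-out idea in software terms, here is a toy sketch of synchronous data-parallel training: each "node" computes a gradient on its own data shard, and the gradients are averaged every step, which is the role an interconnect such as OmniPath would play in a real system. The linear model, shard sizes, and learning rate are illustrative assumptions, not Intel's actual software stack.

```python
import numpy as np

def local_gradient(w, X, y):
    """Mean-squared-error gradient for a linear model on one node's data shard."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def data_parallel_step(w, shards, lr=0.1):
    """One synchronous training step: each node computes a gradient on its own
    shard (in parallel on real hardware), then the gradients are averaged
    (the all-reduce that travels over the interconnect)."""
    grads = [local_gradient(w, X, y) for X, y in shards]
    return w - lr * np.mean(grads, axis=0)

# Toy setup: 4 "nodes", each holding a shard of a linear regression problem.
rng = np.random.default_rng(1)
true_w = np.array([2.0, -3.0])
shards = []
for _ in range(4):
    X = rng.normal(size=(64, 2))
    shards.append((X, X @ true_w + 0.01 * rng.normal(size=64)))

w = np.zeros(2)
for _ in range(200):
    w = data_parallel_step(w, shards)
print(w)   # converges toward [2, -3]
```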

Picture credit to Matt Grob in “Brain-Inspired Computing, Presented by Qualcomm”

Nevertheless, we have to admit that although neuromorphic computing is a promising direction for the future of computing, these devices are still largely at the theoretical level and have not been produced en masse. A few devices on the market are arguably neuromorphic, such as the noise suppressor produced by Audience, but they have not been subjected to large-scale, real-world workloads that would allow a proper evaluation of their performance. Ongoing research has shown progress in overcoming the difficulties encountered in neuromorphic implementations, and may one day usher in the dawn of neuromorphic computing.

Experiments


“The architecture can solve a wide class of problems from vision, audition, and multi-sensory fusion, and has the potential to revolutionize the computer industry by integrating brain-like capability into devices where computation is constrained by power and speed.”

— Dharmendra Modha, IBM Fellow


Neuromorphic devices aim to draw key insights from neuroscience as inspiration for algorithms and as guidance on future directions for neuromorphic computing architectures. However, converting our biological architecture into electronic devices built from oscillators and semiconductors is not an easy task.

To obtain the advantages of neuromorphic computing, a large number of oscillators is required to imitate the brain's behavior. Today's deep neural networks already have millions of nodes, and efforts are under way toward even more complex networks with still more nodes. To reach a capacity equivalent to the brain, billions of oscillators would be needed. Simulating a neural network of this size in software would be very energy-intensive, whereas processing in hardware is a much better alternative. To fit all the nodes inside a thumb-sized chip, nanoscale oscillators are a necessity.

There is one major problem with this: nanoscale oscillators are highly susceptible to noise. They also behave differently under thermal fluctuations, and their characteristics may drift over time. Neuromorphic computing can tolerate unreliability in its inputs, but it copes poorly with noise inside the processing circuits. Take a classification task, for example: the network must produce the same classification every time similar inputs are presented. Because of this noise problem, neuromorphic chips built from nanoscale oscillators had existed only as theoretical proposals, without working demonstrations. However, a recent article proposed a solution to this difficulty and successfully emulated the oscillatory behavior of collections of neurons using a specific kind of nanoscale magnetic oscillator.

Figure 2. Left: Schematic of a spin-torque nano-oscillator; Middle: Measured voltage emitted by the oscillator as a function of time; Right: Voltage amplitude as a function of current.

Researchers have found that perfect classification results can be obtained together with high signal-to-noise ratios by operating spin-torque oscillators in specific dynamical regimes. As shown in Figure 2, a spin-torque oscillator consists of a non-magnetic spacer sandwiched between two ferromagnetic layers, the same structure used in today's magnetic memory cells. As the middle and right panels show, magnetization oscillations driven by a charge current are converted into voltage oscillations. Spoken-digit recognition experiments showed that spin-torque oscillators can perform neuromorphic tasks with state-of-the-art performance.

A simpler waveform-recognition task was used to investigate the role of spin-torque oscillators in pattern recognition. Each sine or square wave was marked with eight discrete red points, and the task was to discriminate sines from squares at each red point. Figure 3b shows that, in a spatial neural network, many non-linear neurons are required to create the pathway illustrated in blue. The same pathway can also be defined in time by the non-linear trajectory of a single oscillator's amplitude, as shown in Figure 3c. Each input triggers a specific trajectory of the oscillator's amplitude, and a transient dynamical state is generated when the time step of the sequence is set to a fraction of the oscillator's relaxation time. In other words, in contrast to conventional neural networks whose neurons are separated in space, the single oscillator functions as a set of virtual neurons connected in time. This feature creates a memory of past events and enables the oscillator to respond differently to identical inputs when the preceding inputs differ. The finite relaxation time of the oscillator also makes a perfect separation between sine and square inputs possible. A minimal software sketch of this time-multiplexing idea is given after Figure 3.

Figure 3. Classification of sine and square waveforms
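Below is a minimal software sketch of the virtual-neuron idea: a single relaxing state variable stands in for the oscillator's amplitude, each input point is expanded through a fixed random mask into a fast sequence of drive values, the transient responses are collected as features, and a simple least-squares readout labels each point as sine or square. The oscillator model, mask, and parameters are illustrative stand-ins, not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N_VIRTUAL = 24                              # virtual neurons per input point
MASK = rng.uniform(-1, 1, N_VIRTUAL)        # fixed random input mask

def reservoir_features(sequence, relax_time=5.0, dt=1.0):
    """Drive a single relaxing 'oscillator amplitude' with a masked,
    time-multiplexed input. The transient states recorded while one point is
    applied act as virtual neurons connected in time, and the state carries
    memory of earlier points in the sequence."""
    a = 0.0
    features = []
    for u in sequence:
        states = []
        for m in MASK:                       # fast time steps within one point
            a += (dt / relax_time) * (-a + np.tanh(m * u))
            states.append(a)
        features.append(states)
    return np.array(features)                # shape (len(sequence), N_VIRTUAL)

# Toy task: label each of 8 sampled points as belonging to a sine or a square wave.
t = np.linspace(0, 2 * np.pi, 8, endpoint=False)
X_rows, y_rows = [], []
for _ in range(200):
    wave = np.sin(t + rng.uniform(0, 2 * np.pi))
    label = float(rng.random() < 0.5)
    if label:                                # turn the sine into a square wave
        wave = np.sign(wave)
    X_rows.append(reservoir_features(wave))
    y_rows.append(np.full(8, label))
X = np.vstack(X_rows)
y = np.concatenate(y_rows)

# Linear readout trained by least squares on the transient states.
X_aug = np.hstack([X, np.ones((len(X), 1))])
w_out, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
accuracy = np.mean((X_aug @ w_out > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Because the state variable is never reset between points, the readout effectively sees a short history of the waveform, which is what allows identical input values to be classified differently depending on what preceded them.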

Iterative training of a neural network emulated on hardware can also compensate for anomalies in the processing. As mentioned above, in analog hardware, distortions may play a significant role in the dynamics. Controlling these abnormalities matters because the performance of the network relies on training with precise parameters.

To demonstrate the compensation offered by in-the-loop training, a deep neural network trained in software was first transformed into a spiking network on the BrainScaleS wafer-scale neuromorphic system. In-the-loop training then followed, with the network activity recorded at each training step: the activity is measured on hardware and processed in software, where backpropagation updates the parameters. The researchers found that the parameter updates do not have to be precise; they only need to follow the correct gradient approximately, so the computation of the updates can be simplified. This approach allows rapid learning, reaching an accuracy close to that of the ideal software-emulated prototype within a few dozen iterations, despite the inherent variations of the analog substrate.

Figure 4. Classification accuracy per batch as a function of the training step for the software model (left) and of the in-the-loop iteration for the hardware implementation (right), over 130 runs.
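The sketch below illustrates the in-the-loop idea: the forward pass runs on a stand-in for the analog hardware (a fixed per-synapse distortion plus readout noise), while the parameter updates are computed in software from the recorded activity using only an approximate gradient. The toy network and distortion model are assumptions for illustration, not the BrainScaleS interface.

```python
import numpy as np

rng = np.random.default_rng(3)

def hardware_forward(w, x, distortion, noise=0.02):
    """Stand-in for the analog substrate: programmed weights are realized with a
    fixed per-synapse distortion, and the recorded activity is noisy."""
    return np.tanh(x @ (w * distortion)) + noise * rng.normal(size=len(x))

def software_update(w, x, y, activity, lr=0.1):
    """Backpropagation step computed in software from the recorded hardware
    activity. The update only needs to follow the true gradient approximately
    (the distortion is deliberately ignored here)."""
    err = activity - y
    return w - lr * x.T @ (err * (1 - activity ** 2)) / len(y)

# Toy regression task: 4 inputs, 1 output, learned through the distorted forward pass.
x = rng.normal(size=(256, 4))
true_w = rng.normal(size=4)
y = np.tanh(x @ true_w)

w = np.zeros(4)
distortion = 1.0 + 0.2 * rng.normal(size=4)         # fixed analog mismatch per synapse
for step in range(50):                               # a few dozen in-the-loop iterations
    activity = hardware_forward(w, x, distortion)    # 1) run and record on "hardware"
    w = software_update(w, x, y, activity)           # 2) update parameters in software
mse = np.mean((hardware_forward(w, x, distortion) - y) ** 2)
print(f"mean squared error after in-the-loop training: {mse:.4f}")
```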

Neuromorphic hardware implementations usually face another major challenge: system accuracy. The limited resolution of synaptic weights can degrade accuracy, impeding the widespread use of neuromorphic systems. Nanoscale synaptic devices should theoretically offer a continuous analog resistance, but real devices achieve only a few stable resistance states. A recent work proposed three orthogonal methods for learning synapses with one-level precision:

  1. Distribution-aware quantization discretizes the weights in different layers to different values. The method is based on the observation that weight distributions differ from layer to layer.
  2. Quantization regularization directly learns a network with discrete weights during the training process. The regularization reduces the distance between a weight and its nearest quantization level with a constant gradient.
  3. Bias tuning dynamically learns the best bias compensation to minimize the impact of quantization. It can also alleviate the impact of synaptic variation in memristor-based neuromorphic systems.

These three methods allow the model to achieve image classification accuracy comparable to the state of the art. Experiments were performed with MLP and CNN structures on two datasets, MNIST and CIFAR-10.
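The sketch below illustrates the first two ideas in a simplified form: quantization levels are derived per layer from that layer's weight distribution, and a regularizer with a constant-magnitude gradient pulls each weight toward its nearest level. The choice of levels and the penalty form are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def layer_levels(weights, num_levels=3):
    """Distribution-aware levels: choose the quantization levels of each layer
    from that layer's own weight spread (here, symmetric multiples of the std)."""
    scale = weights.std()
    return np.linspace(-scale, scale, num_levels)

def quantize(weights, levels):
    """Snap every weight to its nearest discrete level."""
    idx = np.abs(weights[..., None] - levels).argmin(axis=-1)
    return levels[idx]

def quantization_reg_grad(weights, levels):
    """Quantization regularization: a constant-magnitude gradient that pulls
    each weight toward its nearest level (sign of the distance, not its size)."""
    return np.sign(weights - quantize(weights, levels))

# Example: push a toy weight matrix toward 3 per-layer levels during "training".
rng = np.random.default_rng(4)
W = rng.normal(scale=0.5, size=(4, 4))
levels = layer_levels(W)
lam, lr = 0.1, 0.05
for _ in range(200):
    # the task-loss gradient would be added here; only the regularizer is shown
    W -= lr * lam * quantization_reg_grad(W, levels)
print(np.round(levels, 3))
print(np.round(W, 3))    # most weights have drifted close to the discrete levels
```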

The results in Table II show that, compared with the baseline accuracy, applying even one of the three methods alone yields a large accuracy increase (1.52%, 1.26%, and 0.4%, respectively). When two or three methods are applied together, the accuracy is higher still, approaching the ideal. The same improvements are observed for the CNN. Some combinations, such as QR+BT compared with QR alone in Table II, do not improve performance, probably because MNIST is a relatively simple dataset and the effectiveness of these methods saturates quickly. For both the multi-layer perceptron and the convolutional neural network, the accuracy drop was kept within 0.19% (5.53%) on the MNIST (CIFAR-10) dataset, significantly lower than that of a system without these methods.

Conclusion

As machine learning algorithms and models advance, a strong need for novel architectures will come with them. Given their low power consumption and fast, highly parallel computation, neuromorphic devices hold huge potential for artificial intelligence and cognitive applications. Although current investigations of neuromorphic computing remain largely theoretical, ongoing research has already shown promising progress toward practical applications and marketable products. It is a direction that could reshape the computing world substantially.


“I was thinking about how you would make massively parallel systems, and the only examples we had were in the brains of animals. We built lots of systems. We did retinas, cochleas—a lot of things worked. But it’s a much bigger task than I had thought going in.”

— Carver Mead


References


https://web.stanford.edu/group/brainsinsilicon/documents/MeadNeuroMorphElectro.pdf
https://www.nextplatform.com/2017/02/11/intel-gets-serious-neuromorphic-cognitive-computing-future/
http://news.mit.edu/2011/brain-chip-1115
https://www.youtube.com/watch?v=cBJnVW42qL8 (Matt Grob: Brain-Inspired Computing, Presented by Qualcomm)
https://www.youtube.com/watch?v=_YQTp3jRMIs
https://arxiv.org/abs/1206.3227
https://arxiv.org/abs/1703.01909
https://arxiv.org/abs/1701.01791
https://arxiv.org/abs/1701.07715
https://www.technologyreview.com/s/526506/neuromorphic-chips/
https://science.energy.gov/~/media/bes/pdf/reports/2016/NCFMtSA_rpt.pdf

 


Author: Yuka Liu | Editor: Ian Yang | Localized by Synced Global Team: Xiang Chen
