
Will Artificial Brain Synapses & Neuromorphic Computing Open the Next AI Hardware Frontier?

There are one trillion synapses in a cubic centimeter of the brain. If there is such a thing as General AI, it would probably require one trillion synapses.

When British mathematician Alan Turing proposed in the 1950s that computers would be the best devices for the study of artificial intelligence, he contributed a seminal chapter to the history of a now-burgeoning field.

AI has taken off in recent years thanks to the increased computation power provided by graphics processing units (GPUs) to train artificial neural networks (ANNs), along with the increasingly wide availability of large datasets. However, data-intensive computing will inevitably face a pain point: “the speed and energy efficiency of silicon CMOS-based computing hardware is quickly approaching its theoretical limit,” explains the 2019 paper Bridging Biological and Artificial Neural Networks with Emerging Neuromorphic Devices: Fundamentals, Progress, and Challenges.

San Francisco AI company OpenAI has examined the compute used to train AI systems over the past decades and concluded that before 2012 it had generally followed Moore’s Law, with compute power doubling every two years. Since 2012, however, compute has been doubling every 3.4 months. Meanwhile, memory performance has lagged behind processor performance, slowing overall performance improvements. GPUs, for example, have limited memory for the weights of a neural network, and so they have to constantly store and retrieve the weights in external DRAM (dynamic random-access memory).
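The gap between these two doubling rates is striking when compounded over a few years. A minimal sketch of the arithmetic (the six-year horizon is an illustrative choice, not from OpenAI's analysis):

```python
# Compare compute growth under Moore's-Law doubling (every 24 months)
# with the post-2012 trend OpenAI reports (doubling every 3.4 months).
def growth_factor(months: float, doubling_period_months: float) -> float:
    """Factor by which compute grows after `months`, given a doubling period."""
    return 2.0 ** (months / doubling_period_months)

years = 6
months = 12 * years
moore = growth_factor(months, 24)       # 2^(72/24) = 8x over 6 years
post_2012 = growth_factor(months, 3.4)  # roughly a 2.4-million-fold increase

print(f"Moore's Law over {years} years: {moore:.0f}x")
print(f"3.4-month doubling over {years} years: {post_2012:,.0f}x")
```

The same six-year window that yields an 8x gain under Moore's Law yields a gain of millions under the post-2012 trend, which is why hardware and memory bottlenecks have become pressing.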

As the speed and energy efficiency of traditional silicon CMOS-based computing hardware approaches its limit and the use of state-of-the-art semiconductor circuits to simulate neural networks continues to require prohibitively massive amounts of memory and power, what might the future hold for AI hardware performance improvements?

MIT CSAIL has proposed that continued improvements in computer performance will require more efficient software, new algorithms and specialized hardware. Hardware progress can be achieved by using bigger neural networks, but this comes with the exponential growth of their weights. Microsoft had this to say about the supercomputer it built for OpenAI’s upgraded GPT-3 language model, which has 175 billion parameters: “Training massive AI models requires advanced supercomputing infrastructure, or clusters of state-of-the-art hardware connected by high-bandwidth networks. It also needs tools to train the models across these interconnected computers. The supercomputer developed for OpenAI is a single system with more than 285,000 CPU cores, 10,000 GPUs and 400 gigabits per second of network connectivity for each GPU server.”

Dr. Geoffrey Hinton has commented on the heavy demands that tomorrow’s AI might place on today’s neural networks: “There are one trillion synapses in a cubic centimeter of the brain. If there is such a thing as General AI, it would probably require one trillion synapses.”

A typical AI chip is wired with different types of ANNs that can be mapped onto crosspoint arrays of resistive switching elements. The artificial synapses and neurons can thus be “operated through conduction channels (filaments) that are associated with ion movements driven by electric field, Joule heating or electrochemical potential,” explains the Bridging Biological and Artificial Neural Networks paper.


The resistive switch is one of the core components of a neural network, and is where electronic conductance can be changed. This control emulates the strengthening and weakening of synapses in the brain. Indeed, the fundamental design of ANNs, which simulate how learning takes place in the brain, was inspired by this same strengthening and weakening of synapses.
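The appeal of such crosspoint arrays can be illustrated with a simple model: if synaptic weights are stored as conductances, applying input voltages to the rows yields, by Ohm's law and Kirchhoff's current law, column currents equal to a matrix-vector product, computed in place with no weight fetches from external DRAM. A hypothetical sketch (all variable names and the update rule are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# A crosspoint array stores synaptic weights as conductances G[i, j].
G = rng.uniform(0.0, 1.0, size=(4, 3))  # conductances (synaptic weights)
V = rng.uniform(0.0, 1.0, size=4)       # input voltages (neuron activations)

# By Ohm's law (I = G * V) and Kirchhoff's current law (currents sum
# along each column), each column current is sum_i V[i] * G[i, j]:
# a full matrix-vector product in one analog step.
I = V @ G

# "Learning" corresponds to nudging the conductances, emulating synaptic
# strengthening and weakening (a small, Hebbian-style illustrative update).
target = np.array([0.5, 0.5, 0.5])      # desired column currents (made up)
lr = 0.1
G = G + lr * np.outer(V, target - I)
```

In a physical device the conductance changes are effected by the filamentary ion movements described above rather than by an explicit arithmetic update, but the mapping from array to matrix multiply is the same.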

Timeline of major discoveries and advances in intelligent computing, from the 1940s to the present

This subfield of brain-inspired computing is called neuromorphic computing, which, given its more faithful emulation of biological neurons and synapses, has emerged as a promising new computing paradigm for AI advancements.

  • “Somehow, the human brain — our own biology — has figured out how to make the human brain one million times more efficient in terms of delivered AI ops than we can build into a traditional supercomputer. Neuromorphic is an opportunity to try to come up with a CMOS-based architecture that mimics the brain and maintains that energy efficiency and cost performance benefit you get from a model of a human brain.”
    • Mark Seager, Intel Fellow and CTO for the HPC ecosystem in the Scalable Datacenter Solutions Group

Below are several studies that may be of interest to readers. They focus on artificial neurons and synapses using innovative devices and materials, particularly with resistive switching characteristics.

  • Protonic solid-state electrochemical synapse for physical neural networks
    • A reason to read: This joint work by researchers from MIT and Brookhaven National Laboratory introduces a system that uses analog ionic-electronic devices to mimic synapses. It’s hoped the new ion-based system could one day offer compatibility with current semiconductor processing protocols and potentially implement highly energy-efficient analog neural networks.
  • A biohybrid synapse with neurotransmitter-mediated plasticity
    • A reason to read: A team of researchers from Stanford University, Istituto Italiano di Tecnologia, Università di Napoli Federico II, and Eindhoven University of Technology tests the first biohybrid version of their artificial synapse, with results showing that it can communicate with living cells. The team directly coupled an organic neuromorphic device with dopaminergic cells to create a biohybrid synapse with neurotransmitter-mediated synaptic plasticity. Although the study is still at an early stage, it demonstrates potential for brain-inspired computers, brain-machine interfaces, medical devices, and new research tools for neuroscience.
  • Alloying conducting channels for reliable neuromorphic computing
    • A reason to read: “So far, artificial synapse networks exist as software. We’re trying to build real neural network hardware for portable artificial intelligence systems,” says author Jeehwan Kim, associate professor of mechanical engineering at MIT. The study was conducted by a team of engineers and researchers from MIT, Tsinghua University, Lawrence Berkeley National Laboratory, IBM T. J. Watson Research Center and Pohang University of Science and Technology, and introduces an ambitious new memristor design for neuromorphic devices. MIT News suggests “such brain-inspired circuits could be built into small, portable devices, and would carry out complex computational tasks that only today’s supercomputers can handle.”

If Turing were alive today what would he think of modern AI technologies and hardware? Some clues come from the pioneer himself. “Most actual digital computers have only a finite store,” Turing said in reference to information storage, which he saw as one of three components of digital computers, “of course only a finite part can have been used at any one time. Likewise only a finite amount can have been constructed, but we can imagine more and more being added as required.”

Journalist: Fangyu Cai | Editor: Michael Sarazen


