Tsinghua University Paper Identifies AI Chip Trends & Bottlenecks

As traditional processors struggle to meet the demands of compute-intensive artificial intelligence applications, dedicated AI chips are playing an increasingly important role in research and development and in cloud and edge deployments. AI chips, however, remain beyond most people's experience and understanding: you can't browse TPUs on Amazon or Alibaba, and no secret AI chip catalogue circulates among research labs. Despite the tremendous power and wide deployment of today's AI chips, there has been no comprehensive overview of the technology's products or trends. Until now.

Tsinghua University and Beijing Innovation Center for Future Chips (BICFC) recently published their White Paper on AI Chip Technologies, “to inform readers about the competing technologies and the development trends of AI chips.” The team comprises top-tier scholars and IEEE fellows from Tsinghua University, Stanford University, the University of California, Santa Barbara, Duke University, and the Hong Kong University of Science and Technology, among others.

The 56-page paper has eight chapters covering topics ranging from key attributes of AI chips to technology challenges, architecture design trends, and emerging computing technologies. Some takeaways:

The paper mainly discusses three types of AI chips: universal chips that can support AI applications efficiently through hardware and software optimization, such as GPUs; machine learning accelerators geared towards neural networks and deep learning, such as TPUs; and emerging computing chips inspired by biological brains, such as neuromorphic chips.

AI chips are deployed mainly in the cloud and at the edge: Cloud-based AI chips such as Nvidia’s GPUs and Google’s TPUs feature high performance and large memory bandwidth. They mainly handle computations with demanding requirements for accuracy, parallelism, and data volume. The main priorities for edge-based AI chips are energy efficiency, response time, cost, and privacy.

The paper notes that while the training of neural networks is still done in the cloud, their inference is increasingly being executed at the edge.

AI chip development is being constrained by two bottlenecks: The Von Neumann bottleneck refers to the significant latency and energy overhead incurred when Von-Neumann-based chips transfer massive amounts of data between memory and processing units. This is a growing problem, as the volume of data used in AI applications has increased by orders of magnitude.
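A back-of-envelope sketch can show why data movement, not arithmetic, dominates in a Von Neumann design. The energy figures below are illustrative order-of-magnitude assumptions (roughly in line with commonly cited per-operation estimates), not numbers from the white paper:

```python
# Illustrative comparison of data-movement vs. compute energy for a pass
# over one million neural-network weights. The per-operation energies are
# assumed values for the sketch, not measurements from the white paper.

DRAM_READ_PJ = 640.0   # assumed energy per 32-bit off-chip DRAM access (pJ)
MAC_PJ = 4.0           # assumed energy per 32-bit multiply-accumulate (pJ)

def energy_breakdown(num_weights: int, macs_per_weight: int = 1):
    """Return (data-movement energy, compute energy) in picojoules."""
    movement = num_weights * DRAM_READ_PJ
    compute = num_weights * macs_per_weight * MAC_PJ
    return movement, compute

movement, compute = energy_breakdown(num_weights=1_000_000)
print(f"data movement: {movement / 1e6:.1f} uJ")   # 640.0 uJ
print(f"compute:       {compute / 1e6:.1f} uJ")    # 4.0 uJ
print(f"ratio:         {movement / compute:.0f}x") # 160x
```

Under these assumptions, shuttling the weights costs over a hundred times more energy than the arithmetic itself, which is why the on-chip and near-memory approaches discussed below are attractive.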

The other bottleneck involves CMOS processes and devices. Moore’s Law is losing pace, and further aggressive dimensional scaling of silicon CMOS is expected to become less effective.

Today’s dominant memory and storage technologies — dynamic random-access memory (DRAM) and NAND flash — are implemented on chips separate from the computing cores. The industry is exploring next-generation architectures, such as on-chip memory and neuromorphic chips, to reduce the significant cost of data exchange.

AI chip architecture design trends: Cloud chips for training and inference target massive storage capacity, processing power pushed to the petaFLOPS level, and scalability. FPGA and ASIC chips will continue demonstrating advantages here. Edge devices will push efficiency to the extreme. TOPS/W is regarded as a key index for measuring the efficiency of AI chip performance. The concept of “software-defined chips” will gain more traction as the development of reconfigurable computing technology continues.
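The TOPS/W index mentioned above is simple arithmetic: tera-operations per second divided by power draw. A minimal sketch, using hypothetical numbers purely for illustration:

```python
# Sketch of the TOPS/W efficiency metric: operations per second,
# expressed in tera-ops, divided by power consumption in watts.

def tops_per_watt(ops_per_second: float, power_watts: float) -> float:
    """Efficiency in tera-operations per second per watt."""
    return ops_per_second / 1e12 / power_watts

# A hypothetical edge accelerator: 4 trillion ops/s at 2 W.
print(tops_per_watt(4e12, 2.0))  # 2.0
```

The same chip run at a lower clock and voltage typically delivers fewer TOPS but a higher TOPS/W, which is why edge designs trade peak throughput for efficiency.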

Major memory technologies include AI-friendly memories (emerging nonvolatile memory), near-chip memories (DRAM and NAND flash), and on-chip memories (static random-access memory). Major computing technologies include near-memory computing, in-memory computing, and neuromorphic computing.

The paper also touches on neuromorphic chips, a type of brain-inspired hardware with low power consumption, low latency, high-speed processing, and joint space-time representation. While neuromorphic chips remain in the research stage, the paper proposes that rapid development of machine learning algorithms along with advances in the study of our biological brains will open up new possibilities for neuromorphic chips.

Click the link to download the White Paper on AI Chip Technologies in English or Mandarin.

Journalist: Tony Peng | Editor: Michael Sarazen
