Brain-Inspired Computing
Pardeep Singh
21-06-2024 · Information Team
Brain-inspired computing has become a hot topic in the field of artificial intelligence.
Also known as neuromorphic computing, it mimics the way the human brain operates to enable efficient information processing in both hardware and software. In contrast to traditional artificial intelligence, it offers low power consumption and high computational efficiency.
The human brain, currently the most complex information processing system known, is remarkably efficient and compact. Researchers in artificial intelligence therefore envision building more powerful AI by taking the brain as a template. Meanwhile, the emergence of large language models has sparked waves of enthusiasm for AI worldwide.
Current mainstream AI applications, represented by large models, are essentially model-driven: developers define a model's structure and behavior, and the model is then trained on large volumes of data until it can interact with humans.
Artificial intelligence has long drawn inspiration from the human brain. Deep learning, for instance, loosely mirrors biological neural circuits: layers of weighted connections, represented as matrices, are stacked and wired together into artificial neural networks reminiscent of the brain's own organization. The advent of deep learning brought major breakthroughs in perceptual problems such as image and speech recognition.
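To make the analogy concrete, here is a minimal sketch in plain NumPy of how such a network reduces to layered weight matrices with a nonlinearity between them. The layer sizes and random weights are arbitrary illustrations, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes: 4 input features -> 8 hidden units -> 3 outputs.
W1 = rng.normal(size=(4, 8))   # first layer of weighted connections
W2 = rng.normal(size=(8, 3))   # second layer

def relu(x):
    # The nonlinearity between layers, loosely analogous to a firing threshold.
    return np.maximum(0.0, x)

def forward(x):
    # Each layer is just a matrix multiply followed by the nonlinearity.
    hidden = relu(x @ W1)
    return hidden @ W2

print(forward(rng.normal(size=4)))  # three output activations
```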
Undeniably, AI that simulates the mechanisms of the human brain has achieved unprecedented success. Yet the brain still surpasses machines in many respects, and it does so with far slower components: a biological neuron fires at most on the order of a thousand action potentials per second, roughly a million times slower than the gigahertz clock of a modern processor.
The brain compensates through massive parallelism, whereas computers largely execute instructions serially. Its signaling is also flexible and graded rather than rigidly clocked, giving it a significant advantage in signal processing. And it is remarkably energy-efficient, while computers consume substantial amounts of energy: training AlphaGo, for instance, reportedly took more energy than sustaining an adult's active thinking for ten years.
Considering these advantages of the human brain over computers, scientists have been inspired to model computation directly on the brain itself. The brain's low power consumption, high fault tolerance, and creativity are capabilities that machines have yet to attain. The goal is to make machines think the way humans do, and this is precisely the problem neuromorphic computing aims to solve.
Neuromorphic computing, a new computing paradigm inspired by biology and modern neuroscience, seeks to imitate the brain's operation at every level, from transistors and chip architecture up to algorithms and software. In 2017, Intel took a significant step in neuromorphic hardware with Loihi, its first self-learning neuromorphic chip.
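To give a feel for what imitating the brain's operation means at the level of a single unit, the sketch below simulates a leaky integrate-and-fire (LIF) neuron, the canonical spiking model in neuromorphic computing. This is the generic textbook model with illustrative parameters, not the specific neuron circuit of Loihi or any other chip:

```python
import numpy as np

def lif_simulate(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron; return the time steps at which it spikes."""
    v = 0.0
    spikes = []
    for t, i_t in enumerate(input_current):
        # The membrane potential leaks toward rest while integrating input.
        v += dt / tau * (-v + i_t)
        if v >= v_thresh:      # threshold crossing: emit a spike
            spikes.append(t)
            v = v_reset        # reset the membrane after spiking
    return spikes

# A constant drive above threshold yields a regular spike train.
print(lif_simulate(np.full(100, 1.5)))
```

Unlike the dense matrix arithmetic of deep learning, computation here is event-driven: a neuron does work mainly when it spikes, which is the root of neuromorphic hardware's energy efficiency.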
Loihi integrates training and inference on a single chip and colocates computation with storage. It features 128 neuromorphic cores, each implementing 1,024 spiking neural units that can be composed into "logical neurons," and it supports a variety of on-chip learning modes.
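On-chip learning of this kind relies on local plasticity rules, where a synapse updates its weight using only the spike timing of the two neurons it connects. The toy example below sketches a pair-based spike-timing-dependent plasticity (STDP) update in that spirit; the constants are illustrative and are not Loihi's actual learning-rule parameters:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Adjust one synaptic weight from a single pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:   # pre fired before post: causal pairing, strengthen
        w += a_plus * np.exp(-dt / tau)
    else:        # post fired first: anti-causal pairing, weaken
        w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, 0.0, 1.0))  # keep the weight bounded

w = 0.5
w = stdp_update(w, t_pre=10, t_post=15)  # causal pair -> potentiation
print(w)
```

Because such a rule needs no global error signal, each core can update its own synapses as spikes arrive, which is what lets training and inference share the same silicon.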
In practice, however, Loihi revealed several limitations: its neuron model differs from those assumed by mainstream algorithms, its binary spikes can sacrifice accuracy, it lacks support for the latest algorithms, congestion in the chip's interconnect creates performance bottlenecks, resource limits constrain commercial applicability, and integrating it with conventional computing systems is difficult.
To address these issues, Intel, together with numerous partners, introduced the second-generation neuromorphic chip, Loihi 2, in the fall of 2021. Building on the first generation, Loihi 2 offers clear technological advantages: it reduces the network scale required for the same level of capability, resulting in faster processing and lower power consumption.
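One way Loihi 2 reduces network scale is by generalizing the binary spikes of its predecessor: a spike may carry a graded integer payload, so a magnitude that would otherwise be conveyed by many unit spikes can travel as a single event. The sketch below contrasts the two encodings at a purely conceptual level; it is an illustration, not Loihi 2's actual message format:

```python
import numpy as np

def rate_code(value, n_steps=32, rng=None):
    """Binary spikes: encode value in [0, 1] as the rate of 0/1 events."""
    if rng is None:
        rng = np.random.default_rng(0)
    return (rng.random(n_steps) < value).astype(int)  # many unit spikes

def graded_spike(value, levels=32):
    """Graded spike: encode the same value as one event with a payload."""
    return int(round(value * (levels - 1)))           # a single event

v = 0.7
print(rate_code(v).sum(), "binary spikes vs. one graded spike with payload",
      graded_spike(v))
```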