A supercomputer scheduled to go online in April 2024 will rival the estimated rate of operations in the human brain, according to researchers in Australia. The machine, called DeepSouth, is capable of performing 228 trillion operations per second.
It’s the world’s first supercomputer capable of simulating networks of neurons and synapses (key biological structures that make up our nervous system) at the scale of the human brain.
DeepSouth belongs to an approach known as neuromorphic computing, which aims to mimic the biological processes of the human brain. It will be run from the International Centre for Neuromorphic Systems at Western Sydney University.
Our brain is the most amazing computing machine we know. By distributing its computing power across billions of small units (neurons) that interact through trillions of connections (synapses), the brain can rival the most powerful supercomputers in the world, while drawing about as much power as a fridge light bulb.
Supercomputers, meanwhile, generally take up lots of space and need large amounts of electrical power to run. The world’s most powerful supercomputer, the Hewlett Packard Enterprise Frontier, can perform just over one quintillion operations per second. It covers 680 square metres (7,300 sq ft) and requires 22.7 megawatts (MW) to run.
Our brains are estimated to perform a comparable number of operations per second using just 20 watts of power, while weighing only 1.3kg-1.4kg. Among other things, neuromorphic computing aims to unlock the secrets of this amazing efficiency.
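To put those figures side by side, here is a back-of-the-envelope comparison in Python. The brain's operation rate is only a rough estimate, so treat the result as an order-of-magnitude illustration rather than a measurement:

```python
# Energy efficiency of Frontier vs the human brain, using the figures
# quoted above. The brain's "operations per second" is a rough estimate.
ops_per_second = 1e18        # ~1 quintillion operations per second for both

frontier_watts = 22.7e6      # Frontier draws 22.7 MW
brain_watts = 20.0           # the brain runs on roughly 20 W

frontier_ops_per_joule = ops_per_second / frontier_watts
brain_ops_per_joule = ops_per_second / brain_watts

print(f"Frontier: {frontier_ops_per_joule:.1e} operations per joule")
print(f"Brain:    {brain_ops_per_joule:.1e} operations per joule")
print(f"Ratio:    ~{brain_ops_per_joule / frontier_ops_per_joule:,.0f}x")
```

On these numbers the brain comes out roughly a million times more energy-efficient per operation, and that is the gap neuromorphic computing hopes to close.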
Transistors at the limits
On June 30 1945, the mathematician and physicist John von Neumann described the design of a new machine, the Electronic Discrete Variable Automatic Computer (Edvac). This effectively defined the modern electronic computer as we know it.
My smartphone, the laptop I am using to write this article and the most powerful supercomputer in the world all share the same fundamental structure introduced by von Neumann almost 80 years ago. They all have distinct processing and memory units, in which data and instructions are held in memory and operated on by a processor.
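As a minimal sketch of that structure (a schematic toy, not a model of any real machine), the loop below plays the role of the processor, repeatedly fetching instructions and data from a single shared memory and writing results back:

```python
# A toy von Neumann machine: one memory holds both data and a program,
# and a separate "processor" (the loop) fetches, computes and stores.
memory = {
    "a": 2, "b": 3, "result": None,          # data
    "program": [("load", "a"), ("add", "b"), ("store", "result")],
}

accumulator = 0                               # the processor's working register
for op, addr in memory["program"]:            # fetch each instruction from memory
    if op == "load":
        accumulator = memory[addr]            # move data from memory to processor
    elif op == "add":
        accumulator += memory[addr]
    elif op == "store":
        memory[addr] = accumulator            # move the result back to memory

print(memory["result"])                       # -> 5
```

Every piece of data makes a round trip between memory and processor; that traffic is exactly what the brain-inspired designs described below try to avoid.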
For decades, the number of transistors on a microchip doubled approximately every two years, an observation known as Moore’s Law. This allowed us to have smaller and cheaper computers.
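Expressed as a formula, Moore's law says the transistor count grows as 2^(years/2). A small illustrative sketch, assuming the commonly cited 1971 starting point of 2,300 transistors on Intel's 4004 chip (real chips only roughly track this curve):

```python
def transistors(year, base_year=1971, base_count=2_300):
    """Project transistor counts assuming a doubling every two years."""
    return base_count * 2 ** ((year - base_year) / 2)

for year in (1971, 1991, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```

Fifty years of doubling takes the count from thousands to tens of billions, which is why the trend cannot continue indefinitely.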
However, transistor sizes are now approaching the atomic scale. At these tiny sizes, excessive heat generation is a problem, as is a phenomenon called quantum tunnelling, which interferes with the functioning of the transistors. This is slowing down and will eventually halt transistor miniaturisation.
To overcome this issue, scientists are exploring new approaches to computing, starting from the powerful computer we all have hidden in our heads, the human brain. Our brains do not work according to John von Neumann’s model of the computer. They don’t have separate computing and memory areas.
They instead work by connecting billions of nerve cells that communicate information in the form of electrical impulses. Information can be passed from one neuron to the next through a junction called a synapse. The organisation of neurons and synapses in the brain is flexible, scalable and efficient.
So in the brain – and unlike in a computer – memory and computation are governed by the same neurons and synapses. Since the late 1980s, scientists have been studying this model with the intention of importing it to computing.
Imitation of life
Neuromorphic computers are based on intricate networks of simple processors (which act like the brain’s neurons and synapses). The main advantage of this is that these machines are inherently “parallel”.
This means that, as with neurons and synapses, virtually all the processors in a neuromorphic machine can potentially operate simultaneously, communicating in tandem.
In addition, because the computations performed by individual neurons and synapses are very simple compared with those in traditional computers, energy consumption is orders of magnitude smaller. Although neurons are sometimes thought of as processing units and synapses as memory units, each contributes to both processing and storage. In other words, data is already located where the computation requires it.
This speeds up computing in general, because there is no separation between memory and processor of the kind that causes a slowdown in classical (von Neumann) machines. It also avoids the separate step of fetching data from a main memory component, which happens constantly in conventional computing systems and consumes a considerable amount of energy.
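A minimal sketch of this “parallel, co-located” idea, assuming nothing about any particular neuromorphic chip: each unit keeps its own state (its memory) and updates it in place (its computation), with no round trip to a separate memory. Here NumPy’s vectorised arithmetic stands in for all the units operating at the same time:

```python
import numpy as np

# Many simple units, each holding its own state and updating it in place.
rng = np.random.default_rng(0)
n_units = 1_000

state = np.zeros(n_units)                            # each unit's stored value
weights = rng.normal(0.0, 0.05, (n_units, n_units))  # connections between units

for _ in range(10):                                  # ten simultaneous update steps
    inputs = weights @ state + rng.normal(0.0, 1.0, n_units)
    state = np.tanh(inputs)                          # every unit computes on its own state

print(state[:5])                                     # sample of the units' final states
```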
The principles we have just described are the main inspiration behind DeepSouth. It is not the only neuromorphic system currently active. It is worth mentioning the Human Brain Project (HBP), funded under an EU initiative. Operational from 2013 to 2023, the HBP led to BrainScaleS, a machine located in Heidelberg, Germany, that emulates the way that neurons and synapses work.
BrainScaleS can simulate the way that neurons “spike” – the short electrical impulses that travel along neurons in our brains. This would make BrainScaleS an ideal candidate to investigate the mechanics of cognitive processes and, in future, mechanisms underlying serious neurological and neurodegenerative diseases.
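To give a flavour of what “simulating a spike” means, here is a sketch using the classic leaky integrate-and-fire model, one of the simplest textbook descriptions of a spiking neuron (the article does not say BrainScaleS uses exactly this model; it is illustration only):

```python
dt, tau = 1.0, 20.0                        # time step and membrane time constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0  # resting, threshold and reset potentials
drive = 1.2                                # constant input drive (arbitrary units)

v = v_rest
spike_times = []
for t in range(200):                       # simulate 200 ms
    v += (dt / tau) * (v_rest - v + drive) # potential integrates input, leaks to rest
    if v >= v_thresh:                      # threshold crossed: the neuron "spikes"
        spike_times.append(t)
        v = v_reset                        # then resets, ready to integrate again

print(spike_times)                         # a regular train of spikes
```

The voltage climbs as input accumulates, fires when it crosses the threshold, then resets – producing the regular train of impulses that spiking hardware aims to reproduce.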
Because they are engineered to mimic actual brains, neuromorphic computers could mark a turning point. Offering sustainable and affordable computing power, and allowing researchers to evaluate models of neurological systems, they are an ideal platform for a range of applications. They have the potential both to advance our understanding of the brain and to offer new approaches to artificial intelligence.
Domenico Vicinanza does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.