Neuromorphic computing has the potential to alter the technology sector, affecting everything from hardware to programming languages. In this post, we’ll look into the science underlying this technology and where the state of the art stands right now.
What is Neuromorphic Computing?
Neuromorphic computing, as the name implies, employs a model inspired by the workings of the brain. The brain is an extremely appealing computing model: unlike most supercomputers, which take up entire rooms, the brain is tiny, fitting neatly inside the skull.
Supercomputers can perform precise calculations very quickly, but the brain excels at adaptation. It is capable of writing poetry, quickly identifying a familiar face in a crowd, operating a vehicle, picking up a new language, making good and terrible decisions, and many other things.
With traditional computing models reaching their limits, brain-inspired approaches may hold the key to future computers that are ten times more powerful.
How do neuromorphic systems differ from traditional supercomputers?
Most modern electronics are built on the von Neumann architecture, which separates memory from computation. That separation creates the von Neumann bottleneck: every computation requires shuttling data between memory and the CPU, which wastes both time (calculations are limited by the speed of the bus between compute and memory) and energy.
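The bottleneck can be made concrete with a toy cost model. The cycle counts below are made-up illustrative values, not real hardware figures; the point is only that data movement, not arithmetic, dominates the total.

```python
# Toy cost model of the von Neumann bottleneck for a vector addition.
# BUS_COST and ALU_COST are assumed, illustrative numbers.

BUS_COST = 100   # cycles to move one word between memory and CPU (assumed)
ALU_COST = 1     # cycles for one addition inside the CPU (assumed)

def vector_add_cost(n):
    """Cycles to compute c[i] = a[i] + b[i] for i in range(n):
    two loads and one store per element, plus one add."""
    moves = 3 * n                          # load a[i], load b[i], store c[i]
    return moves * BUS_COST + n * ALU_COST

total = vector_add_cost(1000)
# In this model, the bus term (300,000 cycles) dwarfs the
# arithmetic term (1,000 cycles): the bus sets the speed.
```

However the constants are chosen, as long as moving a word costs more than operating on it, the memory traffic term dominates, which is exactly the imbalance neuromorphic designs try to remove by co-locating memory and computation.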
Chipmakers have long increased the processing power of these von Neumann chips by packing more transistors onto them, in accordance with Moore’s Law. However, without a change in chip design principles, this cannot continue much longer: transistors cannot be shrunk indefinitely, and denser chips consume more energy and produce more heat.
Hence, von Neumann designs will find it increasingly difficult to deliver the needed gains in processing power.
A new non-von Neumann architecture will therefore be required to keep up with the continuously rising demands for computational power. Both neuromorphic systems and quantum computing have been proposed as solutions, although neuromorphic computing—or brain-inspired computing—is more likely to become a reality first.
The next generation of artificial intelligence is expected to handle more brain-like problems, such as constraint satisfaction, in which a system must find the best solution to a problem subject to numerous constraints. These are the kinds of problems where neuromorphic computing has a lot of potential.
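As a point of reference for what constraint satisfaction means, here is a minimal classical backtracking solver for a tiny map-coloring problem. This is conventional CPU code, not how a neuromorphic chip would solve it; the regions, neighbor relation, and colors are invented for the example.

```python
# Constraint satisfaction sketch: color four regions with three colors
# so that no two neighboring regions share a color (illustrative data).

NEIGHBORS = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}
COLORS = ["red", "green", "blue"]

def consistent(region, color, assignment):
    """A color is allowed if no already-colored neighbor uses it."""
    return all(assignment.get(n) != color for n in NEIGHBORS[region])

def solve(assignment=None):
    """Backtracking search: assign regions one at a time, undo on conflict."""
    assignment = assignment or {}
    if len(assignment) == len(NEIGHBORS):
        return assignment                      # every region colored
    region = next(r for r in NEIGHBORS if r not in assignment)
    for color in COLORS:
        if consistent(region, color, assignment):
            result = solve({**assignment, region: color})
            if result is not None:
                return result                  # constraints all satisfied
    return None                                # dead end: backtrack

solution = solve()
```

Backtracking like this explores candidate assignments one at a time; the appeal of neuromorphic approaches is that many constraints can instead be evaluated in parallel, the way the brain appears to settle on a consistent interpretation all at once.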
Example: Intel Loihi neuromorphic chips
New performance benchmarks from Intel for the Loihi neuromorphic processor show gains in efficiency and power usage. According to Intel, reliable probabilistic computing is a key component of neuromorphic computing, which focuses on emulating the human brain.
Intel’s most recent Loihi performance evaluations focus on voice command recognition, gesture identification, image retrieval, search capabilities, and robotics, areas where partner companies are testing the most recent AI technologies.
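Spiking chips like Loihi implement large numbers of simple neuron circuits in silicon. A common abstraction for such a circuit is the leaky integrate-and-fire (LIF) neuron, sketched below with illustrative parameters; this is a simplified textbook model, not Loihi's actual neuron implementation.

```python
# Toy leaky integrate-and-fire (LIF) neuron: the membrane potential leaks,
# integrates input current, and emits a spike when it crosses a threshold.
# Parameters (leak, threshold) are illustrative, not Loihi's.

def lif_run(inputs, leak=0.9, threshold=1.0):
    """Simulate one LIF neuron over discrete timesteps.

    inputs: input current per timestep.
    Returns the list of timesteps at which the neuron spiked.
    """
    v = 0.0                      # membrane potential
    spikes = []
    for t, current in enumerate(inputs):
        v = v * leak + current   # leak a fraction, then integrate input
        if v >= threshold:       # threshold crossed: fire...
            spikes.append(t)
            v = 0.0              # ...and reset the potential
    return spikes

spike_times = lif_run([0.3] * 10)   # constant drive of 0.3 for 10 steps
# → spikes at timesteps [3, 7]: the neuron charges up, fires, resets, repeats
```

Because such neurons only communicate when they spike, a chip built from them spends energy only on events rather than on a global clock, which is one source of the efficiency gains Intel reports.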