Researchers a step closer to chips that mimic the brain

The human brain is the most efficient computer there is, and creating a hardware equivalent has been near the top of many technological wish lists for decades. Yet, despite the efforts of researchers from all corners of the globe, that goal has yet to be achieved.

However, that's not to say that progress hasn't been made. Although scientists have been applying themselves to modelling the brain since the early 1950s – originally under the general heading of artificial intelligence – it's only in the last five years or so that there has been demonstrable progress. Partly, this reflects the not insignificant investment in the topic by IBM Research and by DARPA (the US Defense Advanced Research Projects Agency), but other work is in progress, including that of Professor Steve Furber's group at the University of Manchester.

Now, Europe is applying its weight to the challenge in the form of the Human Brain Project (HBP). According to the HBP, 'understanding the human brain is one of the greatest challenges facing 21st Century science'. The project believes that if we can gain profound insights into what makes us human then we should be able to build revolutionary new computing technologies, amongst other things. And it says that 'for the first time, modern ICT has brought these goals within sight'.

The HBP is an EU Flagship initiative in which more than 80 partners will work together to realise an 'ICT accelerated' vision for brain research and its applications. The HBP – which will receive €1billion in funding and will run for a decade – will develop six ICT platforms, addressing: neuroinformatics; brain simulation; high performance computing; medical informatics; neuromorphic computing; and neurorobotics.

There are two strands to what's going on in research institutions around the world. Firstly, we still don't really know how the brain works. We know the brain contains around 100billion nerve cells, or neurons, each connected to its neighbours by around 10,000 synapses. Then there are dendrites, which receive messages from other neurons, and axons, which transmit those messages. But there remains much to be discovered.

A subset of this work which is seeing a lot of activity is spiking neural networks (SNN). Building on the concept of neurons and synapses, SNNs add a time dimension and the idea of threshold levels. A neuron integrates its inputs until its voltage crosses a threshold, at which point it 'spikes' – a sudden increase in voltage – and the timing of these spikes is thought to encode information.
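To make the idea concrete, the sketch below simulates a single leaky integrate-and-fire neuron – one of the simplest spiking models – in Python. The time step, threshold and leak values are illustrative assumptions rather than parameters taken from any of the chips discussed here.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron. All constants are illustrative.
dt = 1.0          # time step, ms
tau = 20.0        # membrane time constant, ms
v_rest = 0.0      # resting potential (arbitrary units)
v_thresh = 1.0    # firing threshold
v_reset = 0.0     # potential after a spike

def simulate(input_current, steps=200):
    """Integrate the input and record a spike time whenever the threshold is crossed."""
    v = v_rest
    spike_times = []
    for t in range(steps):
        # Leak towards rest, then add this step's input
        v += dt * (-(v - v_rest) / tau + input_current[t])
        if v >= v_thresh:          # threshold crossed: the neuron 'spikes'
            spike_times.append(t * dt)
            v = v_reset            # reset and carry on integrating
    return spike_times

# A steady drive produces a regular spike train; the spike timing carries the information.
print(simulate(np.full(200, 0.08)))
```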

The other strand is taking this understanding and trying to replicate it using electronic devices. This comes under the broad heading of neuroinformatics, part of which is the development of computational models of the nervous system and its neural processes. It is here that IBM Research, Manchester University et al appear to be making significant progress, with the focus more on cognitive computing, an approach which attempts to bring a broader range of disciplines together alongside artificial intelligence.

Although it had already been working on cognitive computing, IBM Research announced a formal approach to the topic in 2008, assembling a team of researchers that included academics from leading universities. At the time, the company said 'ultimately, the team hopes to rival the brain's low power consumption and small size by using nanoscale devices for synapses and neurons. This technology stands to bring about entirely new computing architectures and programming paradigms'.

The importance of the general field of research was underlined by the interest of DARPA, which instigated the SyNAPSE initiative (see box). IBM's team received an initial $4.9million from DARPA, but that has grown to more than $50m as successive phases of the research have been completed.

While IBM's avowed long term goal is to build a system integrating the equivalent of 10billion neurons and 100trillion synapses that consumes 1kW and occupies less than 2litres, developing chips was the first task and its initial designs were unveiled in 2011. At the time, principal investigator Dr Dharmendra Modha said: "These chips are another significant step in the evolution of computers from calculators to learning systems, signalling the beginning of a new generation of computers and their applications in business, science and government."

Two prototype designs were shown, both with 256 neurons and fabricated using a 45nm silicon on insulator cmos process. One core featured 262,144 programmable synapses, the other 65,536 learning synapses. Using the chips, the team demonstrated simple applications, including machine vision and pattern recognition.

Dr Modha said: "Each core integrates computation (neurons), memory (synapses) and intracore communication (axons). Each core is event driven (as opposed to clock driven), reconfigurable, compact and consumes ultra low levels of power.

"Cores are distributed modules that operate in parallel and send unidirectional messages to one another; neurons on a source core send spikes to axons on a target core. One can think of cores as 'grey matter' canonical cortical microcircuits and intercore connectivity as long distance 'white matter'. Like the cerebral cortex, the architecture is highly scalable in terms of number of cores. Modularity means that, while capability increases with scale, design complexity does not."

The cores' design does away with the sequential operation demanded by von Neumann style architectures. Instead, they are intended for use in distributed, highly interconnected, asynchronous, parallel, large scale cognitive computing architectures.
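That message passing arrangement can be illustrated with a rough software analogue. In the sketch below, each 'core' holds a binary synaptic crossbar – 256 axons feeding 256 neurons, echoing the 65,536-synapse core described earlier – and only does work when a spike event arrives. The connectivity density, threshold and routing function are invented for illustration; this is not IBM's design.

```python
import numpy as np
from collections import deque

N = 256  # axons and neurons per core, matching the 256 x 256 = 65,536 synapse figure

class Core:
    """Toy event driven neurosynaptic core: a binary crossbar plus neuron potentials."""
    def __init__(self, rng):
        self.crossbar = rng.random((N, N)) < 0.05   # axon-to-neuron connectivity (binary)
        self.potential = np.zeros(N)
        self.threshold = 4.0                        # illustrative value

    def receive(self, axon):
        """Deliver one incoming spike event; return the indices of neurons that fire."""
        self.potential += self.crossbar[axon]       # work happens only when an event arrives
        fired = np.where(self.potential >= self.threshold)[0]
        self.potential[fired] = 0.0                 # reset the neurons that spiked
        return fired

def route(core_id, neuron):
    """Made-up routing: spikes from one core land on the same-numbered axon of the other."""
    return (core_id + 1) % 2, neuron

rng = np.random.default_rng(0)
cores = [Core(rng), Core(rng)]
events = deque((0, axon) for axon in range(32))     # seed spikes on core 0's first 32 axons

processed = 0
while events and processed < 5000:                  # hard bound keeps the toy example finite
    core_id, axon = events.popleft()
    processed += 1
    for neuron in cores[core_id].receive(axon):
        events.append(route(core_id, neuron))       # a spike becomes an event on another core
print(processed, "spike events processed")
```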

Building such chips is one thing; programming them is another and that has been IBM's focus for the last two years, culminating in August 2013 with the launch of the tools to accomplish this.

Dr Modha explained the framework: "Architectures and programs are closely intertwined and a new architecture necessitates a new programming paradigm. We are working to create a FORTRAN for synaptic computing chips. While complementing today's computers, this will bring forth a fundamentally new technological capability in terms of programming and applying emerging learning systems."

What does the solution comprise? There are five elements to what IBM calls an 'ecosystem': simulator; neuron model; programming model; library; and laboratory.

The neuron model is described as 'simple, digital and highly parameterised'. Dr Modha noted: "The neuron model supports a variety of computational functions and neural codes and can qualitatively replicate the 20 biologically relevant behaviours of a dynamical neuron model."
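IBM has not spelled out the model's equations here, but the Izhikevich neuron gives a feel for what 'simple and highly parameterised' means: two state variables and four parameters are enough to reproduce a wide range of biologically observed firing patterns. The sketch below is that textbook model, standing in for – not reproducing – IBM's digital neuron.

```python
def izhikevich(a, b, c, d, current, steps=2000, dt=0.25):
    """Izhikevich neuron: two state variables, four parameters, many firing patterns."""
    v, u = -65.0, b * -65.0            # membrane potential (mV) and recovery variable
    spikes = []
    for t in range(steps):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + current)
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # spike detected
            spikes.append(t * dt)
            v, u = c, u + d            # reset both state variables
    return spikes

# Different parameter sets give qualitatively different behaviours.
print(len(izhikevich(0.02, 0.2, -65.0, 8.0, current=10.0)))   # regular spiking
print(len(izhikevich(0.02, 0.2, -50.0, 2.0, current=10.0)))   # bursting ('chattering')
```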

Programming such devices is based on building blocks called 'corelets'. Each corelet represents a network of neurosynaptic cores that specifies a given function. Because only the inputs and outputs are exposed, programmers can concentrate on the corelet's functionality. Corelets can be combined to produce new corelets that are larger, more complex, or have added functionality.

"Corelet programming that permits construction of complex cognitive algorithms and applications while being efficient for our cognitive computing chip architecture and effective for programmer productivity," Dr Modha explained. "Corelet programming is an entirely new way of thinking."

Meanwhile, in Manchester, Professor Steve Furber said the SpiNNaker Project is continuing to grow – the project's name draws on spiking neural networks. "We have working hardware at various scales," he explained, "although we haven't got to the big scale as quickly as we thought we might. But we are getting there."

The project currently has 48 node boards, each featuring 864 processor cores, but work is underway on a second iteration intended as a production version. "Then we have to make lots of boards," he added, "and assemble them in a big box to create the big machine."

Each board features 48 processors and each processor features 18 ARM968 cores. "One of these cores will run an OS type application," Prof Furber explained, "while another 16 will run application code." The 18th core is a spare, which allows for failures.

Like the IBM project, SpiNNaker has seen hardware arrive in advance of software. "Hardware is working well," Prof Furber noted, "and software development is ongoing. We have collaborators around the world playing with the technology and feedback says they are finding it quite usable. Where groups are working on this kind of project, real time spiking technology seems to be the technology of choice."
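SpiNNaker machines are typically driven through PyNN, a simulator independent Python interface for describing spiking networks. The fragment below is a minimal sketch in that style; the backend module name and exact parameters are assumptions based on current PyNN releases rather than details taken from the article.

```python
import pyNN.spiNNaker as sim    # backend module name assumed; other PyNN backends look the same

sim.setup(timestep=1.0)                                     # 1 ms time step
stimulus = sim.Population(1, sim.SpikeSourceArray(spike_times=[5.0, 15.0, 25.0]))
cells = sim.Population(10, sim.IF_curr_exp())               # leaky integrate-and-fire neurons
sim.Projection(stimulus, cells, sim.AllToAllConnector(),
               synapse_type=sim.StaticSynapse(weight=5.0, delay=1.0))
cells.record("spikes")
sim.run(100.0)                                              # simulate 100 ms
print(cells.get_data("spikes"))
sim.end()
```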

Despite an apparent rush to create neural computers, Prof Furber said there hasn't been too much pressure to scale hardware quickly. "Many users are finding it a challenge to use what we can already provide. If people were asking for bigger machines, we'd respond."

A slower development path for the hardware has also helped to improve reliability. "We discovered an interesting supply decoupling issue," Prof Furber admitted. "We found that if all the processor cores came out of sleep mode at the same time, the supply current would spike to 20A within 5ns. Because our dc/dc converters had a time constant of 10ns, we've had to add a lot of big capacitors."
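The arithmetic behind that fix is a back-of-envelope calculation: while the converter cannot respond, the decoupling capacitors must supply the stepped current, so C = I × Δt / ΔV sets a lower bound. The allowable droop below is an assumed budget, not a figure from the project, and real designs add large margins for converter response and distribution inductance.

```python
# Rough sizing of local decoupling for a fast load step (back-of-envelope only).
delta_i = 20.0        # load current step in amps (figure quoted above)
hold_time = 10e-9     # time the converter cannot respond, in seconds (figure quoted above)
max_droop = 0.05      # allowable supply droop in volts -- an assumed budget

# C = I * t / dV: the charge drawn during the hold time divided by the permitted droop.
capacitance = delta_i * hold_time / max_droop
print(f"minimum bulk decoupling = {capacitance * 1e6:.1f} uF")
```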

SpiNNaker chips have been available for two years and have worked reliably, although further work has been undertaken. "One area was to improve board to board linkages," said Prof Furber. "We are using high speed fpgas to link boards and, although the links work reliably, they don't run at the speed we require."

Prof Furber is reviewing the technology and looking to build a second generation 'from the chips upwards'. "The 968 is a simple and power efficient core," he said, "but we would probably move to a more modern core. We are also looking at whether we can benefit from vectorisation of algorithms; that should be effective in speeding processing and improving energy efficiency."

Although SpiNNaker's funding from EPSRC ends in 2014, Prof Furber said the technology will play a significant role – initially for 30 months, but possibly for 10 years – in the Human Brain Project and that a further five years of funding is available from the European Research Council. "We're focused on making SpiNNaker available as a European platform," he concluded, "and the technology is well placed to make a difference."

SyNAPSE

Systems of Neuromorphic Adaptive Plastic Scalable Electronics – or SyNAPSE – is a programme aimed at developing electronic neuromorphic machine technology that scales to biological levels.

It supports an unprecedented multidisciplinary approach, coordinating aggressive technology development in hardware, architecture, simulation and environment.

In its initial phase, SyNAPSE developed nanometre scale electronic synaptic components capable of adapting the connection strength between two neurons. In the future, work will focus on hardware, with microcircuit development, fabrication process development, single chip system development and multichip system development.

Fortran

Fortran (FORmula TRANslator), created in 1954 and released commercially in 1957, was the first computer language to be standardised and, according to IBM, 'helped open the door to modern computing'.

Fortran was the first step in the process of abstracting software from hardware; previously, programs had to be written in machine code for a specific computer. A Fortran program could run on any system with a Fortran compiler.