“If you look to the human brain for inspiration, it’s very impressive”
Devices that mimic synapses – the junctions between neurons – could help us to produce more powerful computers. Physicist Dr Mike Schneider describes how he’s building them
What is the idea behind creating an artificial synapse?
When you have a connection between two neurons, whether or not one triggers the next is determined by the synapse. This mechanism is believed to be responsible for things like memory. Lots of neurons are connected, and the strength of their connections is varied by synapses. We wanted to see if we could make physical devices that match that, as opposed to the transistors and switches used in traditional computing architecture. If you look to the human brain for inspiration for computing, it’s very impressive: you have 100 billion neurons and 100 trillion synapses, and yet it consumes just 20 watts of power. And it excels at tasks that our modern computers, which are fantastic at multiplying and dividing numbers, don’t do very well.
How did you build an artificial synapse?
The structures we have are based on niobium, a metal, with the synapse itself made from silicon and nanoclusters of manganese. We’re running everything at 4 kelvin [-269°C], the temperature of liquid helium. When you get niobium that cold, it becomes superconducting, so it has zero resistance to electric current.
How closely does this mimic the human brain?
Our system is based on something called a ‘Josephson junction’. These are made by taking a superconductor and making a break in it using an electrical insulator. They have all kinds of interesting properties, but people have proposed that they could be used as an artificial neuron element because they produce a voltage surge that looks like the spike at a synapse, except it’s much faster and at much lower energy. These artificial synapses could be put into machines modelled after the brain.
How could such ‘neuromorphic’ computers be used?
We are living in very exciting times where computing is concerned, with artificial intelligence and machine learning. Within the latter, you have algorithms written in software starting to solve problems that have traditionally been very difficult, like image recognition or language translation. These have a large ‘state space’ – the number of possible solutions to a problem. For image recognition, that’s roughly the
number of all possible pixel configurations, which is far too large to calculate explicitly. Over the past few years, deep ‘neural networks’ have made huge inroads. What if we could make hardware that could run these algorithms sort of natively? The operations in the algorithm map well to neurons and synapses, so if you make a more efficient implementation, you can attack more complex problems.
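The mapping Schneider describes can be sketched in a few lines of code. This is a minimal illustration (in Python, with made-up weights and a hypothetical `neuron_fires` helper; nothing here is taken from his hardware): inputs arrive over “synapses” whose weights scale each signal, the neuron fires if the weighted sum crosses a threshold, and even a tiny black-and-white image shows why the “state space” of image recognition is too large to enumerate.

```python
# Minimal sketch of an artificial neuron: inputs arrive over
# "synapses" whose weights scale each signal, and the neuron
# fires if the weighted sum exceeds a threshold.
# The weights below are illustrative, not from any real network.

def neuron_fires(inputs, weights, threshold=1.0):
    """Return True if the weighted input sum exceeds the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total > threshold

# Two of three presynaptic signals arrive, over synapses of
# different strengths; their combined weight crosses threshold.
print(neuron_fires([1, 1, 0], [0.6, 0.7, 0.9]))  # -> True

# The "state space" of image recognition: even a tiny 28x28
# black-and-white image has 2**784 possible pixel patterns,
# far too many to check one by one.
print(2 ** (28 * 28))
```

Training such a network amounts to adjusting the synaptic weights so the right neurons fire for the right inputs, which is why hardware that implements weighted connections natively maps so directly onto these algorithms.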
[Image caption: Nerve synapses at work in the human brain]
[Image caption: Algorithms can make disordered artificial synapses function in a more orderly fashion]