“If you look to the human brain for inspiration, it’s very impressive”

Devices that mimic synapses – the junctions between neurons – could help us to produce more powerful computers. Physicist Dr Mike Schneider describes how he’s building them

Focus – Science and Technology – Discoveries

What is the idea behind creating an artificial synapse?

When you have a connection between two neurons, whether or not one triggers the next is determined by the synapse. This mechanism is believed to be responsible for things like memory. Lots of neurons are connected and the strength of their connections is varied by synapses. We wanted to see if we could make physical devices that match that, as opposed to the transistors and switches used in traditional computing architecture. If you look to the human brain for inspiration for computing, it’s very impressive: you have 100 billion neurons and 100 trillion synapses, and yet it consumes just 20 watts of power. And it excels at tasks that our modern computers, which are fantastic at multiplying and dividing numbers, don’t do very well.
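The triggering behaviour described above can be put in code as a toy model. This is only an illustrative sketch of the idea – a downstream neuron fires if the weighted sum of its inputs crosses a threshold – not a model of the superconducting device itself; all names and numbers here are made up.

```python
# Toy model of the synapse behaviour described above: whether one
# neuron triggers the next depends on the strength (weight) of the
# synapses connecting them. All values are illustrative.

def fires(inputs, weights, threshold=1.0):
    """The downstream neuron fires if the weighted sum of upstream
    spikes crosses a threshold; 'learning' means changing weights."""
    activation = sum(spike * w for spike, w in zip(inputs, weights))
    return activation >= threshold

spikes = [1, 1, 0]            # which upstream neurons spiked
weak   = [0.2, 0.3, 0.9]      # weak synapses: weighted input is 0.5
strong = [0.7, 0.6, 0.9]      # strengthened synapses: weighted input is 1.3

print(fires(spikes, weak))    # False: below threshold, no trigger
print(fires(spikes, strong))  # True: strengthened synapses trigger firing
```

The same input pattern produces different outcomes purely because the connection strengths differ – which is the mechanism the interview credits with things like memory.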

How did you build an artificial synapse?

The structures we have are based on niobium, a metal, with the synapse itself made from silicon and nanoclusters of manganese. We’re running everything at 4 kelvin [-269°C], the temperature of liquid helium. When you get niobium cold, it becomes superconducting, so it has zero resistance to electric current.

How closely does this mimic the human brain?

Our system is based on something called a ‘Josephson junction’. These are made by taking a superconductor and making a break in it using an electrical insulator. They have all kinds of interesting properties, but people have proposed that they could be used as an artificial neuron element because they produce a voltage surge that looks like the spike at a synapse, except it’s much faster and lower in energy. These artificial synapses could be put into machines modelled after the brain.

How could such ‘neuromorphic’ computers be used?

We are living in very exciting times where computing is concerned, with artificial intelligence and machine learning. Within the latter, you have algorithms written in software starting to solve problems that have traditionally been very difficult, like image recognition or language translation. These have a large ‘state space’ – the number of possible solutions to a problem. For image recognition, that’s roughly the number of all possible pixel configurations, which is far too large to calculate explicitly. Over the past few years, deep ‘neural networks’ have made huge inroads. What if we could make hardware that could run these algorithms natively? The operations in the algorithm map well to neurons and synapses, so if you make a more efficient implementation, you can attack more complex problems.
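To give a sense of scale for the ‘state space’ mentioned above, here is a quick back-of-the-envelope calculation. The image size and bit depth are illustrative choices (a small 28×28 greyscale image), not figures from the interview.

```python
# Rough state-space arithmetic for image recognition, as described
# above. A tiny 28x28 greyscale image with 256 intensity levels per
# pixel already has 256**784 possible configurations - far too many
# to enumerate explicitly.

pixels = 28 * 28                 # 784 pixels in the image
levels = 256                     # 8-bit greyscale per pixel
configurations = levels ** pixels

# Python handles arbitrarily large integers, so rather than print the
# full number, count its decimal digits.
print(len(str(configurations)))  # prints 1889
```

A number with nearly 1,900 digits for even a thumbnail-sized image is why these problems are tackled by learned networks rather than by explicit enumeration.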

Nerve synapses at work in the human brain

Algorithms can make disordered artificial synapses function in a more orderly fashion
