The Free Press Journal

AI may outsmart our brain in chess, but not in memory


New research has shown that the brain's strategy for storing memories is more efficient than that of Artificial Intelligence (AI).

The new study, carried out by SISSA scientists in collaboration with the Kavli Institute for Systems Neuroscience & Centre for Neural Computation in Trondheim, Norway, has been published in Physical Review Letters.

Over the last few decades, Artificial Intelligence has proved remarkably good at achieving exceptional results in many fields. Chess is one of them: in 1996, for the first time, the computer Deep Blue beat a human player, chess champion Garry Kasparov.

Neural networks, real or artificial, learn by tweaking the connections between neurons. As connections become stronger or weaker, some neurons grow more active and others less, until a pattern of activity emerges. This pattern is what we call “a memory”. The AI strategy is to use long, complex algorithms that iteratively tune and optimize the connections.
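The "iterative tuning" strategy can be pictured with a toy example: a tiny linear network whose connections are adjusted over many gradient-descent passes until one target pattern is reproduced. This is a hedged illustration only; the loss, learning rate, and network are assumptions, not the specific algorithms benchmarked in the study.

```python
import numpy as np

# Hedged sketch of "iterative tuning": connections of a tiny linear network
# are adjusted over many gradient-descent passes until one target pattern
# becomes a fixed point. Loss and learning rate are illustrative choices.
rng = np.random.default_rng(0)
n = 50
target = rng.choice([-1.0, 1.0], size=n)   # activity pattern to memorize
W = rng.normal(0.0, 0.1, size=(n, n))      # random initial connections

lr = 0.01
for _ in range(200):                       # many iterations, unlike one-shot rules
    error = W @ target - target            # how far retrieval is from the memory
    W -= lr * np.outer(error, target)      # gradient step on the squared error

print(f"final mismatch: {np.abs(W @ target - target).max():.6f}")
```

The point of the sketch is the cost: storing even a single pattern this way takes hundreds of small optimization steps.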

The brain does this far more simply: each connection between neurons changes based only on how active the two neurons are at the same time. Compared with the AI algorithms, this rule had long been thought to permit the storage of fewer memories. But, in terms of memory capacity and retrieval, this wisdom largely rests on analysing networks under a fundamental simplification: that neurons can be treated as binary units.
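The brain's one-shot rule described above can be illustrated with a classic Hopfield-type network of binary neurons, where each connection is set purely by the joint activity of the two neurons it links. This is a standard textbook construction, not the paper's exact model:

```python
import numpy as np

# One-shot Hebbian storage in a Hopfield-type network of binary (+1/-1)
# neurons: connections are set by joint activity across the stored
# patterns, with no iterative optimization.
rng = np.random.default_rng(0)
n_neurons, n_patterns = 100, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n_neurons))

W = patterns.T @ patterns / n_neurons      # Hebbian connection matrix
np.fill_diagonal(W, 0)                     # no self-connections

# Retrieval: start from a corrupted cue and let the network settle.
state = patterns[0].copy()
state[:10] *= -1                           # flip 10% of the units
for _ in range(20):
    state = np.where(W @ state >= 0, 1, -1)

overlap = (state @ patterns[0]) / n_neurons
print(f"overlap with the stored pattern: {overlap:.2f}")
```

Here the connections are written in a single pass, yet the network still recovers the stored pattern from a noisy cue.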

The new research, however, shows otherwise: the smaller number of memories stored using the brain's strategy depends on this unrealistic assumption. When the simple rule the brain uses to change its connections is combined with biologically plausible models of single-neuron responses, that strategy performs as well as, or even better than, AI algorithms. How could this be the case?

Paradoxically, the answer lies in introducing errors: an effectively retrieved memory can be identical to the original input to be memorized, or merely correlated with it. The brain's strategy leads to the retrieval of memories that are not identical to the original input, silencing the activity of those neurons that are only barely active in each pattern.

Those silenced neurons do not, in fact, play a crucial role in distinguishing among the different memories stored within the same network. By ignoring them, neural resources can be focused on the neurons that do matter in an input to be memorized, enabling a higher capacity.
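The silencing idea can be pictured numerically: zeroing the barely active units changes a graded activity pattern very little, while the pattern remains easy to tell apart from other memories. The activity distribution and the threshold below are assumptions for illustration only, not values from the study.

```python
import numpy as np

# Illustration of "silencing" barely active neurons: units whose graded
# activity falls below a threshold are set to zero (the exponential
# distribution and the threshold 0.5 are assumed here for illustration).
rng = np.random.default_rng(1)
n_neurons, n_patterns = 1000, 4
patterns = rng.exponential(1.0, size=(n_patterns, n_neurons))

def silence(p, theta=0.5):
    """Keep strongly active units; set the barely active ones to zero."""
    q = p.copy()
    q[q < theta] = 0.0
    return q

sparse = np.array([silence(p) for p in patterns])

def overlap(a, b):
    """Cosine similarity between two activity patterns."""
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

same = overlap(sparse[0], patterns[0])     # silenced pattern vs its original
cross = overlap(sparse[0], patterns[1])    # silenced pattern vs another memory
print(f"own memory: {same:.2f}, other memory: {cross:.2f}")
```

The silenced version stays almost perfectly aligned with its own original while remaining clearly distinct from the other stored patterns.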

Overall, this research highlights how biologically plausible, self-organized learning procedures can be just as efficient as slow and neurally implausible training algorithms.

