L'officiel Art

Yngve Holen

- EXTENDED OPERATIONS IIII – YNGVE HOLEN

The camera team from the Frankfurter Allgemeine Zeitung is coming in an hour, so let’s see how much time we have.

Ok, an hour is probably good.

Let’s see how far we get. What do you want to discuss today?

Well, we want to see the monkeys. Is it possible to see the monkeys?

There are no experiments done today because the Frankfurter Allgemeine Zeitung is coming with a camera team. The animal house is closed.

Closed.

Closed. Nothing to be seen.

Okay.

I can tell you it’s boring.

Boring. Okay.

All you will see is a black-painted booth where the monkeys normally sit. They sit in a plastic chair and watch a monitor in front of their heads. They have electrodes implanted in the brain, and there’s a plug on the surface of the skull. They get plugged in with a flexible wire, and then they sit in this chair. They have to sit in the chair because we don’t want them to have their hands free –

To take the plugs off?

That’s the only reason.

They’re like us.

So they sit there. They have buttons to press, which they can manipulate. And they look at the video monitor. We ask them to fixate on a small dot on the screen so the eyes are at rest – so that we have control over the eye movement during the whole trial. The dot appears, they fixate on the dot, and they have to remain stationary. Then we present patterns at places we preselect, places where we know the response areas of the neurons.

And what do they do?

They have several tasks. One is a simple detection task: they respond if they see a movement or a change in the pattern – just to keep their attention. Or, if you want to examine attention effects, we show two patterns and the color changes a little bit. So, for example, it’s now pink: ignore the stimulus on the left side, you have to pay attention to the one on the right. Respond to a change here, respond to a change there, and so it gets more complicated.

Pretty boring. I wonder if humans could do it. They might need ADHD drugs.

Monkeys don’t use the internet so they don’t have ADHD.

That’s true. Another task is to study memory. We show them pictures on the monitor, a sample on the monitor. It could be artificial, a graphic, natural images. Then you switch them off, there’s a delay, and the monkey has to remember what it has seen. Then you show a test picture, and they have to decide: Have I seen this? Is it one of the samples or is it new? And it also depends on how you arrange it. The same: press the right button. Different: press the left button. Or they don’t press at all, and then they have no chance to be rewarded.
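
For readers unfamiliar with the paradigm, here is a minimal sketch of the trial logic described above, in Python. It is only a toy: the stimulus names, the delay, the response options, and the reward string are illustrative placeholders, not the lab’s actual protocol or software.

```python
import random

# Toy delayed match-to-sample trial, loosely following the description above.
# Stimuli, responses, and reward are illustrative placeholders (hypothetical names).
STIMULI = ["grating_A", "grating_B", "natural_scene_C", "graphic_D"]

def run_trial():
    sample = random.choice(STIMULI)        # a sample appears on the monitor
    # ... delay: the sample is switched off and must be held in memory ...
    test = random.choice(STIMULI)          # then a test picture is shown
    is_match = (test == sample)

    # "Same": press the right button. "Different": press the left button.
    # No press at all means no chance of reward.
    response = random.choice(["right", "left", None])

    correct = (response == "right" and is_match) or (response == "left" and not is_match)
    reward = "a few drops of fruit juice" if correct else None
    return {"sample": sample, "test": test, "response": response,
            "correct": correct, "reward": reward}

if __name__ == "__main__":
    for _ in range(5):
        print(run_trial())
```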

How do you reward them?

For correct performances, they are rewarded with a few drops of fruit juice – stuff that they like. And then they work while we record activity from their brains, until they have had enough.

How long do they work?

Sometimes they work three hours, four hours, and then they stop working, and sometimes they fall asleep. We then wake them up again. But if they don’t want to, they don’t have to – we don’t force them. So they go back into the animal colony and we revisit them two days later. That’s the procedure.

Animal colony?

That’s what we call where they live.

Where is that?

Here.

Can we see that?

Not today.

So they work for juice?

They work for fruit juice. After a while the well-trained monkeys get pleasure from just getting it right. We associate a tone with correct responses, and another one with incorrect responses. So they know beforehand if they got it right or not. So they know if they’re doing well or not. If they quit fixation, then the trial is aborted. Even if you don’t give rewards, they have feedback.

You’re working with the visual system, right?

I take the visual cortex as a model structure, but I could just as well work in the auditory cortex, or in other parts of the cortex, assuming that the functions realized by this very special circuitry are generalizable. The visual cortex probably relies on the same computational algorithms with the information it gets as the auditory or tactile cortices.

Why the visual cortex?

Because it’s well explored. Because we have experience with it. I worked a lot on development in the visual cortex.

Doing what?

What we try to solve is how this immense amount of information that we have stored in the brain about natural environments – partly genetically imprinted already because of evolution, partly acquired during early life, and partly also acquired throughout life with normal experience – how this extremely large body of knowledge is stored in the circuitry of the cerebral cortex, and how it is possible to access it so quickly. You make an eye movement every 200 milliseconds, meaning that every 200 milliseconds the sensory evidence that you get changes. And you have to match this on the fly with these stored priors, and you have to pull out the right priors in order to segment the image and identify the object.

How can this be done? What is the storage space like for such a thing?

Clearly it’s not like in computers, where you have it in a list or stored serially. Memory must be highly parallelized; you must superimpose all this information somehow and then have rapid access to it. The hypothesis that I propose is that this can only be done if you do all these operations in a very, very high-dimensional state space. For this you need high-dimensional dynamics, and there is a very pertinent structure in the cerebral cortex. You have nodes or columns in the network made up of cells. And these cells have certain response properties; they are attuned to certain features – orientation, direction, motion, color, contrast, and so forth. In some of the areas the response properties of these nodes are much more complex and represent combinations of elementary features. And these nodes, columns, or classes of cells are all reciprocally coupled in the visual cortex, as well as in all the other cortical areas – it’s the same principle. These couplings decay in an exponential fashion with distance, so not everyone talks to everyone directly. To talk to someone further away you have to do it indirectly. And the very important feature of these connections is that they are adaptive, they can learn. They learn according to the well-known gestalt grouping rules: connections strengthen between feature-detecting neurons whose preferred features have a high probability of co-occurring in natural environments. You have a cell that prefers an orientation here and another one an orientation there, because there is so much collinearity in the outer world – there is a lot of order in the visual world, in the world in general. What happens is that neurons code for features that tend to co-occur very often, like oriented lines that are collinear. Or same texture here, same texture there. Or coherent motion, which is also a globally coherent pattern that sequentially activates neurons that prefer the same direction of motion.
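
To make that wiring idea concrete, here is a small Python sketch under loose assumptions: nodes tuned to orientations at positions along a line, couplings that fall off exponentially with distance, and a Hebbian rule that strengthens connections between detectors whose preferred features co-occur. All sizes, rates, and the toy "scenes" are invented for illustration, not fitted to cortical data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: N "columns", each tuned to an orientation at a position along a line.
# Coupling decays exponentially with distance; a Hebbian rule then strengthens
# connections between detectors whose preferred features co-occur in the "scenes".
N = 50
positions = np.linspace(0, 1, N)
orientations = rng.uniform(0, np.pi, N)

dist = np.abs(positions[:, None] - positions[None, :])
W = np.exp(-dist / 0.1)                      # exponential fall-off with distance
np.fill_diagonal(W, 0.0)

def hebbian_update(W, activity, lr=0.01):
    """Strengthen couplings between co-active feature detectors (outer-product Hebb)."""
    return W + lr * np.outer(activity, activity)

# Toy "natural scenes": a contour of one orientation co-activates all detectors
# tuned near that orientation, mimicking the co-occurrence statistics in the text.
for _ in range(200):
    theta = rng.uniform(0, np.pi)            # orientation of a contour in the scene
    activity = np.exp(-((orientations - theta) ** 2) / 0.1)
    W = hebbian_update(W, activity)

# After learning, the strongest couplings link detectors whose features co-occurred often.
print("strongest learned coupling between nodes:", np.unravel_index(np.argmax(W), W.shape))
```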

Are these regularities in the environment captured by the gestalt principles?

The gestalt psychologists have put up a whole set of principles that allow you to sort the sensory evidence according to criteria of the likelihood of co-occurring together – of what can be bound together to form a figure that segregates from the background. So the idea we have is that… well, there is proof that these connections learn these contingencies; they strengthen when they exist between feature detectors that are very often co-activated in a correlated way.

What does that mean?

Meaning that those features tend to co-occur very often. So these neurons look at all the features in the scene and encode features that are worth being bound: in all likelihood they belong to the same object, because in the past they have occurred together. These preferentially coupled neurons then form an ad hoc ensemble of coherently active neurons that become synchronized – much more easily than neurons that are only weakly coupled. So what you get is this very, very dense network of recurrent connections, these reciprocal couplings between all these feature-detecting neurons that have learned in the past about the statistical regularities of the environment. This knowledge is now sitting in the functional architecture of these connections. It’s latently there; it’s not read out yet. The asymmetry in these couplings is the latent storage of all this knowledge, of these priors that you need.

What you mean is that one side is order and one side is chaos? The input is chaos?

I should first say that during spontaneous activity you have this complex – not unstructured, but very complex – high-dimensional pattern of activity that evolves or emerges from this network. It’s as if all these priors, all this knowledge, were latently encoded to be called upon but not realized yet. It’s hovering around everything and superimposes very quickly. And then you get sensory evidence from the visual or tactile system or whatever. Then signals come in that match some of the in-built priors. That will drive the neurons that are preferentially coupled, and these neurons will immediately exchange their activity and become coherently active and synchronize. And we see this manifested in brief oscillations in particular frequency ranges – 40 Hz, 30 Hz, gamma frequencies.

What does that do?

All of a sudden it reduces the dimensionality of this state space. There are substates that become more synchronized, less complex, more orderly, and these substates now represent the result of a match between the incoming sensory evidence and the already stored knowledge. And because they produce these low-dimensional synchronized substates, they are propagated forward and can be very easily classified. They are more consistent than what you had before.
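
One crude way to put a number on such a drop in dimensionality is the participation ratio of the population activity’s covariance spectrum. The Python sketch below compares fabricated "matched" and "unmatched" activity; the measure is a standard one from the population-coding literature, but the data, sizes, and noise levels are pure invention, not this lab’s analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def participation_ratio(X):
    """Effective dimensionality of activity X (time x neurons):
    (sum of covariance eigenvalues)^2 / sum of squared eigenvalues."""
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    lam = np.clip(lam, 0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

T, N = 1000, 100
# "No match": neurons fluctuate largely independently -> high-dimensional state.
unmatched = rng.standard_normal((T, N))

# "Match": a shared (synchronized) component dominates -> fewer effective dimensions.
shared = rng.standard_normal((T, 3)) @ rng.standard_normal((3, N))
matched = 3.0 * shared + 0.5 * rng.standard_normal((T, N))

print("effective dimensionality, unmatched:", round(participation_ratio(unmatched), 1))
print("effective dimensionality, matched:  ", round(participation_ratio(matched), 1))
```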

What if you have a stimulus that has never been seen before? Something unique?

That wouldn’t have much internal structure. It will also create a substate, but a substate that is much less ordered. It would cause the low dimensionality to collapse, and it’s much more difficult to classify. This is the hypothesis we pursue. It has a little bit to do with reservoir computing or liquid computing. Echo-state computing –
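
Since reservoir and echo-state computing come up here, a minimal echo state network may help picture the idea: a fixed, random recurrent "reservoir" expands an input stream into a high-dimensional state, and only a linear readout is trained on top. This is a generic textbook sketch in Python, with arbitrary sizes and a toy memory task, not the specific models the speaker works with.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal echo state network: a fixed random recurrent "reservoir" expands the input
# into a high-dimensional state; only a linear readout is trained.
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

def run_reservoir(u):
    states, x = [], np.zeros(n_res)
    for t in range(len(u)):
        x = np.tanh(W_in @ u[t:t + 1] + W @ x)    # recurrent state update
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce the input delayed by 5 steps (a simple memory demand).
u = rng.uniform(-1, 1, 2000)
X = run_reservoir(u)
y = np.roll(u, 5)
W_out, *_ = np.linalg.lstsq(X[100:], y[100:], rcond=None)   # train linear readout only
print("readout mean squared error:", np.mean((X[100:] @ W_out - y[100:]) ** 2))
```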

Is that like neuromorph­ic computing?

In a sense, it is of course neuromorphic, because you’re adding neurons to it. It’s quite different from these deep learning networks that do pattern classification, which you now read a lot about. They lack all these recurrent connections; they are simply feed-forward, many layers. They are good at classifying feature constellations, but they do not extract semantically meaningful objects, let alone relations between objects. They just classify a bunch of features. So it’s a very different principle. You find recurrent networks all over the brain, in all cortical structures. You also find them in the hippocampus. You don’t find them in other structures. They are an acute invention of evolution.

Why did it evolve?

Because recurrence allows you to create these very high-dimensional dynamic states. You can imagine: if element A talks to B and B talks to C and C talks back to B and to A, and you have millions of those, you get a very complex pattern that produces the high dimensionality of these states. You can’t intuitively imagine them. Some people say the dimensionality of this system is infinite. You can’t really imagine what it is. Nor can you get a good intuitive grasp of the dynamics. We talk about time being the fourth dimension. Here we’re talking about very, very, very many dimensions. It’s quite curious that you have a machine in your head that does all the stuff you know it does and you have no good intuition for the mechanisms that are underlying it.

What role do concepts play in this?

Basic. If you had no concept in mind, no working hypothesis, you would just collect data and you wouldn’t know what to do with it. The space that you can explore is really infinite when recording the activity of all those neurons. If you didn’t have a hypothesis, or at least an intuition of what is likely the case, you wouldn’t know what to look for. So usually this type of research is hypothesis-driven.

But what is a concept?

A concept?

An idea. How does it emerge?

It’s part of our ability to reason. I guess what you have to do is encode content at a certain level of abstraction so that you can establish semantic relations among the different elements of this content, following logic and principles in general, and trying to arrive at a coherent picture or interpretation of the facts that you are aware of. That’s usually what we address as a concept. It should be free of contradictions and it should have explanatory value.

But how do we form this in the brain?

We have no idea. It’s probably closely related, in humans at least, since we are speaking animals, to the organization of language in our brains. But you don’t have to have these logical rules to develop a concept that allows you to say: this painting is finished. But you must have internal criteria to make that judgement, and it’s also based on a concept. Where that concept comes from is unknown. We don’t know much about that.

I first read your work in the context of the debate about free will, more than 10 years ago. It was a big topic. We were reading an old manifesto today from 2004, discussing how we know a lot about brain regions, and that we can also study small things, neurons, but what’s happening in between is completely unknown. Is this still the case?

I think the bottom line is that we have accumulated an enormous amount of new data using new technologies, but conceptually we haven’t advanced that much. We are at the turning point between what we might call 20th-century neuroscience and 21st-century neuroscience – the difference being that 20th-century neuroscience was still more in the framework of cybernetics.

What do you mean by cybernetics?

It was more in the framework of serial operations in a hierarchical system that is input-driven: it does something, then there’s an output. Whereas now we see the brain much more as a very complex, self-organizing system with nonlinear dynamics, one that is generative, that produces hypotheses and questions all the time.

It talks mainly to itself?

Only a fraction of the synaptic activity in the cerebral cortex is driven by input from the periphery. All the rest – 90 percent – comes from within. It is a constructive system that takes signals from the environment to confirm hypotheses, rather than waiting until something happens outside and then making sense out of it. We are very convinced that perception – the way we perceive the world – is a construction that results from prior knowledge, from our expectancies, and from a lot of implicit, covert knowledge that we have no control over. The brain computes stuff on the basis of sensory evidence and presents this as an experience. But very often we don’t even know how it came to that conclusion.

Making the question of free will –

I think neuroscience supports constructivist philosophical stances. The free will question, in my eyes, is a trivial one. If you believe that… unless you take a dualistic stance, and you really think of the world of consciousness and psyche and spirituality as an ontological entity apart, with the material world on the other side, and the two in some mysterious way interact – unless you defend this position, you have to assume the naturalist position that all the mental functions, including our consciousness, our feelings, et cetera, are the result of neuronal interactions. If this is true, then what you do, what you decide, what you feel, what you see, must follow the laws that govern the activity in the brain. And these are the laws of nature. So causality is an important principle and obviously acts there as well. How you decide can only depend on the way your brain works, plus a little bit of serendipity, some noise. A dice sometimes falls to the right, and sometimes falls to the left.

But that doesn’t set you free.

It just makes you dependent on chance rather than laws. It doesn’t help much. I think it’s trivial. This whole debate only got heated because people came to the wrong conclusion, which they read out of the papers. I never said this. If you are not free, in the sense that you could have done something else but you just did this, and you call this your free-will decision, I would say the reason for it was that the brain has a history, and it behaves according to this history, even though you may not be aware of everything that determines such an outcome.

So if you are not free in the sense that you could have done anything but just did this, then you cannot be responsible for what you do.

This is of course nonsense. You are the author. Who else? It’s you. So you are responsible, which implies that society must have the right to tell you: look, what you did here is not admissible, we don’t allow you to do that. It’s what we do with our children, even though there we think they are not free, because we don’t think they are mature enough to have free will. We punish them or reward them. So I think this whole debate was driven by the media. The hype comes and goes.

But it triggers a lot of discussion on the level of law. Some people believe that neuroscience shows that the criminal justice system makes no sense.

But we don’t know enough about the brain to abolish it. If you find a cause for an inappropriate behavior that is neurobiologically linked, using X-rays, MRIs, whatever, then, like a jury, you would send the person who produced that inappropriate behavior to the clinic. If you can’t find a cause, because your measuring instruments aren’t appropriate, then that person goes to jail. This is an interesting thing. Neurobiologists would say you always have a neuronal cause for a misbehavior. It may not be a tumor, but the brain could be misfiring – there are many reasons you behave in certain ways for which you cannot see a cause from outside. So the detectability of abnormalities becomes the criterion for deciding whether to send someone to the clinic or to jail. And this is a point that needs to be discussed, that has been discussed, that is discussed.

Do you believe that there will be a fundamental shift in how we treat abnormal behaviors as we gain more data and knowledge about how the brain works?

Whether we can discover the causes, let alone treat the abnormalities, is the question. We know the causes of Alzheimer’s but we don’t know how to treat it. But yeah, maybe in the long run. I think education is an important treatment. You can change the architecture of the brain through education, that is, through experience. The brain develops until you’re 25.

And then?

Well, the process until you’re 25 is still developmental. You have new connections formed and existing connections retracted, depending on use. You wire together what fires together. You use correlations. And this brings you to the adult architecture. And then you have what you have, and you have to live with it. All you can do is still modify the connections, the efficiency of the connections.
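
The "wire together what fires together" idea and the use-dependent retraction of connections can be caricatured in a few lines of Python. The group structure, firing rates, learning rate, decay, and pruning threshold below are arbitrary illustrations of the principle, not a model of actual cortical development.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy sketch of use-dependent selection of connections: correlated firing strengthens
# a coupling, a slow uniform decay weakens little-used ones, and weak couplings
# are finally retracted. All numbers are arbitrary.
n = 30
W = rng.uniform(0.4, 0.6, (n, n))              # immature, roughly uniform connectivity
groups = rng.integers(0, 3, n)                 # cells that tend to fire together

for _ in range(500):
    drive = rng.standard_normal(3)             # shared drive per group
    rates = drive[groups] + 0.3 * rng.standard_normal(n)
    W += 0.001 * np.outer(rates, rates)        # "wire together what fires together"
    W -= 0.0005                                # slow decay of unused connections
    W = np.clip(W, 0.0, 1.0)

W[W < 0.3] = 0.0                               # retraction of connections below threshold
print("surviving connections:", int((W > 0).sum()), "of", n * n)
```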

So you can’t make a new connection and you can’t destroy one that is already there, a disturbing one, for example?

You can only increase or decrease the efficiency by changing the synaptic gain, which is learning. You can learn to control strange behavior that you have because of genetic wiring. You can learn to suppress it. And this plateau phase lasts until about age 70 or 75 or so – probably my age. And then, even under very normal conditions, you notice a loss of connections, a loss of synaptic connections, and ultimately also a moderate loss of cells, of cell number. And then, yeah, you become cognitively impaired. You become slower, less sharp, maybe wiser, because you don’t care too much about details anymore.

What are the possibilities of shocking the brain into reconfiguring itself? Or even slow, massive shifts in brain function, like those you’ve discussed with the Buddhist monk Matthieu Ricard?

We don’t know very well. If you practice a lot of meditative routines you get to know yourself better, and that allows you to perceive the world at a greater distance. You have a more objective view of the world, of what perception is. If you’ve seen your internal mirror, if you wipe clean your internal stream, because you get to know yourself well, then your picture of the world becomes more realistic. And that alleviates suffering, and you become a better person. This is what Matthieu would say. To what extent this works or not –

What about too much meditation? Can that make you crazy?

I have a daughter who does real research on this. The outcome is… there is evidence… it does produce a change. To what extent this lasts beyond the practice, I wouldn’t know. I myself did one of these crash courses in Zen meditation for a fortnight, eight hours in front of a white wall, counting from one to ten. It did change something. I got to know a part of myself that I didn’t know before, that I can reactivate now when I sit and am quiet. It certainly did something. We also know from trauma research and from cathartic events in life that they can shock you to an extent that you are no longer the same afterwards. What that entails in terms of mechanisms, I don’t know. We are starting to know a little bit about the consequences of prolonged stress on brain functions. And of course, a changed resilience to stress changes your behavior. But to what extent you can change the character of a personality is not known. With adult brains, through meditation, it is said that you can clean your consciousness to an extent that it becomes a reliable reflector of reality. Ultimately this would entail that there can be a conscious state without content – that you just clean, clean, clean, and then you have it, and it can just come in. I don’t know if that’s possible.

Can’t realizing emptiness or whatever lead to psychosis? Like all forms of isolation?

It’s a research question, but so far we have no empirical approach to answer it.

How much Western scientific examination is done on these states of mind? Can you scan a monk’s brain with an fMRI?

This has been done. Rich Davidson in Wisconsin has done quite a lot on that. Other groups have taken well-experienced meditators and put them into the tube, or used EEG to scan them. You’ll see that training or practicing meditation requires a lot of cognitive control and engages your attention systems, because you have to repress mind-wandering, you have to learn to focus, or you have to learn to widen your focus of attention without letting intrusions come in. You need certain centers in the brain to do that, and they light up when you do this practice. There’s also evidence that certain cortical structures increase in thickness, namely those that are part of the attention network.

What do you think of rebooting the plasticity in adult brains to behave more like children’s brains? Like, chemically?

That’s what everybody would hope for, especially after injury.

What about technology improving to allow us to observe our own brains more regularly and in more detail? Like fMRI machines in our phones or something? Do you think spatiotemporal resolution will improve so much in the near future?

Well, you can’t carry around an fMRI machine in your pocket; you need a 3-Tesla magnet. You can do EEG, with very lousy spatial resolution, because you have all this volume conduction. You can implant electrode chips. You do this with paralyzed people so they can control a robot arm, for example. I am more on the skeptical side.

Why?

First of all, we have not understood the essential principles of the brain. Silicon Valley people produce these good-looking machines and neural networks, and they have fantastic performance in classification, but that’s it. Playing Go is nothing more than that. You just have to learn from examples. If you have enough time and enough speed, you iterate these trial-and-error things until you get a strategy that’s super good. So they outperform us on particular tasks. My phone outperforms me when I do a numerical calculation. Let them do it. It’s fine. Nice servants. I come from a time when I still used a slide rule to calculate logarithms, or looked them up in tables.

So these tasks are abstractio­ns of biological processes?

Abstractions, approximations, guesswork. We don’t really understand how the cerebral cortex does what it does – and it only takes 30 watts of energy. Compare that with what computers use; it’s like a whole city, in order to do the calculations that we can do in our heads. We have much to learn from it. They will have to learn from us as soon as we understand more, and then try to implement those principles in… probably not silicon. My guess is that it will have to be another substrate, because much of this stuff is analog computing, and that you can’t do well in silicon. So far they have no technical implementation of a clever learning rule. It all has to be calculated, embodied in a chip. So I’m very relaxed. And I know I’m in good company, because everyone who doesn’t make big money with machines but instead tries to really get at the essence of what generative computing means shares my skepticism.

We’ve encountered a lot of optimism in tech. Do you think they’ll catch up to your skepticism?

Certain computer people, those who really made the advances on the theoretical level before all the limitations came, now detect and realize that they run into the same problems we have: the binding problem, the question of how you represent nested relations, how you get a representation of a leaf on a branch of a tree in an environment. You have these many brackets, and in language construction you have the same thing. The way Google Translate does it is to compare the world literature in the original language in the input with world literature in translations in the output. And they match it until it fits, but this is not how we do it. We try to get the meaning, we search for the right vocabulary – it’s a completely different science. And we call these functions generative functions. These machines cannot do it because they lack essential features of the organization that we have. But unfortunately these are features that engineers hate. They hate the recurrent network.

Why?

Because it’s not controllable. You cannot analyze it analytically, nor mathematically, because it’s too complex. It’s too nonlinear, so you can’t really predict what it’s going to do. And it has this runaway kinetics that must be very well controlled, or else it becomes epileptic or dies out. All these problems make them look for other solutions.

You could argue that airplanes don’t flap their wings like birds.

But this is aerodynamics. The cognitive principles used by the brain, in my opinion, are still in some respects radically different from those used nowadays in supercomputers. It will take quite some time until we have done our job and we can build little machines that only consume 30 watts and start to behave a little bit like a fly. If you look at a mosquito, and the intelligence of this mosquito – I’m sure you’ve tried to catch one at night – you start to admire these little machines: there’s nothing in the artificial world that can approximate this in terms of energy efficiency and cuteness.

What are the challenges in the next ten years in neuroscien­ce?

Cope with the dynamics and the complexity. We now have the tools, and this is really new in the field. Before we could look at more than one node of a network at the same time, people used to observe one node in different stages of the brain, one after the other. This precludes you from seeing relational constructs. You cannot clap with one hand. As soon as you start looking at several nodes, you see relations, and you start to see structure in what looks like noise, because A is always doing this and B is doing something else. As soon as you see that these two things are related, it’s no longer noise. The more of these nodes you record simultaneously, the more you see that everything is coordinated with everything else in a very subtle way.

So if you look at a single place it looks like noise, but if you look at many places it looks like a pattern?

We can finally do this. With modern technologies, optical recording, we can look at thousands of nodes at the same time. We get this extremely high-dimensional data; you can’t see anything when you look at it, it’s just dots and curves – you can’t make any sense out of it. So you need machines in order to detect patterns in there – machine learning – and you need mathematics to cope with these complex, high-dimensional vectors. And it’s not only vectors, it’s trajectories, trajectories of vectors, because activity changes in time all the time. It must, and only because it does do we have a concept of time flowing. If it always stayed the same, time wouldn’t move.
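
As one concrete example of "machines detecting patterns" in such recordings, the Python sketch below generates a fake 1,000-channel recording whose apparent channel-by-channel noise actually rides on a shared three-dimensional trajectory, and then uses PCA via the SVD (a standard dimensionality-reduction tool, chosen here for illustration rather than because it is this lab’s pipeline) to recover that shared trajectory. The data are entirely fabricated.

```python
import numpy as np

rng = np.random.default_rng(4)

# Fake "recording" from 1000 nodes: each channel alone looks noisy, but all of them
# ride on a hidden, slowly rotating 3-D trajectory. PCA recovers the coordination
# that no single channel shows on its own.
T, N = 2000, 1000
t = np.linspace(0, 20 * np.pi, T)
latent = np.stack([np.sin(t), np.cos(t), 0.1 * t], axis=1)    # hidden 3-D trajectory
mixing = rng.standard_normal((3, N))
recording = latent @ mixing + 2.0 * rng.standard_normal((T, N))

X = recording - recording.mean(axis=0)                        # center each channel
U, S, Vt = np.linalg.svd(X, full_matrices=False)              # PCA via the SVD
explained = (S ** 2) / (S ** 2).sum()
print("variance captured by first 3 components:", round(explained[:3].sum(), 2))

trajectory = X @ Vt[:3].T                                     # the shared pattern over time
```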

It’s like we need new mathematic­s.

Yes. We need much more conceptual work to make sense out of the data. We can collect it much better than we can interpret it. New technologies have opened the field up. We are able to record from thousands of neurons at the same time, and we have anatomical methods to see the whole network. You could eventually really trace it, but that doesn’t really help you. What you see is complexity and very high-dimensional dynamics. And somehow this goes well together: a complex system will develop such dynamics. The real problem now is what to do with all these facts – how to put them together, what sort of concepts to develop from them, how to test them, how to make good predictions for further research – because obviously there’s no point in just collecting whatever you get, as we said initially. You never know if what you have is a side effect not worth pursuing or if it’s the real thing. Before you have a concept, you don’t know.

It’s like the more we are able to observe the more we know how little we know.

That is exactly my feeling. Twenty years ago I thought I understood more than I know I understand today. There is a lot of progress, but the insight into not knowing has grown more rapidly than the insight into knowing.

What about neurological diseases? Do you have any insights into slowing the ageing brain?

There are different aspects. Obviously with degenerative diseases we get more and more of a handle on the mechanisms, as well as the genetic causes. Therapy is a big problem. It’s not easy to interfere with these processes. We know roughly what’s going on but we are not yet able to stop it. That may change rapidly with technology, since we can really hunt down genes and manipulate gene expression, but we aren’t there yet. But I do think that we will have a cure for certain degenerative diseases, whether it’s Alzheimer’s I’m not sure. Maybe Parkinson’s. ALS is about to be solvable – at least in the near future.

And what about psychiatry?

It’s very different, because there we don’t understand what the problem is, where it resides. All we know now – and we have had a number of conferences on it, the Ernst Strüngmann Conferences, which used to be in Dahlem; we had three or four on psychiatric diseases – the bottom line is that the taxonomy, the diagnosis, is very coarse. What we call schizophrenia probably has a very different result, a very different mechanism, from something else we also call schizophrenia. They are probably very different diseases, and the same goes for autism and so forth. So we need a better classification and taxonomy before we can do systematic research. We have certain hypotheses of what’s going wrong, but if you look at all of them, they are not coherent yet. This is partly a reflection of the fact that we don’t understand very basic principles of the cortical functions supporting higher cognitive functions.

Does that also mean that there’s no progress pharmacologically?

Very little. All the drugs that we have nowadays were serendipitously discovered 50 years ago, with added modifications to alleviate certain side effects. There’s no new principle so far. Lithium for depression. So the field is failing, and the field is searching for solutions, and the field doesn’t quite know where they will come from. It’s a big problem. We are helpless here.

What does lithium do for depression? I just watched Homeland recently, and Carrie’s prescribed lithium –

It acts like sodium in the brain, in terms of binding, but it works in some patients because it changes excitability levels. But we haven’t really come to grips with it. Deep brain stimulation has also been developed.

For Parkinson’s?

We don’t know how it works exactly for Parkinson’s. It’s been examined in animal experimentation, and there’s a good concept behind it, and people have realized that when they got it wrong, when they stimulated places that they didn’t want to stimulate, it had effects on mood. So there was this revival of psychosurgery, which we had already condemned 50 years ago.

Is it bad for the brain?

Stimulation is thought to be reversible, but I doubt it, because if you stimulate the brain over weeks and weeks, it must change something. It’s an active field, trial and error, ethically questionable sometimes, because these interventions, unlike prescribing a drug, are not subject to the same ethical criteria as drug development.

So the FDA is something we should hold on to?

They require endless trials, double-blind and so forth, before treatments are approved. With deep brain stimulation, because it is a method rather than a drug, it is enough if the patient and the psychiatrist agree that there should be an intervention. If they find a neurosurgeon to do it, they can do it. They don’t have to ask an ethics committee and so forth. And of course money is involved. It started in patients who are so-called helpless, who can’t be helped pharmacologically. So desperate cases, who consent because they see it as the last resort. And if you look at what they do, they try here and they try there, stimulate here and stimulate there. I was directing the Academy of Sciences for a while and was asked to analyze the situation: stop this, and do what you have to do with ethics committees. You have to be there for a long-term examination of the development of these patients, follow them for a long time, do it in a systematic way, and publish, and also publish negative findings. I hope that this will stop this hazardous, aleatoric playing around with brains.

We were in touch with DARPA-associated institutions – the Lawrence Livermore Institute in San Francisco. They’re implanting these chips – electronic devices – in the brain, with the hope of being able to sort of control them remotely. It’s not there yet, but the proof of concept is. But for Parkinson’s it seems to work.

There’s something to it. I can imagine that certain forms of major depression can be treated that way, by stimulating the reward centers in the brain. But so far there is no canonical recipe.

We did transcranial magnetic stimulation.

Ah yeah. We have these machines here.

The idea is that early art forms made by prehistoric humans – from the north and the south – resemble each other because trance states activate the visual cortex in the same way that TMS does. It sounds dangerous when you talk about it.

No, TMS is not dangerous. Maybe it can trigger epilepsy.

He just triggered it a little above the neck.

You see phosphenes. You’d have to bring it higher up in the cortical areas, and all of a sudden you’d see faces appearing. Imagery. The question is why do we draw stick figures all the time? This might be a genetic imprint, because the body scheme is so similar in all mammals: a head, a trunk, and four paws. Either you could take the stance that it’s a very high degree of abstraction, or that it is the most primitive representation of a mammal. That’s always the discussion, right?

What do you think?

I think it’s both.

What do you think about virtual reality?

Ah, great opportunity for art. I’ve been to several symposia recently on the chances of using virtual reality and augmented reality to embed the observer much more in the piece of art. Because it can really absorb you completely, which looking at a painting can’t as much. I know Daniel Birnbaum, the former director of the Städelschule.

Yeah, I studied there. He taught a philosophy seminar.

Ah, I know him well. He is moving now from the modern art museum in Stockholm to a company that does virtual reality, because he wants to make this technique available to artists. It’s certainly something that one should keep in mind. Cinema started to outperform theater to some extent. This will certainly replace the current video monitors in exhibitions.

In terms of simulating experiences, it can also work much better with emotions like empathy.

Yeah, yeah. Because you can fool the brain if you simulate the sensory evidence. You can also take a flight simulator at the airport. They have all the noise and the vibrations. There you can really embed it. I saw these pilots sweating. They were sitting in a simulator, and they really felt they had to do it right. So you forget very quickly that you are in a simulator.

We saw Star Wars in 4DX. The seat was rattling, and there was a plastic tube that tickled your legs, and water was squirted in your face. I put my jacket on, it was freezing.

VR becomes reality again. It capitalizes on the knowledge that the brain has about the world. You give it a few things to eat and it will reconstruct the rest.
