BBC Science Focus

KNOW YOUR MIND


We talk to behavioural and data scientist Pragya Agarwal about biases and why we need to understand how they mould our views.

Our brain uses shortcuts to think quickly, but sometimes these mental timesavers let us down. Dr Pragya Agarwal talks to Amy Barrett about the science of cognitive biases, and why it’s more important than ever to understand how they hold sway over our views.

WHY DO WE HAVE BIASES?

In evolutionary terms, we are designed to differentiate between people, and make those quick decisions between people who belong to our group, or our tribe, and those who don’t. That was kind of a survival strategy because resources were limited and people had to say, “this is a threat to me or to the limited resources, and so this person is an out-group.”

We make these quick decisions about whether a person or an object is a threat, whether we should fear them. These kinds of in-group, out-group demarcations are made quickly, because we have to process so much information. There’s no time to take every bit of information in on a rational, logical level. So a lot of this is processed on the basis of our previous experiences. We make quick matches against those experiences: in the past, this kind of person or situation was a threat to us, so that is what this will be.

That’s how these immediate stereotypes are formed. We rapidly make demarcations and distinctions and labels, as a way of processing information really quickly before we can take it to a rational level in our brain.

ARE THERE BENEFITS TO THIS?

Absolutely. Say I go shopping and want to choose a brand of cereal in the supermarket. If I took every bit of information around me and weighed it up and tried to make an independent decision based on clear analysis, there wouldn’t be enough time. I would be stuck on every decision in the world.

But there are obviously negative sides to it in situations where these decisions actually make an impact – a life and death impact. They’re more important than just choosing a brand of cereal.

HOW DOES SOCIAL MEDIA FIT INTO THIS?

Some of the discourse around biases and prejudices can become quite heated, because it can feel like a judgment on our whole identity: we say that you are biased, so you are a bad person.

What I’m trying to do in my book, by giving scientific evidence and bringing in different case studies and theories, is to help us understand that we can all unlearn some of our toxic behaviours. Yes, social media is creating echo chambers and filter bubbles. Social media is strengthening the sense of belonging in a particular community, that “I belong in a particular tribe, so I cannot engage with anybody who does not belong in that.” Again, we’re falling back on primal in-group, out-group tendencies through these mediums. But I also think that these divides are being reinforced by the climate in which we live.

If a marginalised community starts talking about and pushing back against prejudices, then there will be further divides initially. But having more evidence, and open-minded, non-judgmental platforms for these discussions, is important.

TELL US ABOUT THE TYPES OF BIASES.

An explicit bias is something that is very clear. If somebody purposefully discriminates between two people based on their race or skin colour, or how much they earn, and it’s clear that this discrimination is happening or these prejudices exist, then that is an explicit bias.

But there are also implicit ones, which are more difficult to identify as biases. These affect our decisions and our actions, but they are not very clear. For instance, making fun of somebody, or preferring one person over another: if someone looks at a CV and says, “Oh, I think this person is more qualified than the other,” just because they went to a certain university.

All of us also carry a conformity bias; we are more attracted to people who are more like us. Those kinds of biases are not easy to mark out.

ONCE YOU’RE AWARE OF UNCONSCIOUS BIASES, CAN YOU TRAIN YOURSELF OUT OF THEM?

There’s a whole debate about whether unconscious biases are something we’re born with or whether we can unlearn them. Personally, I believe that a lot of these biases are learned and shaped through our experiences: the way that we have been brought up, the cultural and social context, the media we’ve been exposed to, the things that our tribe and our community tell us, the things we talk about or read in newspapers. We learn them through our lifetime. And because we learn them, we can unlearn them as well. I believe that once we become aware of them and we reflect on them, we can change our attitudes accordingly.

SO, THINGS IN YOUR CHILDHOOD APPEAR LATER IN LIFE AS UNCONSCIOUS BIASES?

Yes. In my book, I talk about developmental psychology, and how children, as they’re growing up, start forming the sense of in-group and out-group associations. That’s a natural response for children, because they’re making sense of their own identity, their own place in the world. It’s largely shaped by who they see around them, who they see as foes, who they see as friends, who they find comfort with. There’s no real prejudice involved at that stage, but prejudices are bolstered and reinforced by messages they might get from their parents, or from their education, or the books they read and the TV that they watch.

DO YOU SEE A FUTURE WITHOUT THESE BIASES?

No. I don’t think so. I think change will happen, and is happening slowly – very slowly, because there is always resistance to any kind of change in the status quo. People who have privilege will always resist, because that threatens their status, and that means they worry about what their position and place in society will be once their status changes. It’s important that we talk about it, that we become aware of things [that arise from biases] like microaggressions – things that were acknowledged and ingrained as part of our culture, and accepted as okay, even though they hurt the person who was being marginalised or victimised.

We cannot just do away with all our cognitive biases, all our implicit biases. Bias is not always negative. But we can do away with the stereotypes, prejudices and discrimination that are linked to some of the biases that we carry.

HOW DO YOU STUDY PEOPLE’S BIASES?

It’s difficult to measure and quantify these things. In my book I critique some of the tools and methods that have been treated as the definitive way to measure bias – like the Implicit Association Test (IAT), for instance. The IAT was proposed by Harvard psychologists and it has been used for a long time, because we don’t have any other test. It’s a useful tool, but some people think it gives a measurable value for what implicit bias is. It works on the basis of association, and in that way it tells us what our implicit biases are. For example, if I associate apple with green all the time, then obviously I believe, firmly, that apples are always green and can never be red. That’s speaking very simplistically. The IAT will give these associations a value, but that number doesn’t really give you an absolute marker for the kind of biases we carry.

I see the IAT being used a lot by organisations. They call it ‘diversity training’, or ‘implicit bias training’, but it’s not training you to understand what implicit bias is, or how to tackle it.

CAN A COMPUTER BE BIASED?

We might think that AI is neutral – that is certainly how people promote AI-based hiring and recruitment platforms. People say that, because it’s technology, it will do away with human biases. But that’s completely incorrect, because machines are not black boxes: they are designed by humans and built on the data that already exists.

So all the biases from the team, from the developers, from the data, are reinforced and built into the system itself. And when these systems and technologies reproduce those biases, they perpetuate the biases that already exist in society, so it becomes a kind of vicious cycle. We need to be so careful when we use technology and machine learning.

BUT THIS ISN’T JUST HIGH-TECH STUFF, IS IT? IT’S IN OUR HOMES AND IN OUR PHONES.

Yes, absolutely. The problem with tech, and in STEM, is that the developer teams [that design the tech] are largely male-dominated. There are studies that talk about instances of sexism and misogyny in Silicon Valley. Those kinds of biases within teams can get built into the technology or systems they’re creating. So, giving voice assistants feminine voices, or female names, creates and reinforces the notion that women are in a subservient or an assistant role. That they can be talked to in a dominating way, and they will not retort or stand up for themselves. There was a report by the UN a couple of years ago which revealed that, in reply to hearing something sexually demeaning, a voice assistant would only say, “I’d blush if I could”.

I think a lot of organisati­ons are beginning to take these concerns on board. I know that there have been changes in the way voice systems have been designed, and the things they can say in response to sexual harassment statements.

BUT WE NEED TO ADDRESS THESE THINGS AT THE BEGINNING, BEFORE THE TECH RUNS AWAY FROM US.

Absolutely.

ABOVE Behavioural scientist Pragya Agarwal
