Spotlight

American Life

What happens when artificial intelligence is used without regard for ethical principles, and perhaps becomes a danger?


Ginger Kuenzel on artificial intelligence

The term AI (artificial intelligence) has been around for years. I’m not talking about people who tell us how smart they are, but who are actually sadly lacking in the brains department. We seem to have more than our fair share of those folks around these days, at least here in the US.

Back to my main topic, though: AI refers to computers and other machines with human-like intelligence. One interesting aspect of AI is facial recognition. This means, for example, that my computer can recognize my face and allow me to log in without a password. Theoretically. About eight times out of ten, however, it says it does not recognize me and asks me for my password. That’s not a good track record. Even the woman at the supermarket checkout counter remembers me and greets me by name.

Recently, I read about some possible future uses of facial-recognition software. It seems that one company has trained its AI systems to recognize whether a person is happy or sad, tired or energetic, angry or relaxed. There might at some point even be a way for machines to discover whether a person tends toward dishonesty. I am trying to imagine the consequences. I can see why businesses would love to scan shoppers’ faces (surreptitiously, of course) to find out who might be thinking about shoplifting that day.

Do I really want the supermarket employees to know, however, that I’m angry because I would rather be enjoying my garden than standing in a long checkout line? On the other hand, I guess I wouldn’t mind them knowing that I’m angry because the tomatoes in the produce section look old, or because my favorite kind of ice cream is sold out.

Some companies developing AI applications are now beginning to take a moral stand on which business opportunities they will or will not pursue. Probably nobody would object to these companies developing an application that sounds an alarm when it recognizes that a car driver is getting sleepy. However, this same facial-recognition technology could, in theory, be used by an authoritarian regime to scan faces in a crowd and identify people who are angry or sad, and thus potential dissidents. Today, we are battling racial profiling, in which people are considered suspicious simply because of their race. Could we be dealing with emotional profiling in the future?

It’s unsettling enough that Facebook recognizes my face when someone posts a photo of me. In the future, an emoticon might appear under my picture, telling everyone my state of mind. If AI can recognize anger, sadness, energy levels, and dishonesty, what further emotions could it identify and share with others? It seems like a slippery slope indeed.

GINGER KUENZEL is a freelance writer who lived in Munich for 20 years. She now calls a small town in upstate New York home.
