American Life
What happens when artificial intelligence is deployed without regard for ethical principles — and perhaps becomes a danger?
Ginger Kuenzel on artificial intelligence
The term AI — artificial intelligence — has been around for years. I’m not talking about people who tell us how smart they are, but who are actually sadly lacking in the brains department. We seem to have more than our fair share of those folks around these days, at least here in the US.
Back to my main topic, though: AI refers to computers and other machines with human-like intelligence. One interesting aspect of AI is facial recognition. This means, for example, that my computer can recognize my face and allow me to log in without a password — theoretically. About eight times out of ten, however, it says it does not recognize me and asks me for my password. That’s not a good track record. Even the woman at the supermarket checkout counter remembers me and greets me by name.
Recently I read about some possible future uses of facial-recognition software. It seems that one company has trained its AI systems to recognize whether a person is happy or sad, tired or energetic, angry or relaxed. There might at some point even be a way for machines to discover whether a person tends toward dishonesty. I am trying to imagine the consequences. I can see why businesses would love to scan shoppers’ faces (surreptitiously, of course) to find out who might be thinking about shoplifting that day.
Do I really want the supermarket employees to know, however, that I’m angry because I would rather be enjoying my garden than standing in a long checkout line? On the other hand, I guess I wouldn’t mind them knowing that I’m angry because the tomatoes in the produce section look old, or because my favorite kind of ice cream is sold out.
Some companies developing AI applications are now beginning to take a moral stand on which business opportunities they will or will not pursue. Probably nobody would object to these companies developing an application that sounds an alarm when it recognizes that a car driver is getting sleepy. However, this same facial-recognition technology could, in theory, be used by an authoritarian regime to scan faces in a crowd and identify people who are angry or sad — and thus potential dissidents. Today, we are battling racial profiling, in which people are considered suspicious simply because of their race. Could we be dealing with emotional profiling in the future?
It’s unsettling enough that Facebook recognizes my face when someone posts a photo of me. In the future, an emoticon will possibly appear under my picture that tells everyone my state of mind. If AI can recognize anger, sadness, energy levels, and dishonesty, what further emotions could it identify and share with others? It seems like a slippery slope indeed.