The Boston Globe

Androids among us

To the challenges of fake news, add fake people

- Scott Kirsner

Earlier this year, I offed an employee who worked at a downtown Boston tech company. I didn’t get caught and I’m not sorry. But I did confess to the company’s CEO, Paul English, last week.

English said he created a fictional character, Ada Richard, on LinkedIn as a social experiment, “just to see how people would respond to an AI” working for his company. Richard was a mocha-skinned woman with round glasses and a half-smile; her photo was generated by the image-creation software Midjourney.

Her profile said she was “a passionate and results-oriented consultant at Boston Venture Studio, a leading startup studio that specializes in creating and scaling our own innovative businesses.” The only tipoff that Richard wasn’t real: she wrote that she was home-schooled by Sam Altman, the founder of the company that created ChatGPT and Dall-E, two of the leading generative artificial intelligence systems.

So, is this our future — one full of AI-generated people like Ada Richard, and endless skeins of AI-produced content? I’m worried.

On June 15, English asked on LinkedIn, “How long will it take LinkedIn to find out that one of my Boston Venture Studio team members is an AI?” He invited people to send emails to Richard; if you did, she’d respond quickly, and invite further conversation.

I am not sure why I was feeling like Rick Deckard that day, but I was. (Deckard is the “Blade Runner” character whose job is to hunt down androids.) It seemed that LinkedIn, owned by Microsoft, didn’t have the technology to determine that Richard wasn’t real, and I was curious if the company would respond to a human report. I clicked a few buttons to inform LinkedIn that “this account is not a real person.”

A few days later, English reported that his team tried to log in to LinkedIn as Richard, but LinkedIn asked them to “upload a copy of a government ID for her.”

That was the end of Ada’s short life.

I’ve been very interested in what happens when we’re getting emails, reading content, and even listening to music generated by artificial intelligence without knowing it. At some point (soon?), will AI generate so much digital stuff that it crowds out human creators? How might we label what’s human-made, and what’s AI-produced?

As a “content creator” myself, I’m in a similar place as many white-collar workers who never thought their jobs could be automated. I’m trying to see how this AI steamroller works and if I can learn to drive it, ideally to keep from being crushed by it.

I asked ChatGPT to write an email to English, confessing that I’d iced Ada. I let him know that the generative AI service had written the text, which read in part:

“Recently, I came across a LinkedIn profile by the name of Ada Richard… Given the importance of trust and authenticity within our professional community, I reported my concerns to LinkedIn. … My primary concern was ensuring the accuracy and integrity of the platform for everyone... If you’d like to discuss this or clarify any related matters, please don’t hesitate to reach out.”

He replied, and I asked him to answer a few questions via email before we spoke. I had ChatGPT generate the questions. They were pretty good, but I edited and tried to improve them.

English wrote in his reply that he knew the profile would be removed — he just didn’t know how long it would take. When Richard vanished, he wrote, “It did make me a little sad.”

English, cofounder of the popular travel site Kayak, is an advocate of embracing AI, recently donating $5 million to UMass-Boston to create an Applied AI Institute. He said he wants to see what productive things AI will be able to do — and to understand its downside.

He also raised important questions about LinkedIn as the de facto “online résumé” which employers often rely on when hiring. Why couldn’t LinkedIn create a better way for universities or employers to validate that someone had really earned a degree or worked there?

“I’ve seen random people on LinkedIn who have claimed to work for one of my companies,” English wrote. He also suggested that LinkedIn should run “a crazy experiment” allowing companies to “have an AI employee, marked as AI, as a way to interact with that company.”

When we spoke, English suggested that the AI employee might be able to answer questions for prospective employees about benefits or workplace culture. He said that in the long term, it probably made sense for LinkedIn, Spotify, or Chess.com to allow AIs or bots on their platforms, as long as they were labeled as such.

I reached out to LinkedIn to see if they wanted to talk about what they’re doing to stamp out fake AI profiles. A spokesperson, Brionna Ruff, just sent me links to blog posts on the topic, including one about how they’re working with academic researchers to identify AI-generated profile photos — like Richard’s. A company report said LinkedIn blocks 99.7 percent of fake profiles before members report them.

To test LinkedIn’s fake detection, I created a new profile and gave him the name of a fictional character from the world of sports. His hunky profile photo was generated by the free software Stable Diffusion (which was used in academic work on ferreting out fake pics). I used ChatGPT to describe some of his work experience.

I also used ChatGPT to give me the right answers to a timed LinkedIn skill test about software development — and he scored, predictably, in the top 15 percent. He earned a degree at MIT that they don’t grant, in a field they don’t teach. He also worked briefly for English’s company, Boston Venture Studio. (I informed English about this project.)

Despite all those fabrications, LinkedIn hasn’t flagged the profile. Should we see how long it takes its fake detection to improve — or how soon a reader tracks this fake down and reports him, Blade Runner-style? Leave a comment below or send me an email if you find him.

When a fictional employee showed up on LinkedIn earlier this year, with an AI-generated profile photo, columnist Scott Kirsner was surprised that LinkedIn wasn’t able to catch it. So he took on the role of AI bounty hunter — not unlike Rick Deckard in the “Blade Runner” films.

JESSICA RINALDI/GLOBE STAFF/FILE 2020: Paul English asked, “How long will it take LinkedIn to find out that one of my Boston Venture Studio team members is an AI?”
