Geelong Advertiser

Intelligence test

- Peter JUDD

A FEW weeks ago our team played around with artificial intelligence that writes news stories based on a few words that you bang in.

It’s made by a lab called OpenAI, and initially the techno-gurus behind it decided it was far too powerful to unleash on the world.

So they kept it to themselves. We’ve seen how a pile of fake news stories has turned the US political system into a circus.

Imagine what an AI could do if anyone got the keys to drive it.

Well, we got the keys because OpenAI did a backflip and decided we’d all suddenly become very grown-up.

A bunch of sceptics said the nine-month hand-wringing was a marketing ploy to build hype. Sounds right.

I checked out the release notes thinking they’d have spent the time wisely and put in some safeguards.

Instead, I think the AI did all the wordsmithing. It reads like a doomsday forecast.

One: “Humans find the outputs convincing.” Tick. Humans are us. And we are gullible.

Two: This bot, fondly called GPT-2, “can be fine-tuned for misuse”. That sounds like a feature. A must-have for any self-respecting disrupter and abuser of institutions.

Three: “Detection is challenging.” Ripper. OpenAI thinks its bot is just human enough. I wonder if it is gullible, too?

Four: “We’ve seen no strong evidence of misuse so far.” I can sleep easy knowing that. Because “strong evidence” and “so far” are phrases that fill me with confidence.

Five: “We need standards for studying bias.” Ha-ha-ha! So funny.

What kind of morons are these people?

Why would you want to create something no one wants?

What’s the point of diminishing trust in technology by releasing products that do just that?

Trust is earned and easily lost. As for the AI, how does it even know, with all its silicon best intentions, if it is telling the truth?

Indeed, an important question is whether it can distinguish between those points where it is either really a thief or acting in the best interest of its employer.

When it comes to making decisions about what information to include, research suggests that an AI’s decision could be the same as a human’s.

“An example of this comes from self-driving cars,” reports Brian Wansink, a Cornell University researcher and author.

“It’s an area that we know an awful lot about and yet it can still drive. If this same concept of information is applied to algorithms, we should expect human-like judgments to be possible too.”

Are you with me?

Well, you shouldn’t be.

Because everything from “Indeed, an important question” down to “Are you with me” was written by the bot.

I just interrogated it. I threw it a curly question. I asked: “Can an AI be trusted?” And it said … all that drivel about self-driving cars and whether it’s a thief or not.

And, of course, it equated itself with humans, saying it could pretty much replace me.

Let’s ask it something else.

Will Geelong win the grand final in 2020?

Tick. Tick.

The hard line thrown out by the Cats could, apparently, spell disaster.

“I think it’s going to make it much harder for people to get behind the Cats this year and the last few years,” it said. “I think a lot of people are going to look at the Cats as being a big ship that’s being rocked by a series of minor trades.” I’m gullible, but wow …

One more.

What about Dangerfield?

He is still injured, so he’s going to be fine. It’s just his body is a bit stiff, but he’s got to have his pads on and all that. We’re not in the market of going to war for him. He’s in the market of getting fit and playing, so he’s still going to be around at the end of the year.

Blimey.

I wrote this column in 32 minutes.

I had a little help from a bot who is a thief and a liar. I feel dirty.

Helping hand: Peter Judd used AI to help write his column this week.
