Neural networks

Neural networks can raise a smile, but do they have a sense of humour? Nicole Kobie reveals the world of AI-created paint, Pokémon and recipes

PC & Tech Authority

Let computers think for themselves and they're funnier than people

Tummy Beige. Dorkwood. Sindis Poop. Turdly. These are not paints you'd choose to slather on the walls of your front room, but it's what Janelle Shane's neural network spat out after being trained on 7,700 Sherwin-Williams colours.

Artificial intelligence (AI) is serious business. It's threatening to take our jobs and change our lives beyond all recognition, from self-driving cars to policing by robot and beyond, potentially one day overtaking our own brains and leaving our biological intelligence in the dust. However, if you read Shane's Tumblr blog showing the results of her research, you won't be quivering with fear but laughter.

MAKING “SUDDEN PINE”

Shane isn't an AI researcher by trade – she works in optics – but plays with neural networks because they crack her up, along with her blog's many readers (lewisandquark.tumblr.com). Using an open-source framework called char-rnn, developed by Andrej Karpathy two years ago, she feeds in a dataset to train the neural network, eventually letting it make up its own paint names, Pokémon characters and more. After studying the data file, it constructs words by guessing what's likely to be the next character, aiming to make words that match the original training text.

“It's looking at a sequence of a certain length… trying to predict what the next character should be,” she told PC&TA. “When it's done that, it moves to the next one.” Not only has her neural network created paint colours such as “Sudden Pine” and “Greenwater Chamiweed”, but 1980s action figures (“Battle Command Master Cramp”), Dungeons & Dragons spells (“Gland Growth”), and even Doctor Who episode titles (“The Dalek of the Daleks”).
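char-rnn itself is a recurrent neural network, but the next-character idea Shane describes can be sketched with something far simpler. The toy below is a character-level Markov model, not char-rnn: it learns which character tends to follow each short context in a list of paint names (the sample names and the `order` parameter are purely illustrative), then samples new names one character at a time – the same generate-by-prediction loop, minus the neural network.

```python
import random

def train(names, order=3):
    """Count which character follows each `order`-length context."""
    model = {}
    for name in names:
        padded = "^" * order + name + "$"        # ^ marks the start, $ the end
        for i in range(len(padded) - order):
            ctx, nxt = padded[i:i + order], padded[i + order]
            model.setdefault(ctx, []).append(nxt)
    return model

def generate(model, order=3, rng=random):
    """Sample one character at a time until the end marker appears."""
    ctx, out = "^" * order, []
    while True:
        nxt = rng.choice(model[ctx])             # pick a likely next character
        if nxt == "$":
            return "".join(out)
        out.append(nxt)
        ctx = ctx[1:] + nxt                      # slide the context window

paints = ["Sudden Pine", "Dorkwood", "Turdly", "Tummy Beige", "Queen Slime"]
model = train(paints)
print(generate(model))                           # prints a made-up name (varies)
```

With a corpus this small the output mostly recombines fragments of the inputs; the real char-rnn's learned network captures far longer-range patterns, which is what makes its inventions pronounceable rather than merely scrambled.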

While a more serious researcher would be looking to get an accurate result from a neural network, Shane is out for laughs. “I'm not experimenting in a systematic way, as you would if you were optimising or solving a problem, [aiming to get] the closest match [to the original text],” she explained. “In my case, having the neural network work not entirely well can be good for comedic effect. I will sometimes stop the evolution early if I'm getting more interesting results.”

It all started when she stumbled on a collection of neural network-written recipes. “I thought they were hilarious and read through them all, and then they were done and the only way to get more was to do them myself,” Shane said, adding that she had never worked with neural networks previously.

Bending neural networks to goofy humour is certainly one way to learn. “Being able to explore the neural network for a simple dataset gives me a real appreciation for the more complicated problems and more sophisticated neural networks that modern research is using,” she explained. “I am definitely learning things about neural network size and about dropout and all these variables that are traditional in neural networks, but I'm learning them through experimentation rather than systematic study.”

ARTIFICIAL LAUGHS

While Shane was seeking a giggle, none of this means comedians are at risk of losing their jobs to AI. Julia Taylor Rayz, assistant professor in the computer and IT department at Purdue University, said it's possible to teach computers to create jokes, but the results are limited.

There are two ways to train AI to make jokes. “One is we explain to a computer the rules that jokes are based on,” Rayz said. “There are quite a few theories of humour… about what makes a joke a joke.” On top of those rules of humour, you'll also have to give the system a “knowledge of the world” so it has some material to work with.

The second technique is to tell the AI a lot of jokes, letting it learn by example. “You're feeding a computer as many jokes as you can, and you hope it will find patterns or logic or features that will let it differentiate jokes from non-jokes,” she said. “If you're doing that, the result is going to be very much dependent on how well you select your training corpus, how well you select the jokes [and non-jokes] you are feeding it.” Keeping it niche will offer the best results, she added.
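The learn-by-example route Rayz describes boils down to a text classifier: score new lines by how much they resemble the joke corpus versus the non-joke corpus. As a hedged illustration only – the four-line corpus here is made up, and a real system would need far more data and richer features than single words – a minimal naive-Bayes-style scorer might look like:

```python
from collections import Counter
import math

# Tiny, invented corpora standing in for the curated training data
# Rayz says the approach depends on.
jokes = ["knock knock who is there", "why did the chicken cross the road"]
non_jokes = ["the meeting is at noon", "please file the report today"]

def word_counts(texts):
    """Tally word frequencies across a list of sentences."""
    return Counter(w for t in texts for w in t.split())

joke_counts, other_counts = word_counts(jokes), word_counts(non_jokes)

def joke_score(text):
    """Log-odds that `text` resembles the joke corpus (add-one smoothing)."""
    score = 0.0
    jn, on = sum(joke_counts.values()), sum(other_counts.values())
    for w in text.split():
        p_joke = (joke_counts[w] + 1) / (jn + 1)
        p_other = (other_counts[w] + 1) / (on + 1)
        score += math.log(p_joke / p_other)
    return score

print(joke_score("knock knock"))        # positive: looks joke-like
print(joke_score("file the report"))    # negative: looks like office talk
```

This is also where her point about the training corpus bites: swap in a badly chosen set of non-jokes and the scorer learns to detect office vocabulary rather than humour.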

Of course, Shane's neural network isn't trying to be funny – she's letting it loose and collecting the results of its unintentional comedy. “Computers definitely make mistakes that are hilarious,” Rayz said. “For a computer to purposefully become a stand-up comedian – writing its own script and delivering it such that people will find it amusing – that is a little bit further away.”

Sometimes intentional and unintentional comedy can combine. Shane tasked the neural network with creating knock-knock jokes, a niche format that according to Rayz should be relatively successful. Its first attempts had the right structure, but the words were nonsensical – the training data included one example whose punchline was a cow mooing, and the network took a while to get variants of mooing as an answer out of its system.

After many tries, Shane was shocked that the network managed a completely original joke. Here it is: “Knock Knock.” “Who's There?” “Alec.” “Alec who?” “Alec Knock Knock jokes.”

We didn't say it was that funny. Indeed, Shane said she was surprised how many people find her neural network's output amusing. “I thought I had a weird or different sense of humour, but it turns out there are a lot of people who find this equally as funny,” she said. “It may be that it's tapping into something that's pretty common to a lot of people, the same way that kids' sayings or drawings can be funny.”

CREATIVE CREATIONS

Few humans would come up with paint colours of “Queen Slime” or “Porchtingle Grey”. Does that mean Shane's neural network is showing real creativity?

“I think the neural network is a tool… in the same way that using a splatter-paint technique would give you an unpredictable pattern,” Shane said, adding that she's in truth the main creative force, as she chooses the dataset with the most potential for hilarity and manipulates it to the right degree for comedy.

But humour is just one potential artistic output of Shane's neural network fiddling. “I have been contacted now by people who are artists, painting and so forth, and they want to use the neural network as a tool for their artwork... taking it from paint to text to paint again,” she said.

Here's hoping a clever artist combines their own creativity with Shane's goofy paint creations – we can't wait to see what the first Jackson Pollock of the AI world manages to produce with colours such as “Flumfy Gray” and the blue “Pester Pink”.

Janelle Shane was inspired to begin the project after reading hilarious recipes generated by a neural network
