The Sweet Spot Between Idiot and Expert

Simon Bird takes an academic look at the hype surrounding emerging technologies.

New Zealand Marketing. Simon Bird is PHD's strategy director.

We’ve all read various articles and heard chatter about the movement of agency services going ‘in-house’, moving into consulting firms and/or being automated. Some of these changes are clearly justified, but there’s some evidence that companies that move agency functions out of agencies may well end up producing worse work rather than better. According to Simon Bird, this suggests there’s a sweet spot somewhere between idiot and global expert where open-mindedness and curiosity are maximised.

You may be familiar with the Gartner Hype Cycle, if only for the humorously cynical and philosophical names of its various stages, such as “the peak of inflated expectations”, “the trough of disillusionment” and “the slope of enlightenment”, names that sound more like chapters in a new Tony Robbins book than those of a technology classification chart.

The Hype Cycle is now over 20 years old. Given this time frame largely covers the entire rise of digital marketing, taking a little wander down innovation history lane is rather informative. It turns out not so many technologies have progressed smoothly along the adoption journey.

Aside from some now-outdated language, the first Hype Cycle from 1995 actually looks like a pretty good prediction of tech adoption. Emergent computation is, apparently, a forefather of neural network-based machine learning, so whilst the terminology might seem unfamiliar, the underlying technology is still very relevant in 2018.

But looking a little more closely at the many Hype Cycles since 1995, it’s clear that over the years there have been more than a few technologies that turned out to be far more hype than help and ended up slipping right off the cycle: truth verification (2004), 3D TV (2010), social TV (from 2011) and volumetric and holographic displays (2012), to name but a few.

As Michael Mullany says in his December 2016 article on the subject, the tech industry (like most industries) is not very good at making predictions, and also not good at looking backwards after the fact to see what it got right and what it didn’t.

This is no slight against the tech industry; almost all industries fall into this type of thinking, something Nassim Taleb pointed out some years ago in his book The Black Swan. For reasons of cognitive efficiency humans are rather lazy thinkers, so we tend to remember the easy-to-recall successful predictions and conveniently, mostly subconsciously, forget the hard-to-recall failures. This, of course, creates a terribly inaccurate feedback loop.

Mullany also goes on to mention a couple of other reasons why so few technologies flow nicely through the technology adoption cycle: many are simply flashes in the innovation pan, so to speak (like the ones mentioned above, although truth verification sounds like it might have caught on had it launched now), and a good few others are constant presences in the Hype Cycle because their mainstream adoption gets further into the future each year rather than closer, e.g. quantum computing and brain/machine interfaces.

Obviously, the world of marketing and advertising is becoming increasingly technology-based. If all the experts at Gartner keep making considerable errors in their predictions about technology and its uptake, what hope does our industry have, with far less technological expertise at our fingertips?

Well, it turns out there is good reason to be hopeful; sometimes knowing less than the leading experts can actually be a good thing, as long as it’s not too much less. This sweet spot of knowledge creates just the right amount of doubt in a point of view, which in turn creates more open-mindedness to possible future outcomes and thereby produces better predictions than those of the aforementioned experts.

It’s based on the Dunning-Kruger effect, which is more commonly used to explain why people with limited talent manage to be so overconfident, i.e. people at karaoke who think they can sing like rock stars but sound tone deaf. It’s essentially an ignorance bias whereby people with limited knowledge don’t know enough to know what they don’t know. It’s commonly represented by the neighbouring graph.

Apart from the initial burst of confidence from the ignorant, which notably reaches a level even an expert never attains, the chart makes intuitive sense: as we learn more we become less confident (we know what we don’t know), until we approach expert level and become increasingly confident (we know that we know).
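For illustration only, that shape can be sketched as a toy function of knowledge. The curve below is invented to match the description in the text (an early peak of novice overconfidence, a trough, then a slow climb toward expert confidence); it is not fitted to any real data.

```python
def confidence(knowledge: float) -> float:
    """Illustrative Dunning-Kruger-style confidence for knowledge in [0, 1].

    The constants are made up purely to reproduce the described shape.
    """
    novice_peak = 0.95 * (1 - knowledge) ** 4  # overconfidence that fades fast
    expert_climb = 0.8 * knowledge ** 2        # confidence rebuilt by expertise
    return novice_peak + expert_climb

# The novice peak exceeds the level the expert eventually regains...
assert confidence(0.0) > confidence(1.0)
# ...with a trough of low confidence in between.
assert confidence(0.5) < confidence(1.0)
```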

However, whilst confidence amongst experts is clearly more desirable than confidence amongst idiots, it remains problematic, an area Berkeley psychologist Philip Tetlock studied extensively in the early 2000s.

He recruited 284 people who made their living providing expert predictions in the areas of politics and economics, i.e. human behaviour, and had them answer various questions along the lines of ‘Will Canada break up?’ or ‘Will the US go to war in the Persian Gulf?’. In all, he collected over 82,000 expert predictions.

His results are in some respects the inverse of the Dunning-Kruger chart. Knowing something about a subject definitely improves the reliability of a prediction; however, beyond a certain point, knowing more seems to make predictions less reliable.

To quote Tetlock himself: “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly. In this age of hyperspecialisation there is no reason for supposing that contributors to top academic journals – distinguished political scientists, area study specialists, economists and so on – are any better than journalists or attentive readers of respected publications, such as the New York Times, in ‘reading’ emerging situations.” He also concluded that in many situations, the more famous or expert the person doing the predicting, the less accurate the prediction.
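Tetlock scored forecasts with Brier scores, the squared error between a probability forecast and the 0/1 outcome. As a toy illustration only, with invented forecaster profiles and invented numbers, the sketch below shows how a hedging forecaster who simply tracks the base rate can out-score an overconfident “expert” who reads events correctly more often than not:

```python
import random

def brier(forecast: float, outcome: bool) -> float:
    """Brier score: squared error between a probability and a 0/1 outcome."""
    return (forecast - outcome) ** 2

random.seed(0)
# 10,000 hypothetical events, each occurring with a 30% base rate.
events = [random.random() < 0.3 for _ in range(10_000)]

# The hedger forecasts the base rate every time.
hedger = sum(brier(0.30, e) for e in events) / len(events)

# The "expert" reads each event correctly 70% of the time,
# but always forecasts with near-certainty (0.95 or 0.05).
def expert_forecast(outcome: bool) -> float:
    correct = random.random() < 0.7
    if correct:
        return 0.95 if outcome else 0.05
    return 0.05 if outcome else 0.95

expert = sum(brier(expert_forecast(e), e) for e in events) / len(events)

# The hedger's average error is lower: overconfidence is punished
# far more heavily than hedging when the confident call is wrong.
assert hedger < expert
```

Analytically, the hedger expects a score of about 0.21 here, while the overconfident expert expects about 0.27, despite being “right” 70% of the time.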

The key issue is that the expert’s knowledge and expertise, combined with their high level of confidence, prevents them from entertaining less likely but still highly possible outcomes. Their expertise leads them to become somewhat closed-minded; they drink their own Kool-Aid, so to speak. Whereas the merely knowledgeable, who are less confident in themselves and their predictions, are far more likely to assess alternative outcomes, making their viewpoints and predictions more accurate.

This position of knowing a reasonable amount is the natural place for agencies. Our ‘expertise’ is more an accumulation of many areas, none of which we are specifically expert in. We know a fair bit about human behaviour, a fair bit about marketing, a fair bit about technology, a fair bit about media channels and popular culture and a fair bit about our clients’ businesses. We know less about each individual area than a single-discipline expert or global authority, but our ‘expertise’ is in blending our working knowledge of each area together. In today’s world this positioning is invaluable and not one easily copied by a tech firm, a consulting firm or by clients themselves – they’re all deep experts in their own fields, making them less open-minded toward non-typical outcomes or new ideas and innovations.

However, this is perhaps a position we have not always exploited as well as we could. Whilst Gartner has clearly over-hyped more than a few technologies, the world of marketing and advertising has also been responsible for a number of unnecessary websites (my personal favourite is still bidforsurgery.com), apps and VR/AR games.

That isn’t to say there haven’t been some fantastic applications of these technologies, more that we haven’t always responsibly applied our ‘reasonably knowledgeable’ positioning. This sweet spot of open-mindedness and knowledge should allow us to be better at putting new technology into context than tech companies, whose experts typically place too much importance on their own specialty. But to maximise our position we need to stop using technology just because it’s new or because it makes us appear innovative.

Our current industry obsession seems to be AI, which incidentally Gartner currently has at the top of the peak of inflated expectations. Many of the headlines talk about AI taking jobs, killing brands, taking over marketing, making ads, getting smarter than us and becoming existentially dangerous. To be fair, it is doing some amazing things both in marketing and in other areas of life, real-time language translation and cancer spotting to name just two (and it’s obviously worthwhile thinking about how to avoid being wiped out by our own inventions). However, to avoid misapplying new technology in some of the ways we have in the past, we must balance the above headlines with some less dramatic points of view, such as:

“It would be more helpful to describe the developments of the past few years as having occurred in ‘computational statistics’ rather than in AI.” - Patrick Winston, professor of AI and computer science at MIT.

“Neural nets are just thoughtless fuzzy pattern recognisers, and as useful as fuzzy pattern recognisers can be.” - Geoffrey Hinton, cognitive psychologist and computer scientist, Google/University of Toronto.

“The claims that we will go from one million grounds and maintenance workers in the US to only 50,000 in 10 to 20 years, because robots will take over those jobs, are ludicrous.

“How many robots are currently operational in those jobs? Zero.

“How many realistic demonstrations have there been of robots working in this arena? Zero.” - Rodney Brooks, Australian roboticist, Fellow of the Australian Academy of Science and former Panasonic Professor of Robotics at MIT.

None of this is to suggest that we should stop using AI or employing the latest technologies, just that to exploit the ‘reasonably knowledgeable’ position we must be balanced in our assessment of new innovations.

The open-mindedness and objectivity of sitting in the middle, between the idiots and the experts, is only becoming more valuable as the world gets more complex and fills with more brilliant, but closed-minded, experts. The great thing for us is that it’s not a position that is easily copied, and if we exploit it well it should help us compete against consulting firms, client ‘in-housing’ and some areas of automation.

And at the very least it should help us avoid sounding ludicrous, something that is going to be rather tricky when we start talking about smart dust, the most recent entrant in the latest Gartner Hype Cycle.
