Policy Design for Humans

Rotman Management Magazine, Front Page, by Dilip Soman, Katie Chen and Neil Bendle

Governments from Canada to Singapore are embracing findings from Behavioural Economics to improve the lives of their citizens.

FROM REVENUE COLLECTION to trade policy to infrastructure investment, the varied mandates of government are complicated and diverse. Yet at its core, the role of government is simple: to maximize the welfare of citizens by improving their wellbeing, creating fair and efficient marketplaces and planning for the future.

Governments attempt to accomplish this by helping citizens, organizations, their own agencies and local businesses make good choices. As such, the government — just like every other organization that exists — is in the business of behaviour change. Indeed, almost every activity that a government undertakes boils down to the need to encourage or discourage certain behaviours. There are four particular types of behaviour change that policymakers focus on:

1. COMPLIANCE. Getting people and businesses to behave in accordance with prescribed standards and by certain deadlines.

2. CHOICE SWITCHING. Encouraging citizens to perform certain tasks online or to save for the future.

3. CONSUMPTION. Promoting consumption takes many forms — from getting seniors to consume their medications to getting young people to eat healthily.

4. ACCELERATION OF DECISIONS. In many situations, officials want to accelerate decision making in important areas — e.g., for businesses to start implementing environmentally-friendly policies.

Given the centrality of behaviour change to virtually everything governments do, it is surprising that until recently, most governments had scarce capabilities in the science of behaviour — or, as it is referred to nowadays, Behavioural Economics. Sure, every government has a chief economic advisor and cadres of traditionally-trained economists who develop and implement policy; but since the ultimate goal of every governmental policy is to influence behaviour, we are surprised at how few governments have invested in hiring a chief behavioural scientist.

The reason for the dominance of traditional Economics in government can be explained using the language that Richard Thaler and Cass Sunstein introduced in their 2008 book, Nudge. In it, they make a distinction between two completely different types of agents: ‘Econs’ and humans. Econs are highly-sophisticated decision makers who consume vast quantities of information with ease and have infinite computing abilities — much like the robot on the cover of this issue. They also maximize self-interest, are forward-looking and consider the future impact of every decision they make — never letting emotions get in the way. In short, they obey all of the laws of Economics.

In contrast, an abundance of research shows that humans are emotional, impulsive, cognitively lazy, and have difficulty dealing with large quantities of information or choice options. As a result, a number of commentators have referred to Econs as ‘rational’ and to humans as ‘irrational’ — as if to suggest that human decision making is inherently flawed.

Our view is slightly different. As one of us (Dilip Soman) noted in his 2015 book [The Last Mile: Creating Social and Economic Value from Behavioural Insights], the fact that humans do not obey the laws of Economics is not a surprise. Humans were never designed to solve complex inter-temporal maximization problems or to sift, curate, analyze and act on large volumes of data. The very assumption that humans actually behave like Econs is itself an example of irrationality.

If citizens were indeed robot-like Econs, the task of behaviour change for governments would be relatively easy and could involve three simple instruments:

1. RESTRICTIONS. Bans, legal restrictions and other forms of regulation limit access to certain options, thereby creating a behavioural shift towards the desired alternative.

2. INCENTIVES. These can be either positive incentives in the form of ‘carrots’ (i.e., subsidies or fee waivers) or negative incentives in the form of ‘sticks’ (i.e., surcharges or penalties).

3. INCREASED INFORMATION. The provision of additional information, and sometimes more options, is widely believed to improve decision making.

The problem is, governments struggle to make policy decisions work because these three tools are designed for Econs rather than humans — and the research is filled with examples of the problems that result. For example, the Canada Learning Bond — a welfare program that supported children’s education with ‘free money’ — garnered a take-up rate of only 16 per cent in the two years after its launch; and in the U.S., several welfare programs have suffered from similarly-low take-up rates.

Elsewhere, attempts to get citizens to pay their taxes online — or to get flu shots, donate organs, eat more vegetables, or read privacy policies designed to safeguard their online information — have all fallen on seemingly-deaf ears, despite large expenditures on advertising and communication. The reason is simple: The vast majority of these policies and programs are designed for Econs, rather than for humans who are forgetful, emotional and impulsive; influenced by their peers; confused by too much choice; and loath to consume too much information.

The best policy design, then, would assume that people will likely forget, ignore, gloss over or misunderstand critical pieces of information — and build safeguards into the system against such behaviour. Fortunately, recent advances in the world of behavioural insights have provided governments with a new toolkit to achieve this, and it is being embraced by governments the world over.

The Basics of Choice Architecture

The term ‘choice architecture’ made its debut in Nudge, where Thaler and Sunstein argued that since we know from Psychology that context influences choice, it should be possible to design contexts to steer choices to a desired outcome. Choice architecture therefore refers to the conscious and careful presentation of the different options available to a decision-maker, and interventions to change the manner of option presentation are called ‘nudges’.

Choice architecture draws upon findings from behavioural science to design the environments in which humans make decisions. For example, every policy initiative comes to the attention of citizens with a pre-chosen default status: you either check the box to donate your organs, or you don’t. And studies have shown that changing that default has significant effects on behaviour.

Enrollment in 401(k) pension plans in the U.S. is a prime example. Signing up for a 401(k) can be a hassle, and retirement seems far off in time for many people. By making enrollment the default, so that employees are automatically enrolled unless they opt out, participation rates have increased significantly. Between 2010 and 2014, the number of companies with an 80 per cent participation rate or higher rose by 14 per cent.

Elsewhere, the UK government sought to apply choice architecture to the decision its citizens made regarding organ-donor registration. The idea was that tweaks to processes and language — informed by behavioural science and tested for effectiveness — could significantly improve participation rates. This work was spearheaded by the UK’s Behavioural Insights Team (BIT), the world’s first behavioural insights unit within government. The most successful message out of those tested was the following: “If you needed an organ transplant, would you have one?” By invoking the concept of reciprocity, this simple question encourages potential donors to think a bit more about the decision. The result: BIT estimated that it would be able to add 100,000 names to the donor registry annually.

The Canadian province of Ontario has also succeeded in increasing organ-donor registration rates by harnessing two simple behavioural insights to design nudges: First, as seen in the UK, a message that evokes empathy was used to get potential donors to think a bit more about the decision; and second, simplifying the application form itself increased the likelihood that this greater thought would be converted to action.

The Role of Testing

The gold standard of applying insights from behavioural science involves the use of Randomized Controlled Trials (RCTs). While the name might sound intimidating, RCTs are no different from the trials used in the world of Medicine to test for the efficacy of new drugs, or the A/B tests used by online businesses to test layouts of webpages.

With an RCT, various options designed to encourage certain behaviours are tested amongst a sample population. This often entails very subtle changes to materials or to the context, such as creating multiple versions of an intervention (say, an application form, a brochure and an application process) and then trying all versions simultaneously.
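The mechanics of such a trial are simple enough to sketch in a few lines of code. The sketch below is purely illustrative and not drawn from any program described in this article: it simulates randomly assigning citizens to a status-quo letter or a nudged letter, then compares response rates with a standard two-proportion z-test. The function names, sample sizes and take-up rates (16 and 20 per cent) are invented for the example.

```python
import math
import random

def run_trial(n_per_arm, p_control, p_nudge, seed=42):
    """Simulate an RCT: each citizen is randomly assigned to the control
    or nudge arm, then 'responds' with that arm's response probability."""
    rng = random.Random(seed)
    control = sum(rng.random() < p_control for _ in range(n_per_arm))
    nudge = sum(rng.random() < p_nudge for _ in range(n_per_arm))
    return control, nudge

def two_proportion_z(successes_a, successes_b, n_per_arm):
    """Two-proportion z-statistic for equal-sized arms: how many standard
    errors apart the two observed response rates are."""
    p_a = successes_a / n_per_arm
    p_b = successes_b / n_per_arm
    pooled = (successes_a + successes_b) / (2 * n_per_arm)
    se = math.sqrt(2 * pooled * (1 - pooled) / n_per_arm)
    return (p_b - p_a) / se

n = 5000  # citizens per arm (hypothetical)
c, t = run_trial(n, p_control=0.16, p_nudge=0.20)
z = two_proportion_z(c, t, n)
print(f"control: {c/n:.1%}  nudge: {t/n:.1%}  z = {z:.2f}")
```

The design point worth noting is that with a few thousand citizens per arm, even a modest lift in take-up pushes the z-statistic well past the conventional 1.96 significance threshold — which is why a nudge can be evaluated on a small sample before a program is rolled out to the full population.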

One of the key strengths of applying behavioural insights is the ability to test nudges on a sample of real-life users, prior to the full implementation of a program. This allows an organization to receive valuable feedback on the effectiveness of its proposed changes and to gauge potential impact before widespread implementation.

Much like businesses, governments and non-profits should constantly iterate on their service offerings and procedures. Testing different nudges provides an outlet to review the status quo and look for new ways to improve interactions with the public. Few would argue with this logic of continuous improvement, but if this is the case, why have so few governments embraced this approach?

The answer is likely inertia and the need to change mindsets. Given that many policymakers have been conditioned to think about citizens as Econs, they are also conditioned to think that economic theory can predict the best way of creating behaviour change. Once a policy or program has been approved, the thought of having to test it for effectiveness in the field and designing a scientific experiment to do so may seem daunting, unnecessary or threatening.

The fact is, using behavioural science to uncover policy insights requires a certain degree of humility. Governments are often divided into silos, with subject experts operating in each area. The status quo expectation is that government branches inherently know how to improve or implement new programs because of their past experience, but when working for so many citizens — all of whom behave differently in different contexts — past experience does not necessarily predict future outcomes.

As a result, the dangers of not testing are significant. An example is the Scared Straight program of the 1970s in the U.S., whereby young people committing minor offences were taken to prisons and introduced to inmates, in hopes that the experience would scare them from committing future crimes. Little testing was conducted on the effectiveness of the program — which in hindsight, seems to have only normalized the idea of a life of crime with some of the young people. The result of implementing a flawed policy was disastrously costly: the Washington State Institute for Public Policy estimated in 2004 that every dollar spent on Scared Straight programs incurred a further crime cost of $203.51.

Challenges (and Solutions) in Conducting RCTs

Although RCTs are vastly beneficial in uncovering the effectiveness of proposed behavioural nudges, governments may face technical constraints, such as the availability of data.


Hasti Rahbar, research advisor at Employment and Social Development Canada (ESDC), told us that often, the data required for designing an appropriate nudge for a particular problem is not readily available — or is not even being compiled.

In British Columbia, where the provincial government recently launched its own behavioural unit, this was one of the key challenges it faced as it started on its initial roster of projects. For understandable privacy reasons, data is often held separately and securely, and this means that “the process to acquire data can take longer than anticipated,” says Heather Devine, Head of BC’s Behavioural Insights Group.

Governments may also struggle with the availability of ‘touchpoints’, or points of contact between a government and its citizens, which can include mail, phone and face-to-face interactions. Behaviourally-informed approaches can most easily be implemented at these touchpoints, but at the federal and provincial levels, there are limitations to the number and variety of touchpoints with citizens. Sometimes, the results of a proposed behavioural intervention cannot be analyzed simply because the touchpoints are not there.

Despite these limitations, the world of behavioural insights and choice architecture design offers a number of other avenues to test. If an RCT is not possible, perhaps a laboratory experiment, a series of design workshops or a natural experiment might be. As long as data is collected to compare multiple nudges with the status-quo (i.e., control) condition, governments can learn, iterate, adapt and launch tested interventions.

Even though governments the world over have started to embrace the power of applying behavioural insights to policy and the importance of testing, much more can be done to enable progress in this space. Two key areas of best practice are:

• Collaboration and joint initiatives between behavioural units; and

• Research by and consultation with academics.

In many cases, the problems encountered in government are not unique to a single level or branch of government, so collaboration on projects can lead to shared learning and greater overall improvement. In Canada, hubs at the provincial level are working on projects in tandem with hubs at the federal level, pooling their resources and knowledge. There is also vast potential in establishing hubs at the municipal level: Municipalities have access to many more readily-available touchpoints, opening up a wide variety of opportunities to incorporate and test behavioural insights as they relate to policy improvement.

Another trend worldwide is the central role that academic institutions can play. Behavioural units in the UK, U.S. and elsewhere have tapped into the expertise of the academic community to identify and develop a framework for problems, to design trials and to analyze, interpret and iterate on the learnings. In Canada, Behavioural Economics in Action at Rotman (BEAR) collaborates with the Ontario government; Rotman Professor Nina Mažar was appointed as a behavioural scientist at the World Bank; and she and one of the authors [Prof. Soman] serve as advisors to the federal government’s Innovation Hub at the Privy Council Office.

The bottom line is this: Insights from Behavioural Economics can simplify procedures for citizens and better clarify what they are being asked to do and why they should do it. As the world becomes increasingly digital, governments could seek to add an additional channel of communication through mobile technologies, such as SMS. Behavioural insights can also help significantly in pressing policy areas such as poverty alleviation, education and public safety. In the end, by using approaches tailored to how citizens actually think and act — not how policymakers believe they should think and act — governments can provide better services at a lower cost.

In closing

Behavioural hubs in government are proving that innovation isn’t reserved for Silicon Valley or Fortune 500 companies. Along with better data and an improved ability to test, behavioural insights will play an increased role in improving policy and services to ensure a better life for every global citizen.

Dilip Soman is the Corus Chair in Communication Strategy and Professor of Marketing at the Rotman School of Management and co-director of Behavioural Economics in Action at Rotman (BEAR). He is the author of The Last Mile: Creating Social and Economic Value from Behavioural Insights (Rotman-UTP Publishing, 2015). Katie Chen is a research assistant at BEAR and a student at Western University. Neil Bendle is an Assistant Professor of Marketing at Western University’s Ivey School of Business. This article is based on a longer report entitled “Policy by Design,” available for download at rotman.utoronto.ca/facultyandresearch/researchcentres/bear
