Thought Leader Interview:

Richard Thaler

Rotman Management Magazine, by Karen Christensen

You have described the goal of your research over the last 40 years as follows: “To introduce humans into Economics.” Please explain.

The truth is, the people who populate Economics textbooks bear very little resemblance to the humans we interact with on a daily basis. Standard economic models describe people who are as smart as the smartest economist, who are not affected by emotion, and who have no issues with self-control. That’s ‘Homo Economicus’; I call them ‘Econs’ for short — and I truly don’t know anybody like that. In reality, we do not have perfect willpower and we don’t always choose what is best for us — which is why obesity and insufficient retirement savings are so common today.

You have said that Daniel Kahneman and Amos Tversky’s Prospect Theory provides a ‘template’ for the type of theories we need today. How so?

As indicated, we struggle to determine what is best for us in the long run — and then, we struggle to have the willpower to implement that choice — especially if it entails delayed gratification. We sorely need economic theories that account for this, and the first such theory was Prospect Theory.

Traditionally, economists believed that every time we make a choice, the net effect of the gains and losses involved in that choice is somehow combined in our head to calculate whether a particular choice is desirable or not. However, Prospect Theory states that losses and gains are valued very differently by people — and this affects our decisions.

Simply put, we give losses far more weight than gains. So, if you were to gain $100 from one transaction but lose $80 in another, you would end up feeling worse off, even though you are $20 ahead. When it came out [in 1979], the great thing about Prospect Theory was that it proved that you could take a scientific approach to human behaviour.
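The asymmetry Thaler describes can be made concrete with the Prospect Theory value function. This is a minimal sketch, not something from the interview itself; it assumes the commonly cited Tversky and Kahneman (1992) parameter estimates (alpha of about 0.88, a loss-aversion coefficient of about 2.25), used here purely for illustration:

```python
# Illustrative Prospect Theory value function. The parameters are the
# oft-cited Tversky & Kahneman (1992) estimates, assumed for this sketch.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss of size x."""
    if x >= 0:
        return x ** alpha           # gains are valued concavely
    return -lam * (-x) ** alpha     # losses loom larger than equal gains

# Thaler's example: gain $100 in one transaction, lose $80 in another.
net_feeling = value(100) + value(-80)
print(f"Objective outcome: +$20; subjective value: {net_feeling:.1f}")
# The subjective value comes out negative, even though the wallet is $20 ahead.
```

With any loss-aversion coefficient above 100/80 raised to suitable curvature, the $80 loss outweighs the $100 gain subjectively, which is the point of the example.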

Describe how Daniel Kahneman — who went on to be named a Nobel Laureate in Economic Sciences — became your mentor.

I first came across his work with the late Amos Tversky back in 1975. At the time, they were living in Israel, but I found out that they were planning to spend a year at Stanford in 1977-78 — so I made it my business to get a posting there. I begged and pleaded, and somehow managed to get a research grant to fund my visit.

When Daniel and Amos arrived, I was already there — ready to pester them. Kahneman’s office was just up the hill from mine, so I made it a habit to wander up there to chat with him, and would often find Tversky there, as well. We had a glorious year together: They taught me Psychology and I taught them Economics, and it turned into a 40-year friendship. Sadly, Amos passed away in 1996.

Since Kahneman and Tversky’s work on biases like ‘availability’, ‘representativeness’ and ‘anchoring’, a long list of other biases has been identified (see page 17 for a sampling). You have called this “both a blessing and a curse”. Please explain.

It is a blessing in that every bias provides a small glimpse into how our minds work. To be clear, it was never Amos and Danny’s intention to suggest that people are stupid: They always said that they liked to study errors in judgment for the same reason some people study optical illusions: Because they teach us something about human perception.

The curse in having such a long list of biases is that it leads some people to think they can explain anything by ‘cherry-picking’ a bias to fit the facts. But that is not what behavioural science is about. It is a science: It’s about making predictions and testing them.

What is a ‘choice architect’?

A choice architect is anyone who has responsibility for organizing the context in which people make decisions. They create an environment that provides individuals the freedom to choose, but they still influence people’s behaviour. For instance, we advise governments on how to use choice architecture to help make citizens’ lives healthier or better in some way. But, of course, choice architecture can also be used to take advantage of people. No matter what industry you work in, if you indirectly influence the choices other people make, you are a choice architect.

Describe the ongoing battle between our two selves: The Planner and the Doer.

As Kahneman describes in Thinking, Fast and Slow, humans behave as if there were two distinct systems in their brains: One is automatic and the other is reflective. I have used a similar framework to describe how people deal with problems of self-control. In my model there is a long-sighted, reflective ‘Planner’ and a short-sighted, impulsive ‘Doer’. When the Doer sees something he likes, he grabs it. The Planner, on the other hand, is the part of you that thinks ahead and budgets your resources appropriately. To-do lists, grocery lists and alarm clocks are examples of our Planner taking steps to control the actions of our Doer.

Unfortunately, the Planner doesn’t always win. One of the goals of my work has been to teach people how to alter their environments to give themselves the best chance to make good decisions for the long run.

What is your favourite example of successful nudging?

In my view, the domain in which nudging — and Behavioural Economics in general — has had the greatest impact to date is in the design of defined-contribution savings plans. It used to be that people had to fill out a pile of forms, but with default enrollments, they only have to fill out a form if they do not want to enroll. This has basically solved the enrollment problem: Opt-out rates are now very low — around 10 per cent.

When this started to happen, it was great, but we found that the plans were auto-enrolling people at very low savings rates — in the U.S., often at a rate of just three per cent. As a way to nudge people to increase their savings rates, Shlomo Benartzi and I introduced a plan called ‘Save More Tomorrow’. Under this plan, workers are offered the option to increase their savings rate at a later date — ideally, when they get their next raise. As such, once an employee enrolls in the plan, her savings rate continues to increase until she reaches some cap — or opts out. In our first study of this approach, savings rates more than tripled in three years.

The Save More Tomorrow plan is a collection of what I like to call ‘supposedly-irrelevant factors’ (SIFs) — things that standard economic theory says should not influence choices: It should not matter that the savings rate is increased in a few months rather than right now; nor that the increases are linked to pay increases; nor that the default is to stay in the plan; but of course, all of these features matter. Putting off the increase in saving to the future helps those who are present-biased; linking to increases in pay mitigates loss aversion; and making staying in the plan the default puts status quo bias to good use. More than half of large plans around the world now use automatic enrollment and automatic escalation.
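The escalation mechanics just described can be sketched in a few lines of code. This is an illustrative sketch only: the numbers (a 3 per cent starting rate, 2-point steps at each raise, a 15 per cent cap) are hypothetical defaults chosen for clarity, not the plan's actual parameters:

```python
# A minimal sketch of 'Save More Tomorrow'-style automatic escalation.
# All numeric defaults here are hypothetical illustrations; real plans vary.

def escalate_savings_rate(start=3.0, step=2.0, cap=15.0,
                          raises=6, opted_out_after=None):
    """Return the savings rate (per cent of pay) after each annual raise."""
    rate, path = start, [start]
    for year in range(1, raises + 1):
        if opted_out_after is not None and year > opted_out_after:
            break  # staying in is the default, but opting out is always allowed
        rate = min(rate + step, cap)  # each increase is tied to a pay raise
        path.append(rate)
    return path

print(escalate_savings_rate())                    # climbs until it hits the cap
print(escalate_savings_rate(opted_out_after=2))   # an early opt-out freezes the rate
```

The design point the sketch captures is that the participant decides once, and inertia then works in favour of saving rather than against it.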

When are nudges most needed?

People need a nudge most when a choice and its consequences are separated in time, and as a result, both investment goods and sinful goods are prime candidates for nudges. ‘Investment goods’ include dieting, exercise and flossing your teeth. In each case, the costs are borne immediately, but the benefits are delayed, and as a result, people tend to err on the side of doing too little. ‘Sinful goods’ include things like cigarettes, doughnuts and alcohol; basically, you get the pleasure now and suffer the consequences later. Nudges are also needed most when decisions are difficult or rare — especially when there is no prompt feedback.

My number-one mantra for choice architects is, ‘Make it easy’. If you want to get someone to do something, make it easy to do that thing. For instance, if you want them to eat healthier food, put healthier options in your cafeteria, make them easier to find, and make them taste better.

Many people believe that the financial markets are the most efficient of all markets. Are they?

I would agree — but they will never be perfectly efficient because, like all markets, they involve humans. As a result, there are periods when they get overly excited and periods when they get overly depressed. Also, the lack of predictability in stock market returns does not imply that stock prices are ‘correct’. The inference that ‘unpredictability implies rational prices’ is what Robert Shiller once called “one of the most remarkable errors in the history of economic thought.”

What about the Efficient Market Hypothesis (EMH)?

EMH has been essential to the history of research on financial markets, but the danger arises when people consider it to be literally true. If policymakers believe bubbles are impossible, for instance, they will fail to take appropriate steps to dampen them. Looking back at what was happening in 2007, it would have been appropriate to raise mortgage-lending requirements in cities where price-to-rental ratios seemed most frothy; instead, we saw a period in which lending requirements were unusually lax.

Of course, in the face of predictable human error, a firm can take one of two approaches: It can try to teach consumers about the costs of the error, or it can devise a strategy to exploit the error. The latter will almost always be more profitable. No one has ever gotten rich convincing people not to take out an unwise mortgage. My hope is that people ‘nudge for good’ — as I write in my book whenever someone asks me to sign it.

How do you define ‘mental accounting’, and what are some of your key findings in this area?

Simply put, mental accounting is the study of how people spend money. For me, this has involved observing the way people handle their financial affairs — and noticing all the various ways that they don’t do it like a professional would. For example, people often buy things simply because they seem to be a great deal — not because the product is going to provide enormous satisfaction. Deep in our closets, many of us have stuff that we bought at 50 per cent off — that we probably shouldn’t have bought even if it were free.

Another key finding is that people put their money into different categories; and then, they are often reluctant to spend money from one ‘pot’ when they need it for another pot. We behave differently in different circumstances: When we are on vacation, we easily spend money, even though it is the same money that will be scarce when we get home. Or, we might use different monthly budgets for groceries and eating at restaurants, for example, and constrain one kind of purchase when the budget runs out, while not constraining the other — even though both draw upon the same resource (i.e., our income). We also found that grocery shoppers spend less when paying with cash than with their debit or credit cards.

A recent study of CFOs found that they had no ability to predict stock market returns, and that they also had no self-awareness about this lack of predictive skill. What are the implications of this finding?

Sadly, it’s not clear what can be done about it: Overconfidence is a fact of life. That particular study asked the CFOs to forecast stock market returns and provide an 80 per cent confidence interval — meaning a range within which the correct answer should lie 80 per cent of the time. What they found is that the CFOs’ forecasts included the right answer only about one third of the time.
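The coverage check behind a study like this is simple to express. Here is a toy sketch with made-up forecast intervals and returns (not the study's actual data), showing how nominal 80 per cent intervals can capture the truth far less often when they are too narrow:

```python
# Toy illustration of interval coverage. The forecasts and returns below
# are invented for this sketch; only the method mirrors the study design.

def coverage(intervals, outcomes):
    """Fraction of (low, high) intervals that contain the realized outcome."""
    hits = sum(low <= actual <= high
               for (low, high), actual in zip(intervals, outcomes))
    return hits / len(outcomes)

# Hypothetical narrow, overconfident ranges vs. volatile realized returns.
forecast_intervals = [(2, 6), (3, 7), (0, 4), (4, 8), (1, 5), (2, 6)]
realized_returns   = [12, 5, -8, 20, 3, -4]

print(f"Nominal coverage: 80%, "
      f"empirical: {coverage(forecast_intervals, realized_returns):.0%}")
```

In this invented example the empirical coverage comes out to one third, echoing the roughly one-in-three hit rate Thaler cites; well-calibrated 80 per cent intervals would contain the outcome about four times in five.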

We see overconfident forecasts all the time. In last fall’s American election, some forecasters were saying that Hillary Clinton had a 99 per cent chance of winning — which, even without the benefit of hindsight, was a ridiculous forecast.

What is your message for the critics who feel that nudging is heavy-handed and unnecessary?

I would tell them that whether they like it or not, there is no avoiding nudging — or choice architecture. Take a school cafeteria, for example: Someone has to arrange and display the food; it can’t just be presented randomly — unless you want the kids to spend their entire time looking for something to eat. That is choice architecture in action, and it applies to just about everything in the economy and in society.

When nudges are used in the business arena, I do believe that companies need to be sure the choice architecture they use is transparent, and not a deliberate attempt to induce customers to make a poor choice. A key feature of responsible nudging is to make sure that all default options are easily reversible. If you are suggesting people enroll in a pension plan because you believe that they would do so if they had the knowledge and willpower to make a good choice — and if they can get out of it with one mouse click — then, little harm done. But if the user has to make three phone calls and then walk across town to find the office where they have to fill out a long form in order to undo something, that is not acceptable.

My Nudge co-author Cass Sunstein and I really hope that an understanding of choice architecture and the power of nudges will lead people to think of creative ways to improve human lives in all sorts of domains: Workplaces, corporate boards, universities and even families might be able to use, and benefit from, small exercises in what we call ‘libertarian paternalism’.

Looking ahead, you have said ‘Behavioural Macroeconomics’ is at the top of your wish list. Please explain.

In my view, Macroeconomics is stuck where Economics was 30 years ago, with models of Econs, even though the policies that emerge from it affect humans. Fortunately, this is beginning to change, and if we continue to apply Behavioural Economics tools to the study of Macroeconomics, we might be able to prevent the next global financial crisis. Or, at the very least, have a better sense of how to deal with it when it happens.

You have also said that the term Behavioural Economics will likely vanish from our lexicon one day. Why is that?


I have the hopeful thought that Economics will soon become as behavioural as it needs to be. There is still plenty of room for models of rational actors: We still need to figure out, if we want to maximize profits, the best way to do so — and traditional Economics is good at finding optimal solutions to such problems. But in lots of other situations, a more behavioural approach is required. I am looking forward to the day when economists just use the tools that seem best suited for the job at hand, and stop treating Behavioural Economics as a separate field.

Richard Thaler is the Charles R. Walgreen Distinguished Service Professor of Behavioural Science and Economics at the University of Chicago’s Booth School of Business. He is the author of Misbehaving: The Making of Behavioral Economics (W.W. Norton & Company, 2015) and the co-author of Nudge: Improving Decisions About Health, Wealth, and Happiness (Yale University Press, 2008).
