Break ranks to differentiate skill

Finweek English Edition - Property Compass - Ronald Surz, PPCA Inc

A RECENT, OFT-CITED STUDY found that consultants are actually worse at picking managers than do-it-yourself investors. Bergstresser, Chalmers and Tufano [2006], professors at Harvard Business School and the University of Oregon, documented that “financial intermediaries do a lousy job of allocating client assets to mutual funds”. Similarly, the press frequently observes that the average fund of hedge funds consistently underperforms the average hedge fund, and that the underperformance isn’t due solely to fees. Simply stated, outside observers find professionals haven’t delivered on their promise of finding skilful managers. The profession should heed that failure and take steps to change what’s clearly been a losing game.

When data contradicts theory there’s excitement about the potential to improve the theory. In this case it’s traditional benchmark theory that needs improvement. The data shows indices and peer groups haven’t succeeded in differentiating between winners and losers, and in this article we show why. But we don’t stop there: the literature is rife with documentation of the deficiencies of those benchmarks. This article describes how accurate benchmarks can be constructed from indices and how peer group biases can be overcome. Accurate benchmarking entails a lot of work but it’s well worth the effort. If the benchmark is wrong, all of the analytics are wrong ‒ so losers are hired and winners are fired. It’s time to break away from this loser’s game.

Indices

A benchmark establishes a goal for the investment manager. A reasonable goal is to earn a return that exceeds a low-cost, passive implementation of the manager’s investment approach, because the investor always has the choice of active or passive management. It’s important to recognise the distinction between indices and benchmarks. Indices are barometers of price changes in segments of the market. Benchmarks are passive alternatives to active management. Historically, common practice has been to use indices as benchmarks, but returns-based style analyses (RBSA) have shown most managers are best benchmarked as blends of styles that may not always be apparent in the index.

The user of RBSA must trust the “black box” ‒ because the regression can’t explain why that particular style blend is the best solution. In his article that introduced RBSA, Nobel laureate William Sharpe [1988] set forth recommendations for the style indices used in RBSA, known as the “style palette”: “It’s desirable that the selected asset classes be:
• Mutually exclusive (no class should overlap with another).
• Exhaustive (all securities should fit in the set of asset classes).
• Investable (it should be possible to replicate the return of each class at relatively low cost).
• Macro-consistent (the performance of the entire set should be replicable with some combination of asset classes).”
The mutually exclusive criterion addresses a statistical problem called multicollinearity, and the other criteria provide solid regressors for the style match. Because the commonly used style palettes fail to meet those criteria, the results can’t be relied upon. In other words, the way we typically use this excellent tool is flawed. Using indices that don’t meet Sharpe’s criteria is like using low-octane fuel in your high-performance car.

Though custom benchmarks developed through RBSA are more accurate than off-the-shelf indices, statisticians estimate it takes decades to develop confidence in a manager’s success at beating the benchmark, even one that’s customised. That’s because when custom benchmarks are used, our assessments about manager skill are conducted across time. An alternative is to perform that test in the cross-section of other active managers, which is the role of peer group comparisons.
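The “decades” claim can be made concrete with a rough back-of-the-envelope calculation (our illustration, with hypothetical numbers): to reject zero skill at a t-statistic of about 2, an annualised value added of alpha with tracking error sigma requires roughly (2 × sigma / alpha)² years of data.

```python
def years_for_significance(alpha, sigma, t_stat=2.0):
    """Approximate years of annual data needed for the observed
    average excess return to be t_stat standard errors above zero:
    t = alpha * sqrt(N) / sigma  =>  N = (t * sigma / alpha)^2."""
    return (t_stat * sigma / alpha) ** 2

# e.g. 1% annual value added with 4% tracking error
print(years_for_significance(0.01, 0.04))  # 64.0 years
```

Even a genuinely skilful manager adding 1% a year against 4% tracking error would need roughly 64 years of history ‒ hence the appeal of testing in the cross-section instead.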

Peer groups

Peer groups place performance into perspective by “ranking” it against similar portfolios. Accordingly, performance for even a short period can be adjudged significant if it ranks in the top of the distribution. When traditional peer groups are used, “manager skill” is tested by comparing performance with that of a group of portfolios that are presumably managed in a manner similar to the portfolio being evaluated, so the hypothesis is tested relative to the stock picks of similar professionals. That makes sense ‒ except that someone has to define “similar” and then collect data on the funds that fit that particular definition of similar.
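The ranking mechanics are simple; a minimal sketch with invented numbers (this is our illustration, not a provider’s methodology):

```python
def percentile_rank(fund_return, peer_returns):
    """Percentile of peers the fund beats or ties (100 = top of the group)."""
    beaten = sum(r <= fund_return for r in peer_returns)
    return 100.0 * beaten / len(peer_returns)

peers = [0.08, 0.05, 0.12, 0.03, 0.10]
print(percentile_rank(0.10, peers))  # beats or ties 4 of 5 -> 80.0
```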

Each peer group provider has its own definitions and its own collection of funds, so each provider has a different sample for the same investment mandate. “Large cap growth” is one set of funds in one provider’s peer group and another set of funds in the next provider’s peer group. Those sampling idiosyncrasies are the source of well-documented peer group biases, including composition, classification and survivor biases. For a detailed discussion of those biases, see Surz [2006].
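Those sampling idiosyncrasies are easy to demonstrate with hypothetical numbers: the same fund return can land at very different percentiles depending on which provider’s “large cap growth” sample it is ranked against.

```python
def rank_in_sample(fund_return, peers):
    """Percentile of peers the fund beats or ties (100 = top of the group)."""
    return 100.0 * sum(r <= fund_return for r in peers) / len(peers)

fund = 0.09
provider_a = [0.04, 0.06, 0.07, 0.11, 0.12]  # one provider's sample (hypothetical)
provider_b = [0.02, 0.03, 0.05, 0.06, 0.08]  # another provider's sample (hypothetical)
print(rank_in_sample(fund, provider_a))  # 60.0 -> middling
print(rank_in_sample(fund, provider_b))  # 100.0 -> top of the group
```

Same fund, same period, same mandate ‒ yet one provider calls it average and the other calls it a star, purely because of sample composition.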

Because of those biases, peer group comparisons are more likely to mislead than to inform and therefore they should be avoided. Given the common use of peer
