Technology

It’s highly debatable whether the artificial intelligence engines that online lenders typically use are capable of making credit decisions without inadvertent prejudices.

National Mortgage News, Front Page. By Penny Crosman

There are all sorts of legal and technical issues about how lending rules apply to the new breed of online lenders, but here’s a more fundamental one: How sure are they that their automated technology is color-blind?

Even if a company has the best intentions of following fair lending principles, it’s debatable whether the artificial intelligence engines that online lenders typically use, and that banks are just starting to deploy, are capable of making credit decisions without inadvertently favoring affluent sections over minority neighborhoods.

AI-based lending platforms analyze thousands of data points, including traditional and alternative credit bureau data, bank account records, social media streams and public records, and find patterns that indicate creditworthiness, propensity to default and likelihood of fraud. The machines could make credit decisions that end up redlining an area, even if they never receive addresses.
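In broad strokes, that pattern-finding step amounts to fitting a classifier over a wide feature matrix and letting it surface whatever signal it can. The sketch below is illustrative only, with synthetic data and a generic model, not any vendor’s actual pipeline:

```python
# Illustrative sketch of an ML credit scorer over many features.
# Data is synthetic and the model is generic; this is no vendor's real system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_applicants, n_features = 5000, 300               # hundreds of candidate signals
X = rng.normal(size=(n_applicants, n_features))
# synthetic "repaid" label driven by a couple of the features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n_applicants) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# The estimated probability of repayment drives the approve/decline call.
p_repay = model.predict_proba(X_test[:1])[0, 1]
print("approve" if p_repay > 0.6 else "decline", round(p_repay, 2))
```

The redlining risk arises when features like these quietly encode geography or demographics, even though no address is ever passed in.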

For instance, a system that considers college data could start recognizing that graduates of a particular school are a good credit risk, and those students may be from mostly privileged socioeconomic backgrounds.

“These are issues every lender has,” said Jim Moynes, vice president of risk management at Ford Motor Credit Co., which recently began testing ZestFinance’s software in its underwriting process but has not yet put it into production. “We have compliance processes today, and we’ll have to see how we adjust, if need be, those processes in the future of machine learning to make sure we stay where we are today: compliant.”

Joao Menano, the co-founder and CEO of James, a provider of AI-based online lending software to banks (until recently it was called CrowdProcess), pointed out that a lender might not consider age, gender or race in its underwriting now, but machine learning could learn that a data point that correlates with one of those factors is relevant to credit decisions.
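A crude version of the screen Menano is describing can be run before training: measure how strongly each candidate feature correlates with a protected attribute. This is a sketch with made-up column names and an arbitrary cutoff, not James’s actual method:

```python
# Sketch: flag features that may act as proxies for a protected attribute.
# Column names and the 0.3 cutoff are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
protected = rng.integers(0, 2, 1000)
df = pd.DataFrame({
    "zip_income_rank": 0.4 * protected + rng.random(1000),  # leaks group info
    "device_age_months": rng.random(1000),                   # unrelated noise
    "protected_attr": protected,
})

for col in ["zip_income_rank", "device_age_months"]:
    r = df[col].corr(df["protected_attr"])
    if abs(r) > 0.3:
        print(f"{col}: correlation {r:.2f} with protected attribute; review before use")
```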

“So how do you ensure fair lending? That’s the big question,” Menano said.

First of all, a company has to figure out how it defines discrimination, he said.

“If my bank is in a region where there are more black people than white, what is not discriminating?” Menano said. “Giving 50-50? Or giving according to whatever the population distribution is? These are the questions that are going to be all over the newspapers and pondered by regulators for the next five years. It’s very complex.”
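Menano’s two candidate definitions can be stated concretely. A minimal sketch with made-up numbers, comparing each group’s share of approvals against a 50-50 split and against the local population mix:

```python
# Sketch: two competing notions of "not discriminating," on made-up numbers.
approvals = {"group_a": 120, "group_b": 60}          # loans approved
applicants = {"group_a": 300, "group_b": 200}        # applications received
population_share = {"group_a": 0.6, "group_b": 0.4}  # local demographics

total_approved = sum(approvals.values())
for g in approvals:
    share = approvals[g] / total_approved
    rate = approvals[g] / applicants[g]
    print(f"{g}: {share:.0%} of approvals "
          f"(50-50 target: 50%, population target: {population_share[g]:.0%}); "
          f"approval rate {rate:.0%}")
```

Neither target is obviously right, which is exactly the regulatory puzzle Menano anticipates.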

James’s software helps bank clients avoid discrimination by applying a test generated by the Consumer Financial Protection Bureau to loan decisions.

“The CFPB has helpfully made some of their methods for evaluating discrimination available online, which has greatly helped with building prototypes with our U.S. clients in this field,” Menano said. “From a user point of view, the most important ability is to be able to self-diagnose for illegal discrimination, as the bias is often contained in the data, appearing through no fault of the risk officer.”
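The article does not say which CFPB method James applies. One widely used disparate-impact screen in this space is the adverse impact ratio, often paired with the four-fifths rule; the sketch below shows the arithmetic on invented numbers:

```python
# Sketch: adverse impact ratio, a common disparate-impact screen.
# Not necessarily the CFPB test James applies; numbers are invented.
def adverse_impact_ratio(approved_a, applied_a, approved_b, applied_b):
    """Ratio of group A's approval rate to group B's."""
    return (approved_a / applied_a) / (approved_b / applied_b)

ratio = adverse_impact_ratio(45, 100, 70, 100)
flag = "; below 0.80, flag for review" if ratio < 0.8 else ""
print(f"adverse impact ratio = {ratio:.2f}{flag}")
```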

The software also can adjust the acceptance threshold to ensure equal opportunity for different populations, and it can monitor and flag inconsistencies.
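In the fairness literature, “equal opportunity” typically means that applicants who would in fact repay are approved at the same rate across groups. Here is a sketch of per-group threshold tuning toward that goal, on synthetic data and assumed logic rather than James’s published algorithm:

```python
# Sketch: choose per-group score cutoffs that equalize the approval rate
# among applicants who actually repay (the true positive rate). Synthetic data.
import numpy as np

rng = np.random.default_rng(1)

def tpr(scores, repaid, cutoff):
    """Approval rate among the applicants who truly repaid."""
    return (scores[repaid == 1] >= cutoff).mean()

groups = {}
for name, shift in [("group_a", 0.1), ("group_b", -0.1)]:
    scores = np.clip(rng.normal(0.5 + shift, 0.15, 2000), 0, 1)
    repaid = (rng.random(2000) < scores).astype(int)
    groups[name] = (scores, repaid)

target = 0.80  # desired approval rate among true repayers
for name, (scores, repaid) in groups.items():
    cutoffs = np.linspace(0, 1, 101)
    best = min(cutoffs, key=lambda c: abs(tpr(scores, repaid, c) - target))
    print(f"{name}: cutoff {best:.2f}, TPR {tpr(scores, repaid, best):.2f}")
```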

Douglas Merrill, founder and CEO of ZestFinance, points out that humans make arbitrary assumptions about creditworthiness just as much as, if not more than, AI software.

“Any classifier you like is subject to inducing categories that you don’t want it to,” he said. “People do face-to-face categorization and machines induce categories.”

Banks tend to run annual tests on their loan portfolios to ensure their policies, practices and decisions are not having a disparate impact.

“It’s very painful,” Merrill said. “Some do it once a year, some do it only every time they’re examined, which is every couple of years. And it’s a horrible process. It starts with defining categories and having compliance lawyers analyze your current book and then go through every new loan, to make sure you haven’t triggered a problem for yourself.”

ZestFinance, he said, has a set of tools that can execute the same kinds of tests in real time and determine if the software has learned a classification that could negatively affect a protected category.
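The article doesn’t detail how ZestFinance’s tools work. One generic way to run such a check is to test whether the model’s own scores predict a protected attribute; if they do, the model has likely learned a proxy for it. A sketch on synthetic data:

```python
# Sketch of a generic proxy check: if loan scores predict the protected
# attribute well, the model may have learned it. Synthetic data only;
# this is not ZestFinance's actual tooling.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
protected = rng.integers(0, 2, 3000)
# scores that (deliberately, for the demo) leak group membership
scores = rng.normal(0.5 + 0.1 * protected, 0.1, 3000).reshape(-1, 1)

probe = LogisticRegression().fit(scores, protected)
auc = roc_auc_score(protected, probe.predict_proba(scores)[:, 1])
warn = "; possible proxy, investigate" if auc > 0.6 else ""
print(f"AUC of scores vs. protected attribute: {auc:.2f}{warn}")
```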

Credibly’s AI-based small-business lending platform sifts through thousands of data points and finds the 200 or 300 that are predictive, said Ryan Rosett, co-founder and CEO. Data that is not predictive is kicked out of the system.
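A minimal sketch of that winnowing, assumed rather than Credibly’s actual method: score every candidate feature for predictive power and keep only the strongest few hundred:

```python
# Sketch: discard features with no measurable predictive power.
# Synthetic data; not Credibly's actual selection pipeline.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 1000))            # ~a thousand candidate data points
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # only a handful actually matter

scores = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(scores)[::-1][:300]        # retain the ~300 most predictive
print(f"kept {len(keep)} of {X.shape[1]} features; top score {scores.max():.3f}")
```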

In one example of a predictive data source, Credibly receives New York restaurant ratings through an API connection with the New York City Health Department.

“If a restaurant is downgraded, upgraded or closed, it goes into our data feed,” Rosett said. “So we know if there was a B rating because there was spoiled chicken or whatever, so that would be an example of an alternative data source that’s somewhat predictive. We don’t want to lend them money if they’re in the business of selling food and were recently cited by the health department.”

Credibly also takes in Yelp data. “We’re not looking at the ratings, we’re looking to see if there’s a management change, or they’re closed; we’re searching for key words,” Rosett said. “That’s another example of where we try to scrape a certain amount of information that would then raise a flag, which would then hit our scoring model or trigger a human underwriter to evaluate it.”
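A toy version of that keyword-flagging step; the keyword list and the routing rule are illustrative assumptions, not Credibly’s actual logic:

```python
# Sketch: scan scraped review text for risk keywords and raise a flag.
# Keyword list and routing are illustrative assumptions.
RISK_KEYWORDS = {"under new management", "permanently closed", "health violation"}

def flag_review(text: str) -> bool:
    """Return True if any risk keyword appears in the text."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in RISK_KEYWORDS)

review = "Sad to see this spot permanently closed after the inspection."
if flag_review(review):
    print("flag raised: hit the scoring model or route to a human underwriter")
```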

Every credit decision is reviewed by an underwriter. “Some are faster than others, but there is verification, and there are checkpoints,” he said. The system produces reports on the segmentation of the loan decisions.

Bank partners in its senior credit facility are shown the underwriting models for their approval. “They have the right to review and approve any modifications we make,” Rosett said.

Credibly also measures disparate impact in its algorithm, using a method that mirrors the analysis used by banks like Citigroup, American Express and JPMorgan Chase, he said.

Regulators require that banks provide a clear reason for declining a loan. AI tools that discover patterns that indicate creditworthiness, or the lack of it, might take a circuitous path that does not necessarily lend itself to a crisp reason code. Vendors say they have tools to provide such reason codes, but regulators have yet to grant an official blessing to any of them.
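One common way to derive reason codes, sketched here as a general idea rather than any regulator-approved method, is to rank the features that pushed a given applicant’s score down the most:

```python
# Sketch: turn a linear model's per-feature contributions into reason codes.
# Feature names and weights are hypothetical.
import numpy as np

feature_names = ["credit_utilization", "months_since_delinquency", "account_age"]
weights = np.array([-1.2, 0.8, 0.5])        # model coefficients
applicant = np.array([0.9, -1.1, -0.4])     # standardized feature values

contributions = weights * applicant          # signed effect on the score
worst = np.argsort(contributions)[:2]        # the two most score-lowering factors
for i in worst:
    print(f"reason code: {feature_names[i]} lowered the score "
          f"by {abs(contributions[i]):.2f}")
```

With a nonlinear model the contributions are far harder to attribute, which is the “circuitous path” problem vendors are trying to solve.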

“Regulators love to know deterministic outcomes; they like to know that when you underwrote somebody, these were the three or four factors you used to determine it,” said Michael Abbott, digital lead for Accenture Financial Services. “When you apply AI and machine learning, you can’t describe exactly the factors, and the factors may change over time, so one of the greatest potential uses will be underwriting, but it will require partnership with regulators. That’s the longest pole in this tent.”
