The Denver Post

Grant to help create algorithms that promote fairness

By Mackenzie Eldred, Special to the Daily Camera

BOULDER» A new University of Colorado study, funded with a three-year, $930,000 grant from the National Science Foundation, aims to make recommendation algorithms fairer.

The study will look at recommender systems, personalized systems that suggest items to individual users, and ways to make those systems fairer in order to reduce disadvantage and bias.

Robin Burke, CU professor and chair of the Information Science Department, said this technology is prevalent and is used in many applications, including Netflix, Spotify, LinkedIn and others. These systems find items individuals may be interested in, but they have a natural tendency to promote whatever is already popular, which can disadvantage less popular items and the people behind them.

“It’s an interesting technical challenge certainly, but I think it’s also an obligation,” Burke said. “If you’re building this kind of technology, you want to make sure that it’s beneficial.”

Burke will be working alongside CU associate professor Amy Voida and Tulane University assistant professor Nick Mattei in a partnership with Kiva, a nonprofit microlending organization that crowdfunds loans for underserved communities. Burke, who has been working with Kiva for three to four years, said the study will involve working with the nonprofit to help the organization recommend loans to users more fairly.

Burke said the researchers will first try to understand fairness as a concept in the context of Kiva’s mission, conducting interviews and focus groups with the organization. The researchers will then turn those ideas into algorithms that address fairness concerns while still providing personalized recommendations, and put the systems into practice.

Jessie Smith, a Ph.D. student at CU, said Kiva is an example of a high-stakes domain for machine learning because the platform can have real impacts on people. Smith will help conduct the interviews with Kiva employees and translate those responses into algorithm design in the later stages of the research.

“If certain borrowers on the site aren’t recommended fairly, they could consistently become underfunded on the platform,” Smith wrote in an email. “Alternatively, if certain borrowers are over-recommended on the platform, they could consistently get funded over other equally deserving borrowers. The concerns for fair treatment within the machine learning application of recommendations on this platform are very real and have very real consequences to people all across the world.”

Smith said understanding fairness, and being transparent about how algorithms affect people’s lives, is important in creating a more equitable future.

“I hope this study will help create more fair outcomes for various stakeholders of Kiva’s microlending platform, and I also hope that it will act as a preliminary case study for operationalizing fairness in recommender systems in a real-world setting,” Smith said.
