Grant to help create algorithms that promote fairness
BOULDER — A new University of Colorado study, funded with a three-year $930,000 grant from the National Science Foundation, aims to make recommendation algorithms fairer.
The study will look at recommender systems, personalized software that suggests items to individual users, and at ways to make those systems fairer by reducing disadvantages and biases.
Robin Burke, CU professor and chair of the Department of Information Science, said this technology is prevalent and powers many applications, including Netflix, Spotify, LinkedIn and others. These systems surface items individuals may be interested in, but they have a natural tendency to promote whatever is already popular, which can disadvantage less-popular items and the people behind them.
“It’s an interesting technical challenge certainly, but I think it’s also an obligation,” Burke said. “If you’re building this kind of technology, you want to make sure that it’s beneficial.”
Burke will be working alongside CU associate professor Amy Voida and Tulane University assistant professor Nick Mattei in a partnership with Kiva, a nonprofit microlending organization that crowd-funds loans for underserved communities. Burke, who has been working with Kiva for three to four years, said the study will involve working with the nonprofit to help the organization recommend loans to users more fairly.
Burke said the researchers will first try to understand fairness as a concept in the context of Kiva's mission, conducting interviews and focus groups to learn how the organization and its users think about it. The researchers will then translate those ideas into algorithms that respect fairness concerns while still providing personalized recommendations, and put the systems into practice.
Jessie Smith, a Ph.D. student at CU, said Kiva is an example of a high-stakes domain for machine learning because the platform can have real impacts on people. Smith will help conduct the interviews with Kiva employees and, in the later stages of the research, translate those responses into algorithm designs.
“If certain borrowers on the site aren’t recommended fairly, they could consistently become underfunded on the platform,” Smith wrote in an email. “Alternatively, if certain borrowers are over-recommended on the platform, they could consistently get funded over other equally deserving borrowers. The concerns for fair treatment within the machine learning application of recommendations on this platform are very real and have very real consequences to people all across the world.”
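The trade-off Smith describes, between recommending the most popular or highest-scoring loans and giving all borrowers a fair chance of being seen, can be illustrated with a small sketch. This is a hypothetical greedy re-ranker, not Kiva's or the researchers' actual algorithm; the loan data, group labels, and the `fairness_weight` penalty are all invented for illustration.

```python
# Hypothetical sketch: greedy re-ranking that trades off predicted
# relevance against spreading exposure across borrower groups.
# None of this reflects Kiva's real system; it only illustrates the idea.

def fair_rerank(candidates, k, fairness_weight=0.5):
    """candidates: list of (loan_id, group, relevance) tuples.
    Returns the top-k loan_ids, penalizing groups already shown."""
    shown = {}       # exposure count per group so far
    ranking = []
    pool = list(candidates)
    for _ in range(min(k, len(pool))):
        # Pick the item with the best relevance score minus a penalty
        # proportional to how often its group has already appeared.
        best = max(pool, key=lambda c: c[2] - fairness_weight * shown.get(c[1], 0))
        pool.remove(best)
        ranking.append(best[0])
        shown[best[1]] = shown.get(best[1], 0) + 1
    return ranking

loans = [  # (loan_id, region, predicted_relevance) -- made-up data
    ("L1", "A", 0.9), ("L2", "A", 0.8), ("L3", "A", 0.7),
    ("L4", "B", 0.6), ("L5", "B", 0.5),
]
print(fair_rerank(loans, k=3))                        # fairness-aware order
print(fair_rerank(loans, k=3, fairness_weight=0.0))   # pure relevance order
```

With the penalty switched off, the top three slots all go to region A's loans; with it on, a region-B loan breaks into the list even though its raw score is lower, which is the kind of exposure balancing the researchers aim to formalize.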
Smith said understanding fairness and being transparent about how algorithms impact people’s lives is important in creating a more equitable future.
“I hope this study will help create more fair outcomes for various stakeholders of Kiva’s microlending platform, and I also hope that it will act as a preliminary case study for operationalizing fairness in recommender systems in a real-world setting,” Smith said.