Baltimore Sun Sunday

Seeking AI, both smart and fair

Program at Morgan works to stamp out biases in technology

By Jean Marbella

Your application for college or a mortgage loan. Whether you’re correctly diagnosed in the doctor’s office, make it onto the short list for a job interview or get a shot at parole.

That bias can enter into these often life-altering decisions is nothing new. But today, with artificial intelligence assisting everyone from college admission directors to parole boards, a group of researchers at Morgan State University says the potential for racial, gender and other discrimination is amplified by magnitudes.

“You automate the bias, you multiply and expand the bias,” said Gabriella Waters, a director at a Morgan State center seeking to prevent just that. “If you’re doing something wrong, it’s going to do it in a big way.”

Waters directs operations and is a researcher for the Baltimore university’s Center for Equitable Artificial Intelligence and Machine Learning Systems, or CEAMLS for short. Pronounced “seamless,” it indeed brings together specialists from across disciplines ranging from engineering to philosophy with the goal of harnessing the power of artificial intelligence while ensuring it doesn’t introduce or spread bias.

AI is a catchall phrase for systems that can process large amounts of data quickly and, mimicking human cognitive functions such as detecting patterns, predict outcomes and recommend decisions.

But therein lies both its benefits and pitfalls: as data points are introduced, so, too, can bias enter in. Facial recognition systems were found more likely to misidentify Black and Asian people, for example, and Amazon dumped a recruiting program that favored male over female applicants.

Bias also cropped up in an algorithm used to assess the relative sickness of patients, and thus the level of treatment they should receive, because it was based on the amount of previous spending on health care — meaning Black people, who are more likely to have lower incomes and less access to care to begin with, were erroneously scored as healthier than they actually were.

Don’t blame the machines, though. They can only do what they do with what they’re given.

“It’s human beings that are the source of the data sets being correlated,” Waters said. “Not all of this is intentional. It’s just human nature.”

Data can “obscure the actual truths,” she said. You might find that ice cream sales are high in areas where a lot of shark attacks occur, Waters said, but that, of course, doesn’t mean one causes the other.
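Waters’ shark-attack example is the classic illustration of a confounding variable: a hidden third factor (say, hot weather drawing people to beaches and ice cream stands alike) can make two unrelated quantities move together. A short simulation — a hypothetical sketch, not from the article — shows how strongly such a spurious correlation can register:

```python
import random

# Hypothetical illustration of Waters' ice-cream/shark-attack example:
# a hidden confounder (daily temperature) drives both variables, so they
# correlate strongly even though neither causes the other.
random.seed(0)
temps = [random.uniform(10, 35) for _ in range(1000)]      # daily temperature
ice_cream = [3.0 * t + random.gauss(0, 5) for t in temps]  # sales rise with heat
sharks = [0.1 * t + random.gauss(0, 0.5) for t in temps]   # more swimmers, more attacks

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Strongly positive, yet ice cream sales do not cause shark attacks —
# conditioning on temperature would make the relationship vanish.
print(corr(ice_cream, sharks))
```

An AI system trained on such data would happily "learn" that ice cream predicts shark attacks — which is exactly the kind of obscured truth Waters describes when biased or confounded data feeds an automated decision.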

The center at Morgan was created in July 2022 to find ways to address problems that already underlie existing AI systems, and create new technologies that avoid introducing bias.

As a historically Black university that has been boosting its research capacity in recent years, Morgan State is poised to put its own “stamp” on the AI field, said Kofi Nyarko, who is the CEAMLS director and a professor of electrical and computer engineering.

“Morgan has a unique position here,” Nyarko said. “Yes, we have the experts in machine learning that we can pull from the sciences.

“But also we have a mandate. We have a mission that seeks to not only advance the science, but make sure that we advance our community such that they are involved in that process and that advancement.”

Morgan State’s AI research has been fueled by an influx of public and private funding — by its calculations, nearly $18.5 million over the past 3½ years. Many of the grants come from federal agencies, including the Office of Naval Research, which gave the university $9 million, the National Science Foundation and the National Institutes of Health.

Throughout the state, efforts are underway to catch up with the burgeoning field of AI, tapping into its potential while working to guard against any unintended consequences.

The General Assembly and Democratic Gov. Wes Moore’s administration have both been delving into AI, seeking to understand how it can be used to improve state government services and ensure that its applications meet values such as equity, security and privacy.

That was part of the agenda of a Nov. 29 meeting of the General Assembly’s Joint Committee on Cybersecurity, Information Technology, and Biotechnology, where some of Moore’s newly appointed technology officials briefed state senators and delegates on the use of the rapidly advancing technology in state government.

“It’s all moving very fast,” said Nishant Shah, who in August was named Moore’s senior advisor for responsible AI. “We don’t know what we don’t know.”

Shah said he’ll be working to develop a set of AI principles and values that will serve as a “North Star” for procuring AI systems and monitoring them for any possible harm. State tech staff are also doing an inventory of AI already in use — “very little,” according to a survey that drew limited response this summer — and hoping to increase the knowledge and skills of personnel across the government, he said.

At Morgan, Nyarko said he is heartened by the amount of attention in the state and also federally on getting AI right. The White House, for example, issued an executive order in October on the safe and responsible use of the technology.

“There is a lot of momentum now, which is fantastic,” Nyarko said. “Are we there yet? No. Just as the technology evolves, the approach will have to evolve with it, but I think the conversati­ons are happening, which is great.”

Nyarko, who leads Morgan’s Data Engineering and Predictive Analytics (DEPA) Research Lab, is working on ways to monitor the performance of cloud-based systems and whether their behavior shifts depending on variables such as a person’s race or ethnicity. He’s also working on how to objectively measure the “very nebulous” concept of fairness — could there be a consensus within the industry, for example, on benchmarks that everyone would use to test their system’s performance?

“Think about going to the grocery store and picking up a package with a nutrition label on it,” Nyarko said. “It’s really clear when you pick it up you know what you’re getting.

“What would that look like for the AI model? … Pick up a product and flip it over, so to speak, metaphorically see what its strengths are, what its weaknesses are, in what areas what groups are impacted one way or the other.”

The center’s staff and students, ranging from undergrads to post-docs, are working on multiple projects: A child’s toy car is parked in one room, awaiting further work to make it self-driving. There are autonomous wheelchairs, being tested at Baltimore/Washington International Thurgood Marshall Airport, where hopefully one day they can be ordered like an Uber.

Waters, who directs the Cognitive and Neurodiversity AI Lab at Morgan, is working on applications to help in diagnosing autism and assist those with autism in developing skills. With much autism research based on a small pool, usually boys and particularly white boys, she is working on using AI to observe and track children of other racial and ethnic groups in their family settings, seeking to tease out cultural differences that may mask symptoms of autism.

She is also working on using augmented reality glasses and AI to develop individualized programs for those with autism. The glasses would put an overlay on the real environment, prompting and rewarding the wearer to be more vocal, for example, or using a cartoon character to point to a location they should go to, such as a bathroom.

While the center works on projects that could find their way onto the marketplace, it maintains its focus on providing, as its mission statement puts it, “thought leadership in the application of fair and unbiased technology.”

One only has to look at previous technologies that took unexpected turns from their original intent, said J. Phillip Honenberger, who joined the center from Morgan’s philosophy and religious studies department. He specializes in the intersection of philosophy and science, and sees the center’s work as an opportunity to get ahead of whatever unforeseen implications AI may have for our lives.

“Any socially disruptive technology almost never gets sufficient deliberation and reflection,” Honenberger said. “They hit the market and start to affect people’s lives before people really have a chance to think about what’s happening.

“Look at the way social media affected the political space,” Honenberger said. No one thought, he said, “We’re going to build this thing to connect people with their friends and family, and it’s going to change the outcome of elections, it’s going to lead to polarization … and disinformation and all the other negative effects.”

Technology tends to have a “reflection and deliberation deficit,” Honenberger said.

But, he said, that doesn’t mean innovation should be stifled because it might lead to unintended consequenc­es.

“The solution is to build ethical capacity, build reflective and deliberative capacity,” he said, “and that’s what we’re in the business of doing.”

JEAN MARBELLA/STAFF Gabriella Waters and Kofi Nyarko are part of a team poised to put a Morgan State “stamp” on the AI field.

JEAN MARBELLA/STAFF Gabriella Waters, director of operations and research at Morgan State University’s Center for Equitable Artificial Intelligence and Machine Learning Systems, shows augmented reality glasses to help those with autism improve skills.
