Baltimore Sun

Rush to regulate AI at federal and state levels lacks educator insight

- By Jessica A. Stansbury. Jessica A. Stansbury (jstansbury@ubalt.edu) is the director of teaching and learning excellence in the University of Baltimore’s Center for Excellence in Learning, Teaching and Technology. Her research focus includes innovative teaching.

The rush to regulate Artificial Intelligence is staggering, mirroring the rapid pace of technological advancement itself. In just under six months, from President Biden’s initial executive order on AI in education to proposed state legislation, the landscape has shifted dramatically, further complicating matters for both students and teachers who are considering how to properly use this technology in the classroom.

The urgency to regulate raises several questions: Are we rushing into constraints before truly comprehending this transformative technology? Is it hard to get AI right for schools? Yes and yes. Do we need to plunge in and wall off AI before we even understand it? This human-based contributor says no.

Let’s rewind to the genesis of AI regulation. President Biden’s executive order in October aimed to foster equity and innovation in education. Yet, it triggered apprehensions within academia, notably due to the absence of key stakeholders — educators, researchers and academics — in the policy discourse. This disconnect raises concerns about the alignment of regulations with classroom realities and their potential impact on marginalized groups. Despite these reservations, the order was enacted, leaving educators to navigate its implications.

We’ve seen this before, back in the days when stationary desktop computers were a “must-have” for every student, regardless of need. The current scenario echoes those past missteps. Crafting regulations without empirical evidence risks similar misalignments with real-world educational contexts. Just as we wouldn’t administer medicine without clinical trials, AI policies should not be created without the expertise of educators and researchers, with all stakeholder voices in the conversation.

So why weren’t students’ educational needs fully examined before several million heavyweight computers were shoved onto their desks? Did in-class computing solve the divide between technology haves and have-nots? There’s not much evidence to show that it did.

This one-size-fits-all approach is particularly problematic given AI’s rapid evolution, possibly resulting in outdated rules. Add in the distrust expressed by minority and underrepresented groups towards such policies — intensified due to historical research atrocities like the Tuskegee Experiment — and you have a strong level of skepticism against hastily implemented AI regulations. Equally important is ensuring equitable access to AI tools, avoiding exclusivity to resource-rich entities, and underscoring the need for diverse educator input to reflect varied student populations. The latter is a perspective too often overlooked across much of governmental policymaking.

Now, just as Biden’s order is settling in, we have proposed Maryland legislation about Artificial Intelligence. House Bill 1271 would further intensify the complexities, potential pitfalls and concerns of governing policies. The bill’s requirement for state units to report AI tool procurements over $1,000, along with additional inventories and assessments, would pose significant challenges. It potentially affects faculty and students’ intellectual property rights and creates ambiguities for lower-cost AI tools vital for equitable education.

Add in the bill’s provision for an “Artificial Intelligence Subcabinet,” its potential administrative burdens, and the lack of educator involvement in crafting these policies, and you have several contentious items now sitting on lawmakers’ desks.

Between the federal executive order and this bill, plus other bills no doubt making their way through statehouses across the country, there are numerous questions about the future of AI in education:

Where is the voice of teachers, especially those from minority and underrepresented backgrounds? Why are our insights, born from in-the-field experiences, not being heard? Why is the government loudly proclaiming its decisions on AI are what’s best for all children, in all schools? Where is the research? Are the regulators sure they’re on the right path?

As we stand at the edge of a technology-driven revolution in education, it is crucial to ground our policies in data, historical context, and the diverse insights of educators. Teaching should remain the focal point of technological advancements, ensuring that AI complements pedagogy rather than overshadowing it.

Reflect on the image of an obsolete computer gathering dust in a classroom corner. There was a time when that machine was supposed to be the future of education. The way we’re treating Artificial Intelligence for learning — either as a mystery that may yield miracles, or an unmitigated disaster — is a reminder of those fraught times. Just as we’ve learned from past mistakes, let’s approach AI in education with greater foresight and inclusivity of teachers’ insights this time.
