Age, drought, rodents and neglect heighten flood danger in California
LOS ANGELES — The levee breach that left an entire California town underwater this past weekend is putting a spotlight on how the state’s vital flood control infrastructure is being weakened by age, drought, climate change, rodents and neglect — leaving scores of communities at risk.
On Friday night, the swollen Pajaro River burst through the worn-down levee, flooding the entire town of Pajaro and sending its roughly 3,000 residents into what officials are now estimating to be a multimonth-long exile. A second breach was reported Monday.
For decades, the levee was ignored by the federal government — never rising to the status of a fix-worthy project — despite repeated pleas, breaches, floods and even two deaths.
“Yeah, the money wasn’t there because the prioritization wasn’t there,” said Mark Strudley, executive director of the Pajaro Regional Flood Management Agency.
And as the communities and local government agencies begged for help and funding, the levee aged, eroded and, in some places, sank.
The situation is by no means unique to Pajaro. Experts say similar weaknesses plague levee systems across California and the nation.
As climate change threatens to intensify and exacerbate extreme weather events — such as flooding and even drought — the unease and desperation of residents and emergency responders in communities near these crumbling systems are growing.
“We all know that there’s a lot of economically disadvantaged communities that are built in natural disaster-prone areas,” Strudley said. “That’s just the very unfortunate way the planning and development process has worked over the past 100-plus years in the United States.”
‘A limited life span’
Throughout Northern California, the Central Valley and the Sacramento-San Joaquin River Delta, there are more than 13,000 miles of levees designed to hold back floodwaters, deliver drinking water, and protect homes, businesses and agriculture.
According to work by Farshid Vahedifard — a professor of civil engineering at the University of Mississippi — a high percentage were constructed by settlers in the mid- to late 19th century to protect agricultural lands from flooding.
“And they’ve been worn down, just like anything else,” Strudley said. “They have a limited life span.”
In most cases, Strudley and Vahedifard said, the levees were built with poorly compacted, unengineered mixtures of sandy, clayey and organic soils, material “that was scraped up out of the riverbed and used for fill to build the levees,” Strudley said.
In addition, they’ve suffered the wear and tear of time, rodents, seismic events and drought.
“These things leak long before they get overtopped,” said Strudley, noting that one of the biggest problems is burrowing animals.
In 2011, the California Department of Water Resources examined Northern California’s levee system. The evaluation considered about 1,800 miles of earthworks throughout the Sacramento and San Joaquin River basins and found that more than half of the levees were what officials considered “high hazard” — indicating they were in danger of failing during an earthquake or flood event.
And that was before the megadrought, which dried out soils within the levees and underneath them — causing the structures to weaken and lose strength, Vahedifard said.
Drought also hastened subsidence as water districts and users siphoned water from underground aquifers, depressing and sinking the land above.
Not a priority
But until recently, the federal government and the U.S. Army Corps of Engineers did not see these systems — particularly if they were in marginalized or economically disadvantaged communities — as a priority.
“The federal funding system for levees and other life protection systems completely disadvantages federally disadvantaged communities, systemically,” said Zach Friend, a Santa Cruz County supervisor whose district encompasses Watsonville and the northern side of the Pajaro River Valley.
He said what happened to the levee in Pajaro could have been prevented had resources been provided to the community to rebuild the levee — something it had been requesting since the 1960s.
“The storms that came that blew out the levee ... are a five- to seven-year-interval storm,” he said. “So if our infrastructure can’t even withstand what really is a relatively regular occurrence storm, climate change — what we are seeing in the future and what it’s going to do to disadvantaged communities — is something beyond the pale.”
He said the cumulative effect of these storms and the “weather whiplash” between severe drought and severe storms “really tests the possibility of how you even build for and plan for that level of resilience toward communities that have been underinvested in for the past 100 years and that are starting at a net negative rebuild.”
‘Just the beginning’
The levee failure on the Pajaro River points to larger hazards that California has yet to address in many areas where communities are vulnerable, said Deirdre Des Jardins, an independent water researcher and advocate.
“Pajaro is just the beginning,” Des Jardins said.
Des Jardins has for years urged state and local officials to invest in flood protection infrastructure in areas that are at risk, and she has suggested that an effective climate adaptation strategy should focus on “measurable, actionable targets for protection of vulnerable populations.”
“You look at where to invest money to protect lives. And we’re not doing that. We don’t have quantifiable targets about protecting lives and protecting these vulnerable communities,” Des Jardins said.
In addition to rural communities like Pajaro, she said Stockton and neighboring cities face major flood risks due to their reliance on inadequate levees with known seepage problems.
As the federal government continues to re-evaluate how it invests in major infrastructure projects in economically disadvantaged communities, the cost of doing nothing will continue to mount.
“How much flood risk is there in California? A bunch. The reason is pretty simple,” said Jeffrey Mount, a senior fellow at the Public Policy Institute of California. “The economic consequences of even a modest flood are pretty high.”
He said this is especially the case in the L.A. Basin, where the growing likelihood of major flooding overlaps with the increasing value of surrounding properties.
“There is a long legacy of very bad flood management and land use choices along the L.A., San Gabriel and Santa Ana rivers,” Mount said. “That is an equation for high risk because eventually a flood will come and the economic costs will be immense.”
Local, state and federal officials have improved flood control infrastructure in the Sacramento area during the past two decades, which has reduced risk in the area, Mount said.
Still, many low-lying communities in the Central Valley face substantial flood risks. And many farms in and around the Delta sit below sea level, requiring levees to keep water out, Mount said.
“Our flood infrastructure is old,” Mount said, “and is designed for the hydrology of the past, not the future.
“This is going to be a major challenge going forward, particularly because we have increased the potential economic costs of flooding by our land use choices.”
Mount said January’s levee failures along the Cosumnes and Mokelumne rivers, which caused deadly flooding, point to larger hazards. “There are two kinds of levees: Those that have failed, and those that will fail,” he said.
Mount said he is now especially concerned about the immense amount of snowmelt runoff expected from the central and southern Sierra this spring.
Sometimes Rabbi Joshua Franklin knows exactly what he wants to talk about in his weekly Shabbat sermons — other times, not so much. It was on one of those not-so-much days on a cold afternoon in late December that the spiritual leader of the Jewish Center of the Hamptons decided to turn to artificial intelligence. Franklin, 38, who has dark wavy hair and a friendly vibe, knew that OpenAI’s new ChatGPT program could write sonnets in the style of Shakespeare and songs in the style of Taylor Swift. Now, he wondered if it could write a sermon in the style of a rabbi.
So he gave it a prompt: “Write a sermon, in the voice of a rabbi, about 1,000 words, connecting the Torah portion this week with the idea of intimacy and vulnerability, quoting Brené Brown” — the bestselling author and researcher known for her work on vulnerability, shame and empathy.
The result, which he shared that evening in the synagogue’s modern, blond wood sanctuary and later posted on Vimeo, was a coherent, if repetitive, talk that many in his congregation guessed had been crafted by famous rabbis.
“You’re clapping,” Franklin said after revealing that the sermon he’d just delivered was composed by a computer. “I’m terrified.”
As experiments like Franklin’s and the recent unsettling conversation between a tech columnist and Microsoft’s new chatbot demonstrate just how eerily humanlike some AI programs have become, religious thinkers and institutions are increasingly wading into the conversation around the ethical uses of a rapidly expanding technology that might one day develop a consciousness of its own — at least according to its Silicon Valley apostles. Calling upon a wide range of myths from Icarus to the Tower of Babel to the tale of the genie who can grant all our wishes with disastrous results, they are sounding an ancient warning about what happens when humans try to play God.
Before delivering the sermon ChatGPT had written, Rabbi Franklin told his congregation that what he was about to read had been plagiarized.
“Friends,” he began, reading from the AI-scripted sermon, “as we gather today to study the Torah portion of the week, Vayigash, let us consider the importance of developing intimacy in our relationship with others.”
The robotic sermon went on to relate the story of when Joseph, the son of Jacob, was reunited with his brothers after many years. Although they had betrayed him in the past, Joseph greeted them with warmth and love.
“By approaching them with openness and vulnerability he’s able to heal old wounds and create deeper, more meaningful bonds with his siblings,” Franklin read. “This is a powerful lesson for all of us.”
It was an adequate sermon, but not the one Franklin would have penned. “What was missed was the idea of how we find God in meaningful encounters with others,” he said later. “How community and relationship creates God in our lives.” In other words, a sense that the sermon had sprung from the lived experience of a yearning, questing, suffering human being rather than an algorithmic formula.
It’s possible that spiritual leaders may one day be replaced by robots as AI continues to improve — anything is possible.
But most theologians say other ethical concerns relating to AI are more pressing. They worry about growing financial inequality as automation eliminates thousands of jobs, and they question our ability to exercise free will as we increasingly rely on computer algorithms to make decisions for us in medicine, education, the judicial system and even how we drive our cars and what we watch on TV.
On a more existential level, the better AI becomes at mimicking human intelligence, the more it will call into question our understanding of sentience, consciousness, and what it means to be human. Do we want AI-driven robots to become our servants? Will they have feelings? And are we obliged to treat them as if they did?
These ethical dilemmas may feel new, but at their core they represent issues that faith traditions like Judaism, Islam and Christianity have grappled with for millenniums, religious leaders say.
While religious institutions have not always behaved ethically in the past, they have centuries of experience parsing moral conundrums through the lens of their own belief systems, said the Rev. James Keenan, a Catholic theologian at Boston College.
“There are certain ways you can say all these great traditions are problematic, but they also have their insights and wisdom,” he said. “They have a history behind them that is worth tapping into.”
Since the earliest days of AI research in the 1950s, the desire to create a humanlike intelligence has been compared to the legend of the golem, a mythical creature from Jewish folklore, created by powerful rabbis from mud and magic to do its master’s bidding. The most famous golem is the one allegedly made by the 16th century Rabbi Judah Loew ben Bezalel of Prague to protect the Jewish people from antisemitic attacks. The golem also served as an inspiration for Mary Shelley’s Frankenstein.
For centuries, the idea of an animate creature made by man and lacking a divine spark or a soul has been part of the Jewish imagination. Rabbis have argued over whether a golem can be considered a person, if it could be counted in a minyan (the quorum of 10 men required for traditional Jewish public prayer), if it could be killed, and how it should be treated.
From these rabbinic discussions, an ethical stance on artificial intelligence emerged long before computers were invented, said Nachson Goltz, a law professor at Edith Cowan University in Australia, who has written about the Jewish perspective on AI. While it is considered permissible to create artificial entities to assist us in our tasks, “we must remember our responsibility to keep control over them, and not the other way around,” he wrote.
Rabbi Eliezer Simcha Weiss, a member of the Chief Rabbinate Council of Israel, echoed this idea in a recent speech. “In every story of the golem, the golem is finally destroyed or dismantled,” he said. “In other words, the lesson the rabbis are teaching is that anything man makes has to be controlled by man.”
The rabbis also concluded that while a golem could not be considered a full person, it was still important to treat it with respect.
“The way we treat these things impacts us,” Goltz said. “The way we treat them determines the development of our own characters and sets the future course of our own exercise of moral agency.”
Another cautionary tale from Jewish and Muslim folklore revolves around the djinn, a nonhuman entity made of smokeless fire, that can occasionally be bound by humans and chained to their will. This is the origin of the story of the genie who can grant us anything we want, but cannot be put back in the bottle.
“The stories of the genie are an example of what happens when you ask a nonhuman to grant human wishes,” said Damien Williams, a professor of philosophy and data science at the University of North Carolina at Charlotte. “What comes out the other side seems shocking and punitive, but if you actually trace it back, they are simply granting those desires to the fullest extent of their logical implications.”
Islam provides another ethical lens through which to look at AI development. A legal maxim of Islamic jurisprudence states that repelling harm always has priority over the procurement of benefits. From this point of view, a technology that helps some people but puts others out of a job would be deemed unethical.
“Most of these technologies are being designed and deployed in many cases for the sake of it, and the harms that accrue are sometimes probabilistic,” said Junaid Qadir, a professor of electrical engineering at Qatar University who organized a conference on Islamic ethics and AI. “We don’t know what it will be; technology has its own unintended effects.”
Overall, Islamic tradition encourages a cautious approach to new technology and its uses, said Aasim Padela, a professor of emergency medicine and bioethics at the Medical College of Wisconsin.
“Things that try to make you rival God are not thought of as a purpose to pursue,” he said. “Trying to seek immortality through a brain transfer, or to make a better body than the one you’ve got, those impulses are to be checked. Immortality is in the afterlife, not here.”
“The Rule of St. Benedict,” a book written in the 6th century as a guide to monastic life, offers an answer to questions about how we can ethically interact with AI, both now and in the future when we might encounter robots with human features, said Noreen Herzfeld, professor of theology and computer science at St. John’s University and the College of St. Benedict in Minnesota.
In the section of the book addressing the cellarer — the person in charge of the monastery’s provisions — St. Benedict tells the cellarer to treat everyone who comes to him with a kind word, and to treat all the inanimate objects in his storehouse “as if they were consecrated vessels of the altar.”
“To me that is something we can apply to AI,” Herzfeld said. “People always come first, but we must treat AI with respect, with care, because all earthly things should be treated with respect. The way you treat things is part of what informs your own character, and informs how you treat the Earth and other human beings.”
The Catholic Church has been especially vocal in the push for an ethics of AI that benefits humanity, centers human dignity and that does not have as its sole goal greater profit or the gradual replacement of people in the workplace.
“Indeed, if technological progress increases inequality, it is not true progress,” Pope Francis said in a November 2020 video announcing his prayer intention that robotics and artificial intelligence may always serve humankind. The Vatican’s goal is not to slow down the development of artificial intelligence, but the church does believe caution is essential, said Paolo Benanti, a Franciscan monk and one of the Pope’s chief advisors on new technology.
“On the one hand we do not want to limit any of the transformative impulses that can lead to great results for humanity; on the other hand, we know that all transformations need to have a direction,” he wrote in an email. “We have to be aware that if AI is not well managed, it could lead to dangerous or undesirable transformations.”
To that end, Vatican leaders helped craft the Rome Call for AI Ethics, a pledge first signed in 2020 by representatives for the Pontifical Academy for Life, IBM, Microsoft and the Italian Ministry of Innovation among others to champion the creation of AI technologies that are transparent, inclusive, and impartial. On Jan. 10, leaders from Jewish and Islamic communities gathered at the Vatican to add their signatures as well.
Asking technology companies to prioritize humanitarian goals rather than corporate interests may feel like an unlikely proposition, but the influence of the religious hierarchy on AI ethics shouldn’t be underestimated, said Beth Singler, professor of digital religions at the University of Zurich.
“It can help the masses of believers to think critically and use their voices,” she said. “The more the conversation is had by significant charismatic voices like the Pope, it will only increase the possibility that people can, from a grassroots level, appreciate what’s going on and do something about it.”
Benanti agreed.
“The billions of believers who inhabit the planet can be a tremendous force for turning these values into something concrete in the development and application of AI,” he said.
As for Franklin, the rabbi in the Hamptons, he said that his experiment with ChatGPT has ultimately left him feeling that the rise of AI could have an upside for humanity.
While artificial intelligence may be able to mimic our words, and even read our emotions, what it lacks is the ability to feel our emotions, understand our pain on a physical level, and connect deeply with others, he said.
“Compassion, love, empathy, that’s what we do best,” he said. “I think that ChatGPT will force us to hone those skills and become, God willing, more human.”