The New Yorker
In the fall of 2015, Rob Reich, a philosopher and a political scientist at Stanford, was chatting with a freshman during office hours. “I asked him what he planned to study,” Reich recalled recently. “He said, ‘Definitely computer science. I have some ideas for startups.’” In the spirit of small talk, Reich asked, What kind? “He looked at me with total earnestness and said, ‘To tell you that, I’d have to ask you to sign a nondisclosure agreement.’ ”
The student was hardly the first to treat a Stanford education as a prelude to an I.P.O. In 1939, two graduates started Hewlett-Packard; in 1996, two Ph.D. students founded Google. A recent study estimated that if all the companies established by Stanford alumni were to form their own country, it would have the world’s tenth-largest economy. For a long time, the default attitude on campus, where recruiters from Instagram and Palantir dispensed company swag outside the Gates Computer Science Building, was that Big Tech could do no wrong. That attitude became harder to justify after the post-2016 “techlash”—the widespread realization that along with the upsides of the digital revolution, such as more efficient burrito delivery, might come downsides, such as the unravelling of Western democracy.
To Reich, the university seemed ripe for a cultural transformation, or at least a nudge toward nuance. “You can’t become a doctor or a lawyer without being asked to engage seriously with the profession’s ethical duties and trade-offs,” he said. But it’s easy to pass through Stanford, as a disproportionate number of Silicon Valley titans do, without giving any real thought to tech ethics.
Reich approached two other Stanford professors—Mehran Sahami, a computer scientist and a former Google employee, and Jeremy Weinstein, a political scientist who served on President Obama’s National Security Council—about co-teaching an interdisciplinary course. It launched, in 2019, as Computers, Ethics, and Public Policy. (Headline on Stanford’s news site: “Tech ethics course urges students to move responsibly and think about things.”) The goal of the course was to infuse problem sets on policy dilemmas and philosophical debates with the brass tacks of coding. The first year, three hundred undergrads took the course.
Then COVID hit. Stanford, being Stanford, had a technological solution. A classroom was converted into a bespoke Zoom studio, with three cameras, a lighting rig, and a video wall, where students appeared in hundreds of rectangles. One of the cameras pointed at a sliding glass door from which professors emerged, like prizes on “Let’s Make a Deal,” when it was their turn to lecture. For a recent class, Reich entered first, removed his mask, and started to recap the assigned reading, “The Ones Who Walk Away from Omelas,” a 1973 short story by Ursula K. Le Guin, which he described as “a story about a Utopia, or possibly a dystopia.” Omelas is a city where all citizens live in splendor, but their comfort is made possible by the suffering of a child locked away in a dungeon. When the citizens learn this secret, some make their peace with it; others walk away in protest. Reich told his students, “What I would like you to discuss for three minutes, in a Zoom breakout room, is: Do you think the people who leave Omelas are heroes or cowards?” He then posted a Zoom poll, which included four options (heroes, cowards, both, neither). This generated a lively debate in the chat (Yuna Blajer de la Garza: “The tyranny of the majority!” Maya Ziv: “don’t sacrifice me on your altar of utilitarian utopia”), but the poll froze before the results could be finalized. “Good intentions tempered by technical affordances, as always happens,” Reich said.
Last month, Reich, Sahami, and Weinstein published a book titled “System Error: Where Big Tech Went Wrong and How We Can Reboot.” The introduction recounts the story of Joshua Browder, who “entered Stanford as a young, brilliant undergraduate in 2015.” After three months at Stanford, he invented a chatbot to help people get out of paying their parking tickets; within a year he was the C.E.O. of DoNotPay, an “online robot lawyer” startup now valued at more than two hundred million dollars. “He is not a bad person,” the professors write. “He just lives in a world where it is normal not to think twice about how new technology companies could create harmful effects”—such as encouraging citizens to stop funding public roads, which tend to crumble when people DoNotPay for them. Browder wrote on Twitter that Sahami and Reich were “my two favorite Stanford professors,” and that he was “surprised to learn they spent the entire first chapter bashing DoNotPay.”
“Thanks for engaging with the book,” Sahami responded.
“I think we should discuss it in person during one of the classes,” Browder wrote back. “I will bring the data.”