The Walrus

Crossing the Line

How international borders became testing grounds for underregulated surveillance technology

By Hilary Beaumont

Inside the Cameron County Detention Center, in Brownsville, Texas, inmates wearing orange jumpsuits peer from behind the glass of five group holding cells. The jail is about a twelve-minute drive from the banks of the winding Rio Grande, which marks the US–Mexico border. Facing issues like violence, economic pressure, and climate change, and looking for a better life, thousands of migrants cross the southern US border every month. If they’re arrested and don’t have documentation, the US deports them.

Every day around noon, people who were arrested the previous day and held at local city jails overnight are bused here and booked into the system. Guards fingerprint them, take their mugshots, and, since 2017, take one additional step: they scan their eyes into the Inmate Identification and Recognition System (IRIS). Developed and patented by Biometric Intelligence and Identification Technologies, or BI2 Technologies, a privately held corporation headquartered in Plymouth, Massachusetts, IRIS is more accurate and faster than fingerprinting — identifying inmates in approximately eight seconds or less.

Kassandra Flores is wearing a state-issued boxy maroon top, matching baggy pants too long for her legs, and beige sandals. She stands up straight with her arms at her sides and her toes at the edge of a piece of black and yellow tape on the tile floor. A guard tells her to look directly into the lens of a camera that will capture a high-resolution image of her irises so an algorithm can compare it against a vast, ever-growing database owned by law enforcement. The image will be stored there indefinitely, unless a judge orders the record expunged, and shared with law enforcement agencies across the country, including the FBI. Individual law enforcement agencies can also choose to share the data with Immigration and Customs Enforcement (ICE).

Cameron County’s use of this technology does not stem from state or federal regulations, statutes, or protocols — it is not part of any broader official policy. It was, rather, introduced at the behest of local officials. Omar Lucio, who was sheriff at the time I visited last spring, was the first along the US–Mexico border to start using IRIS. Another thirty sheriffs soon followed suit. That success, along with its popularity with other US law enforcement agencies, allowed BI2 Technologies to expand and ink a deal in June 2019 to make the technology available to more than 3,000 sheriffs across the US.

BI2 Technologies started out developing iris scanning as a tool to identify missing children. CEO and co-founder Sean Mullin tells me that law enforcement officials immediately started suggesting new uses for the technology; the next step was expanding it to locate missing seniors. Today, BI2 Technologies also sells an iOS and Android app called MORIS that allows for mobile access to IRIS, and a revenue-generating background-check system for sheriffs’ offices.

I volunteered to interact with IRIS to better understand how it works. I stood facing a mesh wall that separated me from a guard sitting behind a large screen. He adjusted the black camera to my eye height. My eyes were reflected in a thin, rectangular mirror above the lens. Red lights flashed on either side of the lens, then another light flashed green. A monitor typically available only to the jail staff showed two close-up greyscale photos of my eyes. The images had a cold, unhuman quality, like an X-ray.

Similar to facial-recognition technology, BI2 Technologies’ algorithm measures and analyzes the unique features of a person’s irises and checks them against the database. Mullin says that, because the human eye does not change over time the way a face does as it ages, iris scanning is the more accurate biometric tool. When the system finds a match, a person’s profile, including any mugshots and criminal history, flashes onto the screen. The algorithm tries to find my irises but doesn’t turn up anything. Still, my mind races with questions about where my eyes could have ended up: BI2 Technologies’ system feeds into a government database of information gathered from many sources in many places, not just Cameron County (that’s why it can find matches with records in other jurisdictions). The FBI? Homeland Security? Joe Elizardi, the lieutenant in charge of the jail’s booking and intake, assures me that my eye scans will not be kept in their system once this demonstration is over.

In recent years, and whether we realize it or not, biometric technologies such as face and iris recognition have crept into every facet of our lives. These technologies link people who would otherwise have public anonymity to detailed profiles of information about them, kept by everything from security companies to financial institutions. They are used to screen CCTV camera footage, for keyless entry in apartment buildings, and even in contactless banking. And now, increasingly, algorithms designed to recognize us are being used in border control. Canada has been researching and piloting facial recognition at our borders for a few years, but — at least based on publicly available information — we haven’t yet implemented it on as large a scale as the US has. Examining how these technologies are being used and how quickly they are proliferating at the southern US border is perhaps our best way of getting a glimpse of what may be in our own future — especially given that any American adoption of technology shapes not only Canada–US travel but, as the world learned after 9/11, international travel protocols.

As in the US, the use of new technologies in border control is underregulated in Canada, human rights experts say — and even law enforcement officials acknowledge that technology isn’t always covered within the scope of existing legislation. Disclosure of its use also varies from spotty to nonexistent. The departments and agencies that use AI, facial verification, and facial comparison in border control — the Canada Border Services Agency (CBSA) and Immigration, Refugees, and Citizenship Canada (IRCC) — are a black box. Journalists and academics have filed access to information requests to learn more about these practices but have found their efforts blocked or delayed indefinitely.

These powerful technologies can fly under the radar by design and often begin as pilot projects in both Canada and the US; as they become normalized, they rapidly expand. By keeping their implementation from public view, governments put lawyers, journalists, migrants, and the wider public on the back foot in the fight for privacy rights. For companies developing these tools, it’s a new gold rush.

The US has gathered biometric records of foreign nationals — including Canadians — as part of its entry/exit data system since 2004. Its Customs and Border Protection agency (CBP) is currently testing and deploying facial recognition across air, sea, and land travel. As of last May, over 7 million passengers departing the US by air had been biometrically verified with a facial-matching algorithm, the Traveler Verification Service.

By the end of last year, CBP had facial-comparison technology in use at twenty-seven locations, including fourteen ports of entry. A few days before I arrived in the US, one of these had been installed at the port of entry I was visiting. A sign disclosed this — sort of. It didn’t use the words “facial recognition” and had a far more standard-sounding description (“CBP is taking photographs of travelers entering the United States in order to verify your identity”). Way at the bottom, the sign indicated that US citizens could opt out. A majority of Canadians apparently have the choice to opt out as well; nobody advised me of this. The thing about these new technological screening systems is that, if you don’t have a choice or aren’t aware that you have one, they quickly become routine.

In 2019, there were about 30 million refugees and asylum seekers on the move worldwide, according to the UNHCR. Despite COVID-19’s temporary slowdown of border crossings around the world, global migration is projected to rise for decades due to conflict and climate change. International borders are spaces of reduced privacy expectations, making it difficult or impossible for people to retain privacy rights as they cross. That makes these areas ripe for experimentation with new surveillance technologies, and it means business is booming for tech companies. According to a July 2020 US Government Accountability Office report, from 2016 to 2019, the global facial-recognition market generated $3 to $5 billion (US) in revenue, and from 2022 to 2024, that revenue is projected to grow to $7 to $10 billion (US).

With increased demand and lack of regulation, more surveillance is appearing at international borders each day. In Jordan, refugees must have their irises scanned to receive monthly financial aid. Along the Mediterranean, the European Border and Coast Guard Agency has tested drone surveillance. Hungary, Latvia, and Greece piloted a faulty system called iBorderCtrl to scan people’s faces for signs of lying before they’re referred to a human border officer; it is unclear whether it will become more widespread or is still being used.

Canada has tested a “deception-detection system,” similar to iBorderCtrl, called the Automated Virtual Agent for Truth Assessment in Real Time, or AVATAR. Canada Border Services Agency employees tested AVATAR in March 2016. Eighty-two volunteers from government agencies and academic partners took part in the experiment, with half of them playing “imposters” and “smugglers,” which the study labelled “liars,” and the other half playing innocent travellers, referred to as “non-liars.” The system’s sensors recorded more than a million biometric and nonbiometric measurements for each person and spat out an assessment of guilt or innocence. The test showed that AVATAR was “better than a random guess” and better than humans at detecting “liars.” However, the study concluded, “results of this experiment may not represent real world results.” The report recommended “further testing in a variety of border control applications.” (A CBSA spokesperson told me the agency has not tested AVATAR beyond the 2018 report and is not currently considering using it on actual travellers.)

Canada is already using artificial intelligence to screen visa applications in what some observers, including the University of Toronto’s Citizen Lab research group, say is a possible breach of human rights. In 2018, Immigration, Refugees, and Citizenship Canada launched two pilot projects to help officers triage online Temporary Resident Visa applications from China and India. When I asked about the department’s use of AI, an IRCC spokesperson told me the technology analyzes data and recognizes patterns in applications to help distinguish between routine and complex cases. The former are put in a stream for faster processing while the latter are sent for more thorough review. “All final decisions on each application are made by an independent, well-trained visa officer,” the spokesperson said. “IRCC’s artificial intelligence is not used to render final decisions on visa applications.” IRCC says it is assessing the success of these pilot projects before it considers expanding their use. But, according to a September 2020 report by the University of Ottawa’s Canadian Internet Policy and Public Interest Clinic (CIPPIC), when governments use AI to screen applications, “false negatives can cast suspicion on asylum seekers, undermining their claims.” In a world first, the UK’s Home Office recently suspended its use of an AI tool in its visa-screening system following a legal complaint raising concerns about discrimination.

The end goal of facial recognition at borders, CIPPIC says, is for the technology to replace other travel documents — essentially, “Your face will be your passport.” This year, Canada and the Netherlands, along with consulting behemoth Accenture and the World Economic Forum (the NGO that runs the glitzy annual Davos conference), plan to launch what the group calls “the first ever passport-free pilot project between the two countries.” Called the Known Traveller Digital Identity, it’s a tech platform that uses facial recognition to identify travellers’ faces and match them to rich digital profiles that have a “trust score” based on a person’s verified information, including from their passport, driver’s licence, credit card, and their interactions with banks, hotels, medical providers, and schools. The program may be voluntary at first, but CIPPIC warned that, if it is used widely, “it may become effectively infeasible for citizens to opt out.”

Iris and facial recognition fall under biometrics, which the Office of the Privacy Commissioner describes as the automated use of “physical and behavioural attributes, such as facial features, voice patterns . . . or gait” to identify people. These technologies work like our brains do — we look at a person, our minds process their features, and we check them against our memory. With biometrics, mass amounts of data are captured and stored. There are two parts to the process: enrolment (when the data of a known person is stored in a reference database) and matching (when an algorithm compares a scan of an unknown person against the reference database). The algorithm finds likely matches and returns a result. The bigger and more diverse the database, the more successful the technology should be in returning a match.
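
The enrol-and-match loop described above is simple enough to sketch. What follows is a minimal, hypothetical Python illustration, assuming each scan has already been reduced to a numeric feature vector and that candidates are scored by cosine similarity; real iris systems use specialized encodings (such as binary iris codes compared by Hamming distance), which this sketch does not attempt, and it is not a description of BI2 Technologies’ actual software.

import numpy as np

# Reference database: person ID -> enrolled biometric template.
# (Illustrative only; the names and threshold here are assumptions.)
DATABASE = {}

def enrol(person_id, template):
    """Enrolment: store a known person's template in the reference database."""
    DATABASE[person_id] = template / np.linalg.norm(template)

def match(probe, threshold=0.9):
    """Matching: compare an unknown scan against every enrolled template
    and return the best candidate only if it clears the similarity threshold."""
    probe = probe / np.linalg.norm(probe)
    best_id, best_score = None, threshold
    for person_id, template in DATABASE.items():
        score = float(probe @ template)  # cosine similarity of unit vectors
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id  # None means no hit, as in the author's demonstration

The paragraph’s last sentence falls out of this structure: the more people enrolled, and the more varied their templates, the more likely a probe is to clear the threshold against someone.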

In addition to far-ranging privacy concerns, these technologies have been shown to be biased. In one case last January, Detroit police wrongfully arrested a Black man after a facial-recognition algorithm misidentified him. In another case, in 2019, facial recognition mistakenly identified a Brown University student as a suspect in Sri Lanka’s Easter Sunday bombings, which killed more than 250 people.

These are not random glitches. Studies have shown that facial-recognition algorithms are less accurate in identifying people of colour — an MIT and Stanford University analysis found an error rate of up to 0.8 percent for light-skinned men and up to 34.7 percent for dark-skinned women. The bias comes from the data used to train and assess the algorithms: this research also found that the data set one major tech company used to train its algorithm was over 83 percent white and over 77 percent male. The company claimed an accuracy rate of more than 97 percent; according to CIPPIC’s September 2020 report, even a 98 percent accuracy rate would result in thousands of false outcomes per day if applied to all travellers entering Canada. Almost certainly, based on the algorithms and databases currently available, these errors would be concentrated within certain demographic groups, targeting them for greater suspicion and scrutiny.
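
CIPPIC’s claim is easy to check with back-of-envelope arithmetic. The traveller volume below is my own illustrative assumption (CBSA handled on the order of 90 million travellers a year before the pandemic), not a figure from the report:

# Rough check of the "thousands of false outcomes per day" figure.
travellers_per_day = 250_000  # assumed daily volume, for illustration only
accuracy = 0.98               # the hypothetical 98 percent accuracy rate

false_outcomes = travellers_per_day * (1 - accuracy)
print(f"{false_outcomes:,.0f} potential false outcomes per day")  # 5,000

Even at an accuracy rate most vendors would envy, a two percent error applied to every traveller compounds into thousands of wrongly flagged or wrongly cleared people each day.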

If you’ve returned from abroad through Toronto Pearson International Airport in recent years, you’ve interacted with new passport scanners, or Primary Inspection Kiosks, that use facial verification to compare a traveller’s face with their passport. Internal CBSA communications obtained by the CBC through an access to information request suggest that these kiosks may refer people from countries including Iran and Jamaica for secondary inspection at higher rates.

Outside Sheriff Lucio’s office door is a display case of confiscated prison shivs. He welcomes me in and invites me to sit at his conference table, taking his seat at the head, Lieutenant Elizardi sitting to his right. Spanish is Lucio’s first language, and he speaks with an accent you’d find on either side of the border. His family moved to the area now called Texas seven generations ago, from Italy. Lucio sees himself as a trendsetter and, until his term ended in December, was eager to adopt more surveillance tech — at one point during our interview, he says it would be a good idea to implant tracking chips in babies when they’re born, to prevent human trafficking. “Technology changes every day,” Lucio tells me. “If you do not go ahead and go with the times, you stay behind.”

BI2 Technologies approached Lucio and other sheriffs in 2017 to try IRIS for free. Mullin, the CEO, told me the offer was well-intentioned — he believes border sheriffs don’t have the tools they need to do their jobs well. But he also acknowledges that it was a business decision: if the company could demonstrate IRIS was useful at the southern border, it might be adopted more broadly and outpace other iris-identification companies.

Within a few days of setting up the technology, Elizardi caught an alleged violent criminal; Lucio said he had been using fake identities to elude police. “He’s been captured four previous times with no results,” Lucio says. “But, using the IRIS, we found out he was wanted in Boston, Massachusetts, for human smuggling, narcotics, kidnapping, and murder. How’s that?” When the system identified the wanted man, Elizardi remembers, Cameron County started receiving calls from the FBI, Secret Service, and other police agencies. “We were ecstatic. We were like, Wow, we caught our first one!”

Lucio says it wouldn’t bother him to have his eyes scanned — he was fingerprinted when he first became a police officer. He argues that, if you haven’t done anything wrong, you have nothing to worry about. Lucio explains that he has an expectation of privacy inside his house, inside his bedroom, and for his family and children. When I ask him where he draws the line, he says he wouldn’t want someone tapping his phone or listening to his conversations. He says it’s a good thing that, in the US justice system, you need to have probable cause to get a judge to issue a warrant to tap your phone. “I’m a private person, okay? That’s the way I am. But the same token, by me being private, I respect other people’s privacies.”

I ask Flores, the woman I’d seen go through the scanning process at the Cameron County Detention Center, if the process invaded her privacy. “When you’re in jail, you have no privacy, so you have to do it,” she replies. “If you refuse, it’s just going to go worse for you.” If given the choice, she would have refused the iris scan. “Now they have everything about you, even your eyes.” But Lucio doesn’t think anyone in custody — which includes people who have not been convicted of a crime — should have a choice when it comes to IRIS.

Mullin argues that, since, unlike facial recognition, it’s hard to scan a person’s eyes covertly, tools like BI2 Technologies’ are more transparent and ethical. He also says that iris scanning does not suffer from the same biases in falsely identifying people of colour that facial recognition does. He is closely following the discussion around regulation of biometrics and AI: “Only technologies that fall within the constitution of both our federal government and the state should be used in any case. And all of these biometric technologies and the people that provide them and the people and the agencies that use them — I believe their intentions [are not] nefarious.”

He said it’s tough to strike the right balance when technology moves so quickly, and he believes human rights advocates play an important role in the debate. “It’s a difficult balancing, of the state legislatures . . . and at the federal level, to say, Okay, where do we draw the line here? Where do we legislate and implement exactly what the appropriate use of technology capabilities are?”

Founded in 1990, the Electronic Frontier Foundation is a nonprofit focused on defending civil liberties in the digital world. Saira Hussain, a staff attorney at the EFF, focuses on the intersection of racial justice and surveillance. Often, she says, new technologies are “tested on communities that are more vulnerable before they’re rolled out to the rest of the population.”

Hussain has abundant concerns about iris-scanning and facial-recognition tools. If people who are arrested are not told what will happen to their biometric data, it raises the question of whether they can meaningfully consent to its collection. (There have also allegedly been cases in New York in which people have been detained longer for refusing to have their eyes scanned.) And the use of this technology at border checkpoints means it will disproportionately affect racialized travellers and migrants. “It’s going to be individuals who are trying to flee from persecution and come into the United States, taking refuge,” Hussain says, “and so the people who are going to be affected are people of colour.”

Iris scans can be used not just to identify people but to track them, Hussain explains. Iris and face recognition could be integrated into CCTV networks — surveillance cameras that are now found everywhere from shopping malls to transit vehicles — to identify a person without their knowledge. The concern is mission creep: once biometric data is gathered, it can be used to identify people in other contexts, and there’s nothing individuals can do to monitor or stop it. (Mullin maintains that, while integrating iris scans with CCTV is theoretically possible, “In reality, it just doesn’t work.”)

“That’s something we hear again and again in the space of privacy,” Hussain says of the familiar argument that, if you’ve done nothing wrong, you have nothing to worry about. That flies in the face of any justice system that is “premised on the idea that you’re innocent until proven guilty,” she says. “So you’re flipping the equation the other way. You can say the same thing about [a police] agent sitting outside of your house all day every day, tracking your movements. ‘Well, if you don’t have anything to hide, what’s the big deal?’”

Applying for asylum is a process that is enshrined in international law. It allows people fleeing violence, political persecution, or human rights abuses to claim asylum simply by arriving at an international border. The border agency of the country a person arrives at is then supposed to allow them into the country while they present their case to a court and the legal process unfolds.

Although this legal process exists, there are many reasons why asylum seekers may not trust that it will result in a fair outcome, and researchers are learning that the increasing use of AI and biometrics as mechanisms for border control exacerbates this problem. Sam Chambers, a geographer at the University of Arizona, says the surveillance and tracking of migrants makes crossing the border more precarious. “It’s not just about privacy — it’s about life and death there at the border,” he tells me. Chambers explains that the growth of border surveillance, including face and iris recognition, fits into a policy known as “prevention through deterrence,” an Orwellian-sounding term that has existed since the Bill Clinton administration. One example of the policy is the Secure Border Initiative Network, or SBInet, created under George W. Bush and eventually shut down under Barack Obama: the system included sensor towers, radar, long-range cameras, thermal imaging, and motion sensors, all working in concert to detect, analyze, and categorize unauthorized border crossings.

Chambers has published studies demonstrating that SBInet led to a significant increase in migrant deaths in the unrelenting Arizona desert because people were forced to take more dangerous routes to avoid surveillance towers and checkpoints. Between 2002 and 2016, the mortality rate of unauthorized migrants in Pima County grew from about 43 deaths per 100,000 apprehensions to about 220 deaths per 100,000 apprehensions — five times the death rate.

“That’s the way the whole system is set up,” he says. “Even though it’s called prevention through deterrence, the thing is, it’s not really preventing people from crossing, and it’s not deterring people from crossing — they’re just taking more risk to do this. And that’s the case with crossing the river or, in the case of southern Arizona, traversing the Sonoran Desert for an extended period of time.”

While SBInet was cancelled, private companies are building on the same basic idea to spot more undocumented migrants. For instance, Anduril, a company started by former Facebook employee Palmer Luckey, has erected sentry towers along the southern US border in Texas and California that run its AI system, Lattice, to recognize “threats,” including people and vehicles, crossing the border.

Chambers disapproves of using innovations like Lattice and IRIS to more quickly identify and deport people. “That’s a whole other reason [for migrants] to stay hidden. . . . If you had this happen and you have to try crossing again, you’re in a database somewhere, and if there’s some reason you’re found again, they can deport you more easily.”

Actually being granted asylum is rare in the US. According to a data research group at Syracuse University, under Donald Trump’s administration, 69,333 people were placed in Migrant Protection Protocols that kept them waiting in Mexico for asylum; only 615 were granted relief — less than 1 percent. Most migrants aren’t provided with lawyers or translators: many struggle to present their cases in English, and they may not have enough evidence to back up their claims. For all the legitimate concern about the US push, over multiple administrations, to build a border wall and implement inhumane immigration policies, including family separation — and Canada’s ongoing use of detention centres to jail migrants — the increased use of AI struggles to gain ground in the conversation. But, for people with legitimate asylum claims, who are often people of colour, the growth of AI and biometrics in border control is yet another factor preventing them from crossing safely. There is comparatively little attention paid to AI and biometric systems we can’t easily see but that, in many ways, are more effective than a wall.

Petra Molnar, a human rights lawyer and associate director of the Refugee Law Lab at York University, is documenting the use of technology to track and control migrants, including drones, automated decision making, AI lie detectors, and biometrics. Last summer, she conducted field research on the island of Lesvos, Greece, the site of one of Europe’s largest refugee camps. “There are all sorts of critiques about [surveillance technology] making the border more like a fortress, and that will likely lead to more deaths along the Mediterranean,” Molnar says. “It’s a proven phenomenon — the more you enforce that border, people will take riskier routes, they will not be rescued, they will drown, and so the fact that we are moving ahead on this technology without having a public conversation about the appropriateness of it, that’s probably for me the most troubling part.”

Molnar says we’re seeing AI and biometrics experimentation in spaces where there is already a lack of oversight and people are unable to exercise their rights. Efforts to counterbalance this are so far scant. The EU has the General Data Protection Regulation, which prevents the use of solely automated decision making, including that based on profiling. Massachusetts recently voted to prohibit facial recognition by law enforcement and public agencies, joining a handful of US cities in banning the tool. Molnar says that, in Canada, there are laws that may govern the use of AI indirectly, such as provincial and federal privacy and data-protection regulations, but these weren’t written with AI specifically in mind, and their scope is unclear at the border.

Crucially, no country will be able to entirely address these issues in isolation. “In terms of a regionalized or even a global set of standards, we need to do a lot more work,” Molnar says. “The governance and regulatory framework is patchy at best, so we’re seeing the tech sector really dominate the conversation in terms of who gets to determine what’s possible, what we want to innovate on, and what we want to see developed.”

She says that, in Canada, there isn’t enough of a conversation about regulation happening, and that’s particularly worrying given that we share a massive land border with the US. She questions how much Canada is willing to stand up for the human rights of vulnerable populations crossing the border. “Canada could take a much stronger stand on that, particularly because we always like to present ourselves as a human rights warrior, but then we also want to be a tech leader, and sometimes those things don’t square together.”

In 2018, Molnar co-authored a groundbreaking report, titled Bots at the Gate, that revealed the use of AI in Canada’s immigration system. Produced by the University of Toronto’s International Human Rights Program and the Citizen Lab at the Munk School of Global Affairs and Public Policy, the report exposed the Canadian government’s current use of AI to assess the merits of some visa applications. She filed access to information requests to several federal bodies three years ago and is still waiting for them to turn over records. Agencies and departments can deny records, in full or in part, based on exemptions including national security grounds. “That is one of the key areas of concern for us because, in the existing regulatory framework, there is no mandatory disclosure,” she tells me.

Hussain from the Electronic Frontier Foundation says it’s a similar story in the US, where the use of surveillance at the border remains shrouded in secrecy. “When it comes to ICE and Customs and Border Protection, we have found there’s often an unwillingness to produce documents or that you may have to sue before you actually get to see anything,” she says.

Molnar is up against rich tech giants; slow-moving, opaque bureaucracies; and a largely uninformed and complacent public. “It feels like a new fight, but it’s not a new fight. It’s the same kinds of questions that we’ve been asking ourselves for years, like, Where does power locate itself in society? Who gets to decide what world we want to build? Who gets to participate in these discussions?”

She is particularly worried about financial interests playing a role in determining which systems get implemented in border control and how. Governments rely on private companies to develop and deploy tech to control migration, meaning government liability and accountability are shifted to the private sector, she explains. Thanks to a freedom of information request by migrant-rights group Mijente, she tells me, we now know that tech firm Palantir, founded by Trump supporter Peter Thiel, quietly developed technology to identify undocumented people so ICE could deport them — just one example of the kind of threat she anticipates. “That’s where I get worried, for sure, about whether we will win this fight, or whether it’s even a fight that’s possible to win,” Molnar says. “But I think we have to keep trying because it’s yet another example of how unequal our world is. The promise of technology, the romantic idea of it, was that it would equalize our world, that it would make things more democratic or more accessible, but if anything, we’re seeing broader gaps and less access to power or the ability to benefit from technology.”

Hilary Beaumont is a freelance investigative journalist who has reported from Canada, the US, and Mexico.

Photography by Christopher Katsarov Luna

An officer at the Cameron County Detention Center, in Brownsville, Texas, watches security footage from the control room.

Above: People wait to pass through customs on the US–Mexico border. Right: Former Cameron County sheriff Omar Lucio. Opposite: A border crossing at the Rio Grande, in Matamoros, Mexico.
