Like a wound in the landscape, the rusty border wall cuts along Arizona’s Camino del Diablo, the Devil’s Highway. You can drive up to it and touch it, the rust staining your hand for the rest of the day. Once the pride and joy of the Trump Administration, this wall is once again the epicenter of a growing political row.
I make my way slowly, over the course of a few hours, through the dusty Sonoran Desert, following in the footsteps of a search-and-rescue group in southern Arizona to the memorial site of Elias Alvarado, a young husband and father from Central America whose body was discovered mere kilometers from a major highway. Alvarado was ensnared in a growing surveillance apparatus, a “smart border” dragnet at the U.S.-Mexico border that has already claimed thousands of lives and reflects the U.S. government’s growing commitment to a virtual border extending far beyond its physical frontier.
High-risk and unregulated border technologies are impacting every aspect of migration. At the U.S.-Mexico border, fixed AI surveillance towers scan the Sonoran Desert for movement, joining an arsenal of border technologies such as ground sensors, license plate readers, and the facial recognition applications used by Customs and Border Protection (CBP). Now, in an election year, migration remains a defining issue for both the Biden administration and former President Trump, who promises to deport 15 to 20 million people and to strengthen the wall and its surveillance dragnet. In this politically fraught environment, we must pay close attention to these high-risk technologies, which are deepening divides between the powerful actors who develop high-tech interventions and the marginalized communities on their receiving end.
As a lawyer and anthropologist, I have been researching how new technologies are shaping migration. Over the last six years, my work has spanned borders from the U.S.-Mexico corridor to the fringes of Europe to East Africa and beyond. I have witnessed time and time again how technological border violence operates in an ecosystem replete with the criminalization of migration, anti-migrant sentiment, and over-reliance on the private sector in an increasingly lucrative border industrial complex. From vast biometric data collected without consent in refugee camps, to algorithms replacing visa officers and making discriminatory decisions, to AI lie detectors used at borders, the rollout of unregulated technologies keeps growing. The biggest problem, however, is that the opaque and discretionary world of border enforcement and immigration decision-making is built on societal structures underpinned by intersecting systemic racism and historical discrimination against people migrating, allowing high-risk technological experimentation to thrive at the border.
While presented as solutions to a so-called “border crisis,” border technologies simply do not work as a deterrent. In fact, they lead to an increasing loss of life. People desperate for safety—and exercising their internationally protected right to asylum—will not stop coming. They will instead use more circuitous routes, and scholars have already documented a threefold increase in deaths at the U.S.-Mexico frontier as the smart border expands. While I was investigating this technology and standing on the sands of the Sonoran Desert at Alvarado’s memorial site in the early spring of 2022, in a moment etched in my memory as one of the more surreal of my career, the U.S. Department of Homeland Security (DHS) announced that it was adding robo-dogs to its arsenal of border enforcement technologies along the U.S.-Mexico corridor. In the not-so-distant future, will people like Alvarado be pursued by these robo-dogs?
It is no accident that very few laws currently exist to govern high-risk technologies at the border. For example, despite years of tireless advocacy by a coalition of civil society groups and academics, the European Union’s much heralded new law regulating artificial intelligence falls short on protecting the most vulnerable. The EU’s AI Act could have been a landmark global standard for the protection of these rights. But once again, it did not provide the necessary safeguards around border technologies. In fact, the absence of bans and red lines on high-risk uses of border technologies in the EU runs counter to years of academic research and international guidance. A 2023 report by the UN’s Office of the High Commissioner for Human Rights (OHCHR), which I co-authored with Professor Lorna McGregor, argues for a human rights-based approach to digital border technologies, including a moratorium on harmful and high-risk border technologies such as border surveillance. The EU did not adopt even a fraction of this position on border technologies.
The U.S. is no exception, and in an election year when migration is once again in the spotlight, there does not seem to be much incentive to regulate technologies at the border. The Biden administration’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence does not mention the impacts of border technologies on people migrating. And while DHS has released its 2024 Roadmap on Artificial Intelligence, outlining its framework for what the agency considers “responsible use of AI,” the document neglects to mention the human rights impacts of these technologies on people on the move. Globally, the UN itself has a lot of work to do: its recent resolution on AI once again fails to engage with the real harms that these technologies perpetuate against people who are migrating.
We must also pay close attention to the role of the private sector, as big business drives the development of border technologies, and private companies do not have an incentive to regulate these lucrative projects. Surveillance companies set the agenda of what we innovate on and why, presenting technical “solutions” to migration like robo-dogs or AI lie detectors, instead of developing AI to root out racist border guards, or creating technologies for information-sharing or mental health support at the border.
Borders serve as a testing ground for technologies. But oftentimes, this technology does not stop there. Projects like robo-dogs chasing people at the border become normalized and bleed over into public life—the New York City Police Department, for instance, proudly announced in 2023 that it would be deploying robo-dogs to “keep New York safe.” One such robo-dog is even painted with polka dots like a dalmatian.
How many more people must die at the hands of a deadly and digital border regime for us to pay attention?
We need stronger laws to prevent further human rights abuses at these deadly digital frontiers. To shift the conversation, we must focus on the profound human stakes as smart borders emerge around the globe. With bodies becoming passports and matters of life and death determined by algorithm, witnessing and sharing stories is a form of resistance against the hubris and cruelty of those seeking to use technology to turn human beings into problems to be solved.