February 15, 2024
Digital technologies used in asylum and migration management systems are increasingly becoming a key human rights concern, Amnesty International says in a new report. Tools such as surveillance systems and lie detectors have a discriminatory impact on refugees, migrants and asylum seekers, the rights group claims.
Amid a proliferation of technologies designed to help government agencies in EU countries automate migration and asylum decisions, NGOs and researchers have been warning against the potential downsides of deploying these technologies at scale.
The latest reminder comes from rights group Amnesty International in the form of a report that highlights the human rights issues the use of digital technologies can entail.
The briefing, published on February 5 and entitled "Defending the Rights of Refugees and Migrants in the Digital Age", looks at the introduction of digital technologies in the European Union, the United Kingdom and the United States -- including technologies for border externalization and 'lie detectors' that use artificial intelligence (AI).
"The proliferation of these technologies risks perpetuating and reinforcing discrimination, racism, disproportionate and unlawful surveillance against racialized people," said Matt Mahmoudi, Amnesty International Adviser on Artificial Intelligence and Human Rights Technology.
The report covers four clusters: the externalization of borders, biometric data collection and surveillance, algorithmic decision-making, and alternatives to detention.
Externalization of borders
This cluster comprises a broad range of technologies, including drones, radar, high-resolution cameras, biometric identification, satellite data and movement recognition.
Among other things, the report points out that the expansion of surveillance infrastructure is leading to a shift in migrants' routes toward more remote and more dangerous areas.
According to Amnesty, the European Union uses real-time aerial surveillance and drones over the central Mediterranean to identify migrant boats. In a well-documented practice, this information is often handed to the authorities responsible for the search and rescue zones in the area, including the Libyan authorities, who then use it to intercept people and return them to Libya, in violation of international law.
The report also shows how European states, including Germany, are increasingly searching asylum seekers' cell phones and disproportionately violating their privacy in the process.
"Mass surveillance violates privacy, be it blanket searches of cell phones or mass surveillance of people's movements," Lena Rohrbach, expert on human rights in the digital age at Amnesty International in Germany, states in the report.
Biometrics
The second cluster of tools the report talks about is biometric data collection and surveillance, which is among the most "ubiquitous technologies deployed for identification, verification, and authentication purposes along borders," the authors write.
According to the report, this can manifest as discrimination based on race or ethnicity, such as facial recognition technologies misidentifying Black people.
UN agencies such as the UNHCR and the World Food Programme maintain extensive fingerprint and iris databases, with the goal of avoiding multiple registrations, among other things.
"Digital technologies are reinforcing border regimes that disproportionately impact [certain] people. Inherent racism is deeply ingrained within migration management and asylum systems," said Charlotte Phillips, Amnesty International Advisor on Refugee and Migrants' Rights, adding that these technologies have "inherent biases and errors" that threaten human rights.
The European Union, too, uses databases like Eurodac, the EU's asylum fingerprint database, to determine which member state is responsible for an asylum application under the Dublin Regulation. Thanks to two interoperability regulations, Eurodac is interoperable with five other large-scale EU databases in the areas of police and border management.
Algorithmic decision-making
The third cluster the Amnesty report analyzes is automated decision-making at the border, used, among other things, to decide asylum applications.
One of the automated border control systems mentioned in the report is the EU-funded 'iBorderCtrl' pilot project in Hungary, Latvia and Greece, which uses a "lie-detecting system fronted by a virtual border guard to quiz travelers seeking to cross borders," the report states.
Derya Ozkul, assistant professor at the University of Warwick, recently told InfoMigrants that lie detection and behavior analysis at the border are among the most "dangerous" ways that identity verification technologies are being tested and used.
"The main point of criticism has been around the technology's ability to accurately assess human behavior," she said. "It has been suggested that these types of assessments can lead to biases against people of different color, gender, age, and culture."
Starting in 2025, people entering the EU from third countries will have to contend with the ETIAS screening algorithm. The risk assessment will take into account past journeys, among other information.
Alternatives to detention
A fourth cluster in the Amnesty report takes a look at so-called electronic alternatives to detention.
In 2016, the UK introduced this controversial tool for persons scheduled to be deported. Five years later, its use was reportedly widened to include everyone under so-called immigration bail. Since 2022, there have been plans to widen the application further, for instance with a smartwatch tracking system for all asylum seekers.
The Amnesty report concludes with a list of recommendations for governments to protect the rights of people on the move, including not using technologies "at odds with human rights" and making sure digital technologies address "systemic racism, xenophobia and discrimination."
Amnesty also urges states to conduct human rights and data protection impact assessments, and to refrain from using AI-based tools such as automated risk assessment, profiling systems, predictive technology and emotion recognition.
"AI-based alleged 'emotion recognition' leads to discrimination and errors and must be banned -- in Germany and in the EU," says Rohrbach.
Source: infomigrants.net