
Migration in the European Union: Can AI and Biometrics Coexist with the Right to Asylum?

  • Sara Majer

Every time you scan your passport, your fingerprint, or even your face, you are not just crossing a border: you are entering a data system. Increasingly, it is this data, not just documents, that determines who is allowed to move and who is not. Europe’s borders are no longer just physical lines on a map; they exist in databases, algorithms, and biometric systems that track and assess individuals long before they reach EU territory. From fingerprint databases to automated risk assessments, the European Union has embraced what it calls “smart borders”. But as these technologies are rapidly adopted, an important question cannot be ignored: can digital border control coexist with the fundamental right to seek asylum?


The Rise of Digital Borders


Over the past decade, EU border governance has shifted toward data-driven systems. Tools such as Eurodac, the Schengen Information System, and the upcoming ETIAS allow authorities to collect and share biometric data, including fingerprints and facial images, across multiple platforms. These systems are designed to improve efficiency and strengthen security by identifying individuals, detecting risks, and managing large volumes of migration data. In this regard, borders are integrated into digital infrastructures that extend far beyond checkpoints.


Supporters of these technologies argue that they bring clear benefits. AI-assisted systems can process applications faster, reduce human error, and help overstretched administrations manage complex caseloads. Biometric identification, for example, makes it easier to prevent multiple asylum claims and detect identity fraud. In theory, this could lead to more consistent and efficient decision-making. For governments facing political pressure to manage migration effectively, these tools are highly attractive.


However, the benefits of digital border systems are primarily administrative, not protective. They are designed to make processes run more efficiently, not to ensure that asylum seekers’ rights are respected or protected. In fact, their growing use raises serious concerns about privacy, fairness, and access to protection.

What do algorithms actually do in this context? At the core of these systems are algorithms: sets of rules or instructions that process data to produce a result. In simple terms, an algorithm takes information, such as nationality, travel history, or biometric data, compares it to patterns from past data, and generates an outcome, such as whether someone is flagged as a “risk”.
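To make this concrete, the logic described above can be sketched as a toy rule-based scorer. This is purely illustrative: the field names, weights, and threshold are hypothetical and are not drawn from any real EU system.

```python
# Illustrative only: a toy rule-based "risk" scorer, loosely analogous to how
# automated border systems combine traveller attributes into a flag.
# All attributes and thresholds below are invented for the example.

def risk_score(traveller: dict) -> int:
    """Combine simple attributes into a numeric score."""
    score = 0
    if traveller.get("prior_overstay"):       # pattern drawn from past data
        score += 3
    if traveller.get("document_mismatch"):    # biometric/document check result
        score += 2
    if traveller.get("travel_history_gaps"):  # incomplete records
        score += 1
    return score

def is_flagged(traveller: dict, threshold: int = 3) -> bool:
    """A traveller is flagged as a 'risk' once the score meets the threshold."""
    return risk_score(traveller) >= threshold

applicant = {"prior_overstay": False,
             "document_mismatch": True,
             "travel_history_gaps": True}
print(is_flagged(applicant))  # True: score 2 + 1 meets the threshold of 3
```

Even in this simplified form, the key point is visible: the outcome depends entirely on which attributes the designers chose to count and how heavily they weighted them, choices that are invisible to the person being scored.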


Many of these systems rely on machine learning, meaning they are trained on large datasets to identify patterns and make predictions. However, this also means that they learn from existing data, which may already contain biases or inequalities. Moreover, when datasets lack sufficient diversity or representation, these systems can reinforce and even amplify existing biases rather than correct them. As a result, algorithmic decisions are not neutral; they merely reflect the data and assumptions on which they are built.
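How a system "learns" bias from historical data can be shown with a minimal sketch. The numbers below are invented: imagine two groups with identical underlying behaviour, where one was historically scrutinized more and therefore carries more past flags. A naive model that estimates risk from those records simply reproduces the disparity as a prediction.

```python
# Hypothetical historical records: (group, was_flagged).
# Group "B" was historically subjected to more scrutiny, so it carries
# more past flags even though underlying behaviour is assumed identical.
history = ([("A", False)] * 90 + [("A", True)] * 10 +
           [("B", False)] * 60 + [("B", True)] * 40)

def learned_flag_rate(data: list, group: str) -> float:
    """'Train' a naive model: estimate flag probability per group
    directly from the historical frequency of flags."""
    outcomes = [flagged for g, flagged in data if g == group]
    return sum(outcomes) / len(outcomes)

# The "model" turns a historical artefact into a forward-looking prediction.
print(learned_flag_rate(history, "A"))  # 0.1
print(learned_flag_rate(history, "B"))  # 0.4
```

Nothing in this code knows whether group B was actually riskier or merely watched more closely; it treats past enforcement patterns as ground truth, which is exactly how unrepresentative training data gets amplified rather than corrected.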


The Risks: Privacy, Bias, and Opacity


One major issue is the scale of data collection. Systems like Eurodac rely on the large-scale storage of sensitive biometric data, often for extended periods. As these databases become interconnected, data collected for one purpose, such as asylum processing, can be reused for others, including law enforcement. This blurring of purpose raises pressing questions about privacy and proportionality, particularly for individuals in vulnerable situations.

There are also risks of discrimination. AI systems and biometric technologies are not neutral; they are shaped by the data on which they are trained. Studies have shown that facial recognition systems, for instance, can be less accurate for certain demographic groups. In the context of asylum, this can translate into unequal treatment, where some individuals are more likely to be flagged, scrutinized, or misidentified than others.


Perhaps most concerning is the issue of transparency. Many AI-driven systems operate as “black boxes,” meaning that their decision-making processes are not easily understood, even by those who use them. For asylum seekers, this creates a serious problem: how can someone challenge a decision if they do not know how it was made? Without clear explanations and accessible appeal mechanisms, the right to effective remedy risks becoming more theoretical than real.


Introducing Eurodac and ETIAS


These concerns are particularly visible in specific EU systems. Eurodac, originally designed to help determine which member state is responsible for an asylum claim, has evolved into a much broader surveillance tool. Its expansion to include more categories of individuals, longer data retention, and increased access for law enforcement reflects a shift from administrative coordination toward security-oriented control. While its core function may be justified, its current scope raises questions about whether it remains proportionate.

Similarly, ETIAS introduces a new layer of pre-emptive border control. By requiring visa-exempt travelers to undergo automated risk assessments before entering the EU, it allows authorities to identify potential “risks” in advance. However, this system also risks excluding individuals before they even have the chance to apply for asylum. Decisions may be influenced by opaque algorithms and proxy indicators, making them difficult to challenge and potentially discriminatory in their effects.


Taken together, these developments point to a broader transformation: border control is no longer just about managing movement, but about predicting and preventing it. Individuals are increasingly assessed not on their personal circumstances, but on data-driven risk profiles. This shift raises profound questions about fairness, accountability, and the future of asylum in Europe.


AI Is Not Going Away, So What Needs to Change to Ensure Rights Continue to Be Protected?


Significantly, the issue is no longer whether AI should be used in border governance at all; that question is largely obsolete. AI and biometric technologies are already deeply embedded in migration management systems, and their use is set to expand further, not only at borders but across public governance more broadly. This expansion is not theoretical; countries are already experimenting with AI in formal political roles. Albania introduced the world’s first AI-generated cabinet member, Diella, the minister for public procurement, illustrating how quickly these technologies are being normalized within governance structures. The real challenge, therefore, is not resisting these technologies, but controlling and regulating how they are developed and applied.


Because these systems are built on human data and trained through algorithms, they are not neutral or inherently reliable. They require time, scrutiny, and continuous refinement. Deploying them in high-stakes contexts such as asylum decision-making before they are sufficiently tested risks producing flawed and potentially harmful outcomes. For this reason, their use must be accompanied by constant and meaningful human oversight. Decision-making authority cannot be quietly transferred to automated systems; it must remain with accountable human actors who can question, interpret, and override algorithmic outputs.

At the same time, while the European Union has taken steps toward regulating AI, with the EU AI Act entering into force in 2024, current safeguards remain insufficient, particularly in the context of border control. The scale, opacity, and consequences of these systems demand stronger, more targeted protections. Transparency must go beyond formal requirements and enable real understanding. Remedies must be accessible in practice, not just in theory. And safeguards against discrimination must be actively built into both the design and ongoing monitoring of these technologies.


Ultimately, how can rights continue to be protected in a digital border world? If AI is to play a role in asylum governance, it must do so on strictly human terms. Without sustained oversight and stronger protections, digital border systems risk entrenching injustice rather than improving governance. Protecting the right to asylum in the digital age therefore requires not only technological innovation, but political restraint.


The OCC publishes a wide range of opinions that are meant to help our readers think about International Relations. This publication reflects the views only of the author, and neither the OCC nor Saint Louis University can be held responsible for any use which may be made of the opinion of the author and/or the information contained therein.

To quote this article, please use the following reference:

Majer, S. (2026, March). Migration in the European Union: Can AI and biometrics coexist with the right to asylum? Observatory on Contemporary Crises. https://www.crisesobservatory.org/post/migration-in-the-european-union-can-ai-and-biometrics-coexist-with-the-right-to-asylum


Observatory on Contemporary Crises (OCC) | © 2022 Saint Louis University – Madrid Campus
