
Relational Accountability in Using AI for Immigration and Refugee Decision-making: Distrust Regulation and Trustworthiness Demonstration

Overview

Global migration and refugee displacement constitute a humanitarian crisis. The use of AI in immigration and refugee decision-making risks turning an already highly discretionary system into a testing ground for high-risk technological experiments. Vulnerable and under-resourced communities, such as refugees, have fewer human rights safeguards and fewer resources with which to defend those rights. Adopting new technologies in a biased manner may only exacerbate existing inequalities.

Despite calls to form a task force bringing together government stakeholders, academics, and civil society to better understand the consequences of automated decision-making technology, critical, empirical, rights-oriented research into the use of AI in immigration and refugee decision-making remains underdeveloped. Our WSI pilot project asks how stakeholders can enact the institutional conditions needed to design and deploy AI systems in a fair, transparent, and just manner.

The focus of this project is not to close an informational gap by developing tools that expose indicators of trustworthiness inside the AI black box. Rather, we examine how (dis)trust of AI is enacted, through the lens of relational accountability, within an institutional environment: the wider ecosystem that sets the rules institutional adopters of AI must abide by and provides the resources for enforcing those rules. In doing so, the project addresses the societal problem that trust cannot be forged in specific AI systems in isolation.

Staff:

Principal Investigator: Dr Ai Yu

Co-investigator: Dr Yingqin Zheng
