Research project

Administrative Inhumanity and the Governance and Design of AI

Project overview

WSI Pilot Project

This project addresses administrative inhumanity and its relationship to the potential uses of AI in public administration, in order to begin to draw out implications for the governance and design of AI in such contexts.

The general issue of administrative inhumanity refers to a mode of citizen-state relationship in which the agency of the citizen is engaged by the system (we might think of this as the state instrumentalising the agency of the citizen for its own ends), but in a way that manifests as experiences of powerlessness, frustration and humiliation, combining a lack of responsiveness to the individual case with a lack of accountability in (and for) the decision-making process. The project has two primary foci:
1. Categorical reasoning, discretion and the scope for justice: the ways in which administrative systems (whether involving AI or not) operate through structures of categorization, in which decision-trees are forms of categorical reasoning, and the need for decision-making to be responsive to individual cases whose justice-salient features are not adequately captured by the categories in play. This calls both for a process of refinement and revision of categories and for a capacity for discretionary judgment when doing justice to the individual case requires it. The central issue here concerns the logical point that the morally relevant features of an individual case may not be made visible by the categories in terms of which the administrative system operates, and hence that any administrative system must be alert to this possibility and have a discretionary mechanism for addressing it (a mechanism sketched schematically after this list).
2. Accountability and agency in AI contexts: the general opaqueness of algorithms to those subject to them (and often to those executing decisions arrived at through them), and hence problems of accountability. The main issue here concerns the ways in which the design (and redesign) of algorithms might be opened up to users (administrators) and to the subjects of public administration in order to build some form of accountability into the system. The underlying concern is that (2) threatens to make the problem identified in (1) more intractable unless the governance and design of AI for use in public administration can address the needs identified in both (1) and (2).
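To make the point in (1) concrete, the following is a minimal, illustrative sketch (not drawn from the project itself) of how a categorical decision procedure might incorporate an explicit discretionary mechanism. The case fields, categories and decision rules here are hypothetical; the only point being illustrated is that the automated path declines to decide when a case carries features that its categories cannot express, and instead routes it to human discretionary review.

```python
from dataclasses import dataclass, field

# Hypothetical categories available to the automated rule; anything the
# rule can respond to must be expressible in these terms.
CATEGORIES = {"income_band", "household_size", "residency_status"}


@dataclass
class Case:
    attributes: dict  # values for the recognised categories
    # Justice-salient features recorded outside the recognised categories.
    uncategorised_notes: list = field(default_factory=list)


def categorical_decision(case: Case) -> str:
    """A decision-tree over the recognised categories only (hypothetical rules)."""
    if case.attributes.get("residency_status") != "resident":
        return "refuse"
    if case.attributes.get("income_band") == "low":
        return "grant"
    return "refuse"


def decide(case: Case) -> str:
    """Wrap the categorical rule with a discretionary escape hatch.

    If the case carries features the categories cannot capture, or lacks
    values for them, no automatic outcome is issued; the case is escalated
    to an officer with authority to exercise discretionary judgment.
    """
    missing = CATEGORIES - case.attributes.keys()
    if case.uncategorised_notes or missing:
        return "escalate_to_discretionary_review"
    return categorical_decision(case)


# Example: the categories alone would refuse this case, but the note records
# a morally relevant feature they cannot see, so the case is escalated.
case = Case(
    attributes={"income_band": "middle", "household_size": 4,
                "residency_status": "resident"},
    uncategorised_notes=["sole carer for a disabled relative"],
)
print(decide(case))  # -> escalate_to_discretionary_review
```

The design choice being illustrated is that the refusal to decide automatically is itself part of the decision procedure, rather than an informal workaround added after the fact.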

Staff

Lead researcher

Professor David Owen

Professor in Politics

Research interests

  • Post-Kantian social and political philosophy from Nietzsche to Foucault, and the Frankfurt School
  • The Ethics and Politics of Migration
  • Democratic Theory and Practice

Other researchers

Professor John Boswell

Professor

Dr Richard Gomer BSc(Hons) MSc PhD (he/him)

Lecturer in Computer Science

Research interests

  • Human-Computer Interaction
  • Agency-oriented Design
  • Data Institutions

Professor Mark Weal

Professor

Research interests

  • Digital Health
  • Behavioural Interventions
  • Semantic Web

