The University of Southampton
Web Science Institute

Trustworthy AI for the Future of Society

CALL CLOSED

About

The trustworthiness of new technologies such as AI is an issue currently arousing considerable concern among governments, academia, business and the general public alike. It is becoming widely acknowledged that deep ambivalences lie at the heart of contemporary technological developments and their consequences for the future of society. On the one hand, technologies such as Artificial Intelligence (AI) and Automated Decision Making (ADM) are seen to offer enormous potential to economies, societies and governments through new levels of accuracy and speed in data processing which go far beyond human capabilities. Significant step-changes in economic productivity and social inclusion are promised through AI’s ability to streamline the ways governments and businesses interact with data, enabling improved decision-making, more efficient deployment of resources, and new opportunities for creativity and innovation.

On the other hand, the very same capabilities in data handling and exchange raise substantial concerns about the ownership, deployment, governance and storage of data and infrastructure, with resulting fears of human rights violations, surveillance and privacy intrusions, bias, discrimination and social polarization, and fake news and political manipulation.

The UK Government’s Review of AI correctly identifies the urgency of securing public trust and confidence in AI systems. However, much of the existing data and research evidence on the impact of AI decision-making on social inclusion and public confidence draws on the US experience; rigorous, context-specific social science knowledge of the challenges of establishing trustworthy, transparent and explainable systems of AI governance across the range of economic, social and political institutions in the UK remains scarce.

This theme therefore invites research proposals that investigate any aspect of trustworthiness in relation to AI and ADM. In particular, we welcome research which addresses one or more of the following:

1. To advance scientific knowledge about the values, principles and foundations of trustworthy and ethical AI systems as understood by diverse social groups (e.g. by gender, race and ethnicity, age, socio-economic background, and occupation) across a range of key economic, political and social institutions, including education and work; the judiciary, legislature and security; and national and local government.

2. To develop and apply innovative theories to understand the challenges and opportunities of trustworthy AI across key economic, political and social domains, and to identify the key components of trust and trustworthiness in AI as perceived by diverse social groups and institutions.

3. To develop and apply innovative, interdisciplinary research methods to measure and understand trust and trustworthiness in AI among different social groups in different economic, political and social contexts, drawing on expertise across the social and computational sciences.

4. To advance empirical evidence by gathering new quantitative and qualitative data for novel indices of trust and trustworthiness in AI designs and systems of governance.

5. To contribute insights of immediate and long-term value into the values, principles and foundations of trustworthy, ethical and explainable AI for stakeholders, including the public, national and local policymakers and business, to assist the implementation and workability of interventions.

Theme Lead

Professor Pauline Leonard

Email: Pauline.Leonard@soton.ac.uk
