Humans, Machines, and Decision Responsibility

PROJECT SUMMARY

Focus Areas: News & Media, Social Services, Mobilities, Health
Research Program: Institutions, Machines
Status: Active

Automated decision-making provokes a range of anxieties around transparency, equality, and accountability. A key response has been the call to ‘re-humanise’ automated decisions, in the hope that human control of automated systems might defend human values from mindless technocracy. Regulation of automated decision-making and AI often embeds this form of human-centrism by prescribing a ‘human in the loop’ and requiring that automated decisions be ‘explained’. These requirements are central elements of the risk-based approaches to AI regulation currently in development.

Despite the intuitive appeal of these approaches, empirical research is revealing their limitations and complexities. AI explanations sometimes provide little that is useful to decision subjects or decision makers, and risk distracting from more meaningful interrogation of why decisions are made. A human in the loop sometimes functions as a rubber stamp for automated decisions, cleaving accountability away from the true sites of decision responsibility.

This project seeks to generate better understandings of the functions, capacities, and normative role of humans within automated decision systems. It will investigate the ways that automated systems ought to explain, or be explained to, humans within decision processes, and how elements of decision-making, including processes, responsibility, authority, and what counts as a decision itself, are fragmented and redistributed between humans, machines, and organisations. The goal is to generate empirical knowledge of how automated systems, humans, and organisations interact in different contexts when making decisions, and to move past outdated understandings of decision-making that are hindering effective governance of automation in new decision contexts.

RESEARCHERS

Dr Jake Goldenfein, Lead Investigator, University of Melbourne
Prof Jean Burgess, Chief Investigator, QUT
Prof Paul Henman, Chief Investigator, University of Queensland
Prof Chris Leckie, Chief Investigator, University of Melbourne
Prof Flora Salim, Chief Investigator, UNSW
Prof Julian Thomas, Chief Investigator, RMIT University
Prof Kim Weatherall, Chief Investigator, University of Sydney
Dr Henry Fraser, Research Fellow, QUT
Dr Awais Hameed Khan, Research Fellow, UQ
Dr Chris O’Neill, Research Fellow, Monash University
Dr Ash Watson, Research Fellow, UNSW
Dr Fan Yang, Research Fellow, University of Melbourne
Libby Young, PhD Student, University of Sydney
Dr Fabio Mattioli, Affiliate, University of Melbourne

Trauma-informed AI: Developing and testing a practical AI audit framework for use in social services

PROJECT SUMMARY

Focus Areas: Social Services
Research Program: Machines
Status: Active

Artificial Intelligence (AI) is increasingly being used in the delivery of social services. While it offers opportunities for more efficient, effective and personalised service delivery, AI can also create serious problems: reinforcing disadvantage, generating trauma, or re-traumatising service users.

Conducted by a multi-disciplinary research team with extensive expertise at the intersection of social services and digital technology, this project seeks to co-design an innovative trauma-informed AI audit framework to assess the extent to which an AI system’s decisions may generate new trauma or re-traumatise service users.

The value of a trauma-informed AI audit framework is not simply to assess digital technologies after they are built and in operation, but also to inform designs of digital technologies and digitally enabled social services from their inception.

The framework will be road-tested using multiple case studies of AI use in child and family services, domestic and family violence services, and social security/welfare payments.

RESEARCHERS

Prof Paul Henman, Lead Investigator, University of Queensland
Dr Philip Gillingham, Associate Investigator, University of Queensland
Dr Lyndal Sleep, Affiliate, Central Queensland University
Dr Suzanna Fay, Senior Lecturer, University of Queensland

PARTNERS

University of Notre Dame-IBM Tech Ethics Lab

The Toxicity Scalpel: Prototyping and evaluating methods to remove harmful generative capability from foundation models

PROJECT SUMMARY

Focus Areas: News and Media
Research Programs: Machines
Status: Active

AI language models have made significant strides over the past few years. Computers are now capable of writing poetry and computer code, producing human-like text, summarising documents, engaging in natural conversation about a variety of topics, solving math problems, and translating between languages.

This rapid progress has been made possible by a trend in AI development in which one general ‘foundation’ model is developed (usually using a large dataset drawn from the internet) and then adapted many times to fit diverse applications, rather than beginning from scratch each time.

This method of ADM development can be time- and cost-effective, but it ‘bakes in’ negative tendencies, such as the generation of toxic content, misogyny, or hate speech, at the foundation layer; these then spread to every downstream application.

The goal of this project is to examine how language models used in ADM systems might be improved by making modifications at the foundation model stage, rather than at the application level, where computational interventions, social responsibility, and legal liability have historically focussed.

[This project description was generated by summarising parts of the project proposal document using a language model AI].
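To make the contrast between the two intervention points concrete, here is a minimal Python sketch under stated assumptions: toxicity_score is a hypothetical stand-in for a real trained toxicity classifier, and the blocklist is a placeholder. Cleaning at the foundation stage happens once and benefits every downstream application, whereas output filtering must be repeated by each application.

# Illustrative sketch only: contrasts a foundation-stage intervention
# (cleaning the adaptation corpus once) with an application-stage one
# (filtering each system's outputs). toxicity_score is a hypothetical
# stand-in for a trained toxicity classifier, not a real library call.

BLOCKLIST = {"harmful_term_a", "harmful_term_b"}  # placeholder terms

def toxicity_score(text: str) -> float:
    """Hypothetical scorer: fraction of blocklisted tokens in the text."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t in BLOCKLIST for t in tokens) / len(tokens)

def clean_foundation_corpus(corpus: list, threshold: float = 0.1) -> list:
    # Foundation-stage intervention: drop toxic documents before the
    # model is trained or adapted, so every downstream application
    # inherits the improvement.
    return [doc for doc in corpus if toxicity_score(doc) < threshold]

def filter_application_output(generated: str, threshold: float = 0.1):
    # Application-stage intervention: each deployed system filters its
    # own outputs, duplicating the effort across applications.
    return generated if toxicity_score(generated) < threshold else None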

RESEARCHERS

Prof Flora Salim, Chief Investigator, UNSW
Prof Nic Suzor, Chief Investigator, QUT
Dr Hao Xue, Associate Investigator, UNSW
Dr Aaron Snoswell, Research Fellow, QUT
Lucinda Nelson, PhD Student, QUT

Transparent Machines: From Unpacking Bias to Actionable Explainability

PROJECT SUMMARY

Focus Areas: News and Media, Transport and Mobility, Health, and Social Services
Research Program: Machines
Status: Active

ADM systems, including their software, algorithms, and models, are often designed as “black boxes”, with little effort devoted to understanding how they work. This lack of understanding affects not only the end users of ADM systems, but also the stakeholders and developers who need to be accountable for the systems they create. The problem is often exacerbated by bias inherent in the data on which the models are trained.

Further, the widespread use of deep learning has led to an increasing number of minimally interpretable models being deployed, in place of more traditional models such as decision trees, or Bayesian and statistical machine learning models.

Explanations of models are also needed to reveal potential biases in the models themselves and assist with their debiasing.

This project aims to unpack the biases in models that may come from the underlying data, as well as biases in software (e.g., a simulation) that may be designed with a particular purpose or angle from the developers’ point of view. It also aims to investigate techniques for generating diverse, robust, and actionable explanations across a range of problems, data types, and modalities, from large-scale unstructured data to highly varied sensor and multimodal data. To this end, we look to generate counterfactual explanations that depend on both the data distribution and the local behaviour of the black-box model, and to offer new metrics for measuring the opportunity cost of choosing one counterfactual over another. We further aim to explore the intelligibility of different representations of explanations for diverse audiences through an online user study.
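As a rough illustration of the counterfactual idea described above, the Python sketch below searches for inputs that flip a black-box model’s prediction and ranks them by a simple cost metric. The toy model, the random search, and the distance-based cost are illustrative assumptions, not the project’s actual methods.

import numpy as np

def black_box(x: np.ndarray) -> int:
    # Toy stand-in for an opaque model: 'approve' when the feature sum
    # exceeds a threshold.
    return int(x.sum() > 1.0)

def find_counterfactuals(x, n_samples=5000, scale=0.5, seed=0):
    # Sample perturbations of x and keep those that flip the prediction.
    rng = np.random.default_rng(seed)
    original = black_box(x)
    candidates = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    return [c for c in candidates if black_box(c) != original]

def cost(x, cf):
    # One simple 'opportunity cost' proxy: how far the subject must move.
    # Richer metrics could also weigh plausibility under the data distribution.
    return float(np.linalg.norm(cf - x, ord=1))

x = np.array([0.2, 0.3])                    # a 'rejected' input
cfs = find_counterfactuals(x)
best = min(cfs, key=lambda c: cost(x, c))   # cheapest flip found
print(best, cost(x, best))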

RESEARCHERS

Prof Flora Salim, Lead Investigator, UNSW
Prof Daniel Angus, Chief Investigator, QUT
Prof Paul Henman, Chief Investigator, University of Queensland
Prof Mark Sanderson, Chief Investigator, RMIT University
Dr Jeffrey Chan, Associate Investigator, RMIT University
Prof Falk Scholer, Associate Investigator, RMIT University
Dr Damiano Spina, Associate Investigator, RMIT University
Prof Maarten de Rijke, Partner Investigator, University of Amsterdam
Peibo Li, PhD Student, UNSW
Edward Small, PhD Student, RMIT University
Kacper Sokol, Affiliate, ETH Zurich

PARTNERS

University of Amsterdam

Quantifying and Measuring Bias and Engagement

PROJECT SUMMARY

Focus Areas: News & Media, Health
Research Programs: Machines, Data
Status: Active

Automated decision-making systems and machines, including search engines and intelligent assistants, are designed, evaluated, and optimised using frameworks that model the users who will interact with them. These models are typically simplified representations of users (e.g., using the relevance of items delivered to the user as a surrogate for system quality) that operationalise the development process of such systems. A grand open challenge is to make these frameworks more complete by incorporating new aspects, such as fairness, that are as important as traditional definitions of quality in informing the design, evaluation, and optimisation of such systems.
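A minimal Python sketch of what making an evaluation framework “more complete” could look like, assuming a simple exposure-based fairness term and an arbitrary trade-off weight (both are illustrative assumptions, not the project’s actual definitions):

import math

def dcg(relevances):
    # Discounted cumulative gain: a standard relevance-only quality measure.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def exposure_disparity(groups):
    # Gap in positional exposure between provider groups (0 = equal).
    exposure = {}
    for rank, g in enumerate(groups):
        exposure[g] = exposure.get(g, 0.0) + 1 / math.log2(rank + 2)
    values = list(exposure.values())
    return max(values) - min(values) if len(values) > 1 else 0.0

def combined_quality(relevances, groups, alpha=0.5):
    # Quality minus a fairness penalty; alpha trades the two off.
    return dcg(relevances) - alpha * exposure_disparity(groups)

# A ranked list: per-item relevance and the provider group of each item.
print(combined_quality([3, 2, 0, 1], ["a", "a", "b", "b"]))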

Recent developments in the machine learning, information access, and AI communities attempt to define mechanisms that minimise the creation and reinforcement of unintended cognitive biases.

However, there are a number of research questions related to quantifying and measuring bias and engagement that remain unexplored:
– Is it possible to measure bias by observing users interacting with search engines, or intelligent assistants?
– How do users perceive fairness, bias, or trust? How can these perceptions be measured effectively?
– To what extent can sensors in wearable devices and interaction logging (e.g., search queries, app swipes, notification dismissal, etc) inform the measurement of bias and engagement?
– Are the implicit signals captured from sensors and interaction logs correlated with explicit human ratings with respect to bias and engagement?

This research aims to address the questions above by focusing on information access systems that involve automated decision-making components. Partnering with experts in fact-checking, we use misinformation management as the main scenario of study, given that bias and engagement play an important role in the three main elements of automated decision-making processes: the user, the system, and the information that is presented and consumed.

The methodologies considered to address these questions include lab-based user studies (e.g., observational studies) and the use of crowdsourcing platforms (e.g., Amazon Mechanical Turk). Data collection includes logging of human-system interactions, sensor data gathered from wearable devices, and questionnaires.
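For the final research question above, the core analysis could be as simple as correlating an implicit signal with explicit ratings; the Python sketch below uses made-up placeholder numbers purely to show the shape of the computation.

import numpy as np

# Placeholder per-session values, not study data: an implicit signal
# (e.g., a wearable-derived engagement proxy) and explicit 1-5 ratings.
implicit_signal = np.array([0.61, 0.12, 0.87, 0.45, 0.30, 0.78])
explicit_rating = np.array([4, 1, 5, 3, 2, 4])

# Pearson correlation between the implicit and explicit measures.
r = np.corrcoef(implicit_signal, explicit_rating)[0, 1]
print(round(r, 2))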

RESEARCHERS

Dr Damiano Spina, Lead Investigator, RMIT University
Assoc Prof Anthony McCosker, Chief Investigator, Swinburne University
Prof Flora Salim, Chief Investigator, UNSW
Prof Mark Sanderson, Chief Investigator, RMIT University
Dr Jenny Kennedy, Associate Investigator, RMIT University
Prof Falk Scholer, Associate Investigator, RMIT University
Dr Danula Hettiachchi, Research Fellow, RMIT University
Nuha Abu Onq, PhD Student, RMIT University
Marwah Alaofi, PhD Student, RMIT University
Hmdh Alknjr, PhD Student, RMIT University
Sachin Cherumanal, PhD Student, RMIT University
Kaixin Ji, PhD Student, RMIT University

PARTNERS

Australian Broadcasting Corporation
Algorithm Watch (Germany)
Bendigo Hospital
Google Australia
RMIT ABC Fact Check

Mapping ADM Across Sectors

PROJECT SUMMARY

Focus Areas: News and Media, Transport and Mobility, Health, and Social Services
Research Programs: Data, Machines, Institutions, and People
Status: Active

ADM systems have the potential to greatly improve the overall quality of life in society, but they may also exacerbate social, political, and economic inequality. The role they play in reinforcing, reproducing, and reconfiguring power relations is, as recent events demonstrate, a key concern with respect to the deployment of automated decision-making systems. When such systems are used to decide how benefits, resources, services, or information are allocated in society, they bear directly on the character and quality of life in that society. We are interested in both the potential benefits and the potential harms of the technology’s deployment. We do not treat such systems in the abstract, but are centrally concerned with the social, political, and economic relations in which they are embedded and which shape their deployment. A key question for the ADM+S Centre, in other words, is not just how best to design and deploy the technology, but which economic and political arrangements are most compatible with its fair, ethical, responsible, and democratic use.

The Social Issues in Automated Decision-Making report brings together material collected from discussions with leaders in the Centre’s focus areas and feedback from an international collection of experts in their respective domains. For each focus area we followed a similar methodology for canvassing key social issues. We started by discussing key social issues with Focus Area leaders and their teams. We then canvassed the academic literature, reports by industry groups and relevant independent organisations, and media coverage. For each area, we sought to identify key applications of ADM and the possible social benefits and harms with which they are associated. We also sought to identify continuities in these social issues both within and across the Centre’s main focus areas.

This is neither a final nor a definitive report. It marks the first step in the Centre’s ongoing social issues mapping project. The document will develop over time to reflect the insights that emerge from ongoing collaborations.

Read the report.

RESEARCHERS

Prof Mark Andrejevic, Lead Investigator, Monash University
Prof Paul Henman, Chief Investigator, University of Queensland
Assoc Prof Ramon Lobato, Associate Investigator, RMIT University
Dr Jathan Sadowski, Associate Investigator, Monash University
Dr Kelly Lewis, Research Fellow, Monash University
Dr Christopher O’Neill, Research Fellow, Monash University
Dr Georgia van Toorn, Research Fellow, UNSW
Dr Ash Watson, Research Fellow, UNSW
Dr Vaughan Wozniak-O’Connor, Research Fellow, UNSW
Dr Daniel Binns, Affiliate, RMIT University
Dr Lyndal Sleep, Affiliate, Central Queensland University

PARTNERS

Office of the Victorian Information Commissioner
Australian Red Cross

Mapping ADM Machines in Australia and Asia-Pacific

PROJECT SUMMARY

Focus Area: Social Services
Research Program: Machines
Status: Completed

This project aimed to map ADM machines in social services in Australia and the Asia-Pacific, providing foundational empirical and conceptual knowledge of ADM in social services beyond Europe and North America. Viewing ADM as an assemblage of data systems and decision-making in social-political context, the project built a knowledge base about which ADM systems are being used in social services delivery in Australia and the Asia-Pacific, how they are used, and who is affected by them.

Based on a conceptual definition and framework of ADM systems, the project provided a detailed mapping of ADM systems used in social services in Australia, worked with academics across the Asia-Pacific to map the systems used in their countries, and conducted a counter-mapping of ADM in social services in Australia. Data was collected via web scraping of government websites and reports, and of major and specialist IT media outlets, to build a detailed history and understanding of each ADM system identified, supplemented by interviews with developers and user stakeholders.
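The collection step might look roughly like the following Python sketch using requests and BeautifulSoup; the URL and keyword list are placeholders, not the project’s actual sources or search terms.

import requests
from bs4 import BeautifulSoup

KEYWORDS = ("automated decision", "algorithm", "machine learning")

def scan_page(url: str) -> list:
    # Return paragraphs on the page that mention ADM-related terms.
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    paragraphs = (p.get_text(" ", strip=True) for p in soup.find_all("p"))
    return [p for p in paragraphs if any(k in p.lower() for k in KEYWORDS)]

# Example with a hypothetical URL:
# hits = scan_page("https://www.example.gov.au/annual-report")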

Major outputs included the Mapping ADM systems in Australian Social Services report, presentations at national and international conferences and webinars, and articles in leading journals, including Qualitative Inquiry.

Major benefits of this project include:
• Improved public understanding of which ADM systems are being used in social services in Australia and the Asia-Pacific
• Increased focus by public institutions, such as the NSW Ombudsman, on monitoring and mapping the ADM systems used in governmental decision-making, to improve transparency
• Attention from major players, such as IBM, to the ways ADM systems are used in social services delivery, their impacts on service users’ wellbeing, and different ways of thinking about the rollout of new technologies in the sector (e.g., using trauma-informed practice principles).

RESEARCHERS

Prof Paul Henman, Lead Investigator, University of Queensland
Brooke Ann Coco, PhD Student, RMIT University
Dr Lyndal Sleep, Affiliate, Central Queensland University

PARTNERS

Algorithm Watch (Germany)

Adaptive, Multi-Factor Balanced, Regulatory Compliant Routing ADM Systems

PROJECT SUMMARY

Focus Area: Transport and Mobilities
Research Program: Machines
Status: Active

This project develops new approaches to combine fairness, transparency and safety guarantees for ADM systems, such as machine learning based systems. We focus on resource allocation problems where there is a high level of uncertainty about the demand for resources, such as in the response to natural disasters or cyber security incidents.

In particular, we consider the problem of how criminal and malicious agents can manipulate such decision-making problems for their own advantage, and what measures can be taken to detect this manipulation.
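One very simple form such a manipulation check could take is flagging demand observations that deviate sharply from recent history. The Python sketch below uses a median-based robust z-score; it is an illustrative assumption, not the project’s actual detection method.

import numpy as np

def flag_manipulated(demand: np.ndarray, threshold: float = 3.5) -> np.ndarray:
    # Flag observations far from the median, scaled by the median absolute
    # deviation (robust to a few corrupted points).
    median = np.median(demand)
    mad = np.median(np.abs(demand - median))
    mad = mad if mad > 0 else 1e-9  # avoid division by zero
    robust_z = 0.6745 * (demand - median) / mad
    return np.abs(robust_z) > threshold

demand = np.array([10, 12, 11, 9, 13, 95, 10])  # one inflated request
print(flag_manipulated(demand))                  # only the spike is flagged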

RESEARCHERS

Prof Christopher Leckie, Lead Investigator, University of Melbourne
Prof Flora Salim, Chief Investigator, UNSW
Prof Mark Sanderson, Chief Investigator, RMIT University
Dr Jeffrey Chan, Associate Investigator, RMIT University
Dr Sarah Erfani, Associate Investigator, University of Melbourne

Considerate and Accurate Multi-party Recommender Systems for Constrained Resources

PROJECT SUMMARY

Focus Areas: News and Media, Transport and Mobility, Health, and Social Services
Research Program: Machines
Status: Active

This project will create a next-generation recommender system that enables equitable allocation of constrained resources. The project will produce novel hybrid socio-technical methods and resources to create a Considerate and Accurate REcommender System (CARES), evaluated through social science and behavioural economics lenses.

CARES will transform the sharing economy by delivering systems and methods that improve user and non-user experiences, business efficiency, and corporate social responsibility.
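As a sketch of the multi-party idea, a candidate item can be scored by combining the interests of several stakeholders rather than the user’s alone; the stakeholder terms and weights below are illustrative assumptions, not CARES’s actual formulation.

def multiparty_score(user_relevance, provider_exposure_deficit,
                     nonuser_impact, weights=(0.6, 0.3, 0.1)):
    # Higher is better: relevance for the user, a boost for under-exposed
    # providers, and a penalty for negative impact on non-users.
    w_user, w_provider, w_nonuser = weights
    return (w_user * user_relevance
            + w_provider * provider_exposure_deficit
            - w_nonuser * nonuser_impact)

candidates = {
    "item_a": multiparty_score(0.9, 0.1, 0.0),
    "item_b": multiparty_score(0.7, 0.8, 0.0),  # less relevant but fairer
}
print(max(candidates, key=candidates.get))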

PARTICIPATE

Participate in an online user study on multi-party fair recommendations

We are looking for users of the Spotify music application to complete a brief online study. In the study, you are expected to browse music recommendations and answer a set of questions.

The study is expected to take less than 15 minutes, and you will receive an AU$10 gift card as a thank you.

You will need to have an active Spotify account with at least 6 months of listening history to take part.

To verify your eligibility and participate in the study, please fill out this form.

RESEARCHERS

Prof Mark Sanderson, Lead Investigator, RMIT University
Prof Christopher Leckie, Chief Investigator, University of Melbourne
Prof Flora Salim, Chief Investigator, UNSW
Dr Jeffrey Chan, Associate Investigator, RMIT University
Dr Danula Hettiachchi, Research Fellow, RMIT University

PARTNERS

University of Amsterdam

A taxonomy of decision-making machines

PROJECT SUMMARY

Focus Area(s): News and Media, Health, Social Services, Transport and Mobilities
Research Program: Machines
Status: Active

The concept of Automated Decision Making (ADM) is relatively uncommon compared to Artificial Intelligence (AI). An important challenge for the Centre and for researchers is to clarify the meaning of ADM and how it relates to and differs from similar concepts.

This project sought to bring conceptual clarity to this field of concepts. It then developed a way to conceptualise the various dimensions of ADM systems, providing a taxonomy of ADM. The project engaged with, and augments, the 2022 OECD Framework for the Classification of AI Systems.

The purpose of identifying an ADM taxonomy was to enable more systematic identification and analysis of ADM. Such a systematic approach allows ADM systems from different projects to be compared.
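One way such a taxonomy could be operationalised for systematic comparison is as a structured record; the Python sketch below has fields that loosely echo the OECD framework’s dimensions (context, data and input, model, task and output), but they are illustrative assumptions, not the project’s actual taxonomy.

from dataclasses import dataclass

@dataclass
class ADMSystemRecord:
    name: str
    deployment_context: str      # e.g. "social security payments"
    data_inputs: list            # e.g. ["income records", "case notes"]
    model_type: str              # e.g. "rules-based", "machine learning"
    decision_role: str           # e.g. "fully automated", "decision support"
    affected_parties: list       # who the decisions bear on

# A hypothetical entry, to show how two systems become directly comparable.
example = ADMSystemRecord(
    name="ExampleEligibilityChecker",
    deployment_context="welfare payment eligibility",
    data_inputs=["income records"],
    model_type="rules-based",
    decision_role="decision support",
    affected_parties=["benefit applicants"],
)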

Based on the formative work of the project, draft definitions and a taxonomy were adopted and revised in both the ADM+S Mapping ADM in social services in Australia project and the NSW Ombudsman-funded ADM+S project Mapping ADM in NSW state and local governments.

It is anticipated that an ADM+S project report will be published.

RESEARCHERS

Prof Paul Henman, Lead Investigator, University of Queensland
Dr Jake Goldenfein, Chief Investigator, University of Melbourne
Prof Christopher Leckie, Chief Investigator, University of Melbourne
Prof Jason Potts, Chief Investigator, RMIT University
Prof Flora Salim, Chief Investigator, UNSW
Prof Mark Sanderson, Chief Investigator, RMIT University
Prof Julian Thomas, Chief Investigator, RMIT University
Dr Jeffrey Chan, Associate Investigator, RMIT University
Dr Philip Gillingham, Associate Investigator, University of Queensland
Dr Lyndal Sleep, Affiliate, Central Queensland University

PARTNERS

AlgorithmWatch
Data & Society