Automated informality: generative frictions in ADM systems

PROJECT SUMMARY

Focus Areas: News & Media
Research Programs: People, Data, Machines & Institutions
Status: Active

Informality, especially in economic practice, poses a recurrent problem in development literature. Economic informality is broadly associated with weaker economic outcomes: countries with larger informal sectors have lower per capita incomes, greater poverty, less financial development, and weaker growth in output, investment, and productivity. Regimes across the globe have therefore sought to intervene in and formalise the informal sector through worker registration drives, technology transfers, and other interventions that attempt to expand the reach of the formal economy, bringing swaths of the working population under regimes of taxation, workplace safety, and enhanced productivity.

Recently, such interventions have turned on the possibilities and promises of automation. Industrial robotics systems boost manufacturing productivity, digital platforms make possible the immediate and traceable circulation of funds, and biometric databases enable automated identity verification in commercial and civic contexts. Here, new technologies of automation hold out the potential to formalise economic practices by extending standardised protocols in the form of apps, database architectures, and machinery.

Scholars of informal work have emphasised that informal and formal economic practices have long been intertwined, connected through highly contingent exchanges of personnel, ideas, content, and capital. Especially in the Global South, the informal is not exceptional but typical, with informality characterising most economic practices. In India, for example, the rise of formal IT outsourcing firms has been matched by the growth of temporary and unregulated service workers who clean the offices, fix the meals, and provide transportation for professional employees.

In Brazil, wageless trash collectors sort recyclable items from Rio de Janeiro’s municipal waste dumps, enabling the operation of this public infrastructure while extracting a livelihood from reselling the waste. Far from eliminating informal economies, contemporary regimes of accumulation generate value by weaving formal and informal practices together.

Currently missing from this body of scholarship is the range of contingent and non-standard work that proliferates from the frictions within automated systems, understood as complex self-coordinating and self-organising mechanisms. This type of work – which we call small automation – differs from gig work in that it is unregulated, opportunistic, and marginalised; like ghost work it is largely invisible and opaque, but unlike ghost work, its invisibility is key to its survival.

Small automation is distinct from both gig work and ghost work in that it encompasses a range of informal enterprises, created by informal actors, that circumvent, exploit, or co-opt automated systems, rather than being deployed by Silicon Valley to develop new technologies.

This project maps a range of informal automated activities that proliferate within automated systems across various empirical domains, such as click farming, CAPTCHA hacking, phone farming, dropshipping, OTP scams, fraudulent loan apps, and free jacking. The proliferation of automated informality can have unexpected implications for the operation of automated systems and for our information environment more generally. Our focus on mapping automated informality supplements current research on gig work and ghost work while demonstrating the theoretical and empirical value of examining automated systems in context.

RESEARCHERS

Dr Dang Nguyen, Lead Investigator, RMIT University
Dr Danula Hettiachchi, Associate Investigator, RMIT University
Rakesh Kumar, PhD Student, Western Sydney University
Dr Adam Sargent, Affiliate, Australian National University (ANU)

Humans, Machines, and Decision Responsibility

PROJECT SUMMARY

Focus Areas: News & Media, Social Services, Mobilities, Health
Research Programs: Institutions, Machines
Status: Active

Automated decision-making provokes a range of anxieties around transparency, equality, and accountability. A key response has been the call to ‘re-humanise’ automated decisions, with the hope that human control of automated systems might defend human values from mindless technocracy. Regulation of automated decision-making and AI often embeds this form of human-centrism by prescribing a ‘human in the loop’ and the need for automated decisions to be ‘explained’. These requirements are central elements of the risk-based approaches to AI regulation currently in development.

Despite the intuitive appeal of these approaches, empirical research is revealing their limitations and complexities. AI explanations sometimes provide little that is useful for decision subjects or decision makers, and risk distracting from more meaningful interrogation of why decisions are made. A human in the loop sometimes functions as a rubber stamp for automated decisions, cleaving accountability away from the true sites of decision responsibility.

This project seeks to generate better understandings of the functions, capacities, and normative role of humans within automated decision systems. It will investigate the ways that automated systems ought to explain or be explained to humans within decision processes, and how elements of decision-making, including processes, responsibility, authority, and what counts as a decision itself, are fragmented and redistributed between humans, machines, and organisations. The goal is to generate empirical knowledge of how automated systems, humans, and organisations interact in different contexts when making decisions, and to move past outdated understandings of decision-making that are hindering effective governance of automation in new decision contexts.

RESEARCHERS

Dr Jake Goldenfein, Lead Investigator, University of Melbourne
Prof Jean Burgess, Chief Investigator, QUT
Prof Paul Henman, Chief Investigator, University of Queensland
Prof Christopher Leckie, Chief Investigator, University of Melbourne
Prof Flora Salim, Chief Investigator, UNSW
Prof Julian Thomas, Chief Investigator, RMIT University
Prof Kim Weatherall, Chief Investigator, University of Sydney
Dr Henry Fraser, Research Fellow, QUT
Dr Awais Hameed Khan, Research Fellow, University of Queensland
Dr Fan Yang, Research Fellow, University of Melbourne
Libby Young, PhD Student, University of Sydney
Joe Brailsford, Affiliate, University of Melbourne
Dr Fabio Mattioli, Affiliate, University of Melbourne
Dr Christopher O’Neill, Affiliate, Deakin University
Dr Ash Watson, Affiliate, UNSW

Designing Automated Tools to Support Welfare Rights Advocacy

PROJECT SUMMARY

Focus Areas: Social Services
Research Program: Machines
Status: Active

Welfare rights lawyers across Australia advocate for claimants of income support payments (e.g., unemployment benefits, the disability support pension, and the family tax benefit) paid by Services Australia (Centrelink). Claimants rely on welfare payments as a substantial part of their income, and often depend on welfare rights organisations to assist them in disputing decisions by Centrelink. These disputes can involve alleged debts due to overpayment, cessation of payments, or outright denial of payments.

When engaging a client to support a dispute claim, welfare rights lawyers often submit a Freedom of Information (FOI) request to Centrelink to access the client’s files. Centrelink provides this information as a large PDF document (colloquially referred to as THE BRICK) containing hundreds of pages of client data, including case notes and screenshots from Centrelink systems, laden with internal acronyms. Lawyers must then trawl through and make sense of this detailed document, reconstructing the history of a client’s case while attempting to decipher the decisions made by Centrelink and their rationale. This is a time-consuming and onerous process that reduces the time a lawyer can spend engaging with their client and making the legal arguments for the case.

Working closely with welfare rights lawyers and their teams, advocacy groups, and users of social services, this project aims to collaboratively design, prototype, and pilot an automated data extraction tool to support welfare rights lawyers in making sense of Services Australia (Centrelink) system-generated FOI documents.

This project explores the following research questions:

  1. How can we digitally scaffold and support sense-making of Freedom of Information (FOI) system-generated responses through a data extraction tool outside the government system?
  2. What methods/approaches can facilitate collaborative design of Automated Decision-Making (ADM) support systems in social services with key stakeholders?
  3. How might we reclaim and democratise sense-making/deciphering of government ADM outputs from outside of government systems — designing for controlled activism?
  4. What impact can using an ADM support system, such as the data extraction tool, have on organisational workflow and the capacity of welfare rights lawyers to support their clients?

Key objectives are to:

  • Design, prototype and build an automated data extraction tool to support welfare rights lawyers in sense-making of system-generated FOI documents (a rough sketch of this extraction step appears below)
  • Involve welfare rights lawyers, advocacy groups, and social service users and professionals in co-design of ADM tools to support such sense-making
  • Evaluate the implications of the tool for organisational practice and processes
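
The sketch below illustrates, in a deliberately simplified way, the kind of extraction step such a tool could automate. It is a hypothetical sketch only: it assumes the pypdf library, a text-based (not scanned) PDF, and DD/MM/YYYY case-note dates; the file name and patterns are placeholders rather than details of the actual tool.

```python
# Minimal, hypothetical sketch of extracting structure from a large FOI PDF.
# Assumes pypdf (pip install pypdf); "brick.pdf" and the patterns are placeholders.
import re
from collections import Counter

from pypdf import PdfReader

reader = PdfReader("brick.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Pull out dated lines so a lawyer can skim a rough chronology of the case.
date_line = re.compile(r"^(\d{2}/\d{2}/\d{4})\b(.*)$", re.MULTILINE)
timeline = date_line.findall(text)
# Sort DD/MM/YYYY keys chronologically (year, month, day).
timeline.sort(key=lambda pair: (pair[0][6:10], pair[0][3:5], pair[0][0:2]))

# Surface the internal acronyms that make the document hard to read.
acronyms = Counter(re.findall(r"\b[A-Z]{2,6}\b", text))

for date, note in timeline[:20]:
    print(date, note.strip()[:80])
print("Most frequent acronyms:", acronyms.most_common(10))
```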

RESEARCHERS

Prof Paul Henman, Lead Investigator, University of Queensland
Prof Terry Carney, Associate Investigator, University of Sydney
Dr Robert Mullins, Associate Investigator, University of Queensland
Dr Awais Hameed Khan, Research Fellow, University of Queensland
Dr Abdul Obeid, Data Engineer, QUT
Dan Trang, Software Developer, QUT

PARTNERS

Economic Justice Australia
Services Australia
Welfare Rights Centre

Trauma-informed AI: Developing and testing a practical AI audit framework for use in social services

PROJECT SUMMARY

Focus Areas: Social Services
Research Program: Machines
Status: Completed

Artificial Intelligence (AI) is increasingly being used in the delivery of social services. While it offers opportunities for more efficient, effective and personalised service delivery, AI can also create serious problems: reinforcing disadvantage, generating trauma, or re-traumatising service users.

Conducted by a multi-disciplinary research team with extensive expertise at the intersection of social services and digital technology, this project seeks to co-design an innovative trauma-informed AI audit framework to assess the extent to which an AI system’s decisions may generate new trauma or re-traumatise service users.

The value of a trauma-informed AI audit framework is not simply to assess digital technologies after they are built and in operation, but also to inform designs of digital technologies and digitally enabled social services from their inception.

The framework will be road-tested using multiple case studies of AI use in child and family services, domestic and family violence services, and social security/welfare payments.

PUBLIC RESOURCES

Building a Trauma-Informed Algorithmic Assessment Toolkit

Target audience: Social service organisations

This Toolkit has been designed to assist organisations in their use of automation in service delivery at any stage of their automation journey: ideation; design; development; piloting; deployment or evaluation. While of particular use for social service organisations working with people who may have experienced past trauma, the tool will be beneficial for any organisation wanting to ensure safe, responsible and ethical use of automation and AI.


RESEARCHERS

Prof Paul Henman, Lead Investigator, University of Queensland
Dr Philip Gillingham, Associate Investigator, University of Queensland
Dr Lyndal Sleep, Affiliate, Central Queensland University
Dr Suzanna Fay, Senior Lecturer, University of Queensland

PARTNERS

University of Notre Dame-IBM Tech Ethics Lab

The Toxicity Scalpel: Prototyping and evaluating methods to remove harmful generative capability from foundation models

PROJECT SUMMARY

Focus Areas: News and Media
Research Program: Machines
Status: Completed

AI language models have made significant strides over the past few years. Computers are now capable of writing poetry and computer code, producing human-like text, summarising documents, engaging in natural conversation about a variety of topics, solving math problems, and translating between languages.

This rapid progress has been made possible by a trend in AI development where one general ‘foundation’ model is developed (usually using a large dataset from the internet) and then adapted many times to fit diverse applications, rather than beginning from scratch each time.

This method of ADM development can appear time- and cost-effective, but it ‘bakes in’ negative tendencies such as the creation of toxic content, misogyny, or hate speech at the foundation layer, which subsequently spread to each downstream application.

The goal of this project is to examine how language models used in ADM systems might be improved by making modifications at the foundation model stage, rather than at the application level, where computational interventions, social responsibility, and legal liability have historically focussed.
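
To make the contrast concrete, the toy sketch below intervenes at the model’s output layer by masking the logits of a banned-token list before sampling, so the restriction propagates to every downstream application. This is a naive baseline for illustration only, not the project’s method: the vocabulary and token ids are invented, and genuinely removing a generative capability from a foundation model is a far harder problem than masking tokens.

```python
# Toy illustration of a model-layer intervention: make a set of harmful
# tokens unsamplable by masking their logits. Token ids and the stand-in
# "model" are invented for the example.
import numpy as np

def suppress_tokens(logits: np.ndarray, banned_ids: list[int]) -> np.ndarray:
    """Return a copy of next-token logits with banned tokens made unsamplable."""
    out = logits.copy()
    out[banned_ids] = -np.inf  # exp(-inf) = 0, so these tokens get probability 0
    return out

def sample_next_token(logits: np.ndarray, rng: np.random.Generator) -> int:
    probs = np.exp(logits - logits.max())  # softmax; exp(-inf) underflows to 0
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(0)
vocab_logits = rng.normal(size=50)  # stand-in for a language model's output layer
banned = [3, 17, 42]                # ids of tokens deemed toxic (hypothetical)
token = sample_next_token(suppress_tokens(vocab_logits, banned), rng)
assert token not in banned
```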

PUBLICATIONS

Measuring Misogyny in Natural Language Generation: Preliminary Results from a Case Study on two Reddit Communities, 2023

Snoswell, A., Nelson, L., Xue, H., Salim, F., Suzor, N., & Burgess, J.

Journal article

RESEARCHERS

Prof Flora Salim, Chief Investigator, UNSW
Prof Nic Suzor, Chief Investigator, QUT
Dr Aaron Snoswell, Associate Investigator, QUT
Dr Hao Xue, Associate Investigator, UNSW
Lucinda Nelson, PhD Student, QUT

Transparent Machines: From Unpacking Bias to Actionable Explainability

PROJECT SUMMARY

Focus Areas: News and Media, Transport and Mobility, Health, and Social Services
Status: Active

ADM systems, along with their software, algorithms, and models, are often designed as “black boxes”, with little effort placed on understanding how they work. This lack of understanding impacts not only the final users of ADM systems, but also the stakeholders and developers, who need to be accountable for the systems they are creating. The problem is often exacerbated by inherent biases in the data on which the models are trained.

Further, the widespread usage of deep learning has led to an increasing number of minimally interpretable models being used, as opposed to traditional models like decision trees, or even Bayesian and statistical machine learning models.

Explanations of models are also needed to reveal potential biases in the models themselves and assist with their debiasing.

This project aims to unpack the biases in models that may come from the underlying data, or biases in software (e.g., a simulation) that could be designed with a specific purpose and angle from the developers’ point of view. The project also aims to investigate techniques to generate diverse, robust and actionable explanations for a range of problems, data types, and modalities, from large-scale unstructured data to highly varied sensor data and multimodal data. To this end, we look to generate counterfactual explanations that depend on both the data distribution and the local behaviour of the black-box model, and to offer new metrics to measure the opportunity cost of choosing one counterfactual over another. We further aim to explore the intelligibility of different representations of explanations to diverse audiences through an online user study.
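
As a minimal illustration of counterfactual explanation in this spirit (not the project’s published method), the sketch below randomly searches near an input for a point that flips a stand-in black-box classifier, scoring candidates with a simple distance-based proxy for opportunity cost. The model, step sizes, and metric are all illustrative assumptions.

```python
# Model-agnostic counterfactual search, illustrative only.
import numpy as np

def predict(x: np.ndarray) -> int:
    """Stand-in black-box classifier with a fixed linear boundary."""
    w, b = np.array([1.5, -2.0]), 0.3
    return int(x @ w + b > 0)

def find_counterfactual(x: np.ndarray, target: int, step: float = 0.05,
                        max_iter: int = 5000, seed: int = 0):
    """Random local search for a nearby input that the model labels `target`."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(max_iter):
        cand = x + rng.normal(scale=step * rng.integers(1, 10), size=x.shape)
        if predict(cand) == target:
            # Keep the flipped candidate with the lowest "opportunity cost",
            # proxied here by distance from the original input.
            if best is None or np.linalg.norm(cand - x) < np.linalg.norm(best - x):
                best = cand
    return best

x = np.array([0.2, 0.4])
cf = find_counterfactual(x, target=1 - predict(x))
print("counterfactual:", cf,
      "cost:", None if cf is None else np.linalg.norm(cf - x))
```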

PUBLICATIONS

i-Align: An Interpretable Knowledge Graph Alignment Model, 2023

Salim, F., Scholer, F., et al.

Journal article

TransCP: A Transformer Pointer Network for Generic Entity Description Generation with Explicit Content-Planning, 2023

Salim, F., et al.

Journal article

Contrastive Learning-Based Imputation-Prediction Networks for In-hospital Mortality Risk Modeling Using EHRs, 2023

Salim, F., et al.

Conference paper

How Robust is your Fair Model? Exploring the Robustness of Diverse Fairness Strategies, 2023

Small, E., Chan, J., et al.

Journal article

Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness, 2023

Small, E., Sokol, K., et al.

Conference paper

Helpful, Misleading or Confusing: How Humans Perceive Fundamental Building Blocks of Artificial Intelligence Explanations, 2023

Small, E., Xuan, Y., et al.

Workshop paper

Navigating Explanatory Multiverse Through Counterfactual Path Geometry, 2023

Small, E., Xuan, Y., Sokol, K.

Workshop paper

Mind the gap! Bridging explainable artificial intelligence and human understanding with Luhmann’s Functional Theory of Communication, 2023

Sokol, K., et al.

Workshop paper

Measuring disentangled generative spatio-temporal representation, 2022

Chan, J., Salim, F., et al.

Conference paper

FAT Forensics: A Python toolbox for algorithmic fairness, accountability and transparency, 2022

Sokol, K., et al.

Journal article

Analysing Donors’ Behaviour in Non-profit Organisations for Disaster Resilience: The 2019–2020 Australian Bushfires Case Study, 2022

Chan, J., Sokol, K., et al.

Conference paper

BayCon: Model-agnostic Bayesian Counterfactual Generator, 2022

Sokol, K., et al.

Conference paper

RESEARCHERS

Prof Flora Salim, Lead Investigator, UNSW
Prof Daniel Angus, Chief Investigator, QUT
Prof Paul Henman, Chief Investigator, University of Queensland
Prof Mark Sanderson, Chief Investigator, RMIT University
Dr Jeffrey Chan, Associate Investigator, RMIT University
Prof Falk Scholer, Associate Investigator, RMIT University
Dr Damiano Spina, Associate Investigator, RMIT University
Prof Maarten de Rijke, Partner Investigator, University of Amsterdam
Peibo Li, PhD Student, UNSW
Edward Small, PhD Student, RMIT University
Kacper Sokol, Affiliate, ETH Zurich

PARTNERS

University of Amsterdam

Quantifying and Measuring Bias and Engagement

PROJECT SUMMARY

Focus Areas: News & Media, Health
Research Programs: Machines, Data
Status: Active

Automated decision-making systems and machines – including search engines and intelligent assistants – are designed, evaluated, and optimised using frameworks that model the users who will interact with them. These models are typically a simplified representation of users (e.g., using the relevance of items delivered to the user as a surrogate for system quality) that operationalises the development process of such systems. A grand open challenge is to make these frameworks more complete by including new aspects, such as fairness, that are as important as the traditional definitions of quality, to inform the design, evaluation, and optimisation of such systems.

Recent developments in the machine learning, information access, and AI communities attempt to define mechanisms to minimise the creation and reinforcement of unintended cognitive biases.

However, a number of research questions related to quantifying and measuring bias and engagement remain unexplored:
– Is it possible to measure bias by observing users interacting with search engines or intelligent assistants?
– How do users perceive fairness, bias, or trust? How can these perceptions be measured effectively?
– To what extent can sensors in wearable devices and interaction logging (e.g., search queries, app swipes, notification dismissals) inform the measurement of bias and engagement?
– Are the implicit signals captured from sensors and interaction logs correlated with explicit human ratings with respect to bias and engagement?

This research addresses the questions above by focusing on information access systems that involve automated decision-making components. By partnering with experts in fact-checking, we use misinformation management as the main scenario of study, given that bias and engagement play an important role in three main elements of automated decision-making processes: the user, the system, and the information that is presented and consumed.

The methodologies considered to address these questions include lab user studies (e.g., observational studies), and the use of crowdsourcing platforms (e.g., Amazon Mechanical Turk). The data collection processes include: logging human-system interactions; sensor data collected using wearable devices; and questionnaires.
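
As a minimal, hypothetical illustration of the final research question, the sketch below tests whether one implicit signal tracks explicit ratings using a rank correlation. The numbers are invented stand-ins for interaction logs and questionnaire responses, and scipy is assumed to be available.

```python
# Correlating an implicit signal with explicit ratings (illustrative data).
import numpy as np
from scipy.stats import spearmanr  # pip install scipy

# e.g. per-participant dwell time on a search result, from interaction logs
dwell_time = np.array([12.1, 5.3, 8.7, 20.4, 3.2, 15.8, 9.9, 6.4])
# the same participants' 1-5 self-reported engagement ratings
engagement = np.array([4, 2, 3, 5, 1, 4, 3, 2])

rho, p_value = spearmanr(dwell_time, engagement)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```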

PUBLIC RESOURCES

Open Source Software: Factchecking – Presentations

Target audience: Researchers, Software Developers
Code type: Python


PUBLICATIONS

Quantifying and Measuring Bias and Engagement in Automated Decision-Making, 2024

Spina, D., Hettiachchi, D., McCosker, A.

Report

Human-AI Cooperation to Tackle Misinformation and Polarization, 2023

Spina, D., Sanderson, M., et al.

Journal article

Examining the Impact of Uncontrolled Variables on Physiological Signals in User Studies for Information Processing Activities, 2023

Ji, K., Spina, D., et al.

Conference paper

Can Generative LLMs Create Query Variants for Test Collections? 2023

Alaofi, M., Sanderson, M., et al.

Conference paper

Mitigating Negative Transfer with Task Awareness for Sexism, Hate Speech, and Toxic Language Detection, 2023

Spina, D., Rosso, P., Felipe Magnossão de Paula, A.

Conference paper

Do Social Media Users Change Their Beliefs to Reflect those Espoused by Other Users? 2023

Alknjr, H.

Conference paper

How do Human and Contextual Factors Affect the Way People Formulate Queries? 2023

Abu One, N.

Conference paper

Towards Detecting Tonic Information Processing Activities with Physiological Data, 2023

Ji, K., Hettiachchi, D., et al.

Conference paper

Ranking Interruptus: When Truncated Rankings Are Better and How to Measure That, 2022

Spina, D., et al.

Conference paper

Where Do Queries Come From? 2022

Alaofi, M., Spina, D., et al.

Conference paper

User-centered Non-factoid Answer Retrieval, 2022

Alaofi, M.

Conference paper

A Crowdsourcing Methodology to Measure Algorithmic Bias in Black-box Systems: A Case Study with COVID-related Searches, 2022

Scholer, F., Spina, D., Chia, H., Le, B.

Conference paper

AWARDS

2023 Pervasive and Ubiquitous Computing (UbiComp) International Symposium on Wearable Computing (ISWC)
Student Challenge Award
zzzGPT: An Interactive GPT Approach to Enhance Sleep Quality
Yonchanok (Pro) Khaokaew, Kaixin Ji, Marwah Alaofi, Hiruni Kegalle, Thuc Hanh Nguyen (UNSW) and Prof Flora Salim

2023 Pervasive and Ubiquitous Computing (UbiComp) International Symposium on Wearable Computing (ISWC)
Best Poster Award
Towards Detecting Tonic Information Processing Activities with Physiological Data
Dr Damiano Spina, Kaixin Ji, Prof Falk Scholer, Dr Danula Hettiachchi and Prof Flora Salim

17th Conference on Evaluation of Information Access Technologies (NTCIR-17)
Best Oral Presentation
Sachin Pathiyan Cherumanal

RESEARCHERS

Dr Damiano Spina, Lead Investigator, RMIT University
Assoc Prof Anthony McCosker, Chief Investigator, Swinburne University
Prof Flora Salim, Chief Investigator, UNSW
Prof Mark Sanderson, Chief Investigator, RMIT University
Dr Danula Hettiachchi, Associate Investigator, RMIT University
Assoc Prof Jenny Kennedy, Associate Investigator, RMIT University
Prof Falk Scholer, Associate Investigator, RMIT University
Nuha Abu Onq, PhD Student, RMIT University
Marwah Alaofi, PhD Student, RMIT University
Hmdh Alknjr, PhD Student, RMIT University
Sachin Pathiyan Cherumanal, PhD Student, RMIT University
Kaixin Ji, PhD Student, RMIT University

PARTNERS

Australian Broadcasting Corporation
Algorithm Watch (Germany)
Bendigo Hospital
Google Australia
RMIT ABC Fact Check

Mapping ADM Across Sectors

PROJECT SUMMARY

Focus Areas: News and Media, Transport and Mobility, Health, and Social Services
Research Programs: Data, Machines, Institutions, and People
Status: Active

ADM systems have the potential to greatly improve the overall quality of life in society, but they may also exacerbate social, political, and economic inequality. The role they play in reinforcing, reproducing, and reconfiguring power relations is, as recent events demonstrate, a key concern with respect to the deployment of automated decision making systems. When such systems are used to decide how benefits, resources, services, or information are allocated in society, they bear directly on the character and quality of life in that society. We are interested in both the potential benefits of the deployment of the technology and the potential harms. We do not treat such systems in the abstract, but are centrally concerned with the social, political, and economic relations in which they are embedded and which shape their deployment. A key question for the ADM+S Centre, in other words, is not just how best to design and deploy the technology, but what economic and political arrangements are most compatible with their fair, ethical, responsible, and democratic use.

The Social Issues in Automated Decision-Making report brings together material collected from discussions with leaders in the Centre’s focus areas and feedback from an international collection of experts in their respective domains. For each focus area we followed a similar methodology for canvassing key social issues. We started by discussing key social issues with Focus Area leaders and their teams. We then canvassed the academic literature, reports by industry groups and relevant independent organisations, and media coverage. For each area, we sought to identify key applications of ADM and the possible social benefits and harms with which they are associated. We also sought to identify continuities in these social issues both within and across the Centre’s main focus areas.

This is neither a final nor a definitive report. It marks the first step in the Centre’s ongoing social issues mapping project. The document will develop over time to reflect the insights that emerge from ongoing collaborations.


PUBLICATIONS

Social Issues in Automated Decision Making, 2022

O’Neill, C., Sadowski, J., Andrejevic, M., et al.

Report

RESEARCHERS

Prof Mark Andrejevic, Lead Investigator, Monash University
Prof Paul Henman, Chief Investigator, University of Queensland
Assoc Prof Ramon Lobato, Associate Investigator, RMIT University
Dr Jathan Sadowski, Associate Investigator, Monash University
Dr Georgia van Toorn, Associate Investigator, UNSW
Dr Kelly Lewis, Research Fellow, Monash University
Dr Christopher O’Neill, Research Fellow, Monash University
Dr Daniel Binns, Affiliate, RMIT University
Dr Lyndal Sleep, Affiliate, Central Queensland University

PARTNERS

Office of the Victorian Information Commissioner
Australian Red Cross

Mapping ADM Machines in Australia and Asia-Pacific

PROJECT SUMMARY

Focus Area: Social Services
Research Program: Machines
Status: Completed

This project aimed to map ADM machines in social services in Australia and the Asia-Pacific, to provide foundational empirical and conceptual knowledge of ADM in social services beyond Europe and North America. Viewing ADM as an assemblage of data systems and decision-making in social-political context, the project built a knowledge base about which ADM systems are being used in social services delivery in Australia and the Asia-Pacific, how they are used, and who is affected.

Based on conceptual definitions and a framework of ADM systems, the project provided a detailed mapping of ADM systems used in social services in Australia, worked with academics across the Asia-Pacific to map ADM systems used in social services in their countries, and conducted a counter-mapping of ADM in social services in Australia. Data was collected via web scraping of government websites, government reports, and major and specialist IT media outlets to build a detailed history and understanding of each ADM system identified, supplemented by interviews with developers and user stakeholders.
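
A deliberately simplified sketch of the web-scraping step might look like the following. The URL and keyword list are invented, requests and beautifulsoup4 are assumed, and the actual study combined such scraping with reports, media coverage, and interviews.

```python
# Fetch one page and count mentions of ADM-related keywords (illustrative).
import requests
from bs4 import BeautifulSoup  # pip install requests beautifulsoup4

KEYWORDS = ["automated decision", "algorithm", "machine learning", "risk score"]

def scan_page(url: str) -> dict:
    """Count keyword mentions in the visible text of a single web page."""
    html = requests.get(url, timeout=30).text
    text = BeautifulSoup(html, "html.parser").get_text(" ").lower()
    return {kw: text.count(kw) for kw in KEYWORDS}

# Example call (hypothetical URL):
# print(scan_page("https://www.example.gov.au/annual-report"))
```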

Major outputs included the Mapping ADM Systems in Australian Social Services report, as well as presentations at national and international conferences, webinars, and journal articles in leading journals, including Qualitative Inquiry.

Major benefits of this project include:
• Improved public understanding of which ADM systems are being used in social services in Australia and the Asia-Pacific
• Increased focus by public institutions, like the NSW Ombudsman, on monitoring and mapping which ADM systems are used in governmental decision-making to improve transparency
• Attention by major players, like IBM, to the way ADM systems are used in social services delivery, their impacts on service users’ wellbeing, and different ways to think about the rollout of new technologies in the sector (e.g., using trauma-informed practice principles)

PUBLICATIONS

Submission by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) to the Royal Commission into the Robodebt Scheme, 2023

ADM+S

Submission

Female dependents, individual customers and promiscuous digital personas: The multiple governing of women through the Australian social security couple rule, 2023

Sleep, L.

Journal article

ADM in child and family services: mapping what is happening and what we know, 2022

Henman, P., Coco, B., Sleep, L.

Working paper

Mapping ADM in Australian Social Services, 2022

Sleep, L., Coco, B., Henman, P.

Report

From Making Automated Decision Making Visible to Mapping the Unknowable Human: Counter-Mapping Automated Decision Making in Social Services in Australia, 2022

Sleep, L.

Journal article

Digital Inclusion and Social Services Delivery – Special Edition Journal of Social Inclusion, 2022

Sleep, L., Harris, P.

Journal special ed.

The importance of digital inclusion in accessing care and support in our increasingly digitised world, 2021

Sleep, L., Harris, P.

Journal article

RESEARCHERS

Prof Paul Henman, Lead Investigator, University of Queensland
Brooke Ann Coco, PhD Student, RMIT University
Dr Lyndal Sleep, Affiliate, Central Queensland University

PARTNERS

Algorithm Watch (Germany)

Adaptive, Multi-Factor Balanced, Regulatory Compliant Routing ADM Systems

PROJECT SUMMARY

Focus Area: Transport and Mobilities
Research Program: Machines
Status: Active

This project develops new approaches to combining fairness, transparency, and safety guarantees for ADM systems, such as machine-learning-based systems. We focus on resource allocation problems where there is a high level of uncertainty about the demand for resources, such as in responses to natural disasters or cyber security incidents.

In particular, we consider the problem of how criminal and malicious agents can manipulate such decision-making problems for their own advantage, and what measures can be taken to detect this manipulation.
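
One naive way to make the manipulation-detection idea concrete, offered as a toy baseline rather than the project’s approach, is to flag demand observations that deviate sharply from recent history. The window, threshold, and data below are invented.

```python
# Rolling z-score anomaly flagging on a demand time series (toy baseline).
import numpy as np

def flag_anomalies(demand: np.ndarray, window: int = 24, z_thresh: float = 3.0) -> list:
    """Return indices where demand deviates sharply from its recent history."""
    flags = []
    for t in range(window, len(demand)):
        hist = demand[t - window:t]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(demand[t] - mu) / sigma > z_thresh:
            flags.append(t)
    return flags

rng = np.random.default_rng(1)
demand = rng.poisson(lam=20, size=200).astype(float)
demand[150] = 120.0  # an injected spike, e.g. a fabricated surge in requests
print(flag_anomalies(demand))  # includes index 150
```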

PUBLICATIONS

Exploiting patterns to explain individual predictions, Knowledge and Information Systems, 2020

Leckie, C., et al.

Journal article

Unsupervised online change point detection in high-dimensional time series, 2020

Salim, F., Leckie, C., et al.

Journal article

Propagation2Vec: Embedding partial propagation networks for explainable fake news early detection, 2021

Leckie, C., et al.

Journal article

Discovery of contrast corridors from trajectory data in heterogeneous dynamic cellular networks, 2020

Erfani, S., Leckie, C., et al.

Conference paper

Improving Single and Multi-View Blockmodelling by Algebraic Simplification, 2020

Leckie, C., Chan, J., et al.

Conference paper

METEOR: Learning Memory and Time Efficient Representations from Multi-modal Data Streams, 2020

Leckie, C., et al.

Conference paper

Embracing Domain Differences in Fake News: Cross-domain Fake News Detection using Multi-modal Data, 2021

Leckie, C., et al.

Conference paper

RESEARCHERS

Prof Christopher Leckie, Lead Investigator, University of Melbourne
Prof Flora Salim, Chief Investigator, UNSW
Prof Mark Sanderson, Chief Investigator, RMIT University
Dr Jeffrey Chan, Associate Investigator, RMIT University
Dr Sarah Erfani, Associate Investigator, University of Melbourne

Considerate and Accurate Multi-party Recommender Systems for Constrained Resources

PROJECT SUMMARY

Focus Areas: News and Media, Transport and Mobility, Health, and Social Services
Research Program: Machines
Status: Active

This project will create a next-generation recommender system that enables equitable allocation of constrained resources. The project will produce novel hybrid socio-technical methods and resources to create a Considerate and Accurate REcommender System (CARES), evaluated through social science and behavioural economics lenses.

CARES will transform the sharing economy by delivering systems and methods that improve user and non-user experiences, business efficiency, and corporate social responsibility.
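
As a toy illustration of allocating constrained resources across multiple parties (not the CARES method itself), the sketch below gives each user their highest-ranked item that still has capacity. The users, items, preferences, and capacities are all invented.

```python
# Greedy capacity-aware allocation (toy illustration).

def allocate(prefs: dict, capacity: dict) -> dict:
    """Assign each user their best remaining item without exceeding capacity."""
    remaining = dict(capacity)
    assignment = {}
    for user, ranked_items in prefs.items():   # one pass over users
        assignment[user] = None
        for item in ranked_items:              # best available item first
            if remaining.get(item, 0) > 0:
                remaining[item] -= 1
                assignment[user] = item
                break
    return assignment

prefs = {"u1": ["cafe", "park"], "u2": ["cafe", "museum"], "u3": ["cafe"]}
capacity = {"cafe": 1, "park": 2, "museum": 1}
print(allocate(prefs, capacity))
# {'u1': 'cafe', 'u2': 'museum', 'u3': None}: u3's demand exceeds capacity
```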

PARTICIPATE

Participate in an online user study on multi-party fair recommendations

We are looking for users of the Spotify music application to complete a brief online study. In the study, you are expected to browse music recommendations and answer a set of questions.

The study is expected to take less than 15 minutes, and you will receive an AU$10 gift card as a thank you.

You will need to have an active Spotify account with at least 6 months of listening history to take part.

To verify your eligibility and participate in the study, please fill out this form.

PUBLICATIONS

Are footpaths encroached by shared e-scooters? Spatio-temporal analysis of Micro-mobility services, 2023

Kegalle, H., Hettiachchi, D., et al.

Conference paper

Capacity-aware fair POI recommendation combining Transformer Neural Networks and Resource allocation Policy, submitted to Knowledge-Based Systems, 2023

Chan, J.

Journal article

More is Less: When do Recommender Systems Underperform for Data-rich Users? 2023

Xuan, Y., Sanderson, M., et al.

Conference paper

How Robust is your Fair Model? Exploring the Robustness of Diverse Fairness Strategies, 2023

Small, E., Chan, J., et al.

Conference paper

RESEARCHERS

Prof Mark Sanderson, Lead Investigator, RMIT University
Prof Christopher Leckie, Chief Investigator, University of Melbourne
Prof Flora Salim, Chief Investigator, UNSW
Dr Jeffrey Chan, Associate Investigator, RMIT University
Dr Danula Hettiachchi, Associate Investigator, RMIT University

PARTNERS

University of Amsterdam

A taxonomy of decision-making machines

PROJECT SUMMARY

Focus Areas: News and Media, Health, Social Services, Transport and Mobilities
Research Program: Machines
Status: Completed

The concept of Automated Decision-Making (ADM) is relatively uncommon compared with that of Artificial Intelligence (AI). An important challenge for the Centre and for researchers is to clarify the meaning of ADM and how it relates to and differs from similar concepts.

This project sought to bring conceptual clarity to this field of concepts. It then developed a way to conceptualise the various dimensions of ADM systems, providing a taxonomy of ADM. The project engaged with, and augments, the 2022 OECD Framework for the Classification of AI Systems.

The purpose of identifying an ADM taxonomy was to enable more systematic identification and analysis of ADM. Such a systematic approach enables comparison of ADM systems across different projects.
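
One way such a taxonomy can support systematic comparison is to treat each classified system as a structured record. The sketch below is a hypothetical illustration using the OECD framework’s five top-level dimensions; the field names and example values are ours, not the project’s published taxonomy.

```python
# Machine-readable taxonomy record (hypothetical sketch; dimensions follow the
# 2022 OECD framework's top-level headings, example values are invented).
from dataclasses import dataclass

@dataclass
class ADMSystemRecord:
    name: str
    people_and_planet: str   # who is affected, and how
    economic_context: str    # sector and business function
    data_and_input: str      # data provenance and collection
    ai_model: str            # technique, e.g. rules-based vs learned
    task_and_output: str     # decision type and degree of autonomy

example = ADMSystemRecord(
    name="Hypothetical payment-compliance system",
    people_and_planet="Income support recipients",
    economic_context="Government social services",
    data_and_input="Administrative records; reported income data",
    ai_model="Rules-based data matching",
    task_and_output="Flags cases for review; human decision required",
)
print(example)
```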

Based on the formative work of the project, draft definitions and taxonomy were adopted and revised in both the ADM+S Mapping ADM in Australian Social Services project and the ADM+S NSW Ombudsman-funded project Mapping ADM in NSW state and local governments.

It is anticipated that an ADM+S project report will be published.

PUBLICATIONS

Mapping ADM in Australian social services, 2022

Sleep, L., Coco, B., Henman, P.

Report

RESEARCHERS

Prof Paul Henman, Lead Investigator, University of Queensland
Dr Jake Goldenfein, Chief Investigator, University of Melbourne
Prof Christopher Leckie, Chief Investigator, University of Melbourne
Prof Jason Potts, Chief Investigator, RMIT University
Prof Flora Salim, Chief Investigator, UNSW
Prof Mark Sanderson, Chief Investigator, RMIT University
Prof Julian Thomas, Chief Investigator, RMIT University
Dr Jeffrey Chan, Associate Investigator, RMIT University
Dr Philip Gillingham, Associate Investigator, University of Queensland
Dr Lyndal Sleep, Affiliate, Central Queensland University

PARTNERS

AlgorithmWatch
Data & Society