The Regulatory Project

PROJECT SUMMARY

Focus Areas: News & Media, Mobilities, Social Services, Health
Status: Active

ADM systems (including AI, foundation models and generative AI) pose ongoing regulatory challenges for Australian governments at every level, and across multiple domains. 

2024-2027 is a critical period for the development of regulation of AI. Worldwide, governments are taking concrete steps to adapt existing laws to technological, social and other changes brought about by expanding uses of AI, and to develop new, risk-based regulatory frameworks.

At the same time, other bodies are moving to provide further governance for AI: for example, via the development of technical standards and other frameworks.

The Regulatory Project will contribute to this process, examining fundamental questions that these technologies pose for our regulatory techniques, and engaging research and researchers from across the Centre and the Centre’s partners to inform and respond to regulatory initiatives and quandaries.

PROJECT OBJECTIVES

  • Examine and understand the deployment of ADM systems (including AI) by public and private sector actors and across supply chains, and their effect on fundamental legal concepts, such as natural justice (procedural fairness) as it applies to ADM use by governments and firms, responsibility, and accountability, delivering critical new knowledge about the changing nature of law and regulation in the AI/ADM space;
  • Examine and analyse emerging regulatory and governance mechanisms for the development and deployment of AI, including their interaction with socio-technical context, in order to understand what mechanisms are emerging, whether they work, and (if so) how;
  • Translate these understandings across other projects and themes in the Centre by collaborating on the emerging regulatory implications of research and projects across ADM+S; and
  • Provide a hub for ongoing government and policy engagement, bringing legal and regulatory perspectives to research across the Centre.

PUBLIC RESOURCES

GenAI Concepts

Target audience: Government agencies, industry, researchers, general public

This resource offers technical, operational and regulatory terms and concepts for generative artificial intelligence (GenAI), developed in collaboration with the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) and the Office of the Victorian Information Commissioner (OVIC).

View website
View PDF guide

RESEARCHERS

Prof Kimberlee Weatherall

Project Co-Leader and Chief Investigator,
University of Sydney

Learn more

Prof Christine Parker

Project Co-Leader and Chief Investigator,
University of Melbourne

Learn more

Dr Jake Goldenfein

Project Co-Leader and Chief Investigator,
University of Melbourne

Learn more

Assoc Prof Michael Richardson

Project Co-Leader and Associate Investigator,
UNSW

Learn more

Prof Nicolas Suzor

Chief Investigator,
QUT

Learn more

Dr Zofia Bednarz

Associate Investigator,
University of Sydney

Learn more

Emeritus Prof Terry Carney

Associate Investigator,
University of Sydney

Learn more

Prof Kylie Pappalardo

Associate Investigator,
QUT

Learn more

Dr Scarlet Wilcock

Associate Investigator,
University of Sydney

Learn more

Dr José-Miguel Bello y Villarino

Research Fellow,
University of Sydney

Learn more

Dr Henry Fraser

Research Fellow,
QUT

Learn more

Dr Fan Yang

Research Fellow,
University of Melbourne

Learn more

Dr Tegan Cohen

Affiliate,
QUT

Learn more

PARTNERS

Australian Broadcasting Corporation

Visit website

AlgorithmWatch (Germany)

Visit website

Consumer Policy Research Centre

Visit website

Cornell Tech

Visit website

Office of the Victorian Information Commissioner

Visit website

COLLABORATORS

Centre for Artificial Intelligence and Digital Ethics (CAIDE)

Visit website

CHOICE

Visit website

Gradient Institute

Visit website

Transparent Machines: From Unpacking Bias to Actionable Explainability

PROJECT SUMMARY

Focus Areas: News and Media, Transport and Mobility, Health, and Social Services
Status: Active

ADM systems, along with their software, algorithms, and models, are often designed as "black boxes", with little effort put into understanding how they work. This lack of understanding affects not only the end users of ADM systems but also the stakeholders and developers, who must be accountable for the systems they create. The problem is often exacerbated by bias inherent in the data on which the models are trained.

Further, the widespread use of deep learning has led to an increasing number of minimally interpretable models being deployed, in contrast to traditional models such as decision trees, or Bayesian and statistical machine learning models.

Explanations of models are also needed to reveal potential biases in the models themselves and assist with their debiasing.

This project aims to unpack the biases in models that may come from the underlying data, as well as biases in software (e.g. a simulation) that may be designed with a specific purpose and angle from the developers' point of view. It also investigates techniques for generating diverse, robust and actionable explanations for a range of problems, data types and modalities, from large-scale unstructured data to highly varied sensor and multimodal data.

To this end, we look to generate counterfactual explanations that depend jointly on the data distribution and the local behaviour of the black-box model, and to offer new metrics for measuring the opportunity cost of choosing one counterfactual over another. We further aim to explore the intelligibility of different representations of explanations to diverse audiences through an online user study.
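To make the idea of a counterfactual explanation concrete, here is a minimal, hypothetical sketch; it is not the project's actual method (nor the BayCon generator listed below). A toy "black-box" classifier makes a loan decision, and a breadth-first search looks for the smallest single-step feature changes that flip that decision, using total feature change as a crude stand-in for the "opportunity cost" of one counterfactual over another. All feature names, weights and thresholds are invented.

```python
def black_box(x):
    """Toy black-box model: approve (1) if a weighted score exceeds a threshold.
    Weights and threshold are arbitrary, for illustration only."""
    income, debt = x
    return 1 if 0.7 * income - 0.3 * debt > 50 else 0

def counterfactual(x, step=1.0, max_depth=1000):
    """Breadth-first search over unit feature changes (raise income, lower
    debt) for the nearest point where the model's decision flips.
    Returns (counterfactual_point, total_change) or None."""
    target = 1 - black_box(x)
    moves = [(step, 0.0), (0.0, -step)]   # one small move per feature
    frontier = [tuple(x)]
    seen = {tuple(x)}
    best = None
    for _ in range(max_depth):
        nxt = []
        for point in frontier:
            for dx, dy in moves:
                cand = (point[0] + dx, point[1] + dy)
                if cand in seen:
                    continue
                seen.add(cand)
                if black_box(cand) == target:
                    # L1 distance from the original as a crude cost measure
                    cost = abs(cand[0] - x[0]) + abs(cand[1] - x[1])
                    if best is None or cost < best[1]:
                        best = (cand, cost)
                else:
                    nxt.append(cand)
        if best is not None:
            return best   # finish the level, then return the cheapest flip
        frontier = nxt
    return None

applicant = (70.0, 10.0)            # rejected: 0.7*70 - 0.3*10 = 46
cf, cost = counterfactual(applicant)
print(cf, cost)                     # flips at (76.0, 10.0), total change 6.0
```

In this toy setting the cheapest flip is raising income by 6 units rather than lowering debt (which carries less weight), illustrating how a cost metric lets a user compare competing counterfactuals. Real counterfactual generators must additionally respect the data distribution, so that suggested changes remain plausible.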

PUBLICATIONS

i-Align: An Interpretable Knowledge Graph Alignment Model, 2023

Salim, F., Scholer, F., et al.

Journal article

TransCP: A Transformer Pointer Network for Generic Entity Description Generation with Explicit Content-Planning, 2023

Salim, F., et al.

Journal article

Contrastive Learning-Based Imputation-Prediction Networks for In-hospital Mortality Risk Modeling Using EHRs, 2023

Salim, F., et al.

Conference paper

How Robust is your Fair Model? Exploring the Robustness of Diverse Fairness Strategies, 2023

Small, E., Chan, J., et al.

Journal article

Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness, 2023

Small, E., Sokol, K., et al.

Conference paper

Helpful, Misleading or Confusing: How Humans Perceive Fundamental Building Blocks of Artificial Intelligence Explanations, 2023

Small, E., Xuan, Y., et al.

Workshop paper

Navigating Explanatory Multiverse Through Counterfactual Path Geometry, 2023

Small, E., Xuan, Y., Sokol, K.

Workshop paper

Mind the gap! Bridging explainable artificial intelligence and human understanding with Luhmann’s Functional Theory of Communication, 2023

Sokol, K., et al.

Workshop paper

Measuring disentangled generative spatio-temporal representation, 2022

Chan, J., Salim, F., et al.

Conference paper

FAT Forensics: A Python toolbox for algorithmic fairness, accountability and transparency, 2022

Sokol, K., et al.

Journal article

Analysing Donors’ Behaviour in Non-profit Organisations for Disaster Resilience: The 2019–2020 Australian Bushfires Case Study, 2022

Chan, J., Sokol, K., et al.

Conference paper

BayCon: Model-agnostic Bayesian Counterfactual Generator, 2022

Sokol, K., et al.

Conference paper

RESEARCHERS

Prof Flora Salim

Lead Investigator,
UNSW

Learn more

Prof Daniel Angus

Chief Investigator,
QUT

Learn more

Prof Paul Henman

Chief Investigator,
University of Queensland

Learn more

Prof Mark Sanderson

Chief Investigator,
RMIT University

Learn more

Dr Jeffrey Chan

Associate Investigator,
RMIT University

Learn more

Prof Falk Scholer

Associate Investigator,
RMIT University

Learn more

Dr Damiano Spina

Associate Investigator,
RMIT University

Learn more

Prof Maarten de Rijke

Partner Investigator,
University of Amsterdam

Learn more

Peibo Li

PhD Student,
UNSW

Learn more

Edward Small

PhD Student,
RMIT University

Learn more

Kacper Sokol

Affiliate,
ETH Zurich

Learn more

PARTNERS

University of Amsterdam

Visit website