PROJECT SUMMARY

Transparent Machines: From Unpacking Bias to Actionable Explainability

Focus Areas: News and Media, Transport and Mobility, Health, and Social Services
Status: Active

ADMs, along with their software, algorithms, and models, are often designed as “black boxes”, with little effort put into understanding how they work. This lack of understanding affects not only the end users of ADMs but also the stakeholders and developers, who need to be accountable for the systems they are creating. The problem is often exacerbated by inherent biases in the data on which the models are trained.

Further, the widespread use of deep learning has led to a growing number of minimally interpretable models being deployed, as opposed to traditional models such as decision trees, or even Bayesian and statistical machine learning models.

Explanations of models are also needed to reveal potential biases in the models themselves and to assist with debiasing them.

This project aims to unpack biases in models that may stem from the underlying data, as well as biases in software (e.g. a simulation) that may have been designed with a specific purpose and angle from the developers’ point of view. It also investigates techniques for generating diverse, robust, and actionable explanations across a range of problems, data types, and modalities, from large-scale unstructured data to highly varied sensor data and multimodal data. To this end, we look to generate counterfactual explanations that depend jointly on the data distribution and on the local behaviour of the black-box model, and we offer new metrics to measure the opportunity cost of choosing one counterfactual over another. We further aim to explore, through an online user study, how intelligible different representations of explanations are to diverse audiences.
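To make the idea of a counterfactual explanation and its "opportunity cost" concrete, the toy sketch below searches for the smallest change (in L1 distance) to an input that flips a black-box decision. This is a minimal illustration only, not the project's method (e.g. BayCon, listed below, takes a Bayesian approach); the loan-scoring model, variable names, and the use of L1 distance as a cost proxy are all hypothetical.

```python
# Hypothetical black box standing in for an opaque ADM:
# approve (1) when a weighted score exceeds 20, otherwise reject (0).
def black_box(income, debt):
    return 1 if income * 0.6 - debt * 0.4 > 20 else 0

def counterfactual(x, step=1.0, max_radius=1000):
    """Greedy grid search for the closest input (in L1 distance, a
    simple proxy for the 'opportunity cost' of acting on the
    explanation) whose prediction differs from the original."""
    target = 1 - black_box(*x)
    for radius in range(1, max_radius):          # grow the search ring
        for di in range(-radius, radius + 1):
            dj = radius - abs(di)
            for sign in ((1, -1) if dj else (1,)):
                cand = (x[0] + di * step, x[1] + dj * sign * step)
                if black_box(*cand) == target:
                    return cand                  # first (closest) flip
    return None

applicant = (40.0, 20.0)      # rejected applicant: score is only 16
cf = counterfactual(applicant)
```

The returned counterfactual can be read as advice ("raise income by this much to be approved"), and the L1 distance between `applicant` and `cf` is the kind of quantity the project's proposed metrics would compare when choosing one counterfactual over another.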

PUBLICATIONS

i-Align: An Interpretable Knowledge Graph Alignment Model, 2023

Salim, F., Scholer, F., et al.

Journal article

TransCP: A Transformer Pointer Network for Generic Entity Description Generation with Explicit Content-Planning, 2023

Salim, F., et al.

Journal article

Contrastive Learning-Based Imputation-Prediction Networks for In-hospital Mortality Risk Modeling Using EHRs, 2023

Salim, F., et al.

Conference paper

How Robust is your Fair Model? Exploring the Robustness of Diverse Fairness Strategies, 2023

Small, E., Chan, J., et al.

Journal article

Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness, 2023

Small, E., Sokol, K., et al.

Conference paper

Helpful, Misleading or Confusing: How Humans Perceive Fundamental Building Blocks of Artificial Intelligence Explanations, 2023

Small, E., Xuan, Y., et al.

Workshop paper

Navigating Explanatory Multiverse Through Counterfactual Path Geometry, 2023

Small, E., Xuan, Y., Sokol, K.

Workshop paper

Mind the gap! Bridging explainable artificial intelligence and human understanding with Luhmann’s Functional Theory of Communication, 2023

Sokol, K., et al.

Workshop paper

Measuring disentangled generative spatio-temporal representation, 2022

Chan, J., Salim, F., et al.

Conference paper

FAT Forensics: A Python toolbox for algorithmic fairness, accountability and transparency, 2022

Sokol, K., et al.

Journal article

Analysing Donors’ Behaviour in Non-profit Organisations for Disaster Resilience: The 2019–2020 Australian Bushfires Case Study, 2022

Chan, J., Sokol, K., et al.

Conference paper

BayCon: Model-agnostic Bayesian Counterfactual Generator, 2022

Sokol, K., et al.

Conference paper

RESEARCHERS

Prof Flora Salim

Lead Investigator,
UNSW

Prof Daniel Angus

Chief Investigator,
QUT

Prof Paul Henman

Chief Investigator,
University of Queensland

Prof Mark Sanderson

Chief Investigator,
RMIT University

Dr Jeffrey Chan

Associate Investigator,
RMIT University

Prof Falk Scholer

Associate Investigator,
RMIT University

Dr Damiano Spina

Associate Investigator,
RMIT University

Prof Maarten de Rijke

Partner Investigator,
University of Amsterdam

Peibo Li

PhD Student,
UNSW

Edward Small

PhD Student,
RMIT University

Kacper Sokol

Affiliate,
ETH Zurich

PARTNERS

University of Amsterdam