University of Bristol campus (Image credit: University of Bristol / Flickr)

The Explanatory Multiverse: Maximising User Agency in Automated Systems

Author: Loren Dela Cruz
Date: 29 June 2023

In a recent talk at the University of Bristol, ADM+S PhD student Edward Small from RMIT University discussed a paper that explores the concept of an explanatory multiverse to capture all possible paths to a desired outcome (or all desired outcomes) in eXplainable Artificial Intelligence (XAI).

Edward introduced a framework for directly comparing the geometry of these paths, and for generating additional paths that maximise user agency under (potentially) imperfect information at t=0.

“Counterfactuals and algorithmic recourse are a powerful, ad-hoc, human-centric explainability tool for AI. They embody the concept of ‘X happened, but if I had done Z actions, Y would have occurred instead.’ However, the metrics we use to measure how good the recourse is are still lacking. Something humans value greatly when making decisions is choice, and people are also likely to over-commit to decisions, known as the sunk cost fallacy. We therefore want to find recourse paths that allow a user to easily change their mind during the intervention with little cost. This is especially important when not all information is available to the user when making this decision,” said Edward, who is currently undertaking a four-month research program with Machine Learning and Computer Vision (MaVi) at the University of Bristol.
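The intuition in the quote above can be made concrete with a toy sketch. The code below is purely illustrative and is not the paper's method: the feature space, paths, outcomes, cost budget, and function names are all assumptions invented for this example. It compares a recourse path that commits early to one outcome against a path that stays between two outcomes, and checks how many desired outcomes remain reachable midway through each.

```python
# Illustrative sketch only (assumed setup, not the paper's framework):
# recourse paths as sequences of points in a 2-D feature space, scored by
# total travel cost and by how many desired outcomes remain reachable
# partway along -- a rough proxy for "keeping options open".

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((y - x) ** 2 for x, y in zip(a, b)) ** 0.5

def path_cost(path):
    """Sum of step lengths along a path of feature vectors."""
    return sum(dist(p, q) for p, q in zip(path, path[1:]))

def reachable_outcomes(point, outcomes, budget):
    """Outcomes still reachable from `point` within a remaining cost budget."""
    return [o for o in outcomes if dist(point, o) <= budget]

# Two hypothetical desired outcomes.
outcomes = [(4.0, 0.0), (4.0, 3.0)]

# Path A commits straight to one outcome; path B keeps both within reach.
path_a = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0)]
path_b = [(0.0, 0.0), (2.0, 1.5), (4.0, 3.0)]

# Midway through each path, how many outcomes are still affordable?
mid_a = reachable_outcomes(path_a[1], outcomes, budget=2.5)
mid_b = reachable_outcomes(path_b[1], outcomes, budget=2.5)

print(path_cost(path_a), len(mid_a))  # cheaper, but only 1 outcome left
print(path_cost(path_b), len(mid_b))  # costlier, but both outcomes remain
```

In this toy setting the committed path is cheaper overall, but a user who changes their mind halfway along it can no longer afford the other outcome, whereas the middle path preserves both choices at a modest extra cost.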

The paper has since been accepted to the International Conference on Machine Learning (ICML) workshop for Counterfactuals in Minds and Machines.

“It was an honour to be chosen to bring the MaVi series of seminars for the 2022/2023 academic year to a close. The talk generated interesting discussions and we are now working towards making the concept more robust and applying it to high-impact areas where uncertainty and imperfect information is rife, such as healthcare,” said Edward.

Edward’s research looks at fair, explainable, and transparent artificial intelligence. His thesis examines the robustness and stability of current fairness strategies, and looks to resolve the mathematical conflict between group fairness and individual fairness.

Learn more about Edward’s work.