
ADM+S researchers awarded 2025 Google Research Scholar Award in Human–Computer Interaction
Date 17 June 2025
Dr Danula Hettiachchi (RMIT University) and Dr Kacper Sokol (ETH Zurich & USI) from the ARC Centre of Excellence for Automated Decision-Making and Society have been awarded the 2025 Google Research Scholar Award in Human–Computer Interaction for their research project Misunderstanding of AI explanations through follow-up interactions and multi-modal explainers.
This highly competitive award recognises early-career academics conducting exceptional research in computer science and related fields.
Drawing on the latest advancements in AI – including generative techniques such as large language models – their research proposes an explanation pipeline that can refine users’ information needs and dynamically generate tailored, interactive, multi-modal explanations for AI decisions.
These explainers will address the specific information needs users may have after encountering an initial AI explanation, particularly where that explanation seems clear on the surface but lacks important details, contains ambiguity, or leads users to incorrect assumptions without them realising it.
The proposed work is informed by their recent paper, co-authored with Yueqing Xuan, Edward Small and Mark Sanderson, Comprehension is a double-edged sword: Over-interpreting unspecified information in intelligible machine learning explanations. The paper reveals a critical challenge in AI communication: users often misinterpret even the most basic explanations, drawing inaccurate conclusions or inferring information that wasn’t actually provided.
The award comes with US$60,000 in funding to support the advancement of Danula and Kacper’s research agenda, as well as mentorship from researchers at Google.
The Google Research Scholar Program in Human–Computer Interaction (HCI) supports academic research advancing innovative, human-centred interactive systems.
This recognition places the awardees at the forefront of research in responsible AI and underscores the global significance of their work in shaping how people and machines interact.