PROJECT SUMMARY

Quantifying and Measuring Bias and Engagement

Focus Areas: News & Media, Health
Research Programs: Machines, Data
Status: Active

Automated decision-making systems and machines – including search engines and intelligent assistants – are designed, evaluated, and optimised using frameworks that model the users who will interact with them. These models are typically simplified representations of users (e.g., using the relevance of the items delivered to a user as a surrogate for system quality) that operationalise the development process of such systems. A grand open challenge is to make these frameworks more complete by incorporating aspects such as fairness, which are as important as traditional definitions of quality, to inform the design, evaluation, and optimisation of such systems.
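
To illustrate the contrast between a relevance-only surrogate for quality and a fairness-aware view of the same ranking, the minimal sketch below computes a standard Discounted Cumulative Gain (DCG) score and a simple exposure share for one provider group. The formulas are standard in the information retrieval literature; the ranking, relevance grades, and group labels are hypothetical example data, not project data.

```python
# Minimal sketch: a relevance-only surrogate for system quality (DCG)
# next to a simple exposure-based fairness signal for the same ranking.
# The ranking, relevance grades, and group labels are hypothetical.
import math

# One ranked list: (relevance grade, provider group) per position.
ranking = [(3, "A"), (2, "A"), (0, "B"), (1, "B"), (2, "A")]

# Relevance-based quality: Discounted Cumulative Gain.
dcg = sum(rel / math.log2(pos + 2) for pos, (rel, _) in enumerate(ranking))

# A crude fairness signal: share of position-based exposure given to group "B".
exposures = [1 / math.log2(pos + 2) for pos in range(len(ranking))]
group_b_exposure = sum(e for e, (_, g) in zip(exposures, ranking) if g == "B")
share_b = group_b_exposure / sum(exposures)

print(f"DCG = {dcg:.3f}, exposure share of group B = {share_b:.2%}")
```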

Recent work in the machine learning, information access, and AI communities attempts to define mechanisms that minimise the creation and reinforcement of unintended cognitive biases.

However, a number of research questions related to quantifying and measuring bias and engagement remain unexplored:
– Is it possible to measure bias by observing users interacting with search engines or intelligent assistants?
– How do users perceive fairness, bias, or trust? How can these perceptions be measured effectively?
– To what extent can sensors in wearable devices and interaction logging (e.g., search queries, app swipes, notification dismissals, etc.) inform the measurement of bias and engagement?
– Are the implicit signals captured from sensors and interaction logs correlated with explicit human ratings of bias and engagement? (A minimal analysis sketch is given below.)
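
As one illustration of how such a correlation could be examined, the sketch below pairs per-task implicit signals with explicit ratings and computes a rank correlation. The file names, column names, and signals (dwell time, a 1–5 perceived-bias rating) are hypothetical placeholders rather than project data.

```python
# Minimal sketch: correlating an implicit signal with explicit ratings.
# File names and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-task records: one row per participant-task pair.
logs = pd.read_csv("interaction_logs.csv")     # e.g., dwell_time, query_count
ratings = pd.read_csv("explicit_ratings.csv")  # e.g., perceived_bias (1-5 Likert)

merged = logs.merge(ratings, on=["participant_id", "task_id"])

# Spearman rank correlation between an implicit signal (dwell time)
# and an explicit bias rating; rho near 0 suggests a weak association.
rho, p_value = spearmanr(merged["dwell_time"], merged["perceived_bias"])
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```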

This research aims to address the questions above by focusing on information access systems that involve automated decision-making components. By partnering with experts in fact-checking, we use misinformation management as the main scenario of study, given that bias and engagement play an important role in the three main elements of the automated decision-making process: the user, the system, and the information that is presented and consumed.

The methodologies considered to address these questions include lab-based user studies (e.g., observational studies) and the use of crowdsourcing platforms (e.g., Amazon Mechanical Turk). Data collection includes logging human-system interactions, sensor data collected using wearable devices, and questionnaires.
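
A practical step when combining these sources is aligning wearable sensor readings with logged interaction events by timestamp. The sketch below shows one way to do this with a nearest-preceding-reading join; the file names, columns, and the 2-second tolerance are illustrative assumptions, not project specifics.

```python
# Minimal sketch: aligning wearable sensor readings with interaction
# log events by timestamp. File names, columns, and the tolerance are
# illustrative assumptions.
import pandas as pd

sensors = pd.read_csv("wearable_sensors.csv", parse_dates=["timestamp"])  # e.g., heart_rate
events = pd.read_csv("interaction_log.csv", parse_dates=["timestamp"])    # e.g., query issued

# merge_asof pairs each interaction event with the nearest preceding
# sensor reading from the same participant, within a 2-second window.
aligned = pd.merge_asof(
    events.sort_values("timestamp"),
    sensors.sort_values("timestamp"),
    on="timestamp",
    by="participant_id",
    tolerance=pd.Timedelta("2s"),
    direction="backward",
)
print(aligned[["participant_id", "event_type", "heart_rate"]].head())
```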

RESEARCHERS

Dr Damiano Spina

Lead Investigator,
RMIT University

Assoc Prof Anthony McCosker

Chief Investigator,
Swinburne University

Prof Flora Salim

Chief Investigator,
UNSW

Prof Mark Sanderson

Chief Investigator,
RMIT University

Dr Jenny Kennedy

Associate Investigator,
RMIT University

Prof Falk Scholer

Chief Investigator,
RMIT University

Dr Danula Hettiachchi

Research Fellow,
RMIT University

Nuha Abu Onq

PhD Student,
RMIT University

Marwah Alaofi

PhD Student,
RMIT University

Hmdh Alknjr

PhD Student,
RMIT University

Sachin Cherumanal

PhD Student,
RMIT University

Kaixin Ji

PhD Student,
RMIT University

PARTNERS

Australian Broadcasting Corporation

AlgorithmWatch (Germany)

Bendigo Health

Google Australia

RMIT ABC Fact Check
