PROJECT SUMMARY

Quantifying and Measuring Bias and Engagement
Focus Areas: News & Media, Health
Research Programs: Machines, Data
Status: Active

Automated decision-making systems and machines – including search engines and intelligent assistants – are designed, evaluated, and optimised using frameworks that model the users who will interact with them. These models are typically simplified representations of users (e.g., using the relevance of items delivered to the user as a surrogate for system quality) that operationalise the development process of such systems. A grand open challenge is to make these frameworks more complete by including aspects such as fairness, which are as important as traditional definitions of quality, to inform the design, evaluation, and optimisation of such systems.
Recent work in the machine learning, information access, and AI communities attempts to define mechanisms that minimise the creation and reinforcement of unintended cognitive biases.
However, several research questions related to quantifying and measuring bias and engagement remain unexplored:
– Is it possible to measure bias by observing users interacting with search engines or intelligent assistants?
– How do users perceive fairness, bias, or trust? How can these perceptions be measured effectively?
– To what extent can sensors in wearable devices and interaction logging (e.g., search queries, app swipes, and notification dismissals) inform the measurement of bias and engagement?
– Are the implicit signals captured from sensors and interaction logs correlated with explicit human ratings of bias and engagement?
The project aims to address the questions above by focusing on information access systems that involve automated decision-making components. Partnering with experts in fact-checking, we use misinformation management as the main scenario of study, given that bias and engagement play an important role in three main elements of the automated decision-making process: the user, the system, and the information that is presented and consumed.
The methodologies considered to address these questions include lab user studies (e.g., observational studies) and the use of crowdsourcing platforms (e.g., Amazon Mechanical Turk). Data collection includes logging of human-system interactions, sensor data collected using wearable devices, and questionnaires.
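
To illustrate how the implicit signals and explicit ratings mentioned above might be analysed together, the Python sketch below joins hypothetical per-session interaction logs, wearable sensor readings, and questionnaire responses, then computes a rank correlation between one implicit signal and an explicit bias rating. File and column names are illustrative assumptions only, not the project's actual data schema.

# Minimal analysis sketch; file and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import spearmanr

# Per-session data sources collected during a lab study (assumed CSV exports).
logs = pd.read_csv("interaction_logs.csv")      # participant_id, session_id, dwell_time_s, query_count
sensors = pd.read_csv("wearable_sensors.csv")   # participant_id, session_id, mean_heart_rate
ratings = pd.read_csv("questionnaires.csv")     # participant_id, session_id, perceived_bias (1–7 Likert)

# Join the implicit signals (logs, sensors) with the explicit ratings.
sessions = (
    logs.merge(sensors, on=["participant_id", "session_id"])
        .merge(ratings, on=["participant_id", "session_id"])
)

# Rank correlation between one implicit signal and the explicit bias rating.
rho, p_value = spearmanr(sessions["dwell_time_s"], sessions["perceived_bias"])
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")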
RESEARCHERS

Nuha Abu Onq
PhD Student, RMIT University

Hmdh Alknjr
PhD Student, RMIT University

PARTNERS

Australian Broadcasting Corporation
Algorithm Watch (Germany)
Bendigo Hospital
Google Australia
