SACHIN PATHIYAN CHERUMANAL
Fairness-Aware Question Answering for Intelligent Assistants
Conversational intelligent assistants, such as Amazon Alexa, Google Assistant, and Apple Siri, have the potential to address complex information needs, but are currently limited mostly to answering with facts expressed in a few words. For example, when a user asks Google Assistant whether coffee is good for their health, it responds by justifying why it is good without shedding any light on the side effects coffee consumption might have. Such limited exposure to perspectives can lead to changes in users' perceptions, preferences, and attitudes. Making conversational intelligent assistants provide fair exposure of complex answers (including those with opposing perspectives) is an open research problem. This proposal aims to better understand the role of fair exposure of multiple perspectives in conversational search. While addressing the challenge of characterising, evaluating, and optimising fairness and relevance, it is also crucial that the system maintain user satisfaction. This proposal hence puts forth a qualitative laboratory study to identify presentation strategies for relevant, fair, and engaging conversations addressing complex information needs. The evaluation, optimisation, and presentation strategies generated during this research will enable future researchers and developers to provide fair access to information and alleviate the problem of misinformation.
The aim of this proposal is to find the best way to fairly expose multiple perspectives and relevant complex answers to users in a multi-turn conversation without negatively impacting user engagement. To this end, the proposal puts forth the following research questions.
Research Question 1: How can we quantify fairness of opposing perspectives?
Research Question 2: How can we jointly optimise relevance and multi-attribute fairness in Question Answering?
Research Question 3: How can we fairly present multi-perspective answers to the user in a multi-turn conversation without compromising user satisfaction?
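To illustrate the kind of measure Research Question 1 asks for, the sketch below quantifies how fairly a ranked answer presentation exposes opposing perspectives. It is a minimal sketch, not the proposal's method: it assumes a logarithmic position-bias model of attention (items earlier in the conversation receive more exposure) and a uniform exposure target over perspectives, and all function names are hypothetical.

```python
import math


def perspective_exposure(ranking):
    """Share of position-discounted exposure each perspective receives.

    ranking: list of perspective labels (e.g. "pro", "con"), ordered from
    the first answer presented to the last. Assumes a logarithmic
    position-bias model: the item at rank r gets weight 1 / log2(r + 2).
    """
    weights = [1.0 / math.log2(rank + 2) for rank in range(len(ranking))]
    total = sum(weights)
    exposure = {}
    for label, weight in zip(ranking, weights):
        exposure[label] = exposure.get(label, 0.0) + weight
    # Normalise so the shares sum to 1.
    return {label: w / total for label, w in exposure.items()}


def unfairness(exposure_shares, perspectives):
    """Total variation distance from a uniform exposure target.

    Returns a value in [0, 1]: 0 means every perspective receives an
    equal share of exposure; larger values mean more skewed exposure.
    """
    target = 1.0 / len(perspectives)
    return 0.5 * sum(
        abs(exposure_shares.get(p, 0.0) - target) for p in perspectives
    )
```

For example, an alternating presentation such as `["pro", "con", "pro", "con"]` scores closer to 0 than a one-sided `["pro", "pro", "pro", "con"]`, matching the intuition that placing one perspective consistently first skews exposure.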
Prof Falk Scholer, RMIT University
Dr Damiano Spina, RMIT University