Winning Hackathon team standing in front of Microsoft sign
Left to right: Hiruni Kegalle, Rhea Erica D'Silva, Dr Lida Ghahremanlou & Awais Hameed Khan at Microsoft.

Hackathon project explores multimodal AI to grapple with human and machine bias

Author  Kathy Nickels
Date 5 January 2024

Winners of the 2023 ADM+S Hackathon have visited Microsoft and Canva offices in Sydney to further advance their research exploring human and machine bias. 

The project, named Sub-Zero: A Comparative Thematic Analysis Experiment of Robodebt Discourse Using Humans and LLMs, was originally developed to investigate human and machine bias in the context of large language models (LLMs) like GPT-4 and Llama 2 for Qualitative Data Analysis (QDA).

It was one of five projects developed over a two-day hackathon hosted by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) in August 2023.

Sub-Zero was selected as the winning project for its perspective on human-AI collaboration in qualitative research, with real potential to grapple with the complexities of bias and perception in the AI era.

The judges commended the project, saying it is concerned not only with creating advanced qualitative research mechanisms but also with embedding creativity and self-reflection into the process. It does not just navigate the data; it invites us to scrutinise our own biases and preconceptions in pursuit of more nuanced research outcomes.

Dr Lida Ghahremanlou, Data Scientist Lead at Microsoft, affiliate of the ADM+S and Sub-Zero project mentor, hosted the team at Microsoft. There the team used insights from the Hackathon to expand the project's scope from LLMs to multimodal AI systems.

“We constructed a method that used image-to-text-to-image to scrutinise the multimodal reasoning of multiple commercial and open-source GenAI systems while providing insights to human researchers about our own conceptions and assumptions when we try to observe and mitigate bias,” explained Rhea Erica D’Silva, one of the team researchers from ADM+S at Monash University.
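The article does not detail how the image-to-text-to-image method was implemented, but the idea can be illustrated with a minimal sketch: caption a source image with one model, regenerate an image from that caption with another, and compare the three artefacts. The open-source models (BLIP, Stable Diffusion), the file name and the function name below are illustrative assumptions, stand-ins for whichever commercial and open-source GenAI systems the team actually scrutinised.

```python
# Minimal sketch of one image-to-text-to-image cycle (illustrative models only).
from transformers import pipeline
from diffusers import StableDiffusionPipeline
from PIL import Image

# Image-to-text: an open-source captioner stands in for the systems under study.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# Text-to-image: regenerate an image from the caption.
generator = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

def image_to_text_to_image(image_path: str):
    """Caption an image, then regenerate an image from that caption.

    Comparing the source image, the caption, and the regenerated image
    surfaces what each model foregrounds, omits, or distorts.
    """
    source = Image.open(image_path)
    caption = captioner(source)[0]["generated_text"]  # image -> text
    regenerated = generator(caption).images[0]        # text -> image
    return caption, regenerated

# Hypothetical input file; inspect the outputs side by side with the source.
caption, regenerated = image_to_text_to_image("robodebt_coverage.png")
print("Caption:", caption)
regenerated.save("regenerated.png")
```

Running the same cycle across several systems, and having human researchers note where their own readings diverge from the machine's, is one way the comparison could feed back into reflection on the researchers' own conceptions and assumptions.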

Multimodal artificial intelligence combines multiple types of data to reach more accurate determinations and conclusions, and to make more precise predictions about real-world problems. These systems are trained on and use video, audio, speech, images, text and a range of traditional numerical data sets.

Left to right: Peter Bailey (Canva), Damiano Spina, Ned Watt, Awais Hameed Khan, Rhea Erica D’Silva, Hiruni Kegalle & Lida Ghahremanlou at Canva.

Ned Watt from ADM+S at QUT said, “This approach aims to expose biases and blind spots across modalities that emerge and re-emerge as GenAI models juggle multiple types of inputs and outputs.”

Most importantly, multimodal AI means numerous data types are used in tandem to help AI establish content and better interpret context, something missing in LLMs and earlier AI.

The project was showcased to Canva’s Trust, Safety, and Responsible AI team, eliciting valuable feedback and insights.

During the visit, the team also worked with Dr Damiano Spina, Associate Investigator at the RMIT University node of the ADM+S, to explore model degradation using image-to-text-to-image. 
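The article does not describe how degradation was measured, but one plausible reading is to iterate the cycle, feeding each regenerated image back in as the next input, and inspect how the captions drift across generations. The sketch below reuses the hypothetical image_to_text_to_image() function from the earlier sketch; the cycle count and file names are likewise illustrative assumptions.

```python
# Hedged sketch: iterate the image-to-text-to-image cycle to observe drift.
def probe_degradation(image_path: str, cycles: int = 5) -> list[str]:
    captions = []
    current_path = image_path
    for i in range(cycles):
        caption, regenerated = image_to_text_to_image(current_path)
        captions.append(caption)
        current_path = f"generation_{i}.png"
        regenerated.save(current_path)  # next cycle starts from this output
    return captions

# Each successive caption shows what survives, mutates, or disappears
# as the models' outputs are fed back in as inputs.
for i, caption in enumerate(probe_degradation("robodebt_coverage.png")):
    print(f"cycle {i}: {caption}")
```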

“Our approach aims to embed reflexivity in both human and machine bias detection and mitigation, using a combination of human-in-the-loop and machine-in-the-loop to broaden, deepen, and scale multimodal bias detection,” said Ned Watt.

The team will be releasing results from further investigations soon.

Project Team

Assoc. Prof Liam Magee (mentor), Dr Lida Ghahremanlou (mentor), Ned Watt, Hiruni Kegalle, Rhea D’Silva, Daniel Whelan-Shamy, and Dr Awais Hameed Khan.

Acknowledgements

The Sub-Zero project team extend their thanks to Peter Bailey (Canva), Dr Damiano Spina and mentors Dr Lida Ghahremanlou and Assoc. Prof Liam Magee.
