#AccelerateAction: Spotlighting ADM+S research on gender bias in AI and ADM systems

Author ADM+S Centre
Date 5 March 2025

International Women’s Day celebrates the social, economic, cultural, and political achievements of women and global progress towards gender equality, while recognising that substantial work remains to be done.

In the fields of technology, automated decision-making, and generative AI, women remain under-represented, yet they are disproportionately affected by the harms of emerging digital technologies.

This International Women’s Day we’re highlighting the work of ADM+S members across our research program who are investigating gender bias in AI and ADM systems.

By identifying inequalities in the ways users experience technology, these projects aim to #AccelerateAction in creating a more just and inclusive digital environment.

Advanced technology is taking us backwards on gender equity.

She might go by Siri or Alexa, or inhabit a Google Home. She keeps us company, orders groceries, vacuums the floor, and turns out the lights. The principal prototype for these virtual helpers – designed in male-dominated industries – is the 1950s housewife.

In The Smart Wife, Yolande Strengers and Jenny Kennedy examine the emergence of digital devices that carry out “wifework” – domestic responsibilities that have traditionally fallen to (human) wives. They offer a Smart Wife “manifesta,” proposing a rebooted Smart Wife that would promote a revaluing of femininity in society in all her glorious diversity.

In 2024, Yolande’s research on gendered voicebots was adapted into an educational school program in partnership with the Monash Tech School and Monash University’s Faculty of IT, called Superbots.

Superbots is a two-day interactive Industry Immersion program that explores the history, ethics, and societal influences on Voicebots and voice-assisted software development.

ADM+S filmmaker Jeni Lee produced a short film about the program, which observes and engages with students from Brentwood Secondary College as they ideate, test and construct their own voicebot personality.

Superbots will be available on SBS on Demand from Saturday 9 March.

This paper considers how algorithmic recommender systems and other core affordances and infrastructures of major social media platforms contribute to the harms of ‘hate speech’ against or vilification of women online.

The paper argues that this kind of speech occurring on major social media platforms exists at the intersections of patriarchy and platform power and is thus platformed.

Platforms also seek to maintain control or influence over the conditions for their own regulation and governance through use of their discursive power. Related to this is a privileging of self-regulatory action in current laws and law reform proposals for platform governance, which we argue means that platformed speech that vilifies women is also auspiced by platforms.

This auspicing, as an aspect of platforms’ discursive power, represents an additional ‘layer’ of contempt for women, for which platforms currently are not, but should be, held accountable.

 

Existing studies have examined depictions of journalists in popular culture, but how artificial intelligence understands what a journalist is and what they look like is a distinct topic that has yet to receive research attention.

This study analyses 84 images generated by AI from four “generic” keywords (“journalist,” “reporter,” “correspondent,” and “the press”) and three “specialized” ones (“news analyst,” “news commentator,” and “fact-checker”) over a six-month period.

The results reveal an uneven distribution of gender and digital technology between the generic and specialized roles and prompt reflection on how AI perpetuates extant biases in the social world.

 

Drawing on two ADM+S reports led by Dr Quilty (a scoping study of automation in transport mobilities and a study of expert visions of future automated mobilities), this article introduces Pod Man, a critical concept for examining the gendered and racial formations embedded in technologies like self-driving cars.

Dr Quilty defines Pod Man as the technology-driven, hyper-mobile and hyper-masculine transport consumer found at the centre of sociotechnical imaginaries of automated mobilities. He represents the ideal mobility subject who is both invisible and powerful, shaping visions of the future of mobility.

Pod Man is both a provocation and an entry point for thinking about how emerging technologies, such as autonomous vehicles, are shaping unequal relations of power in visions of mobility futures.

Image: Miranda Burton

Generative AI systems learn how to create from our existing, unequal past; now, they’re embedding those same historical biases into our future.

ADM+S PhD Student Sadia Sharmin is researching how biases baked into AI models shape broader social views, amplifying and reinforcing existing power relations through their outputs.

The subtle biases produced by GenAI may seem innocuous, but they are insidious in that they shape cultural narratives, reinforce stereotypes, and influence social perceptions and opportunities for women on a potentially massive scale.

Her research seeks to tackle this subtle but pervasive problem by developing new ways to measure and identify gender bias in AI outputs – going beyond simple statistics – to understand how Generative AI systems might reinforce stereotypes about women’s place, capabilities, and value in society.

This includes creating new tools that go beyond obvious and quantifiable forms of bias, and instead assess the more subtle ways AI systems might undersell women’s achievements, limit their perceived potential, or reinforce gender-based assumptions.

 

Artificial Intelligence (AI) is increasingly being used in the delivery of social services, including domestic violence services. While it offers opportunities for more efficient, effective and personalised service delivery, AI can also create serious problems: reinforcing disadvantage, and traumatising or re-traumatising service users.

Building on work in social services on trauma-informed practice, this project identified key principles and a practical framework that treat AI design, development and deployment as a reflective, constructive exercise, resulting in algorithmically supported services that are cognisant and inclusive of the diversity of human experience, particularly for those who have experienced trauma.

This study resulted in a practical, co-designed, piloted Trauma Informed Algorithmic Assessment Toolkit.

This Toolkit has been designed to assist organisations in their use of automation in service delivery at any stage of their automation journey: ideation, design, development, piloting, deployment or evaluation. While of particular use for social service organisations working with people who may have experienced past trauma, the tool will be beneficial for any organisation wanting to ensure safe, responsible and ethical use of automation and AI.

 

This collaboration with UNED Madrid and The Polytechnic University of Valencia aimed to create an evaluation benchmark for automatic sexism characterisation in social media.

In recent years, the rapid increase in the dissemination of offensive and discriminatory material aimed at women through social media platforms has emerged as a significant concern.

The EXIST campaign has been promoting research in online sexism detection and categorization in English and Spanish since 2021. The fourth edition of EXIST, hosted at the CLEF 2024 conference, consisted of three groups of tasks analysing Tweets and Memes: sexism identification, source intention identification, and sexism categorization.

The “learning with disagreement” paradigm is adopted to address disagreements in the labelling process and to promote the development of equitable systems that can learn from different perspectives on the phenomenon of sexism.
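To illustrate the core idea behind "learning with disagreement" – this is a minimal sketch, not the actual EXIST pipeline – instead of collapsing annotators' conflicting labels into a single majority vote, each item keeps its full label distribution, so a model can be trained on how strongly annotators disagreed:

```python
from collections import Counter

def soft_label(annotations):
    """Turn a list of per-annotator labels into a probability distribution.

    Under "learning with disagreement", this distribution (rather than a
    single majority-vote label) becomes the training target, so minority
    annotator perspectives are preserved instead of discarded.
    """
    counts = Counter(annotations)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Hypothetical example: three annotators disagree on whether a post is sexist.
votes = ["sexist", "not_sexist", "sexist"]
print(soft_label(votes))  # sexist: 2/3, not_sexist: 1/3
```

A model trained against these soft targets (e.g. with a cross-entropy loss over the distribution) learns that some items are genuinely contested, rather than being forced to treat every majority label as ground truth.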

 

Crowdsourced annotation is vital both to collecting labelled data to train and test automated content moderation systems and to supporting human-in-the-loop review of system decisions. However, annotation tasks such as judging hate speech are subjective and therefore highly sensitive to biases stemming from annotator beliefs, characteristics and demographics.

This research involved two crowdsourcing studies on Mechanical Turk to examine annotator bias in labelling sexist and misogynistic hate speech.

Results from 109 annotators show that annotator political inclination, moral integrity, personality traits, and sexist attitudes significantly impact annotation accuracy and the tendency to tag content as hate speech.

In exploring how workers interpret a task — shaped by complex negotiations between platform structures, task instructions, subjective motivations, and external contextual factors — we see annotations not only impacted by worker factors but also simultaneously shaped by the structures under which they labour.

 

At the ADM+S Centre, we recognise that racism, colonialism, sexism, homophobia, transphobia, and ableism are principal obstacles to equity, diversity and inclusion, and remain primary causes of injustice and inequality. We believe that gender equality for all means equality for marginalised groups, and that the cause of gender equality includes the experiences of Indigenous and POC women, and of transgender and non-binary people. You can read about how we are working to foster diversity and inclusion in the ADM+S community and through our research via our Equity and Diversity Strategy and Action Plan (website link).

Dr Anjalee de Silva, an expert on harmful speech and its regulation in online contexts and a member of the ADM+S Equity and Diversity Committee, explains: “AI and ADM technologies have the potential to, and consistently have been evidenced to, replicate ‘real world’ biases against and harms to structurally vulnerable groups, including women and minorities.

“Scholarship considering these biases and harms is thus a crucial part of systemically informed and equitable approaches to the development, use, and regulation of such technologies.”

Prof Yolande Strengers adds, “Now more than ever we need to work hard to protect the progress we have made to support the unequal opportunities women and other minorities in technology fields experience.

“We also need research and programs that bring less heard voices into the public domain and push for further advances in equity.”

Watch: ADM+S community celebrates IWD
