Yueqing Xuan presents full paper at CIKM conference in South Korea

Author ADM+S Centre
Date 12 January 2026

ADM+S researcher and RMIT PhD student Yueqing Xuan visited South Korea in November 2025 to present at the 34th ACM International Conference on Information and Knowledge Management (CIKM).

CIKM provides an international forum for discussion of research on information and knowledge management, as well as recent advances in data and knowledge bases. The conference aims to shape future research directions by encouraging high-quality applied and theoretical research.

Yueqing presented her full research paper, Evaluating and Addressing Fairness Across User Groups in Negative Sampling for Recommender Systems, co-authored with ADM+S researchers Kacper Sokol, Mark Sanderson and Jeffrey Chan.

“My presented work systematically evaluates state-of-the-art recommender systems with respect to user-side fairness, specifically focusing on whether these systems provide equitable recommendation quality to users with different activity levels,” said Yueqing.

“The motivation for this work is that users with low activity levels often include individuals with limited digital literacy or access to digital services, such as elderly users or those from disadvantaged socio-economic backgrounds.”

“Ensuring fair recommendation quality for these users is essential for inclusive and responsible digital systems,” she said.

The findings demonstrate that recommender systems consistently deliver more accurate recommendations to highly active users than to inactive users. The paper calls for the development of more equitable training and sampling strategies to address these fairness concerns.
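For readers unfamiliar with the technique the paper examines, the sketch below shows the standard uniform negative sampling step used when training implicit-feedback recommenders. It is a generic, minimal illustration rather than code from the paper; the toy users and items are invented for the example.

    import random

    # Toy implicit-feedback data: each user maps to the items they interacted with.
    # An "active" user contributes many more training triples than an inactive one,
    # which is one way accuracy can skew towards highly active users.
    interactions = {
        "active_user": {"i1", "i2", "i3", "i4", "i5", "i6"},
        "inactive_user": {"i7"},
    }
    catalogue = {f"i{n}" for n in range(1, 11)}

    def sample_negative(user: str) -> str:
        """Uniformly pick an item the user has not interacted with.

        Uniform sampling is the common baseline; the paper's call for more
        equitable sampling strategies targets steps like this one.
        """
        candidates = sorted(catalogue - interactions[user])
        return random.choice(candidates)

    # A BPR-style training triple: (user, observed positive, sampled negative).
    for user, positives in interactions.items():
        positive = random.choice(sorted(positives))
        print(user, positive, sample_negative(user))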

During the conference, Yueqing also served as a session chair, which involved moderating presentations, managing time and facilitating discussion. She also engaged with other PhD students and academics working on fairness in recommender systems.

“Serving as a session chair was a valuable and new experience, providing insight into how to effectively moderate academic discussions, ask constructive questions, and facilitate meaningful exchanges among presenters and the audience.”

Yueqing attended several industry sessions at the conference, and highlighted that she gained a better understanding of how real-world systems operate at large scales and involve complexities that are often simplified or abstracted in academic research.

“An important lesson I learned is the need to ground research problems in real-world settings and ensure practical relevance,” Yueqing said.

Yueqing explained that after discussions with fellow researchers, there was a strong foundation for future collaboration and for integrating different methodologies. She plans to maintain contact with these researchers to explore further opportunities.

This research trip was funded by the ADM+S RMIT node and ADM+S HDR funding.

Aarhus Modern Art Gallery. Devi Malaal.

Devi Malaal completes research trip to Denmark and the Netherlands

Author ADM+S Centre
Date 13 January 2026

ADM+S researcher Devi Malaal, a PhD student at RMIT University, has recently completed a two-month research trip to Denmark and the Netherlands. Devi participated in the Doctoral Consortium of Aarhus University’s decennial conference, Aarhus 2025: Computing (x) Crisis, and undertook a visiting scholarship at Utrecht University in the Netherlands.

Devi was selected as one of 12 participants in the Doctoral Consortium at Aarhus 2025: Computing (x) Crisis, and was the sole representative from the Asia-Pacific region. The Consortium brought together PhD researchers from across disciplines for an intensive mentorship process and research discussion on the conference theme ‘Computing (x) Crisis’.

The conference program invited speakers to present new agendas and perspectives for addressing the current state of computing, including political activism, civic engagement, aesthetics, and creative practice. Devi presented her work on large language models in news and media contexts, alongside projects exploring diverse human-AI futures. 

Devi then travelled to the Netherlands, where she was a visiting student scholar at Utrecht University, hosted by ADM+S Affiliate Professor Annette Markham at the Futures + Literacies + Methods Lab (FLL). While there, Devi participated in seminars and workshops focussed on speculative design thinking and critical data studies in relation to generative AI. She also assisted with the coordination of these events, including organising an introductory Retrieval-Augmented Generation (RAG) workshop.

“The workshop was possibly the most fruitful aspect of my time in the Netherlands,” Devi said.

“It provided me with important foundational knowledge about how Large Language Models operate and the requirements for installing, operating, and fine-tuning smaller models, knowledge that I aim to continue building on as I enter the second half of my PhD candidature.”

While in the Netherlands, Devi connected with several other ADM+S affiliates based at the University of Amsterdam’s Information Retrieval Lab. She was invited to participate in a series of one-on-one sessions with their research students as well as the program leader. Devi and the other researchers discussed and shared feedback on the aims and methods of their respective projects.

This visit was funded by the ARC Centre of Excellence for Automated Decision-Making and Society’s Research Training Grant.

Brooke Coco, left, and Metagov colleagues in front of the Brooklyn Bridge.

Brooke Coco presents research in USA and visits partner organisation Cornell Tech

Author ADM+S Centre
Date 12 January 2026

ADM+S PhD student Brooke Coco from RMIT has recently returned from a research trip to the USA, where she met with ADM+S Partner Organisation Cornell Tech.

In New York, Brooke visited the Digital Life Initiative (DLI) research lab at Cornell Tech. While at the Roosevelt Island campus, she met with doctoral and postdoctoral fellows and attended a DLI Working Group meeting.

“Student groups shared progress on a range of projects, including experiments with automated purchasing agents designed to locate and buy items online, as well as the development of digital tools aimed at promoting healthier lifestyles,” said Brooke.

 While in New York, Brooke also met with colleagues from Metagov, the primary field site of her PhD research. Metagov is an open, online collective committed to cultivating tools, practices and communities that enable self-governance in the digital age. Brooke’s ethnographic research within Metagov contributes to the co-development of the Knowledge Organisation Infrastructure (KOI), a sociotechnical system designed to enhance the coordination, sustainability, and discoverability of shared knowledge. 

 “This trip marked my first in-person meeting with the KOI project manager and only my second with the community manager.” 

Brooke then travelled to New Orleans to attend the 2025 American Anthropological Association (AAA) Annual Meeting. Over the course of the conference, she attended a range of panels and workshops, including “Selling In, Selling Up, Selling Out and Shutting Up:” Examining These Myths via the Lived Experience of Business Anthropologists, where practitioners reflected on common critiques of business anthropology through their own industry experiences. 

Brooke presented twice over the course of the meeting, firstly delivering a short flash presentation on her ethnographic research into the development and implementation of KOI.

Speaking to the conference theme of Ghosts, Brooke explored how contemporary data infrastructures are haunted by the epistemic assumptions of their designers, by the data they privilege or ignore, and by the practices they render invisible.

“I discussed how KOI is creating the capacity to confront these ghosts by offering affordances that empower local communities with greater collective control over how their knowledge is curated, managed, and shared.”

“In doing so, it invites us to reimagine data infrastructures not as haunted, but as living systems that remember, respond to, and evolve with the communities they serve,” Brooke said.

Brooke was also a panellist in a roundtable discussion titled Ghosts in the Machine: Reanimating Anthropological Engagement with AI, which explored anthropology’s historical role in shaping AI. Together with other researchers engaging with AI, she discussed how the discipline might re-engage with AI in more practice-oriented ways to support the development of more situated and ethical systems.

 During the roundtable, Brooke highlighted her current use of Telescope, a participatory digital ethnography tool co-developed by ADM+S Associate Investigator and Metagov Research Director Professor Ellie Rennie.

“Telescope addresses key challenges associated with ethnographic research in digital environments, by enabling researchers and community members to collaboratively flag forum posts relevant to ongoing research, which then trigger an automated, consent-based data collection workflow.” 

Brooke discussed the team’s plans to reintegrate these enriched artefacts back into Metagov’s knowledge base, where they may seed new research, insights, and workflows.

Brooke highlighted a number of promising collaboration pathways after conversations with fellow panellists. For example, following the roundtable Brooke was invited to take part in a workshop on AI agents to be held at Monash University in 2026.

 Brooke Coco’s research trip activities were supported by ADM+S HDR Funding, ADM+S RMIT Node funding and the RMIT School of Media and Communication.

Du Yin and Prof Flora Salim receiving the best paper award at the ACM SIGSPATIAL Conference 2025 (Image provided).

Best Paper Award at the International Conference on Advances in Geographic Information Systems 2025

Author ADM+S Centre
Date 24 December 2025

Researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at UNSW have received the Best Research Paper Award at the ACM SIGSPATIAL Conference 2025 for their traffic forecasting dataset, which features over 22 years of data from California, USA, and from Transport for NSW.

The research “XXLTraffic: Expanding and Extremely Long Traffic Forecasting Beyond Test Adaptation” was authored by PhD student Du Yin, Dr Hao Xue, and Prof Flora Salim from the ADM+S, alongside colleagues Arian Prabowo and Shuang Ao.

The ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems 2025 (ACM SIGSPATIAL 2025) is an annual event that brings together researchers, developers, users, and practitioners in relation to novel systems based on geospatial data and knowledge, and fosters interdisciplinary discussions and research in all aspects of geographic information systems. 

Held in Minneapolis, USA, the conference provides a forum for original research contributions covering all conceptual, design, and implementation aspects of geospatial data, ranging from applications, user interfaces, and visualisation to data storage, query processing and indexing. Only one paper at the entire conference receives the Best Paper Award.

ADM+S researchers and UNSW colleagues formed the largest group from non-US universities attending the conference, presenting the following papers:

STOAT: Spatial-Temporal Probabilistic Causal Inference Network 
Yang Yang, Du Yin, Hao Xue, Flora Salim (UNSW)

A Probabilistic Framework for Imputing Genetic Distances in Spatiotemporal Pathogen Models
Haley Stone, Jing Du, Hao Xue (UNSW), Matthew Scotch (ASU), David Heslop (UNSW), Andreas Züfle (Emory), Raina MacIntyre (UNSW), Flora Salim

Dynamic Budgeted Reinforcement Learning for Fairness in Spatial-Temporal Resource Allocation
Yufan Kang (RMIT, Monash), Jie Zhang (UESTC), Wei Shao (UNSW, Data61), Rui Tang (Fuzhou University), Mark Andrejevic (Monash), Jeffrey Chan (RMIT), Flora Salim (UNSW).

FairDRL-ST: Disentangled Representation Learning for Fair Spatio-Temporal Mobility Prediction
Sichen Zhao (RMIT), Wei Shao (UNSW, Data61), Jeffrey Chan (RMIT), Ziqi Xu (RMIT), Flora Salim (UNSW).

GenUP: Generative User Profilers as In-Context Learners for Next POI Recommender Systems
Wilson Wongso, Hao Xue, Flora Salim

Classical Feature Embeddings Help in BERT-Based Human Mobility Prediction
Yunzhi Liu, Haokai Tan, Rushi Kanjaria, Lihuan Li, Flora Salim (UNSW)

EpiScale: Large-Scale Simulation of Infectious Disease Based on Human Mobility
Ruochen Kong (Emory), Taylor Anderson (George Mason University), David Heslop (UNSW), Matthew Scotch (ASU), Flora Salim (UNSW), Raina MacIntyre (UNSW), Andreas Züfle (Emory).

Sara Allawati stands next to a PowerPoint presentation.

Sara Allawati presents research on LLM query generation at CIKM in South Korea

Author ADM+S Centre
Date 18 December 2025

Sara Allawati, an ADM+S researcher and PhD student at RMIT, recently visited Seoul, South Korea to attend the 34th ACM International Conference on Information and Knowledge Management (CIKM). Sara met and collaborated with researchers from around the world, while also presenting a full paper for the first time.

CIKM provides an international forum for discussion of research on information and knowledge management, as well as recent advances in data and knowledge bases. The conference aims to shape future research directions by encouraging high-quality applied and theoretical research.

While at CIKM, Sara presented the long paper titled A Comparative Analysis of Linguistic and Retrieval Diversity in LLM-Generated Search Queries. The paper, co-authored with ADM+S researchers Oleg Zendel, Falk Scholer and Mark Sanderson, along with Lida Rashidi from RMIT, compares human-written query datasets, collected five years apart, with queries generated by large language models (LLMs) in the context of search engines. A ‘query’ is what users type into a search engine, such as Google, when searching for information.

Sara and her fellow researchers applied different methodologies to generate queries using LLMs. Their findings show that while LLMs can generate diverse queries, their patterns still differ from human queries. Sara explained in her presentation that LLMs show promise for query generation, but should be used with caution in the future.
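As a rough illustration of what comparing the ‘linguistic diversity’ of query sets can look like, the sketch below computes a simple distinct-unigram ratio over two made-up query lists. Both the measure and the queries are hypothetical examples for illustration, not the metrics or data used in the paper.

    # Hypothetical query sets; a higher distinct-unigram ratio means more varied wording.
    human_queries = [
        "cheap flights melbourne",
        "best time to fly to melbourne",
        "melbourne flight deals december",
    ]
    llm_queries = [
        "cheap flights to melbourne",
        "cheap flights to melbourne australia",
        "cheap flights to melbourne in december",
    ]

    def distinct_unigram_ratio(queries: list[str]) -> float:
        """Unique words divided by total words across all queries in a set."""
        tokens = [tok for query in queries for tok in query.lower().split()]
        return len(set(tokens)) / len(tokens)

    print(f"human queries: {distinct_unigram_ratio(human_queries):.2f}")
    print(f"LLM queries:   {distinct_unigram_ratio(llm_queries):.2f}")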

Sara also highlighted the importance of preparing a presentation that can be understood across disciplines.

“This was my first time presenting a full paper, and I learned the importance of putting effort into both your slides and your talk,” Sara said.

“I learned that keeping a paper presentation simple and digestible is what makes it stand out.”

“When people listen to presentations all day, delivering content that is both engaging and digestible for different audiences goes a long way,” Sara explained.

After the paper presentation, Sara received several follow-up questions, indicating a high level of audience engagement. From there, she had discussions with other attendees from Seoul, Germany and New Zealand, all of whom expressed interest in future collaborations.

Sara plans to submit follow-up papers in February 2026 and intends to reach out to some of these contacts for potential collaboration.

This research trip was funded by the ADM+S RMIT node and ADM+S HDR funding.


Left-to-right: Flora Salim, Wei Shao, Haley Stone, Yufan Kang, Wilson Wongso, Du Yin, Yang Yang.

Wilson Wongso completes USA research trip

Author ADM+S Centre
Date 16 December 2025

ADM+S researcher Wilson Wongso, a PhD student from UNSW, has recently completed a research trip to the University of California, Berkeley and the University of Minnesota in the United States. While there, he attended a conference and delivered presentations about his research on Large Language Models (LLMs).

Wilson attended the ACM International Conference on Advances in Geographic Information Systems (SIGSPATIAL 2025), hosted by the University of Minnesota. He travelled with fellow researchers from Collaborative Human-Centric AI Systems (CRUISE), a UNSW-based research group which includes ADM+S Chief Investigator Flora Salim. Wilson also met with ADM+S Research Fellow Yufan (Tina) Kang from Monash University.

SIGSPATIAL 2025 attracted participants from a wide range of universities, institutes and industry partners, including Google and Amazon. While at the conference, Wilson presented his research on GenUP as a lightning talk and during the poster session.

“The core idea of GenUP is to generate user profiles that inform POI recommender systems, giving end-users more control over their recommendations,” Wilson said.

At SIGSPATIAL, Wilson served as a program committee member for the UrbanAI workshop, organised by Oak Ridge National Laboratory, UNSW and Emory University. He was also present for his supervisor Flora Salim’s invited talk, “Towards World Models for Urban Mobility”, at a workshop on Urban Mobility Foundation Models.

“My biggest takeaway is that SIGSPATIAL showcases a diverse range of interconnected research, and it was encouraging to see that my PhD research questions remain both open and highly relevant challenges,” Wilson said.

“I also gained new ideas on methods and approaches that I can potentially apply to my research.”

Wilson then visited the HuMNet Lab at the University of California, Berkeley. He was hosted by Professor Marta C. Gonzalez, whose research focuses on urban mobility. At Berkeley, Wilson presented his PhD work so far, including GenUP and Massive-STEPS, to Professor Gonzalez and her team.

Wilson with Prof Gonzalez at UC Berkeley.

The presentation sparked in-depth discussions with her students about their research, spanning topics such as clustering human lifestyles from mobility traces, examining geographical biases in existing systems, and applying classical urban mobility theories to modern LLM approaches.

As a result, Wilson confirmed an upcoming collaboration with one of Professor Gonzalez’s students. He plans to contribute modern machine learning techniques alongside classical theoretical approaches.

“I aim to dive deeper into classical urban mobility theories, as my background in cutting-edge LLMs can overlook these foundational concepts. Combining modern models with classical theories will allow us to build more robust and explainable ‘hybrid’ systems,” Wilson said.

“It was inspiring to see that we are tackling the same research problems in parallel, each leveraging our own strengths and perspectives. Collaborating in this way yields meaningful and impactful results.”

Wilson’s research on this trip is part of the broader GenAISim Signature project at ADM+S. This trip was funded through ADM+S HDR funding and the GenAISim project.

The team at an evaluation session at RMIT. Mark Sanderson.

ADM+S RMIT team win first prize at international RAG challenge

Author ADM+S Centre
Date 11 December 2025

Congratulations to the team of ADM+S researchers from RMIT, who have won first place at the Massive Multi-Modal User-Centric Retrieval-Augmented Generation (MMU-RAG) Challenge at the NeurIPS 2025 conference.

The inaugural MMU-RAG competition took place at the 39th edition of the Annual Conference on Neural Information Processing Systems (NeurIPS 2025).

“Our team, comprising RMIT students and staff, placed first in the open-source systems track under the dynamic user-based evaluation,” said Oleg Zendel.

“This evaluation used a chatbot arena format where users submitted any query they wanted and compared the responses from several systems side by side.”

Of the 81 registered teams, just eight managed to submit a fully working system due to the challenging technical requirements.

The MMU-RAG challenge is a new international competition, developed by Carnegie Mellon University’s Language Technologies Institute (LTI) in partnership with Amazon.

It was launched to evaluate the next generation of Retrieval-Augmented Generation (RAG) systems by recreating the complexity of real-world information needs. RAG systems combine large-scale information retrieval with AI text generation, allowing them to produce informed and contextually relevant responses.

These systems are increasingly used in applications like advanced chatbots, digital assistants, and research tools.
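To make the RAG pattern concrete, here is a minimal sketch of the retrieve-then-generate loop described above. It is purely illustrative and unrelated to the winning team’s system: the two-document corpus is invented, retrieval is naive term overlap, and the generate function is a stub standing in for a real LLM call.

    # Minimal retrieval-augmented generation (RAG) loop, for illustration only.
    corpus = [
        "RAG systems combine document retrieval with text generation.",
        "Chatbot arenas compare responses from several systems side by side.",
    ]

    def retrieve(query: str, k: int = 1) -> list[str]:
        """Rank documents by naive term overlap with the query; return the top k."""
        query_terms = set(query.lower().split())
        ranked = sorted(
            corpus,
            key=lambda doc: len(query_terms & set(doc.lower().split())),
            reverse=True,
        )
        return ranked[:k]

    def generate(query: str, context: list[str]) -> str:
        """Stub for an LLM call: a real system would prompt a model with the
        query plus the retrieved context and return its generated answer."""
        return f"Answer to {query!r}, grounded in: {' | '.join(context)}"

    query = "How do RAG systems produce relevant responses?"
    print(generate(query, retrieve(query)))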

This marks the team’s second win in a RAG competition this year, following their win at the LiveRAG competition at the 2025 SIGIR conference.

Sarah Erfani holding her award in front of a blue sign.

ADM+S Researcher Sarah Erfani wins Award for AI Safety Research

Author ADM+S Centre
Date 11 December 2025

Congratulations to Associate Professor Sarah Erfani, from the University of Melbourne node of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), who has been awarded a Young Tall Poppy Award in recognition of her research on AI safety assurance.

Presented by the Australian Institute of Policy and Science (AIPS), the Young Tall Poppy Science Awards celebrate outstanding early-career researchers who not only excel in their fields but also demonstrate a strong commitment to engaging the public in science. The awards recognise excellence in both research achievement and science communication.

“This award is both humbling and deeply energising. It renews my confidence and inspires me to keep pushing the boundaries of AI safety, ensuring that my research continues to protect and support our communities,” Sarah said.

“This recognition reminds me why I am so committed to this work and motivates me to go even further, both in advancing scientific discovery and in shaping a future where AI genuinely makes a positive difference in everyone’s lives.”

Sarah’s work focuses on developing methods for AI safety assurance, ensuring AI systems operate reliably and transparently. Her research aims to build public trust in AI technologies by enabling stakeholders to safely adopt AI tools in real-world situations.

The Young Tall Poppy Awards have been running in Victoria since 1999, with more than 150 researchers recognised for their excellence over that time. The program forms part of AIPS’ broader Tall Poppy Campaign, which aims to encourage a culture that values scientific achievement and public engagement with research.

ADM+S Researchers Dr Bernadette Hyland-Wood (left) and Dr Aaron Snoswell (right) at the Next Generation Responsible AI Symposium (Image supplied).

Making AI work visible with the GenAI Arcade: best presentation award at Next Generation Responsible AI Symposium

Author ADM+S Centre
Date 11 December 2025

ADM+S researchers from QUT’s GenAI Lab have been awarded Best Presentation at the Next Generation Responsible AI Symposium, jointly hosted by CSIRO and the Australian Institute for Machine Learning (AIML) at the University of Adelaide.

Dr Aaron Snoswell, Associate Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), presented “Making AI Work Visible with the GenAI Arcade”, showcasing the GenAI Arcade, an interactive platform designed to make the inner workings of generative AI visible, engaging, and accessible for diverse communities.

Developed by ADM+S researchers at the QUT GenAI Lab, the Arcade reflects the Lab’s mission to combine technical, humanities, and social science expertise to build tools that uplift public understanding of generative AI.

“We created this site to help people explore how generative AI works, what it can and can’t do, and why that matters,” said Dr Snoswell.

Held from 1–2 December 2025 in Adelaide, and coinciding with the release of Australia’s National AI Plan, the two-day symposium brought together Australia’s leading Early- and Mid-Career Researchers (EMCRs) along with government and industry representatives to explore the future of responsible AI.

Through presentations, cross-disciplinary discussions, and hands-on workshops, participants focused on translating responsible AI principles into practice. The symposium provided a launchpad for ongoing collaboration, with opportunities for participants to contribute to a post-event writing initiative and strengthen Australia’s responsible AI community.

The award-winning presentation, “Making AI Work Visible with the GenAI Arcade,” is co-authored by William He, Distinguished Professor Jean Burgess and Dr Kevin Witzenberger.

Their work, the GenAI Arcade, showcases creative, interactive methods for making generative AI processes more transparent and understandable for public audiences.

Distinguished Professor and Centre Director Julian Thomas and Deputy Director-General of the Public Technology Service Department Lifeng Zhang.

State Information Centre Delegation visits ADM+S

Author ADM+S Centre
Date 3 December 2025

On Thursday 20 November 2025, ADM+S hosted a delegation from the Public Technology Service Department within China’s State Information Centre (SIC).

The delegation comprised 20 officials from across the SIC, including Deputy Director-General of the Public Technology Service Department Lifeng Zhang, and Deputy Director-General of the General Office Xin Lyu.

SIC was established in 1987 to advise the Chinese government on strategies in digital technologies, economy, and diplomacy. The group was hosted at the ADM+S offices at RMIT University by Centre Director Distinguished Professor Julian Thomas, Chief Investigator Professor Haiqing Yu, Associate Investigator Professor Jeffrey Chan, Postdoctoral Research Fellow Dr Jiaxi Hou, and Chief Operating Officer Mr Nick Walsh.

The group discussed the SIC’s role in coordinating economic analysis and information development for the Chinese government, and a range of ADM+S research projects, including the Australian Digital Inclusion Index, which measures the extent to which people in Australia can access, afford and have the ability to benefit from digital technologies.

Chief Investigator Professor Haiqing Yu and Postdoctoral Research Fellow Dr Jiaxi Hou from RMIT’s School of Media and Communication delivered a presentation on the project Language and Cultural Diversity in ADM: Australia in the Asia Pacific, a new project investigating the challenges and opportunities for cultural and linguistic diversity in automated decision-making across Australia and the Asia-Pacific region.

Focusing on language and cultural diversity as the central concern, the project aims to better understand the ways in which AI and ADM may be utilised to promote diversity and social cohesion across our region, in addition to identifying the roles of bias and manipulation in ADM.

ADM+S Associate Investigator and Professor Jeffrey Chan from the School of Computing Technologies shared insights into the ADM+S Centre’s work in mapping automated decision-making across institutions, such as the project Mapping ADM tools in administrative decision-making in NSW, a partnership between ADM+S and the NSW Ombudsman to map and analyse the use of automated systems in state and local government sectors in NSW.

The group also discussed the Centre’s research on new approaches that aim to combine fairness, transparency and safety guarantees for machine learning based systems in the sharing economy via research projects such as Adaptive, Multi-Factor Balanced, Regulatory Compliant Routing ADM Systems.

Delegation organiser Rachel Wong thanked ADM+S for hosting the visit, “On behalf of the entire visiting group, we sincerely thank you all for your meticulous reception and thoughtful arrangements during the group’s visit this morning. Gaining valuable insights into your research, the group deeply admire your dedication and remarkable achievements.”

Distinguished Professor Julian Thomas with a delegation of researchers from Hong Kong Baptist University.

Hong Kong Baptist University researchers visit ADM+S to explore collaboration opportunities

Author ADM+S Centre
Date 3 December 2025

On Tuesday 2 December, ADM+S was delighted to host a group of researchers from Hong Kong Baptist University (HKBU).

Professor Daniel Lai, Dean of the Faculty of Arts and Social Sciences at HKBU, along with Professor Kaxton Siu and Faculty Manager Alice Wong, met with Centre Director Distinguished Professor Julian Thomas, Chief Investigator Professor Haiqing Yu, and Postdoctoral Research Fellow Dr Jiaxi Hou, to gain a deeper understanding of the Centre’s research program and discuss potential collaborations and the exchange of ideas. 

The HKBU researchers were particularly interested in two ADM+S research projects, Mapping the Digital Gap and Language and Cultural Diversity in ADM, and how these projects are contributing to the development of knowledge and strategies for responsible, ethical, and inclusive automated decision-making.

The group also toured the world-class ADM+S offices at RMIT University to see first-hand how our ARC investigators, postdoctoral researchers, students and professional staff from across the Centre’s university nodes are working together in an interdisciplinary, multi-institutional environment to tackle some of the biggest challenges posed by the implementation of automated decision-making and artificial intelligence systems. 

Centre Director Distinguished Professor Julian Thomas said, “It was a pleasure hosting Professors Daniel Lai, Kaxton Siu and Manager Alice Wong at the ADM+S Centre.”

“We are hoping to extend the relationship with HKBU via a range of activities such as student placements, summer schools, and other research collaborations.”

The ADM+S Centre thanks ADM+S Executive Officer Julie Stuart and HKBU Manager Alice Wong for their support and coordination of the visit.

Aerial photo of Djarindjin and Lombadina communities, West Kimberely region, Western Australia. The research team visited 12 remote First Nations communities for the study. Image: Daniel Featherstone, RMIT.

New report: First Nations Australians twice as likely to be digitally excluded

Author RMIT University Media
Date 3 December 2025

Three in four First Nations people living in remote and very remote communities are digitally excluded according to the Mapping the Digital Gap report by RMIT University and Swinburne University of Technology at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S). This means many face significant barriers to accessing and using online services needed for daily social, economic and cultural life.

This 2025 outcomes report draws on three years of fieldwork to compare digital inclusion levels for First Nations people to nationwide Australian Digital Inclusion Index (ADII) scores by remoteness, location, age and other factors.

Drawing on expanded data, the report found a 10.5 point digital gap for First Nations Australians nationally, with this gap more than doubling in the remote communities surveyed for the report.

But it wasn’t all bad news; the Mapping the Digital Gap research found an 8.7 point improvement in digital ability for First Nations people in very remote communities, rising from 45.8 in 2023 to 54.5 in 2025.

This boost suggests First Nations Australians now have greater access to digital connectivity and support to develop online skills needed for work, education, health, banking and other vital services.

Co-investigator Associate Professor Daniel Featherstone from ADM+S at RMIT said people living in remote communities were rapid adopters of digital technology and innovative in finding ways to connect.

“We found digital participation is on the rise, with more people than ever before trying to get online,” Featherstone said.

“Connectivity is an essential service nowadays, especially in remote communities.

“But there are a range of barriers to having affordable and reliable internet access in these communities – largely due to limited or strained infrastructure, low household connectivity and high reliance on pre-paid mobile services.”

Researchers Lyndon Ormond-Parker, Audrey Shadforth and Daniel Featherstone in Djarindjin community.

While access is improving in many remote communities with expanded mobile and Wi-Fi connectivity, it remains the largest contributor to the digital gap, 42.4 points below the national score for non-First Nations Australians.

The gap in access is nearly four times greater than the gap for affordability (11), and more than twice the gap for digital ability (19.3).

Elders, low-income households, and people with limited English or people living with a disability in remote areas continue to face significant barriers to getting online.

The RMIT and Swinburne research team partnered with local First Nations organisations in 12 remote First Nations communities from 2022 to 2024, working with community co-researchers to promote community engagement in the project.

First Nations co-investigator Professor Lyndon Ormond-Parker from ADM+S at RMIT said the community partners and co-researchers were critical to the project’s success.

“All data collected is given back to the community through annual outcomes reports, with digital inclusion plans to support community-led and place-based solutions,” he said.

The team is tracking progress towards Closing the Gap Target 17 under the National Partnership Agreement, which aims for equal digital inclusion for First Nations Australians by 2026.

Daily internet use rising but gaps remain

In 2022, 44% of people in remote First Nations communities visited by the research team used the internet daily, with 20% not online at all.

By 2024, daily usage climbed to 62%, while non-users fell to 14%.

Meanwhile, 95% of non-First Nations Australians are daily users and just 2% are non-users.

Telstra, the study’s industry partner, delivers mobile, fixed-line phone and broadband services in remote communities, providing essential connectivity.

Lauren Ganley, Head of Telstra’s First Nations Strategy & Engagement, said reliable connectivity was critical to improving digital inclusion in remote communities.

“Quality connectivity can be life-changing for remote First Nations communities, unlocking access to opportunities and growth,” Ganley said.

“Telstra is proud to partner in this important work and help bridge the digital gap, so communities can connect, learn and thrive.”

New dashboard puts data in communities’ hands

Under an expanded national project with support from the Australian Government, the ADII team last month launched the First Nations Digital Inclusion Dashboard – Australia’s first interactive, national data set tracking First Nations digital inclusion.

The dashboard empowers local organisations and communities to access up-to-date data to inform local decision making.

This will give First Nations communities better tools to track progress and advocate for further improvements, ahead of the next Mapping the Digital Gap report in December 2026.

Mapping the Digital Gap: 2025 Outcomes Report is published by the ARC Centre of Excellence for Automated Decision-Making and Society, RMIT University, Swinburne University of Technology and Telstra. (DOI: 10.60836/1dhh-2e31)

Telephone tower and satellite dishes. Image: Daniel Featherstone.

How Starlink is connecting remote First Nations communities – and creating new divides

Authors Daniel Featherstone, Kieran Hegarty
Date 3 December 2025

In the Cape York community of Wujal Wujal, local service providers used to hold their breath every time a big storm rolled in. Cloud cover could knock out their satellite internet just when they needed it most.

Since installing Starlink’s low Earth orbit (LEO) satellite service, however, everything from video calls to uploading files has become far more reliable – even in heavy rain. People report there is now no lag, whereas with the previous service, Sky Muster, even cloud cover could cause the internet to stop working.

Reliable connectivity is crucial in an emergency. When nearly half the buildings in Wujal Wujal were destroyed by the December 2023 flood following Cyclone Jasper, and the fibre-optic cable was broken, Starlink provided the only reliable communications in the aftermath.

Examples like this help explain why Starlink has grown so quickly in remote Australia. With high speeds, low latency and data that works in wet weather, it has become the preferred option for agencies and businesses frustrated with older technologies. There are now more than 200,000 Starlink subscriptions in Australia, compared with about 80,000 NBN Sky Muster services.

But our research as part of the Mapping the Digital Gap project shows Starlink is creating a new kind of digital divide in remote First Nations communities – not just between cities and the bush, but within communities themselves. A small minority now enjoy fast, reliable Starlink, while First Nations households predominantly use prepaid mobile services, where mobile is available, with high-priced but limited data.

Twice the rate of digital exclusion – and worse in remote communities

The new Mapping the Digital Gap 2025 outcomes report finds First Nations Australians are twice as likely as other Australians to be digitally excluded.

Nationally, using the Australian Digital Inclusion Index measure out of 100, First Nations Australians score on average 63.4, while non-First Nations Australians average 73.9 – a “digital gap” of 10.5 points. In the very remote communities we visited, this gap more than doubles to 24.2, with three in four people digitally excluded.

Access to reliable and affordable connectivity and devices is the biggest driver. Access scores in very remote First Nations communities sit 42.4 points below those of non-First Nations Australians – far larger than gaps for affordability or digital ability.

There is some good news. Digital ability has improved by nearly nine points in two years, and daily internet use has risen from 44% to 62%. But this still lags far behind other Australians, 95% of whom go online daily.

In short, people are trying harder than ever to get online – but face barriers of infrastructure, pricing and limited digital support.

Starlink for agencies, prepaid mobiles for everyone else

Starlink arrived in northern Australia in late 2022 and spread quickly across our research sites. Schools, councils, health services and police adopted it to get around mobile congestion and weather-related dropouts.

As one coordinator in Wadeye said, “We used to just stop working at three … [now] we’ve all been Elon Musked.”

The rapid uptake shows remote communities are often early adopters. In Wilcannia, café owner Shona Cook says they “went straight to Starlink because we know that it works out in regional areas […] everything you need” now runs on it.

But Starlink remains out of reach for most First Nations households. Across sites such as Wilcannia and Wujal Wujal, only 1–2% had adopted it by 2024. Upfront equipment costs of A$500 to A$600 and monthly fees of A$139 are simply unaffordable.

Instead, nearly everyone relies on mobile phones. In 2024, 99% of First Nations mobile users in remote communities were on prepaid plans.

Many households reported spending more than A$280 a month on data, with large households often exceeding A$400 – for slow speeds, data limits and patchy coverage. Those spending the most, relative to income, often get the worst internet.

A new ‘elite’ infrastructure

This pattern is creating a localised divide. Agencies, contractors and a few higher-income residents enjoy fast Starlink. At the same time, most others are left with congested 4G, legacy satellite services and costly, limited prepaid data.

One Wilcannia resident can now send “massive files within two minutes” and stream reliably, but said: “If there was a cheaper way […] we’d definitely look at that.”

Without intervention, Starlink risks becoming “elite” infrastructure: a premium service for those who can pay, while others juggle multiple prepaid services, share phones, and sacrifice speed and reliability just to stay connected.

How to make Starlink part of the solution

Other low Earth orbit satellite internet businesses are entering the market, too. From 2026, the NBN will be using Amazon’s satellites, and Telstra is providing Starlink services and small-cell mobile services via OneWeb. These may improve reliability, but risk widening the divide if plans aren’t affordable.

The best way to avoid this is policies that treat connectivity as an essential service and design solutions around the realities of remote First Nations households. That could include:

  • targeted subsidies or concessional plans for low-income households
  • prepaid-style broadband products
  • community-based access models, such as mesh Wi-Fi or shared infrastructure
  • ongoing digital skills support within community organisations.

The new First Nations Digital Inclusion Dashboard gives communities and policymakers a powerful tool to track progress and push for change.

Closing the Gap Target 17 aims for equal digital inclusion by 2026. Starlink and other low Earth orbit services could play a transformative role – but only if the benefits are shared equitably, not reserved for the few who can pay.

Daniel Featherstone, Senior Research Fellow, RMIT University and Kieran Hegarty, Research Fellow, ARC Centre of Excellence for Automated Decision-Making & Society, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AI NPU on microchips. Image: Igor Omilaev/Unsplash.

Australia’s national AI plan has just been released. Who exactly will benefit?

Authors Jake Goldenfein, Christine Parker, Kimberlee Weatherall
Date 2 December 2025

Today, the Albanese Labor government released the long-awaited National AI Plan, “a whole-of-government framework that ensures technology works for people, not the other way around”.

With this plan, the government promises an inclusive artificial intelligence (AI) economy that protects workers, fills service gaps, and supports local AI development.

In a major reversal, it also confirms Australia won’t implement mandatory guardrails for high-risk AI. Instead, it argues our existing legal regime is sufficient, and any minor changes for specific AI harms or risks can be managed with help from a new A$30 million AI Safety Institute within the Department of Industry.

Avoiding big changes to Australia’s legal system makes sense in light of the plan’s primary goal – making Australia an attractive location for international data centre investment.

The initial caution is gone

After the public release of ChatGPT in November 2022 ushered in a generative AI boom, initial responses focused on existential risks posed by AI.

Leading AI figures even called for a pause on all AI research. Governments outlined plans to regulate.

But as investment in AI has grown, governments around the world have now shifted from caution to an AI race: embracing the opportunities while managing risks.

In 2023, the European Union created the world’s leading AI plan promoting the uptake of human-centric and trustworthy artificial intelligence. The United States launched its own, more bullish action plan in July 2025.

The new Australian plan prioritises creating a local AI software industry, spreading the benefit of AI “productivity gains” to workers and public service users, capturing some of the relentless global investment in AI data centres, and promoting Australia’s regional leadership by becoming an infrastructure and computing hub in the Indo-Pacific.

Those goals are outlined in the plan’s three pillars: capturing the opportunities, spreading the benefits, and keeping us safe.

What opportunities are we capturing?

The jury is still out on whether AI will actually boost productivity for all organisations and businesses that adopt it.

Regardless, global investment in AI infrastructure has been immense, with some predictions on global data centre investments reaching A$8 trillion by 2030 (so long as the bubble doesn’t burst before then).

Through the new AI plan, Australia wants to get in on the boom and become a location for US and global tech industry capital investment.

In the AI plan, the selling point for increased Australian data centre investment is the boost this would provide for our renewable energy transition. States are already competing for that investment. New South Wales has streamlined data centre approval processes, and Victoria is creating incentives to “ruthlessly” chase data centre investment in greenfield sites.

Under the new federal environmental law reforms passed last week, new data centre approvals may be fast-tracked if they are co-located with new renewable power, meaning less time to consider biodiversity and other environmental impacts.

But data centres are also controversial. Concerns about the energy and water demands of large data centres in Australia are already growing.

The water use impacts of data centres are significant – and the plan is remarkably silent on this apart from promising “efficient liquid cooling”. So far, experience from Germany and the US shows data centres stretching energy grids beyond their limit.

It’s true data centre companies are likely to invest in renewable energy, but at the same time growth in data centre demands is currently justifying the continuation of fossil fuel use.

There’s some requirement for Australian agencies to consider the environmental sustainability of data centres hosting government services. But a robust plan for environmental assessment and reporting across public and private sectors is lacking.

Who will really benefit from AI?

The plan promises the economic and efficiency benefits of AI will be for everyone – workers, small and medium businesses, and those receiving government services.

Recent scandals suggest Australian businesses are keen to use AI to reduce labour costs without necessarily maintaining service quality. This has created anxiety around the impact of AI on labour markets and work conditions.

Australia’s AI plan tackles this through promoting worker development, training and re-skilling, rather than protecting existing conditions.

The Australian union movement will need to be active to make the “AI-ready workers” narrative a reality, and to protect workers from AI being used to reduce labour costs, increase surveillance, and speed up work.

The plan also mentions improving public service efficiency. Whether or not those efficiency gains are possible is hard to say. However, the plan does recognise we’ll need comprehensive investment to unlock the value of private and public data holdings useful for AI.

Will we be safe enough?

With the release of the plan, the government has officially abandoned last year’s proposals for mandatory guardrails for high-risk AI systems. It claims Australia’s existing legal frameworks are already strong, and can be updated “case by case”.

As we’ve pointed out previously, this is out of step with public opinion. More than 75% of Australians want AI regulation.

It’s also out of step with other countries. The European Union already prohibits the most risky AI systems, and has updated product safety and platform regulations. It’s also currently refining a framework for regulating high-risk AI systems. Canadian federal government systems are regulated by a tiered risk management system. South Korea, Japan, Brazil and China all have rules that govern AI-specific risks.

Australia’s claim to have a strong, adequate and stable legal framework would be much more credible if the document included a plan for, or clarity about, our significant law reform backlog. This backlog includes privacy rights, consumer protection, automated decision-making in government post-Robodebt, as well as copyright and a digital duty of care.

Ultimately the National AI Plan says some good things about sustainability, sharing the benefits, and keeping Australians safe even as the government makes a pitch for data centre investment and becoming an AI hub for the region.

Compared with those of some other nations, the plan is short on specificity. The test will lie in whether the government gives substance to its goals and promises, instead of just chasing the short-term AI investment dollar.

Jake Goldenfein, Senior Lecturer, Law and Technology, The University of Melbourne; Christine Parker, Professor of Law, The University of Melbourne, and Kimberlee Weatherall, Professor of Law, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ADM+S researchers at UbiComp 2025 (image provided).

New advances in wearable and ubiquitous computing presented at UbiComp 2025

Author ADM+S Centre
Date 3 December 2025

Researchers Kaixin Ji and Hiruni Kegalle from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at RMIT University recently presented their research at UbiComp 2025, hosted in Espoo, Finland.

The annual conference brings together leading international researchers, designers, and practitioners working across ubiquitous, pervasive and wearable computing.

Hiruni presented “Watch Out! E-scooter Coming Through!: Multimodal Sensing of Mixed Traffic Use and Conflicts Through Riders’ Ego-centric Views” examining rider behaviour across three types of transport infrastructure using eye-tracking, cameras and speed data collected in real-world environments.

Hiruni connected with members of the UbiComp and MobileHCI communities and explored ideas for future collaboration including potential joint studies on cross-cultural perceptions of micro-mobility safety, as well as opportunities to co-author comparative research and explore data-driven design frameworks using shared sensing datasets.

Hiruni said the conference reinforced the growing importance of considering design, development, and deployment aspects of ubiquitous and pervasive computing technologies, as well as understanding the human experiences and social impacts these technologies facilitate.

“I was particularly inspired by the strong focus on translating research into real-world applications, especially those integrating large language models (LLMs) with sensor data to enable more context-aware and adaptive systems,” said Hiruni.

Kaixin presented “SenseSeek Dataset: Multimodal Sensing to Study Information Seeking Behaviors”, which provides resources for investigating the complex cognitive mechanisms at work as people interact with information, using consumer-grade passive sensors to record their physiological and behavioral responses.

Kaixin also presented “GLOSS: Group of LLMs for Open-ended Sensemaking of Passive Sensing Data for Health and Wellbeing”, an outcome of a collaboration with the UbiWell group at Northeastern University. This research uses an agentic LLM network for open-ended sense-making on passive sensing data.

Hiruni presenting her research at UbiComp 2025 (Image provided).

During the conference, Kaixin reconnected with collaborators from Northeastern University and Cornell Tech, strengthening ties between her work in wearable sensing and their research on mental wellbeing. 

She also met researchers from Tsinghua University and the Hong Kong University of Science and Technology to discuss applications of the SenseSeek dataset, and spoke with Professor Michael Beigl from the Karlsruhe Institute of Technology about emerging ear-worn devices for monitoring neurological conditions. 

“I was deeply moved by the enthusiasm and supportive spirit of the Ubicomp community,” said Kaixin.

“The researchers are genuinely committed to pursuing work that can make a real impact for users, and they maintain strong connections with industry. Some have even launched startups based on their research outcomes and received venture funding.”

Both Hiruni and Kaixin served as student volunteers throughout the conference. Hiruni co-organised the UbiComp4VRU workshop with colleagues from the University of Kassel and UNSW, and Kaixin served on the program committee for the UbiSense workshop.

This research visit was funded by the ARC Centre of Excellence for Automated Decision-Making and Society Research Training Program and the ADM+S RMIT node.

United Nations Development Programme (UNDP) logo and people on Zoom call

Australian digital inclusion insights shared with UNDP Malaysia and Malaysian government representatives

Author ADM+S Centre
Date 28 November 2025

On Tuesday 25 November 2025, researchers from the Australian Digital Inclusion Index (ADII) partnered with the United Nations Development Programme (UNDP) Malaysia to deliver a virtual knowledge-sharing session for senior officials across UNDP and the Malaysian Government working to design the country’s new national digital inclusion index.

Hosted by UNDP Malaysia, the session Learning from Australia’s Digital Inclusion Journey brought together more than 40 participants from key national institutions, including the Ministry of Digital and MyDIGITAL Corporation, the Malaysian Communications and Multimedia Commission, the Department of Statistics Malaysia, the Ministry of Economy, the Ministry of Communications, the Implementation Coordination Unit in the Prime Minister’s Department and the Personal Data Protection Department, alongside UNDP staff.

ADM+S and ADII were represented by Distinguished Professor Julian Thomas (RMIT University), Professor Anthony McCosker (Swinburne University of Technology), Dr Kieran Hegarty (RMIT University) and Katy Morrison (RMIT University). The session was facilitated by Yin Wei Chong and Piacarmel Andrews from UNDP.

Sharing Australia’s experience with a national digital inclusion index

The workshop focused on how Australia has used the ADII as a long-term evidence base to track digital inclusion, inform social policy and guide infrastructure and skills investments. Distinguished Professor Thomas highlighted how the Index has had to evolve over time to remain relevant to policy.

“What we’ve found is that we always see new technologies, new challenges and new ways of measuring things better,” he said.

From evidence to policy impact

Participants were particularly interested in how ADII findings have been translated into concrete policy and program interventions. The ADII team discussed examples where the Index has helped:

  • Target state and territory investments in telecommunications and internet infrastructure
  • Support understanding of Closing the Gap Target 17 on First Nations digital inclusion through partnerships with First Nations organisations and communities
  • Guide community and philanthropic initiatives such as The Smith Family’s education programs and Good Things Foundation’s digital skills work

Professor McCosker also walked participants through new ADII questions on generative AI and hybrid work, illustrating how emerging technologies can be incorporated into an established framework without losing longitudinal value.

This discussion directly supports Malaysia’s work to establish its own national Digital Inclusivity Index Malaysia (DIIM), a flagship initiative being developed by MyDIGITAL Corporation and UNDP to monitor and address the country’s digital divide.

Strengthening international collaboration on digital inclusion

For the ADII team, the session marked a significant opportunity to share lessons from nearly a decade of digital inclusion research with peers in the region, and to learn from Malaysia’s own ambitions to design a comprehensive, whole-of-government framework for digital inclusion.

SEE ALSO

ADM+S Researchers elected to Australian Academy of the Humanities

Headshots of Jean Burgess and Ramon Lobato

ADM+S Researchers elected to Australian Academy of the Humanities

Author ADM+S Centre
Date 27 November 2025

Distinguished Professor Jean Burgess, Associate Director of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), has been elected to the council of the Australian Academy of the Humanities (AAH), while Associate Investigator Professor Ramon Lobato has been elected as a Fellow.

Election to the Academy is the highest honour in the humanities in Australia, recognising scholars whose work has shaped how we understand ourselves, our histories and cultures.

Distinguished Professor Jean Burgess from the Queensland University of Technology (QUT) node was originally elected to the Academy in 2021 and is a member of the Culture and Communications Studies Section. Her research focuses on the social implications of digital media technologies, platforms, and cultures, as well as new and innovative digital methods for studying them.

Professor Ramon Lobato from Swinburne University is a distinguished media studies expert concerned with how online video content influences and disrupts audiences, industry and policy.

“It’s an honour to be elected to the Academy. The research I do with my team here at Swinburne aims to understand how media is changing in the platform age,” Ramon said.

“I’m grateful to the Academy for supporting this kind of cultural research on digital technology.”

Ramon’s current research projects investigate the cultural impacts of subscription streaming services and smart TVs in Australia.

Academy President, Professor Stephen Garton, said that research from the Academy’s Fellows is crucial to building a more resilient and inclusive nation.

“The Academy’s Fellows are at the forefront of understanding global cultural, social and historical foundations… Their work enhances Australia’s ability to navigate global uncertainty, technological disruption and rapid social change,” he said.

“What distinguishes the Academy is its ability to bring together the very best humanities minds to address the most pressing issues facing Australia. The collective expertise of our Fellows — from First Nations knowledge leadership to digital cultures, ethics, heritage and languages — is a national asset.”

In total, 30 new members were elected to the Australian Academy of the Humanities Fellowship, including Fellows, Corresponding Fellows, and Honorary Fellows.

Read the full list of new members on the Australian Academy of the Humanities website.

SEE ALSO

ADM+S researcher Dang Nguyen investigates digital transformation across Vietnam

Dang Nguyễn standing in front of a sign saying "FOXCONN"

ADM+S researcher Dang Nguyen investigates digital transformation across Vietnam

Author ADM+S Centre
Date 25 November 2025

ADM+S Research Fellow Dang Nguyen recently visited Vietnam for a fieldwork trip to investigate how digital transformation is reshaping media practice, civic participation and technology infrastructure. Travelling through Hanoi, Bac Ninh and Ho Chi Minh City, Dang conducted interviews, site visits and field observations with journalists, policy specialists and more.

In Hanoi, Dang joined local journalist Lam Le to visit recycling villages in Bac Ninh, where large-scale repair and reuse of discarded electronics takes place. This trip continues ongoing collaboration with ADM+S PI Professor Melissa Gregg and UC Berkeley’s School of Information on what the ‘afterlives’ of hardware look like and how reuse, repair, and carbon reduction reshape the ecologies of digital infrastructure, consumer electronics, and AI.

A large beige industrial sack filled with discarded circuit boards and electronic components
Discarded circuit boards in Bac Ninh / Dang Nguyen

Dang documented discarded circuit boards, hard drives, wiring and components all awaiting processing: “A whole ecosystem of discarded hardware waiting to be reborn,” said Dang.

Dang also met with Khang Nguyen, Regulatory Reforms Attaché at the British Embassy in Hanoi, following Khang’s contributions to the recent Hanoi Convention against Cybercrime. Their discussion explored Vietnam’s digital governance direction, the UK’s decision to sign the convention, and the wider geopolitical implications of emerging regulatory frameworks.

Finally, Dang attended a meeting with Phuong Nguyen, Communications Manager at Oxfam Vietnam, focused on civic participation and digital rights. Phuong noted increasing concern within civil society over how artificial intelligence may restrict civic space. Dang raised the question of how civic actors might instead mobilise AI for public interest outcomes. 

Insights from this trip will support ongoing ADM+S research into digital ecologies, technology governance and civic futures in Southeast Asia on the Language and Cultural Diversity in ADM: Australia in the Asia Pacific project.

Dang and Phuong sit at an outdoor cafe smiling at the camera
Dang Nguyen with Phuong Nguyen from Oxfam Vietnam

SEE ALSO

Thao Phan and Zahra Stardust awarded Discovery Early Career Research Award

An image of Zahra Stardust and Thao Phan's headshots

Thao Phan and Zahra Stardust awarded Discovery Early Career Research Award

Author ADM+S Centre
Date 25 November 2025

ADM+S Affiliates Dr Thao Phan and Dr Zahra Stardust have been awarded a Discovery Early Career Research Award (DECRA) from the Australian Research Council (ARC) for their respective research projects.

Thao Phan, from the Australian National University (ANU), was awarded a DECRA for her project, Model minorities: racial targeting and discrimination in the platform era.

This project aims to investigate the impacts of algorithmic targeting and discrimination on racially marginalised groups in Australia, with the goal of generating new knowledge on the local impacts of global social media platforms by piloting innovative social science methods to document and analyse real-world experiences of racial targeting and classification.

Zahra Stardust, from the ADM+S node at the Queensland University of Technology (QUT), was awarded a DECRA for her project, Safeguarding sexual and reproductive rights online.

This project aims to investigate how online spaces are increasingly hostile to sexual minorities, who face criminalisation and surveillance. By bringing together local and global stakeholders, including sexual health organisations, public interest technologists, human rights lawyers and affected communities, it examines how digital platforms can better safeguard sexual and reproductive rights online.

The ARC has announced over $100 million in funding to support winners of the 2026 DECRA round. The funding supports the projects of 200 Early Career Researchers, addressing critical knowledge gaps and strengthening Australia’s research capability and global competitiveness.

ARC Chief Executive Officer Professor Ute Roessner explained that these newly funded projects ensure Australia remains at the forefront of global research and innovation, building a skilled workforce and delivering research-backed impacts.

‘The ARC is proud to be empowering the next generation of research leaders to thrive in supportive environments, collaborate globally, and deliver outcomes that matter,’ Professor Roessner said.

Read the full list of 2026 ARC DECRA recipients’ project descriptions.

SEE ALSO

Australia is about to ban under-16s from social media. Here’s what kids can do right now to prepare

Three young children looking at mobile phones.
Dolgachov / Getty Images

Australia is about to ban under-16s from social media. Here’s what kids can do right now to prepare

Authors Daniel Angus, Tama Leaver
Date 21 November 2025

If you’re a young person in Australia, you probably know new social media rules are coming in December. If you and your friends are under 16, you might be locked out of the social media spaces you use every day.

Some people call these rules a social media ban for under 16s. Others say it’s not a “ban” – just a delay.

Right now we know the rules will definitely include TikTok, Snapchat, Instagram, Facebook, Threads, Reddit, X, YouTube, Kick and Twitch. But that list could grow.

We don’t know exactly how the platforms will respond to the new rules, but there are things you can do right now to prepare, protect your digital memories, and stay connected.

Here’s a guide for the changes that are coming.

Download your data

TikTok, Instagram, Snapchat and most other platforms offer a “download your data” option. It’s usually buried in the app settings, but it’s powerful.

A data download (sometimes called a “data checkout” or “export”) includes things like:

  • photos and videos you’ve uploaded
  • messages and comments
  • friend lists and interactions
  • the platform’s inferences about you (what it thinks you like, who you interact with most, and the sort of content it suggests for you).

Even if you can’t access your account later, these files let you keep a record of your online life: jokes, friendships, cringey early videos, glow-ups, fandom moments, all of it.

You can save it privately as a time capsule. Researchers are also building tools to help you view and make sense of it.

Downloading your archive is a smart move while your accounts are still live. Just make sure you store it somewhere secure. These files can contain incredibly detailed snapshots of your daily life, so you might want to keep them private.
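If you want to explore an archive once you have it, even a few lines of code can help you summarise it. The sketch below is a hypothetical example only: it assumes the export contains a messages.json file holding a list of objects with “sender” and “text” fields, but every platform names and structures its export differently.

```python
import json
from collections import Counter

# Hypothetical file name and structure; check what your platform actually exports.
with open("messages.json", encoding="utf-8") as f:
    messages = json.load(f)  # assumed: a list of {"sender": ..., "text": ...}

print(f"{len(messages)} messages in the archive")

# Tally who appears most often in the assumed "sender" field.
senders = Counter(m.get("sender", "unknown") for m in messages)
for sender, count in senders.most_common(5):
    print(f"{sender}: {count} messages")
```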

Don’t assume platforms will save anything for you

Some platforms may introduce official ways to export your content when bans begin. Others may move faster and simply block under-age accounts with little warning.

As one example, Meta – the parent company of Facebook, Instagram and Threads – has begun to flag accounts it thinks belong to under-16s. The company has also provided early indications that it will permit data downloads after the new rules come into effect.

For others the situation is less clear.

Acting now, while you can still log in normally, is the safest way to keep your stuff.

4 ways to stay connected

Losing access to the platform you use every day to talk with friends can feel like losing part of your social world. That’s real, and it’s okay to feel annoyed, worried, or angry about it.

Here are four ways to prepare.

1. Swap phone numbers or handles on non-banned platforms now.

Don’t wait for the “you are not allowed to use this service” message.

2. Set up group chats somewhere stable.

Use iMessage, WhatsApp, Signal, Discord, or whatever works for your group and doesn’t rely on age-restricted sign-ups.

3. Keep community ties alive.

Many clubs, fandom spaces, gaming groups and local communities are on multiple sites or platforms (Discord servers, forums, group chats). Get plugged into those spaces.

4. Don’t presume you’ll be able to get around the ban.

Teens who get around the ban are not breaking the law. There is no penalty for teens, or for parents who help them, if they do get around the ban and access social media while under 16.

It’s up to platforms to make these new laws work. Not teens. Not parents.

Do prepare, though, rather than counting on being able to get around the ban.

Just using a VPN to pretend your computer is in another country, or wearing a rubber mask to look older in an age-estimating selfie, probably won’t be enough.

A note for adults: take big feelings seriously

Most people recognise that the social connections, networks and community enabled by social media are valuable – especially to young people.

For some teens, social media may be their primary community and support group. It’s where their people are.

It will be difficult for some teens when that community disappears. For others, it may be even worse.

The ideal role of trusted adults is to listen, validate and support teens during this time. No matter how older people feel, for young people this may be like losing a large part of their world. For many that will be really hard to cope with.

Services like Headspace and Kids Helpline (1800 55 1800) are there to support young people, too.

How to keep your agency in a frustrating situation

A lot of people will find it frustrating that we’re excluding teens, rather than forcing platforms to be built safer and better for everyone. If you feel that way, too, you’re not alone.

But you aren’t powerless.

Saving your data, preparing alternative communication channels, and speaking out if you want to are all ways to:

  • own your digital history
  • stay connected on your own terms
  • make sure youth voices inform how Australia thinks about online life going forward.

You’re allowed to feel annoyed. You’re also allowed to take steps that protect your future self.

If you lose access, you’re not gone – just changing channels

Social media bans for teens will create disruption. But they won’t be the end of your friendships, creativity, identity exploration, or culture.

It just means the map is shifting. You get to make deliberate choices about where you go next.

And whatever happens, the online world isn’t going to stop changing. You’re part of the generation that actually understands that, and that’s a strength, not a weakness.

Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology and Tama Leaver, Professor of Internet Studies, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S welcomes new Partner Organisation and Investigator from the University of Bristol

University of Bristol Digital Futures Institute red coloured flag outside Centre

ADM+S welcomes new Partner Organisation and Investigator from the University of Bristol

Author ADM+S Centre
Date 20 November 2025

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is delighted to announce the University of Bristol as a new Partner Organisation, and Professor Melissa Gregg as a Partner Investigator to the Centre.

Professor Gregg joins ADM+S from the University of Bristol Digital Futures Institute (BDFI), where she leads research on carbon reduction in computer hardware and software, the history of high-tech innovation and the environment, the future of work, and new use cases for artificial intelligence and augmented reality in collaboration with Meta Reality Labs.

Professor Gregg previously served on the ADM+S International Advisory Board (2020 – 2024) and brings extensive experience bridging academia and industry. Before joining the University of Bristol, she spent a decade at Intel, where she led User Experience Research in the Client Computing Group, and contributed to a range of product initiatives, including the research that launched Intel EVO laptops.

As Senior Principal Engineer in Intel’s Software and Advanced Technology Group, she established the first product team focused on carbon emissions reduction for the CTO, and has since worked as a sustainability advisor to Lenovo, ASML and Meta.

Her research further intersects with Centre interests including the analysis of social media platforms, digital inclusion and the digital gap, inclusive AI, advertising, culture and authenticity, community and DIY technology, and future imaginaries and intersectionality.

ADM+S Director, Distinguished Professor Julian Thomas, said the partnership will further strengthen the Centre’s international collaboration with the UK, building on shared priorities around responsible innovation and the social and environmental impacts of automation.

“We’re delighted to join the University of Bristol in our research efforts and to welcome Professor Gregg to the Centre as a Partner Investigator,” he said.

“Professor Gregg’s focus on sustainability and her extensive understanding of how technology shapes everyday life from academic and industry perspectives is an invaluable contribution to the Centre’s research.”

Through this partnership both BDFI and the ADM+S aim to connect academic research, industry, government and the community sector to develop responsible, ethical and inclusive automated decision-making systems.

SEE ALSO

How do ‘AI detection’ tools actually work? And are they effective?

A woman weaves with AI software detection graphics surrounding
Image: Elise Racine

How do ‘AI detection’ tools actually work? And are they effective?

Author T.J. Thomson, Aaron Snoswell and James Meese
Date 14 November 2025

As nearly half of all Australians say they have recently used artificial intelligence (AI) tools, knowing when and how they’re being used is becoming more important.

Consultancy firm Deloitte recently partially refunded the Australian government after a report it published contained AI-generated errors.

A lawyer also recently faced disciplinary action after false AI-generated citations were discovered in a formal court document. And many universities are concerned about how their students use AI.

Amid these examples, a range of “AI detection” tools have emerged to try to address people’s need for identifying accurate, trustworthy and verified content.

But how do these tools actually work? And are they effective at spotting AI-generated material?

How do AI detectors work?

Several approaches exist, and their effectiveness can depend on which types of content are involved.

Detectors for text often try to infer AI involvement by looking for “signature” patterns in sentence structure, writing style, and the predictability of certain words or phrases being used. For example, the use of “delves” and “showcasing” has skyrocketed since AI writing tools became more available.

However, the difference between AI and human writing patterns is getting smaller and smaller, which means signature-based tools can be highly unreliable.
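To make the idea concrete, here is a toy sketch of a signature-based text check that simply measures how often such tell-tale words appear relative to the length of a text. The marker list and threshold are invented for illustration; real detectors rely on much richer statistical features than this.

```python
import re

# Words whose frequency has risen sharply in AI-assisted writing.
# The list and threshold are illustrative assumptions, not a real detector's values.
MARKER_WORDS = {"delve", "delves", "showcase", "showcasing", "underscore", "tapestry"}

def marker_rate(text: str) -> float:
    """Share of words in the text that belong to the marker list."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(w in MARKER_WORDS for w in words) / len(words)

def naive_ai_flag(text: str, threshold: float = 0.01) -> bool:
    """Flag text whose marker-word rate exceeds the threshold."""
    return marker_rate(text) > threshold

print(naive_ai_flag("This report delves into the data, showcasing key trends."))  # True
print(naive_ai_flag("The report looks at the data and notes key trends."))       # False
```

The sketch also shows why such checks misfire: a human writer who happens to favour these words gets flagged, while an AI model prompted to avoid them passes cleanly.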

Detectors for images sometimes work by analysing embedded metadata which some AI tools add to the image file.

For example, the Content Credentials inspect tool allows people to view how a user has edited a piece of content, provided it was created and edited with compatible software. Like text, images can also be compared against verified datasets of AI-generated content (such as deepfakes).
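As a rough sketch of the metadata approach, the snippet below uses the Pillow imaging library to look for generator names in an image’s embedded metadata. The file name and hint strings are assumptions made for illustration; tools like Content Credentials parse a signed C2PA manifest rather than scanning free-text fields like this.

```python
from PIL import Image

# Strings some generators write into metadata; illustrative assumptions only.
GENERATOR_HINTS = ("midjourney", "dall-e", "stable diffusion", "imagen")

def metadata_hints(path: str) -> list[str]:
    """Collect metadata values that mention a known AI generator."""
    img = Image.open(path)
    values = [str(v) for v in img.info.values()]    # PNG text chunks and similar
    values.append(str(img.getexif().get(305, "")))  # EXIF tag 305 = Software
    return [v for v in values if any(h in v.lower() for h in GENERATOR_HINTS)]

print(metadata_hints("suspect.png") or "no generator metadata found")  # hypothetical file
```

A screenshot or re-save strips this metadata entirely, so a check like this can offer positive evidence of AI involvement but never prove its absence.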

Finally, some AI developers have started adding watermarks to the outputs of their AI systems. These are hidden patterns in any kind of content which are imperceptible to humans but can be detected by the AI developer. None of the large developers have shared their detection tools with the public yet, though.
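One research approach to text watermarking, often called a “green list” scheme, nudges the model towards a secret, pseudo-randomly chosen half of the vocabulary, so a detector holding the key can test whether a suspiciously large share of words falls in that half. Below is a heavily simplified word-level sketch of the detection side, with an invented key and hash split; production systems operate on model tokens, vary the split per position and keep their keys private.

```python
import hashlib
import math
import re

def green_fraction(text: str, key: str) -> tuple[float, int]:
    """Fraction of words hashing into the 'green' half under the key."""
    words = re.findall(r"[a-z']+", text.lower())
    green = sum(
        hashlib.sha256((key + w).encode()).digest()[0] % 2 == 0 for w in words
    )
    return (green / len(words) if words else 0.5), len(words)

def watermark_z_score(text: str, key: str) -> float:
    """z-score of the green fraction against the 0.5 expected by chance."""
    frac, n = green_fraction(text, key)
    return 0.0 if n == 0 else (frac - 0.5) / math.sqrt(0.25 / n)

# Unwatermarked text should score near 0; text generated with the green-list
# bias should score well above 2 once it runs to a few dozen words.
print(round(watermark_z_score("an ordinary, unwatermarked sentence", "secret-key"), 2))
```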

Each of these methods has its drawbacks and limitations.

How effective are AI detectors?

The effectiveness of AI detectors can depend on several factors. These include which tools were used to make the content and whether the content was edited or modified after generation.

The tools’ training data can also affect results.

For example, key datasets used to detect AI-generated pictures do not have enough full-body pictures of people or images from people of certain cultures. This means successful detection is already limited in many ways.

Watermark-based detection can be quite good at detecting content made by AI tools from the same company. For example, if you use one of Google’s AI models such as Imagen, Google’s SynthID watermark tool claims to be able to spot the resulting outputs.

But SynthID is not publicly available yet. It also doesn’t work if, for example, you generate content using ChatGPT, which isn’t made by Google. Interoperability across AI developers is a major issue.

AI detectors can also be fooled when the output is edited. If you use a voice cloning app and then add noise or reduce the quality by compressing the file, this can trip up voice AI detectors. The same is true of AI image detectors.

Explainability is another major issue. Many AI detectors will give the user a “confidence estimate” of how certain they are that something is AI-generated, but they usually don’t explain the reasoning behind that judgement.

It is important to realise that it is still early days for AI detection, especially when it comes to automatic detection.

A good example of this can be seen in recent attempts to detect deepfakes. The winner of Meta’s Deepfake Detection Challenge identified four out of five deepfakes. However, the model was trained on the same data it was tested on – a bit like having seen the answers before it took the quiz.

When tested against new content, the model’s success rate dropped. It only correctly identified three out of five deepfakes in the new dataset.

All this means AI detectors can and do get things wrong. They can result in false positives (claiming something is AI-generated when it’s not) and false negatives (claiming something is human-generated when it’s not).

For the users involved, these mistakes can be devastating – such as a student whose essay is dismissed as AI-generated when they wrote it themselves, or someone who mistakenly believes an AI-written email came from a real human.

It’s an arms race as new technologies are developed or refined, and detectors are struggling to keep up.

Where to from here?

Relying on a single tool is problematic and risky. It’s generally safer and better to use a variety of methods to assess the authenticity of a piece of content.

You can do so by cross-referencing sources and double-checking facts in written content. For visual content, you might compare suspect images with other images purported to have been taken at the same time or place. You might also ask for additional evidence or explanation if something looks or sounds dodgy.

But ultimately, trusted relationships with individuals and institutions will remain one of the most important factors when detection tools fall short or other options aren’t available.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University; Aaron J. Snoswell, Senior Research Fellow in AI Accountability, Queensland University of Technology, and James Meese, Associate Professor, School of Media and Communication, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S welcomes PNG delegation for women’s research capacity-building initiative

Delegates from PNG visit QUT. Group of 18 people standing in front of city background

ADM+S welcomes PNG delegation for women’s research capacity-building initiative

Author ADM+S Centre
Date 17 November 2025

In October, the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) welcomed a delegation of researchers from Papua New Guinea (PNG) as part of the Strongim Risets Kapasiti Bilong Ol Meri (SRKBOM) program — a three-year initiative aimed at strengthening the research and professional capabilities of women in PNG. The program is funded by the Australian Department of Foreign Affairs and Trade, with support from the Australian High Commission in PNG. 

The initiative supports gender equity in research by developing the skills, networks and leadership of women researchers across PNG’s academic, government and professional sectors. The delegation’s visit coincided with the 50th anniversary of PNG’s independence.

The visit was organised by ADM+S Associate Investigator Prof Janet Roitman and the Australian APEC Study Centre. 

“The delegation’s visits to ADM+S at both RMIT and QUT were highlights of this first phase of the program,” said Prof Roitman.

“The visits offered the delegation extremely useful insights into new tools and resources, but they also involved active engagement, including a discussion of the specific challenges faced by women in research institutions and exchanges regarding the normative implications of GenAI.”

The group of 12 participants included women from universities, government agencies, banking and agriculture, many holding Master’s and PhD qualifications. During their visit, the group met with researchers and professional staff at both RMIT University and QUT. 

Groups of participants sitting at tables.
Researchers from the PNG SRKBOM program taking part in a workshop delivered by QUT's GenAI Lab.

At RMIT, the delegation met with Centre Director Distinguished Prof Julian Thomas and Chief Operating Officer Nick Walsh, who provided an introduction to the Centre of Excellence program and the structure and operation of ADM+S. Matt Warren presented resources relating to digital engagement and open online tools.

The delegation participated in a workshop led by Prof Haiqing Yu, alongside Higher Degree Research students from the Language and Cultural Diversity in ADM: Australia in the Asia Pacific project at ADM+S. The session included presentations from two doctoral students, from Indonesia and Vietnam, and a group discussion exploring professional pathways and leadership for women researchers in the Asia-Pacific region.

At QUT, the delegation toured the Gardens Point and Kelvin Grove campuses, where they met with researchers from the Centre for Decent Work and Industry and the Centre for Justice, and observed demonstrations from the Centre for Robotics. They then took part in an interactive generative AI workshop co-facilitated by ADM+S (including researchers from the Generative Authenticity Project) and QUT’s GenAI Lab team.

SEE ALSO

ADM+S Hackathon navigates the “Wicked Problems” of search

A group of ADM+S members pose on steps

ADM+S Hackathon navigates the “Wicked Problems” of search

Author ADM+S Centre
Date 14 November 2025

Five teams of members from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) took part in the Centre’s annual Hackathon, this year exploring how search systems enable and constrain diverse social groups navigating complex, real-world challenges.

Over the two-day challenge, participants worked in teams to develop new methodological approaches for understanding how search systems enable and constrain diverse groups facing “wicked problems”.

The Hackathon challenge was developed by Kateryna Kasianenko, Dr Ashwin Nagappa and Dr Oleg Zendel, and based on work from the Australian Search Experience 2.0 project.

“One of the goals behind the hackathon was to get the ADM+S community more comfortable with being uncomfortable in interdisciplinary settings, and we are confident that everyone, from participants to judges, has gotten at least one step closer to this goal,” said Kateryna Kasianenko.

“It was great to see how these perspectives not only co-existed, but informed each other in several projects.” 

Oleg Zendel said that the mix of perspectives made the work exciting and it was valuable to see search through new lenses.

“What stood out to me was how people from different fields approached the same search evaluation challenge in completely different ways,” he said.

On day one, teams were asked to identify a “wicked problem” and construct two to three concise personas representing people who might seek information related to it.

Using data from open online communities, discussions with peers, and insights from external stakeholders, teams generated 15–60 realistic search queries that reflected the behaviours and contexts of their personas. 

The winners of the day one challenge were Khanh Luong (RF, QUT), Kieran Hegarty (RF, RMIT) and Futoon Abushaqra (Affiliate, RMIT). The team highlighted the wicked problem of the disconnect between children’s curiosity and age-gated digital systems with search functionality. They proposed classifying children’s search queries by the level of risk they may present to the child and those around them, illustrating the typology through realistic examples complemented by a detailed examination of search results.

On day two, teams used the queries from day one either to develop an approach to evaluating the search results collected from Google, or to develop a prototype or approach for collecting and evaluating search results from other platforms relevant to the personas.

The winners of the day two challenge were Shuoqi Sun (Student, RMIT), Fletcher Scott (Student, RMIT), Rayane El Masri (Student, QUT), Utami Kusumawati (Affiliate, RMIT) and Kun Ran (Affiliate, RMIT). Their project focused on the information needs around natural disasters, with particular attention to the global/local dimension in both queries and search results. 

Through a mixed-methods approach, the team demonstrated that queries that strongly connect to a particular place still tend to return more general, globalised results. Such results focus on risk reduction strategies rather than enabling communication and decision making specific to a place. This finding highlighted an important gap in search engines’ response to unfolding disasters.

Throughout the Hackathon, mentors and team leads from across the Centre provided support in areas including information retrieval, computational social science, and internet studies. 

Ashwin Nagappa commented, “I think there were several serendipitous moments when participants pivoted and explored new ideas, which led to organic bonding and ideas for publication. 

“It was heartening to see how much everyone valued the two days together.”

The findings, processes and methodological insights from the Hackathon will be documented in a collaborative paper. All participants have been invited to join as co-authors, offering a valuable opportunity for contribution to shared research across the Centre.

The event was organised by the ADM+S Research Training Committee.

SEE ALSO

Digital divide narrows but gaps remain for Australians as GenAI use surges

Bushwalking female, looking for phone reception from the top of a mountain in remote Tasmania, Australia.

Digital divide narrows but gaps remain for Australians as GenAI use surges

Author ADM+S Centre
Date 5 November 2025

The Australian Digital Inclusion Index has found almost half of Australians recently used generative AI tools, raising new opportunities and challenges for digital inclusion.

Usage was highest among students, with 79% reporting recent use, while 69% of Australians aged 18 to 34 have also engaged with GenAI.

Overall, 46% of Australians reported recently using GenAI.

People living in remote areas were twice as likely as those in metropolitan areas to use AI chatbots for social connection or conversation.

Australians who speak a language other than English at home were more likely to use GenAI (59%, compared with 41% of English-only speakers), likely due to advances in AI-powered translation.

About a third of people with disability have used GenAI, with strong use of these technologies among this group for entertainment and advice.

The study’s Chief Investigator, Distinguished Professor Julian Thomas from RMIT University, said GenAI was creating new digital divides but also presenting fresh opportunities.

“GenAI has the potential to deliver significant benefits for everyone, but its impact will be greatest if it’s implemented fairly and no one is left behind in the digital transformation,” he said.

“People with lower digital skills may be less likely to benefit from AI, while being more exposed to new risks such as scams, misleading content and invasive data practices.

“As technologies like GenAI and new security tools evolve quickly, people need to keep refreshing their digital skills to stay current.”

The most common uses for GenAI were generating text, creating images and creating programming code.

Access and skills improving but persistent barriers remain

A collaboration between the ARC Centre of Excellence for Automated Decision-Making and Society, RMIT University, Swinburne University of Technology and Telstra, the Australian Digital Inclusion Index measures how Australians access and use digital technologies, factoring in digital skills and affordability.

Australians’ overall skills and confidence to use digital technologies strengthened, rising 8.7 points between 2023 and 2025 to 73.6.

The largest gains were among people aged 75 and over, whose digital ability increased from 23.3 to 41.5, and among those without secondary education, rising from 38.5 to 54.4.

While the findings suggest digital inclusion is improving, about one in five Australians still struggle to fully access, afford and use technology.

Chief Investigator Professor Anthony McCosker from Swinburne said the report showed major gaps between Australians who can fully participate in the digital economy and those being left behind.

“Digital exclusion remains a big challenge, particularly for older Australians, those in remote communities and people experiencing social and economic disadvantage,” he said.

“It’s more than just an inconvenience; digital exclusion cuts people off from vital services and opportunities in education, work and health.”

Regional Australia still lags cities in digital inclusion

The most digitally excluded were older people, those facing social or economic disadvantage and First Nations Australians.

Gaps between those in capital cities and the rest of Australia remain significant, with digital inclusion scores trending downwards with remoteness.

Access, affordability and digital ability scores were below the national average in Tasmania, South Australia and Queensland, while Northern Territory residents faced significant access challenges.

Telstra Chief Sustainability Officer Justine Rowe said the company would use the evidence in the Index to target support where it can have the greatest impact.

“Closing Australia’s digital divide is a focus for Telstra’s Connected Future 30 strategy and we commit to supporting the digital inclusion of 1 million people by FY2030, with at least 200,000 in the Northern Territory, South Australia or Tasmania where there continue to be significant digital inclusion challenges,” she said.

Across Australia, inner-metropolitan areas had the highest levels of digital inclusion, while remote and very remote local government areas had the lowest scores.

The study found many low-income households were unable to afford a home internet connection, leaving them reliant on pre-paid mobile as their main, and often only, way to get online.

Public housing residents, people without secondary education and people with disability faced the greatest challenges in paying for digital services.

There was a significant affordability gap of 13 points between First Nations people and other Australians.

More data specific to mapping the digital gap for First Nations Australians is expected to be released by the ARC Centre of Excellence for Automated Decision-Making and Society in December.

Measuring Australia’s Digital Divide: 2025 Australian Digital Inclusion Index is published by the ARC Centre of Excellence for Automated Decision-Making and Society, RMIT University, Swinburne University of Technology and Telstra. DOI: 10.60836/mtsq-at22

SEE ALSO

Australia is facing an ‘AI divide’, new national survey shows

Greg Plominski/Pixabay

Australia is facing an ‘AI divide’, new national survey shows

Authors Kieran Hegarty, Anthony McCosker, Jenny Kennedy, Julian Thomas, Sharon Parkinson
Date 5 November 2025

In the short time since OpenAI launched ChatGPT in November 2022, generative artificial intelligence (AI) products have become increasingly ubiquitous and advanced.

These machines aren’t limited to text – they can now generate photos, videos and audio in a way that’s blurring the line between what’s real and what’s not. They’ve also been woven into tools and services many people already use, such as Google Search.

But who is – and isn’t – using this technology in Australia?

Our national survey, released today, provides some answers. The data is the first of its kind. It shows that while almost half of Australians have used generative AI, uptake is uneven across the country. This raises the risk of a new “AI divide” which threatens to deepen existing social and economic inequalities.

A growing divide

The “digital divide” refers to the gap between people or groups who have access to, can afford and make effective use of digital technologies and the internet, and those who cannot. These divides can compound other inequalities, cutting people off from vital services and opportunities.

Because these gaps shape how people engage with new tools, there’s a risk the same patterns will emerge around AI adoption and use.

Concerns about an AI divide – raised by bodies such as the United Nations – are no longer speculative.

International evidence is starting to illustrate a divide in capabilities between and within countries, and across industries.

Who we heard from

Every two years, we use the Australian Internet Usage Survey to find out who uses the internet in Australia, what benefits they get from it, and what barriers exist to using it effectively.

We use these data to develop the Australian Digital Inclusion Index – a long-standing measure of digital inclusion in Australia.

In 2024, more than 5,500 adults across all Australian states and territories responded to questions about whether and how they are using generative AI. This includes a large national sample of First Nations communities, people living in remote and regional locations and those who have never used the internet before.

Other surveys have tracked attitudes towards AI and its use.

But our study is different: it embeds questions about generative AI use inside a long-standing, nationally representative study of digital inclusion that already measures access, affordability and digital ability. These are the core ingredients people need to benefit from being online.

We’re not just asking “who’s trying AI?”. We’re also connecting the use of the technology to the broader conditions that enable or constrain people’s digital lives.

Importantly, unlike other studies of AI use in Australia collected via online surveys, our sample also includes people who don’t use the internet, or who may face barriers to filling out a survey online.

Australia’s AI divide is already taking shape

We found 45.6% of Australians have recently used a generative AI tool. This is slightly higher than rates of use identified in a 2024 Australian study (39%). Looking internationally, it is also slightly higher than usage by adults in the United Kingdom (41%), as identified in a 2024 study by the country’s media regulator.

Among Australian users, text generation is common (82.6%), followed by image generation (41.5%) and code generation (19.9%). But usage isn’t uniform across the population.

For example, younger Australians are more likely to use the technology than their elders. More than two-thirds (69.1%) of 18- to 34-year-olds recently used one of the many available generative AI tools, compared with less than 1 in 6 (15.5%) 65- to 74-year-olds.

Students are also heavy users (78.9%). People with a bachelor’s degree (62.2%) are much more likely to use the technology than those who did not complete high school (20.6%). Those who left school in Year 10 (4.2%) are among the lowest users.

Professionals (67.9%) and managers (52.2%) are also far more likely to use these tools than machinery operators (26.7%) or labourers (31.8%). This suggests use is strongly linked to occupational roles and work contexts.

Among the people who use AI, only 8.6% engage with a chatbot to seek connection. But this figure rises with remoteness. Generative AI users in remote areas are more than twice as likely (19%) as metropolitan users (7.7%) to use AI chatbots for conversation.

Some 13.6% of users are paying for premium or subscription generative AI tools, with 18- to 34-year-olds most likely to pay (17.5%), followed by 45- to 54-year-olds (13.3%).

Also, people who speak a language other than English at home report significantly higher use (58.1%) than English-only speakers (40.5%). This may be associated with improvements in the capabilities of these tools for translation or accessing information in multiple languages.

Bridging the divide

This emerging AI divide presents several risks if it calcifies, including disparities in learning and work, and increased exposure for certain people to scams and misinformation.

There are also risks stemming from overreliance on AI for important decisions, and navigating harms related to persuasive AI companions.

The biggest challenge will be how to support AI literacy and skills across all groups. This isn’t just about job readiness or productivity. People with lower digital literacy and skills may miss out on AI’s benefits and face a higher risk of being misled by deepfakes and AI-powered scams.

These developments can easily dent the confidence of people with lower levels of digital literacy and skills. Concern about harms can see people with limited confidence further withdraw from AI use, restricting their access to important services and opportunities.

Monitoring these patterns over time and responding with practical support will help ensure the benefits of AI are shared widely – not only by the most connected and confident.

Kieran Hegarty, Research Fellow, ARC Centre of Excellence for Automated Decision-Making & Society, RMIT University; Anthony McCosker, Professor of Media and Communication, Director, Social Innovation Research Institute, Swinburne University of Technology; Jenny Kennedy, Associate Professor, Media and Communications, RMIT University; Julian Thomas, Distinguished Professor of Media and Communications; Director, ARC Centre of Excellence for Automated Decision-Making and Society, RMIT University, and Sharon Parkinson, Senior Research Fellow, Centre for Urban Transitions, Swinburne University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

NZ Ministry for Regulation consults with ADM+S researchers on AI regulation

Ministry for Regulation sign with blurred people walking past in the background

NZ Ministry for Regulation consults with ADM+S researchers on AI regulation

Author ADM+S Centre
Date 3 November 2025

On 24 October 2025, ADM+S researchers Yunus Yigit, PhD candidate, and Prof Paul Henman from the University of Queensland (UQ), together with Assoc Prof Pedro Fidelman from the UQ Centre for Policy Futures, took part in an invited consultation with the New Zealand Ministry for Regulation on the development of AI guidance for the regulatory sector.

The Ministry is leading this initiative to develop practice-focused guidance that supports regulatory leaders in understanding how artificial intelligence can be applied safely, proportionately, and effectively within regulatory contexts. 

This work seeks to encourage innovation across regulatory systems while ensuring responsible and well-informed adoption of AI technologies. The resulting guidance will form part of the Ministry’s broader series on regulatory innovation.

As part of the consultation process, researchers, regulators, and technology experts were invited to share their perspectives to ensure the guidance reflects real-world challenges and opportunities. 

The invitation to participate arose from Yunus’ PhD study of the use of AI and Machine Learning in Australia’s Independent Regulatory Agencies. The ADM+S researchers were also able to share related work from across the Centre.

“It was a great opportunity to contribute insights from my PhD research on how regulatory agencies are approaching AI,” said Yunus.

“The consultation reflected a growing recognition that responsible adoption of AI in regulation requires collaboration between researchers, policymakers, and practitioners.”

SEE ALSO

ABC’s deepfake election news story finalist in Walkley Award for Digital Media Innovation

"We cloned senator Jacqui Lambie's voice with AI to show you what a deepfake election could look like"
Screenshot from ABC News story https://www.abc.net.au/news/2025-02-28/jacqui-lambie-ai-generated-voice-election-and-deepfakes/104986434

ABC’s deepfake election news story finalist in Walkley Award for Digital Media Innovation

Author ADM+S Centre
Date 28 October 2025

ADM+S researcher Devi Mallal, alongside her ABC News colleague Matt Martino, has been named a finalist in the 2025 Walkley Awards for Excellence in Journalism in the category of Digital Media: Innovation Journalism for their investigative story, We cloned senator Jacqui Lambie’s voice with AI to show you what a deepfake election could look like.

The interactive investigation, produced by ABC News Verify, explores the growing threat of AI-generated misinformation in Australian politics by demonstrating how easy it can be to create a convincing deepfake voice. 

The piece, which features a synthetic version of Senator Jacqui Lambie’s voice created with her consent, reveals the potential risks such technologies pose to trust and authenticity in democratic processes.

The nomination places ABC News alongside finalists from Guardian Australia and Nine’s The Age and The Sydney Morning Herald in a category recognising innovative journalism in digital media.

Now in its 70th year, the Walkley Awards celebrate excellence in Australian journalism across all media forms, setting the national standard for reporting, storytelling, and innovation.

Shona Martyn, CEO of the Walkley Foundation, said this year’s finalists represent the best of contemporary journalism.

 “For 70 years, the Walkley Awards have recognised excellence in Australian journalism. The awards have expanded to cater for new technologies and styles of journalistic endeavour. 

“What hasn’t changed is the commitment to quality public interest journalism as exemplified by the finalists announced today.”

The winners of the 2025 Walkley Awards will be announced at a gala dinner at the International Convention Centre (ICC) in Sydney on Thursday, 27 November.

Read more about the finalists on the Walkley Foundation website.

SEE ALSO

ADM+S researchers secure ARC Discovery Project funding for 2026

Abstract pink and purple mesh

ADM+S researchers secure ARC Discovery Project funding for 2026

Author ADM+S Centre
Date 28 October 2025

The Australian Research Council (ARC) has announced more than $342 million in funding for 536 new projects under the 2026 ARC Discovery Projects scheme.

Nine of the funded projects include contributions from ARC Centre of Excellence for Automated Decision-Making and Society members, showcasing the Centre’s commitment to advancing impactful, multidisciplinary research.

ARC Acting Chief Executive Officer, Dr Richard Johnson, said the ARC Discovery Projects scheme supports excellent basic and applied research to expand Australia’s knowledge base and research capability. 

“Discovery grants support individual researchers and research teams in research projects that provide economic, commercial, environmental, social and/or cultural benefits to the Australian community,” Dr Johnson said. 

The nine projects involving ADM+S members reflect research excellence across diverse fields, from Generative AI models to ‘shadow money’ and the use of smartphones and social media to support learning.

Projects involving ADM+S members include the following (ADM+S researchers in bold):

A Cultural History of Workplace Fatigue
Assoc Prof Elizabeth Stephens, Prof Alison Downham Moore, Dr Christopher O’Neill, Prof Melissa Gregg
This project aims to investigate how the historical and cultural construction of workplace fatigue shapes the design and implementation of fatigue management technologies in an age of AI.

Atmospheres of Wellbeing: Awareness and Action Towards Better Air Quality
Prof Deborah Lupton, Dr David Rousell
The planet is currently facing an air quality crisis. This social research project aims to i) identify the heuristics and practices Australians use to perceive, understand and act on air quality; and ii) formulate creative approaches to environmental education for better awareness and action in individual, community and organisational contexts. 

Audiences, equity, and the future of free-to-air television
Prof Ramon Lobato, Hon. Professor Jock Given, Professor Catherine Johnson, Dr Alexa Scarlata
Australia’s free-to-air television industry is in structural decline, with concerning implications for media access, emergency communications and social cohesion. This project aims to develop novel methods to identify those Australians most affected, to understand how their experience of television will change, and to provide options to ensure widely-endorsed public policy goals are met in a radically different media landscape. 

Contextualised Commonsense Reasoning for Human Behaviour Analysis
Prof Maurice Pagnucco, Assoc Prof Yang Song, Prof Gerhard Lakemeyer
Commonsense reasoning has long been a fundamental challenge in artificial intelligence (AI). One of the major lessons from 70 years of research in AI is that context matters. This project pioneers a contextualised approach to commonsense reasoning; plans and contexts are tailored to specific behaviours and individuals and updated dynamically over time. 

Educational affordances of young people’s smartphone and social media use
Prof Neil Selwyn, Dr Clare Southerton, Dr Selena Nemorin
This project aims to investigate how Australian teenagers are using smartphones and social media to support learning. This project expects to generate significant new knowledge about young people’s capacity to engage productively with these technologies in light of growing institutional restrictions and bans.

Ethical, social and regulatory implications of informal sperm donation
Prof Catherine Mills, Assoc Prof Neera Bhatia, Dr Karin Hammarberg, Dr Molly Johnston, Dr Giselle Newton
The project expects to generate new knowledge to address the informal provision of sperm via the internet, while also improving the formal and regulated system of sperm donation.

Foreign conflicts, domestic divides: Advancing a deliberative response
Prof Selen Ercan, Prof John Dryzek, Dr Jordan McSwiney, Dr Ehsan Dehghan, Dr Sofya Glazunova, Dr Kurt Sengul
The project will extend the application of deliberative democracy to address de-territorialized conflicts in multicultural societies.

Reduce hallucination in large language models via knowledge-based reasoning
Prof Xiuzhen (Jenny) Zhang, Prof Jeffrey Chan, Dr Estrid (Jiayuan) He, Prof Erik Cambria
This project aims to address the critical challenge of hallucination — a phenomenon where generative AI models produce information that appears plausible but is factually incorrect — with a focus on news fact-checking. 

Shadow Money: A Comparative Analysis
Prof Janet Roitman, Prof Ellie Rennie, Assoc Prof Tatiana Dancy, Assoc Prof Fabio Mattioli, Dr Julia Tomassetti, Dr Christina Harris, Prof Kean Birch
This project aims to understand how new forms of “shadow money” – or digital tokens created by non-bank financial actors – are reshaping systems of exchange. The project expects to generate new knowledge in the area of digital economies. 

Read the full list of 2026 ARC Discovery Project recipients’ project descriptions.

SEE ALSO

Most Australian government agencies aren’t transparent about how they use AI

Simplistic illustration of a server rack with wires trailing out of it. A yellow sticky note is taped to the rack with a drawing of cartoon sparkles.

Most Australian government agencies aren’t transparent about how they use AI

Authors José-Miguel Bello y Villarino, Alexandra Sinclair and Kimberlee Weatherall
Date 27 October 2025

A year ago, the Commonwealth government established a policy requiring most federal agencies to publish “AI transparency statements” on their websites by February 2025. These statements were meant to explain how agencies use artificial intelligence (AI), in what domains and with what safeguards.

The stated goal was to build public trust in government use of AI – without resorting to legislation. Six months after the deadline, early results from our research (to be published in full later this year) suggest this policy is not working.

We looked at 224 agencies and found only 29 had easily identifiable AI transparency statements. A deeper search found 101 links to statements.

That adds up to a compliance rate of around 45%, although for some agencies (such as defence, intelligence and corporate agencies) publishing a statement is recommended rather than required, and it is possible some agencies could share the same statement. Still, these tentative early findings raise serious questions about the effectiveness of Australia’s “soft-touch” approach to AI governance in the public sector.

Why AI transparency matters

Public trust in AI in Australia is already low. The Commonwealth’s reluctance to legislate rules and safeguards for the use of automated decision making in the public sector – identified as a shortcoming by the Robodebt royal commission – makes transparency all the more critical.

The public expects government to be an exemplar of responsible AI use. Yet the very policy designed to ensure transparency seems to be ignored by many agencies.

With the government also signalling a reluctance to pass economy-wide AI rules, good practice in government could also encourage action from a disoriented private sector. A recent study found 78% of corporations are “aware” of responsible AI practices, but only 29% have actually “implemented” them.

Transparency statements

The transparency statement requirement is the key binding obligation under the Digital Transformation Agency’s policy for the responsible use of AI in government.

Agencies must also appoint an “accountable [AI] official” who is meant to be responsible for AI use. The transparency statements are supposed to be clear, consistent, and easy to find – ideally linked from the agency’s homepage.

In our research, conducted in collaboration with the Office of the Australian Information Commissioner, we sought to identify these statements, using a combination of automated combing through websites, targeted Google searches, and manual inspection of the list of federal entities facilitated by the information commissioner. This included both agencies and departments strictly bound by the policy and those invited to comply voluntarily.
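For a flavour of what the automated part of such a search can look like, the sketch below scans an agency homepage for links that mention AI transparency. The agency name, URL and keywords are placeholders invented for this example; it is not the study’s actual pipeline.

```python
import requests
from bs4 import BeautifulSoup

# Placeholder inputs; the study's real agency list is much longer.
AGENCIES = {"Example Agency": "https://www.example.gov.au"}
KEYWORDS = ("ai transparency", "artificial intelligence transparency")

def find_statement_links(url: str) -> list[str]:
    """Return homepage links whose text or target mentions AI transparency."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [
        a["href"]
        for a in soup.find_all("a", href=True)
        if any(k in (a.get_text(" ", strip=True) + " " + a["href"]).lower()
               for k in KEYWORDS)
    ]

for name, url in AGENCIES.items():
    links = find_statement_links(url)
    print(name, "->", links or "no statement link found on the homepage")
```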

But we found only a few statements were accessible from the agency’s landing page. Many were buried deep in subdomains or required complex manual searching. Among agencies for which publishing a statement was recommended, rather than required, we struggled to find any.

More concerning, there were many agencies for which we could not find a statement even where one was required. This may just be a technical failure, but given the effort we put in, it suggests a policy failure.

A toothless requirement

The transparency statement requirement is binding in theory but toothless in practice. There are no penalties for agencies that fail to comply. There is also no open central register to track who has or has not published a statement.

The result is a fragmented, inconsistent landscape that undermines the very trust the policy was meant to build. And the public has no way to understand – or challenge – how AI is being used in decisions that affect their lives.

How other countries do it

In the United Kingdom, the government established a mandatory AI register. But as the Guardian reported in late 2024, many departments failed to list their AI use, despite the legal requirement to do so.

The situation seems to have improved slightly this year, but many high-risk AI systems identified by UK civil society groups are still not published on the UK government’s own register.

The United States has taken a firmer stance. Despite anti-regulation rhetoric from the White House, the government has so far maintained its binding commitments to AI transparency and mitigation of risk.

Federal agencies are required to assess and publicly register their AI systems. If they fail to do so, the rules say they must stop using them.

Towards responsible use of AI

In the next phase of our research, we will analyse the content of the transparency statements we did find.

Are they meaningful? Do they disclose risks, safeguards and governance structures? Or are they vague and perfunctory? Early indications suggest wide variation in quality.

If governments are serious about responsible AI, they must enforce their own policies. If determined university researchers cannot easily find the statements – even assuming they are somewhere deep on the website – that cannot be called transparency.


The authors wish to thank Shuxuan (Annie) Luo for her contribution to this research.

The Conversation

José-Miguel Bello y Villarino, Senior Research Fellow, Sydney Law School, University of Sydney; Alexandra Sinclair, Postdoctoral Research Fellow, Sydney Law School, University of Sydney, and Kimberlee Weatherall, Professor of Law, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Google’s AI Mode: Is AI changing what we see online?

Close up of an eye with reflection of Google logo

Google’s AI Mode: Is AI changing what we see online?

Author ADM+S
Date 23 October 2025

Researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) are examining how Google’s integration of AI into search is changing the way Australians find, trust, and interact with information online.

From 8 October 2025, Google’s new AI Mode feature has been available to Australians. This comes nearly a year after Google rolled out AI Overviews – AI-generated search summaries that appear at the top of the classic Google search results page.

AI Mode takes the AI integration further, allowing users to ask complex, conversational questions – from planning a walking tour to troubleshooting household tasks – and receive multimodal (text, images, video) AI-generated summaries instead of traditional search results.

The feature builds on Google’s existing search infrastructure while adding a full-screen, interactive experience.

In the latest episode of the Automated Societies podcast, Google’s new AI Mode: Is AI changing what we see online, ADM+S researchers Dr Ashwin Nagappa, Dr Oleg Zendel, and Ms Sara Al Lawati spoke about what these changes mean for users, information ecosystems, and independent publishers.

“AI Mode isn’t replacing search, it’s extending it,” said Oleg Zendel. “It adds a conversational layer on top of regular results, letting users ask follow-ups while still keeping traditional links in play.”

The researchers also discussed the potential impact on smaller publishers. For example, an independent site reviewing air purifiers, Housefresh, temporarily lost 95% of its traffic when Google’s algorithms began favouring AI-generated summaries and larger platforms.

Dr Nagappa highlighted the broader implications:

“AI tools summarising content could disrupt established revenue models that support information or content producers. If independent creators lose visibility, this reduces incentives to produce high-quality content, creating a feedback loop that may affect the diversity and quality of future AI outputs.”

The team is studying how Australians interact with AI-integrated search systems. Sara Al Lawati spoke about her research using eye-tracking and controlled platforms to understand search behaviour, query patterns, and engagement with AI-generated content.

“As AI summarizes and interprets content directly, it could reshape how audiences reach and trust their work. However, this has yet to be researched on a wider scale. 

“I think it’s important for us as researchers to understand the impact it has on small and independent publishers before it becomes an issue that discourages them from producing content.”

The researchers emphasised the need for thoughtful policy, design, and public awareness measures to ensure search remains open, fair, and accountable. Suggestions include providing users with access to multiple perspectives, clear links to original sources, and education on AI biases and verification techniques.

Listen to the podcast Google’s new AI Mode: Is AI changing what we see online

Read more about the Australian Search Experience 2.0 Project

Read the four part series on Search Experience on ADM+S Medium

SEE ALSO

ADM+S researchers awarded 2025 Queensland-Bavaria Collaborative Research Grants

Pictured Distinguished Prof Jean Burgess, Kateryna Kasianenko, Prof Axel Bruns, Brett Tweedie, Dr Aaron Snoswell, Shir Weinbrand, Dr Ashwin Nagappa and Prof Daniel Angus.
Clockwise from top left: Distinguished Prof Jean Burgess, Kateryna Kasianenko, Prof Axel Bruns, Brett Tweedie, Dr Aaron Snoswell, Prof Daniel Angus, Dr Ashwin Nagappa and Shir Weinbrand.

ADM+S researchers awarded 2025 Queensland-Bavaria Collaborative Research Grants

Author ADM+S
Date 23 October 2025

ADM+S researchers from QUT have been awarded more than $100,000 under the 2025 Queensland-Bavaria Collaborative Research Program, supporting research in the use of AI in news personalisation and online search. 

The program, jointly funded by the Queensland Government and the Bavarian State Government, aims to foster international research collaborations and translate research into industry, environmental and societal benefits.

Among the successful projects, those led by researchers from the ARC Centre of Excellence for Automated Decision-Making and Society at QUT included:

Personalised news: balancing editorial and audience values in AI alignment
QUT and Technical University of Munich, with ABC and Bayerischer Rundfunk

ADM+S researchers Dr Aaron Snoswell and Distinguished Prof Jean Burgess have been awarded a $109,933 development grant to create new AI tools that align news personalisation with editorial integrity, helping public service media deliver trusted, values-aligned journalism in the GenAI era.

This project is part of the ADM+S project: Evaluating Automated Cultural Curating and Ranking Systems with Synthetic Data.

Interrogating the AI turn in search: pilot studies comparing AI summaries in German and Australian search
QUT and Ludwig Maximilian University of Munich

The Australian Search Experience 2.0 (ASE 2.0), led by ADM+S Chief Investigator Prof Axel Bruns, will analyse how AI-generated Information Summaries (AIIS) in search engines shape public understanding of key issues, comparing results in Germany and Australia to inform responsible AI governance and information quality online.

Through the Queensland-Bavaria Seed grant, Prof Bruns, along with ADM+S colleagues Prof Daniel Angus, Dr Ashwin Nagappa, Kateryna Kasianenko, Shir Weinbrand and Brett Tweedie, extends the scope of ASE 2.0 into an international context. 

Queensland Chief Scientist Professor Kerrie Wilson said the Queensland–Bavaria Collaborative Research Program bridges the gap between some of the top research institutions in the world.

“By combining forces and forging partnerships through this program, we can further elevate the international reputation of our research institutions for producing high quality, innovative solutions to our shared challenges and ambitions.

“I am excited to see the groundbreaking research projects supported through this program, aligning with the Future Queensland Science Strategy to promote innovation, tackle global challenges and establish Queensland as a hub for scientific excellence.”

SEE ALSO

Exploring AI in the majority world at the University of Amsterdam

Anand Badola pictured with other researchers at The University of Amsterdam. 16 people in the photo.
Anand Badola with other researchers at The University of Amsterdam.

Exploring AI in the majority world at the University of Amsterdam

Author ADM+S
Date 15 October 2025

ADM+S researcher Anand Badola recently attended the workshop “Publics, Debates, Everyday Injustice, and AI in the Majority World”, held at the University of Amsterdam. The workshop was organised by Roanne van Voorst (UvA), Nafis Hasan (UvA), Sagnik Dutta (Tilburg University), and Siddharth Peter De Souza (University of Warwick).

During the event, Anand presented a paper co-authored with QUT colleague Shubhangi Heda titled “Participative Orientalism and GenAI: A Case Study of Jugalbandi Chatbot as Postcolonial Imaginaries as Technological Governance.” The paper conceptualises a novel framework of participative orientalism to better understand the overlapping imaginaries of GenAI in the postcolonial context of India. 

The workshop also provided rich opportunities for networking and knowledge exchange. Anand connected with researchers from around the world, including Gayatri Nair from Indraprastha Institute of Information Technology Delhi, who presented research on platform labour in India, and Wasem Hassan from the London School of Economics, who explored AI and diagnostic authority in Egyptian health clinics. 

These discussions opened potential avenues for future collaboration, particularly around understanding the impact and diverse trajectories of GenAI in the Majority World.

Reflecting on the workshop, Anand emphasised the importance of incorporating diverse experiences into AI research. 

“Meeting people from different backgrounds reaffirmed the importance of listening and incorporating diverse experiences that societies have with technology, especially from the Majority world,” he said.

Key takeaways from the workshop included understanding the diverse experiences people and communities in the Majority World are having with GenAI, and how researchers and the academic community can better understand this change.

This research visit was funded by the ARC Centre of Excellence for Automated Decision-Making and Society Research Training Program.

SEE ALSO

AI systems and humans ‘see’ the world differently – and that’s why AI images look so garish

Andres Aleman/Unsplash

AI systems and humans ‘see’ the world differently – and that’s why AI images look so garish

Author TJ Thomson
Date 15 October 2025

How do computers see the world? It’s not quite the same way humans do.

Recent advances in generative artificial intelligence (AI) make it possible to do more things with computer image processing. You might ask an AI tool to describe an image, for example, or to create an image from a description you provide.

As generative AI tools and services become more embedded in day-to-day life, knowing more about how computer vision compares to human vision is becoming essential.

My latest research, published in Visual Communication, uses AI-generated descriptions and images to get a sense of how AI models “see” – and reveals a bright, sensational world of generic images quite different from the human visual realm.

Algorithms see in a very different way to humans.
Elise Racine / Better Images of AI / Emotion: Joy, CC BY

Comparing human and computer vision

Humans see when light waves enter our eyes through the iris, cornea and lens. Light is converted into electrical signals by a light-sensitive surface called the retina inside the eyeball, and then our brains interpret these signals into images we see.

Our vision focuses on key aspects such as colour, shape, movement and depth. Our eyes let us detect changes in the environment and identify potential threats and hazards.

Computers work very differently. They process images by standardising them, inferring the context of an image through metadata (such as time and location information in an image file), and comparing images to other images they have previously learned about. Computers focus on things such as edges, corners or textures present in the image. They also look for patterns and try to classify objects.
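
To make that difference concrete, here is a brief, hypothetical sketch in Python using the OpenCV library; the file name is a placeholder, and the snippet is added for illustration rather than drawn from the research:

    # Hypothetical sketch: software foregrounds edges and corner-like points
    # rather than perceiving a scene holistically as humans do.
    # Assumes OpenCV is installed (pip install opencv-python) and a local "photo.jpg".
    import cv2

    image = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)  # standardise the input
    edges = cv2.Canny(image, threshold1=100, threshold2=200)  # detect edge pixels
    corners = cv2.goodFeaturesToTrack(
        image, maxCorners=50, qualityLevel=0.01, minDistance=10
    )  # locate strong corner features

    print(f"Corner points detected: {0 if corners is None else len(corners)}")
    cv2.imwrite("edges.jpg", edges)  # save the edge map for inspection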

A screenshot of a CAPTCHA test asking a user to select all images with a bus.
Solving CAPTCHAs helps prove you’re human and also helps computers learn how to ‘see’.
CAPTCHA

You’ve likely helped computers learn how to “see” by completing online CAPTCHA tests.

These are typically used to help computers differentiate between humans and bots. But they’re also used to train and improve machine learning algorithms.

So, when you’re asked to “select all the images with a bus”, you’re helping software learn the difference between different types of vehicles as well as proving you’re human.

Exploring how computers ‘see’ differently

In my new research, I asked a large language model to describe two visually distinct sets of human-created images.

One set contained hand-drawn illustrations while the other was made up of camera-produced photographs.

I fed the descriptions back into an AI tool and asked it to visualise what it had described. I then compared the original human-made images to the computer-generated ones.

The resulting descriptions noted the hand-drawn images were illustrations but didn’t mention the other images as being photographs or having a high level of realism. This suggests AI tools see photorealism as the default visual style, unless specifically prompted otherwise.

Cultural context was largely absent from the descriptions. The AI tool either couldn’t or wouldn’t infer cultural context from the presence of, for example, Arabic or Hebrew writing in the images. This underscores the dominance of some languages, like English, in AI tools’ training data.

While colour is vital to human vision, it too was largely ignored in the AI tools’ image descriptions. Visual depth and perspective were also largely ignored.

The AI images were more boxy than the hand-drawn illustrations, which used more organic shapes.

Two similar but different black and white illustrations of a bookshelf on wheels.
The AI-generated images were much more boxy than the hand-drawn illustrations, which used more organic shapes and had a different relationship between positive and negative space.
Left: Medar de la Cruz; right: ChatGPT

The AI images were also much more saturated than the source images: they contained brighter, more vivid colours. This reveals the prevalence of stock photos, which tend to be more “contrasty”, in AI tools’ training data.

The AI images were also more sensationalist. A single car in the original image became one of a long column of cars in the AI version. AI seems to exaggerate details not just in text but also in visual form.

A photo of people with guns driving through a desert and a generated photorealistic image of several cars containing people with guns driving through a desert.
The AI-generated images were more sensationalist and contrasty than the human-created photographs.
Left: Ahmed Zakot; right: ChatGPT

The generic nature of the AI images means they can be used in many contexts and across countries. But the lack of specificity also means audiences might perceive them as less authentic and engaging.

Deciding when to use human or computer vision

This research supports the notion that humans and computers “see” differently. Knowing when to rely on computer or human vision to describe or create images can be a competitive advantage.

While AI-generated images can be eye-catching, they can also come across as hollow upon closer inspection. This can limit their value.

Images are adept at sparking an emotional reaction, and audiences might find human-created images that authentically reflect specific conditions more engaging than computer-generated attempts.

However, the capabilities of AI can make it an attractive option for quickly labelling large data sets and helping humans categorise them.

Ultimately, there’s a role for both human and AI vision. Knowing more about the opportunities and limits of each can help keep you safer, more productive, and better equipped to communicate in the digital age.

The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Decentralised Technologies and Global Chinese Communities: upcoming symposium

A cityscape

Decentralised Technologies and Global Chinese Communities: upcoming symposium

Author ADM+S
Date 8 October 2025

Leading international researchers will come together for the symposium Decentralised Technologies and Global Chinese Communities, co-hosted by The University of Hong Kong and the ARC Centre of Excellence for Automated Decision-Making and Society, held in person and online at The University of Hong Kong on 27 October.

Researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) will explore how decentralised technologies, such as blockchain, DeFi, DAOs and cryptocurrencies, are transforming global Chinese communities.

Speakers will examine how these communities are reimagining networks, identities, and cultural practices through decentralisation, often challenging Western-centric narratives and fostering innovative, community-based models rooted in Chinese cultural and political contexts. 

Topics include grassroots experimentation, state-aligned visions of decentralisation, and the development of infrastructure, from mining operations to digital currencies, that underpin these technologies’ social and economic dimensions.

Speakers include ADM+S Researchers Prof Ellie Rennie, Prof Janet Roitman and Haiqing Yu, who will play a key role in shaping these conversations.

They will be joined by international speakers whose work offers critical global perspectives on decentralisation and Chinese networks, including Dr Nicholas Loubere, Associate Professor at Lund University and co-editor of the Made in China Journal, and Dr Wang Jing, Assistant Professor at NYU Shanghai.

Additional speakers will represent leading institutions including Beijing Normal University, China Academy of Art, Chinese University of Hong Kong, City University of Hong Kong, Fudan University, Hainan Normal University, Hong Kong Shue Yan University, Renaissance College Hong Kong, The University of Chicago, Utrecht University, and Web3 Harbour.

This event brings together leading researchers in science and technology studies, media, communication and cultural analysis to examine how decentralised systems are reshaping practices across Chinese diasporic contexts. 

We invite you to attend this event in-person or online. Registration closes on 23 October 2025.

View the event program

Register to attend online

Register to attend in-person

SEE ALSO

Does AI pose an existential risk? We asked 5 experts

Dominos falling and a hand blocking them
Canva/ Kanchanachitkhamma

Does AI pose an existential risk? We asked 5 experts

Author Aaron Snoswell, Niusha Shafiabady, Sarah Vivienne Bentley, Seyedali Mirjalili, Simon Coghlan
Date 6 October 2025

There are many claims to sort through in the current era of ubiquitous artificial intelligence (AI) products, especially generative AI ones based on large language models or LLMs, such as ChatGPT, Copilot, Gemini and many, many others.

AI will change the world. AI will bring “astounding triumphs”. AI is overhyped, and the bubble is about to burst. AI will soon surpass human capabilities, and this “superintelligent” AI will kill us all.

If that last statement made you sit up and take notice, you’re not alone. The “godfather of AI”, computer scientist and Nobel laureate Geoffrey Hinton, has said there’s a 10–20% chance AI will lead to human extinction within the next three decades. An unsettling thought – but there’s no consensus on whether or how that might happen.

So we asked five experts: does AI pose an existential risk?

Three out of five said no. Here are their detailed answers.

The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Building international connections on Generative AI and Authenticity

A group of 9 researchers from the Digital Good Network standing for a photo.
Phoebe Matich with members of the Digital Good Network at The University of Sheffield

Building international connections on Generative AI and Authenticity

Author ADM+S Centre
Date 10 October 2025

ADM+S researcher Dr Phoebe Matich recently travelled to the United Kingdom on behalf of the Generative Authenticity project to present at the 2025 Future of Journalism Conference in Cardiff, Wales, and to meet with international research partners working at the intersection of generative AI, authenticity, and public service media.

While in the UK, Dr Matich met with colleagues from the Responsible Innovation Centre for Public Media Futures, based within the BBC’s Research and Development teams at Media City, Salford, Manchester, and the ESRC Digital Good Network at the University of Sheffield. The visit aimed to find points of connection between the projects and topic areas, and to present emerging conceptual work from the ADM+S Generative Authenticity project.

At the Future of Journalism Conference, Dr Matich presented recent research from the Generative Authenticity project analysing how generative AI technologies configure authenticity issues in two instances of AI-generated media “witnessing”.

The presentation formed part of a broader discussion on the ambivalent and context-dependent nature of generative AI, and drew strong interest from researchers from Palestine, Canada, and Ireland.

“There’s a clear interest in the ambivalence of GenAI and its contingency on uses that may be benevolent in some ways, as well as malevolent use-cases,” Matich said.


During the visit, Dr Matich also met with Professor Stephen Hutchings and his team of misinformation researchers at the University of Manchester.

Conversations with the BBC’s Responsible Innovation Centre and the University of Sheffield’s Digital Good Network further highlighted shared interests in trust, consent, and authenticity in the context of generative AI and public service media.

Dr Matich reflected: “It’s crucial to tread carefully around the multilayered ethical questions being raised by genAI technologies, which need to be deconstructed rather than taken for granted, and to remember that trust in the media is a process rather than a static object at any given moment.”

Dr Matich noted that the trip offered valuable insights into the continued relevance of international approaches and methodologies, including audience research, surveys, interviews, and focus groups.

Key takeaways from the visit include emerging discussions around confidentiality as a potential use-case for generative AI, and the growing significance of “authenticity infrastructure” such as C2PA in journalists’ verification practices. The trip also highlighted the distinction between normative and descriptive approaches to studying generative AI’s role in media and journalism.

The visit was funded by the ADM+S Generative Authenticity project.

SEE ALSO

Research seeking young Australians to take part in a digital advertising study

Colourful background with hands holding mobile phones with blank screens
WeAre/GettyImages

Research seeking young Australians to take part in a digital advertising study

Author ADM+S Centre
Date 9 October 2025

Researchers from the Australian Ad Observatory at the ARC Centre of Excellence for Automated Decision-Making and Society are inviting young Australians aged 16 to 24 to take part in a national study examining digital advertising on popular social media platforms including Facebook, Instagram, TikTok, and YouTube.

The study aims to better understand the types and patterns of ads Australians are exposed to online, contributing to research on how digital advertising shapes user experiences, preferences and consumption.


Participants who use an Android phone and spend more than an hour a day on social media are eligible to take part. Participants who complete a 10-day ad collection period will receive an honorarium to recognise their voluntary contribution of time and knowledge.

To register your interest and find out if you are eligible, visit the survey here.

Ethics approval was provided by UQ HREC [2023/HE001882]

SEE ALSO

Researchers investigate LLMs for search systems during Amsterdam research visit

Nuha Abu Onq and Chenglong Ma stand in front of a large sign saying "LAB42"
Image: Yujie Lyu

Researchers investigate LLMs for search systems during Amsterdam research visit

Author ADM+S Centre
Date 8 October 2025

In early July, ADM+S researchers Nuha Abu Onq and Chenglong Ma visited the Information Retrieval Lab (IRLab) at the University of Amsterdam, Netherlands, a visit organised by ADM+S Partner Investigator Prof Maarten de Rijke. Nuha and Chenglong attended a series of research conferences and collaborative meetings, creating a valuable opportunity for cross-institutional exchange.

On 11 July, both Nuha and Chenglong gave invited talks at IRLab:

  • Chenglong Ma presented “PUB: An LLM-Enhanced Personality-Driven User Behaviour Simulator for Recommender System Evaluation,” introducing a simulator that infers personality traits from user behaviour logs and uses those to produce synthetic interaction data that better mirrors real user diversity.
  • Nuha Abu Onq presented “Classifying Term Variants in Query Formulation,” analysing how users formulate diverse search queries, especially how cognitive complexity of underlying information needs affects query variation and the strategies people employ.

During the visit, Nuha and Chenglong had productive discussions with other researchers on topics such as using large language models (LLMs) for evaluation in information retrieval. They both attended the SIGIR (Special Interest Group on Information Retrieval) 2025 conference, including the LLM4Eval workshop.

“At SIGIR’25, we considered several approaches for designing prompts to apply LLMs to categorisation tasks, aiming both to simplify future research and to support the training of models for automated categorisation,” Nuha said.

“Additionally, we discussed extending our work on personality traits to investigate how these traits might influence variations in user search behaviour.” 

Image: Yujie Lyu

Nuha and Chenglong mention that one of the key takeaways was exploring the value of open, reproducible and user-centred research practices. The IRLab team’s emphasis on making code and data publicly available and combining technical methods with user studies provided important insight.

Chenglong and Nuha have plans to apply these approaches in their own work. Smaller, well-designed user studies were shown to be highly valuable for informing the development of trustworthy AI systems.

“Carefully designed small-scale user studies can provide valuable insights for future LLM-based search systems, as they can be validated against real user search interactions,” Nuha said.

Nuha and Chenglong recognised the need to bridge academic research with real-world applications, especially when it comes to fairness and evaluation in commercial search and recommendation systems. 

This visit was funded by the ARC Centre of Excellence for Automated Decision-Making and Society’s Research Training Grant.

SEE ALSO

ADM+S researcher informs Senate inquiry of ‘astroturfing’ and hidden political advertising online

Australian Parliament in Canberra at sunset.
Mlenny/GettyImages

ADM+S researcher informs Senate inquiry of ‘astroturfing’ and hidden political advertising online

Author ADM+S Centre
Date 7 October 2025

Prof Daniel Angus from the ARC Centre of Excellence for Automated Decision-Making and Society has told a parliamentary inquiry that Australia urgently needs stronger rules to ensure observability around online political advertising, warning that well-financed lobby groups are using covert tactics to shape public opinion.

Prof Angus highlighted the Centre’s research into digital advertising and the risks posed by what he described as “blind spots in platform observability.”

“Our research revealed how coordinated, well-financed actors can quietly steer public debate while evading meaningful disclosure and regulatory scrutiny,” he said. “Similar tactics operate year-round across major policy debates including energy policy.”

As part of the Centre’s Australian Ad Observatory, researchers recruited more than 100 citizens who used a custom mobile app to donate the ads they encountered on Facebook, Instagram and TikTok. Within four weeks, over 22,000 ads were collected, including hundreds of political advertisements.

Prof Angus explained that some campaigns use astroturfing tactics, where lobby groups disguise themselves as grassroots community organisations to sway public opinion.

 “It’s political advertising in disguise – what looks like a local community group may in fact be a well-funded industry or lobby organisation,” he said.

Prof Daniel Angus on screen with Australian Government logo in right hand top corner
Prof Daniel Angus giving testimony to the Senate Select Committee on Information Integrity on Climate Change and Energy.

The inquiry heard examples from the last federal election, including groups such as Australians for Natural Gas and Mums for Nuclear, which appeared suddenly during the campaign period and spent tens to hundreds of thousands of dollars on advertising.

The ADM+S submission calls for:

  • National and political advertising provisions;
  • Strengthened real-time disclosure of third-party funding;
  • Mandated platform data access modelled on, but extending, the European Digital Services Act (DSA); and
  • Sustained public investment in independent monitoring to safeguard democratic transparency.

Prof Angus emphasised that such reforms are essential to safeguard democratic transparency.

“Without greater observability, we risk allowing hidden interests to distort public debate in ways that Australians cannot easily detect or contest,” he said.

SEE ALSO

How people are assessed for the NDIS is changing. Here’s what you need to know

Two people in a room facing each other talking in therapy session.
andreswd/Getty Images

How people are assessed for the NDIS is changing. Here’s what you need to know

Authors Georgia van Toorn and Helen Dickinson
Date 1 October 2025

The government has announced a new tool to assess the needs of people with disability for the National Disability Insurance Scheme (NDIS).

Instead of having to gather and submit medical reports, new applicants and existing participants being reassessed will have an interview with a National Disability Insurance Agency (NDIA) assessor.

The government says the new process will make support planning simpler, fairer and more accessible.

But last week’s announcement has left important questions unanswered. Most notably, how will the outcome of these assessments determine the level of support someone gets? And what evidence will be used in place of doctors’ reports?

With minimal consultation so far and little transparency, confidence in the new system is already low.

What’s changing?

The independent NDIS review reported to the federal government in December 2023 and recommended a raft of reforms. It found current processes for assessing people for NDIS supports are unfair and inefficient. Gathering evidence from treating doctors and allied health professionals can be time-consuming, due to long wait times for appointments. Appointments can also be expensive.

As a result, those with the ability and means to collect or purchase additional information are favoured in this process. It also means the scheme often focuses on medical diagnosis and not on the functional impairments that arise from these diagnoses.

From mid-2026, participants aged over 16 will have their needs assessed by an NDIA assessor. This shifts the role of gathering and interpreting information to the agency.

Assessors will be allied health professionals, such as occupational therapists or social workers, who will use an assessment tool called the Instrument for the Classification and Assessment of Support Needs version 6, or I-CAN.

I-CAN measures support needs across 12 areas of daily life, including mobility, self-care, communication, relationships, and physical and mental health. Each area is scored on two scales: how often support is needed, and the intensity of the support required.

The assessment, based on self-reported information, is expected to take around three hours.

What we still don’t know

With medical reports no longer required, it’s unclear what kinds of evidence, beyond the information collected through the assessment, will inform the planning process.

The other big unknown is how the I-CAN assessment will translate into setting a budget for participants. This is crucial, as a person’s budget determines the supports they can access. And this shapes their ability to live independently and pursue their goals.

Currently, budget size is determined by identifying the range of supports a person needs and is built line by line. But the NDIS review recommended more flexibility. Instead of getting separate amounts for therapy, equipment and support workers, the review argued a participant should get one overall budget they can use across all their needs.

While the idea of flexibility sounds promising, it means little without an adequate budget.

Potential conflicts also arise when the NDIA both judges need and allocates funding, but has an incentive to contain costs.

Recent reforms to operational rules about what should be included as an NDIS support will also constrain this flexibility.

Standardisation at what cost?

These changes are partly aimed at controlling NDIS spending through a more standardised and efficient planning process.

They echo the Morrison government’s failed attempt in 2021 to introduce “independent assessments”. Disability groups, the Labor opposition, and state and territory ministers rejected the move, and the government abandoned the plan.

There is a risk the new approach could reduce support and fail to expand choice. Rather than providing the flexibility participants seek, rigid assessments and points-based formulas can easily be repurposed to cap budgets.

The United Kingdom’s experience suggests this is a very real possibility for individualised funding schemes such as the NDIS.

In recent months, a number of NDIS participants have already had their eligibility for the scheme re-assessed or their funding reduced. The concern is that unless this new process is carefully co-designed and implemented, we may see more cuts.

Disability groups also fear that if aspects of the planning process are automated, algorithms could turn nuanced support needs into rigid calculations. Campaign groups have called on the government to halt the use of algorithms, which are already being used in NDIS support planning.

As George Taleporos, the independent chair of Every Australian Counts, has stressed:

The NDIS must never reduce us to data points in a secret algorithm – people with disability are not numbers, we are human beings, and our rights must remain at the heart of the Scheme.

Will some groups be disadvantaged by the change?

The new framework was developed without meaningful input from NDIS participants, families and carers, and advocacy groups are concerned the tool may not be fit for purpose for some groups.

A self-report tool such as I-CAN poses particular risks for autistic people with complex communication needs or high support requirements, and for those who rely on masking to navigate social situations. Each of these factors raises the risk the tool won’t capture real support needs.

For culturally and linguistically diverse communities and First Nations people with disability, these issues are compounded by language, cultural and accessibility barriers.

A three-hour-long interview will place a heavy cognitive and emotional load on all NDIS participants. It’s possible this could compromise the accuracy of responses.

Some people in the disability community have called for participants to be able to bring additional evidence from the professionals who know them well to the assessment process, so it doesn’t miss important information about them.

While we await more detail, it’s crucial the government consults closely with the disability community to ensure people with disability are not left worse off.

Georgia van Toorn, Research Fellow, ARC Centre of Excellence for Automated Decision-Making and Society, UNSW Sydney and Helen Dickinson, Professor, Public Service Research, UNSW Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

We teach young people to write. In the age of AI, we must teach them how to see

Person with back towards camera facing mountain range
Vikas Anand Dev/Unsplash

We teach young people to write. In the age of AI, we must teach them how to see

Authors T.J. Thomson, Daniel Pfurtscheller, Katharina Christ, Katharina Lobinger, Nataliia Laba
Date 1 October 2025

From the earliest year of school, children begin learning how to express ideas in different ways. Lines across a page, a wobbly letter, or a simple drawing form the foundation for how we share meaning beyond spoken language.

Over time, those first marks evolve into complex ideas. Children learn to combine words with visuals, express abstract concepts, and recognise how images, symbols and design carry meaning in different situations.

But generative artificial intelligence (AI), software that creates content based on user prompts, is reshaping these fundamental skills. AI is changing how people create, edit and present both text and images. In other words, it changes how we see – and how we decide what’s real.

Take photos, for example. They were once seen as a “mirror” of reality. Now, more people recognise their constructed nature.

Similarly, generative AI is disrupting long-held assumptions about the authenticity of images. These can appear photorealistic but can depict things or events that never existed.

Our latest research, published in the Journal of Visual Literacy, identifies key literacies at each stage of the AI image generation process, from selecting an AI image generator to creating and refining content.

As the way people make images changes, knowing how generative AI works will let you better understand and critically assess its outputs.

Textual and visual literacy

Literacy today extends beyond reading and writing. The Australian Curriculum defines literacy as the ability to “use language confidently for learning and communicating in and out of school”. The European Union broadens this to include navigating visual, audio and digital materials. These are essential skills not only in school, but for active citizenship.

These abilities span making meaning, communicating and creating through words, visuals and other forms. These abilities also require adapting expression to different audiences. You might text a friend informally but email a public official with more care, for example. Computers, too, demand different forms of literacy.

In the 1960s, users interacted with computers through written commands. By the 1970s, graphical elements like icons and menus emerged, making interaction more visual.

Generative AI is often a mix between these two approaches. Some technologies, like ChatGPT, rely on text prompts. Others, like Adobe’s Firefly, use both text commands and button controls.

The user interface of Adobe Firefly shows eight photorealistic images, generated by AI, seemingly depicting the Sydney Opera House in Sydney Harbour.
Adobe Firefly provides a suite of options for adjusting visual output, including whether the visual style is photorealistic, whether the image orientation is square, horizontal, or vertical, and whether any visual effects are desired.
T.J. Thomson

Software often interprets or guesses user intent. This is especially true for minimalistic prompts, such as a single word or even an emoji. When these are used for prompts, the AI system often returns a stereotypical representation based on its training data or the way it’s been programmed.

Being more specific in your prompt helps to arrive at a result more aligned with what you envisioned. Prompting “dog”, for instance, leaves the system to fall back on its most common training imagery, while a prompt describing the breed, setting and visual style constrains the output far more tightly. This highlights that we need “multimodal” literacies: knowledge and skills that cut across writing and visual modes.

What are some key literacies in AI generation?

One of the first generative AI literacies is knowing which system to use.

Some are free. Others are paid. Some might be free but built on unethical datasets. Some have been trained on particular datasets that make the outputs more representative or less risky from a copyright infringement perspective. Some support a wider range of inputs, including images, documents, spreadsheets and other files. Others might support text-only inputs.

After selecting an image generator, you need to be able to work with it productively.

If you’re trying to make a square image for an Instagram post, you’re in luck. This is because many AI systems produce images with a square orientation by default. But what if you need a horizontal or vertical image? You’ll have to ask for that or know how to modify that setting.

What if you want text included in your image? AI still struggles with rendering text, much as early AI systems struggled with accurately representing human fingers and ears. In these cases, you might be better off adding text in other software, such as Canva or Adobe InDesign.

Many AI systems also create images that lack specific cultural context. This lets them be easily used in wider contexts. Yet it might decrease the emotional appeal or engagement among audiences who perceive these images as inauthentic.

A humanoid robot holds a newspaper with a headline about the economy.
AI often struggles with rendering text. Here’s how AI did with a request to create an image that included this headline, ‘Give the A.I. Economy a Human Touch.’
The authors via Midjourney, CC BY-NC-SA

Working with AI is a moving target

Learning AI means keeping pace with constant change. New generative AI products appear regularly, while existing platforms rapidly evolve.

Earlier this year, OpenAI integrated image generation into ChatGPT and TikTok launched its AI Alive tool to animate photos. Meanwhile, Google’s Veo 3 made cinematic video with sound accessible to Canva users, and Midjourney introduced video outputs.

These examples show where things are headed. Users will be able to create and edit text, images, sound and video in one place rather than having to use separate tools for each.

Building multimodal literacies means developing the skills to adapt, evaluate and co-create as technology evolves.

If you want to start building those literacies now, begin with a few simple questions.

What do I want my audience to see or understand? Should I use AI for creating this content? What is the AI tool producing and how can I shape the outcome?

Approaching visual generative AI with curiosity, but also critical thinking, is the first step toward having the skills to use these technologies intentionally and effectively. Doing so can help us tell visual stories that carry human rather than machine values.

The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

The Australian Internet Observatory: progress, tools and partnerships

Australian Internet Observatory. Logos for Australian Research Data Commons, Australian Government, National Infrastructure for Australia and ARC Centre of Excellence for Automated Decision-Making and Society. Image of laptop partly open with purple and red glows of colour.

The Australian Internet Observatory: progress, tools and partnerships

Author ADM+S Centre
Date 1 October 2025

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) celebrates a year since the launch of the Australian Internet Observatory (AIO), an initiative of the ADM+S in collaboration with researchers and research centres, university partners and organisations across Australia and internationally.

Since it began, the AIO has worked to build research infrastructure that supports the independent, ethical, and large-scale study of digital platforms. Over the past year it has focused on developing technical tools, collaborative research models, and international partnerships, laying the groundwork for a more transparent and accountable digital environment.

The Mobile Observation Toolkit
This toolkit, developed through the Australian Ad Observatory project at ADM+S, has been used to map emerging trends in political advertising across Facebook, Instagram, and TikTok. In the lead-up to the May 2025 Australian federal election, the toolkit helped researchers examine third-party advertising that often masquerades as grassroots activism, addressing critical risks to transparency and democratic accountability.

The Data Download Package (DDP) is another data donation approach which enables users to securely contribute their personal data from digital platforms. These methods extend data access beyond traditional APIs and offer researchers robust alternatives for platform observability. 

Australian social media monitoring
The AIO has also been updating a range of existing API-based tools, including its social media collection tools. The Australian social media dashboard includes new visualisations of large datasets and a secure, easy login system (CILogon) that researchers can access using their university address. The AIO has also upgraded the Realtime Analytics Platform for Interactive Datamining (RAPID), which provides real-time access to social media data from around the world.

Web Archiving tool

Currently in beta, the installable web archiving tool can record and store websites for research and manage archive metadata.

27 members of the Australian Internet Observatory standing outside with Melbourne city in the background
Members of the Australian Internet Observatory at the AIO workshop held in May 2025 at RMIT University.

Over the past year, the AIO has expanded its community engagement, international presence, and collaborative networks through a range of activities – from the creative “Data Mystics” stall at the Woodford Folk Festival, where the public explored digital identities and data ethics, to international conferences in Germany, Denver, Singapore, and Zurich.

The AIO now has 15 team members based at 6 partner universities, as well as 10 partner and research leads engaged in the project.

Team members of the AIO will be present at various conferences over the next few months including the ARDC Skills summit, Data Donation Symposium Germany, eResearch 2025, Australian Political Science Association (APSA), Australian and Aotearoa New Zealand Communication Association (AANZCA), the Humanities, Social Sciences and Indigenous Research Data Commons (HASS&I RDC) Symposium, and the Digital Humanities Australia conference. 

As digital platforms evolve in complexity and influence, the Australian Internet Observatory aims to provide an enduring foundation for research that is technically robust, ethically sound, and democratically relevant. 

AIO is a co-investment partnership with the Australian Research Data Commons (ARDC) through the HASS and Indigenous Research Data Commons (DOI: 10.3565/hjrp-b141). The ARDC is enabled by the Australian Government’s National Collaborative Research Infrastructure Strategy (NCRIS).

Read more about AIO’s First Year in Review: The AIO’s progress, tools, and partnerships

SEE ALSO

Entangled: A new documentary to explore AI, climate change and conservation

Text: Entangled with image of whale and diver underwater

Entangled: A new documentary to explore AI, climate change and conservation

Author ADM+S Centre
Date 26 September 2025

A new Australian documentary is set to ask one of the most pressing questions of our time: Could the rise of AI nurture — rather than exploit — the living world? 

Entangled, directed by filmmaker Jeni Lee from ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), will explore technology’s entanglement with climate change and conservation. The long-form documentary is scheduled for completion in late 2026 with release anticipated in late 2026 or early 2027.

Lee is currently editing two short case-study films that will premiere this year, ahead of the long-form film’s release.

AI is often seen as a solution to global challenges, but its hidden costs are mounting. From mined minerals and vast energy and water consumption to offshore e-waste dumping and outsourced labour, its lifecycle raises serious ethical and environmental concerns. The dominant response to climate change – tech-driven fixes – often deepens inequality and disconnects us from nature rather than addressing root issues.

“We cannot consume our way out of the climate crisis, but we can rethink AI. By prioritising sustainability, ethics, and human values such as care, technology can serve both people and the planet rather than depleting them,” Lee said.

The documentary will bring together expert interviews, scientific research, and stories from across Australia to argue for a shift away from endless growth and toward interconnection by embedding intentional design, ethical oversight, and inclusive collaboration.

The film spotlights:

  • Listening to Whales
    Marine scientist Dr Olaf Meynecke is uniting citizen scientists, Google AI, and marine biology to decode whale migration paths. His research uses underwater microphones to track humpback whales, revealing how they respond to climate change and noise pollution – insights that could help protect ocean life.

    In 2025, Dr Meynecke will lead a team embedding hydrophones along Australia’s East Coast, using AI to translate whale communication. His mission is clear: by truly listening to nature, we can collaborate with technology and other species to safeguard our oceans.
  • Watching Over Forests
    Wilderness Society scientist Rachel Fletcher helped develop Watch on Nature, a citizen-powered platform exposing deforestation by the beef, paper, timber, and mining industries. Using Sentinel-2 satellite imagery, citizen scientists track land clearing in real time, with drones verifying illegal activity to hold governments and corporations accountable. Now, the Wilderness Society is developing an automated tool to scale up vegetation change detection.

    Olaf and Rachel understand that AI is not a magic bullet. They approach AI tools with a critical eye, weighing both the challenges and benefits of integrating these emerging technologies into their work.

Research Team
Filmmaker: Jeni Lee; Producer: Bianca Vallentine; Research Consultant: Sarah Pink; Story Consultant: Ashlee Page

This documentary is part of the ADM+S project ADM, Ecosystems and Multispecies Relationships.

For media inquiries, contact Jeni Lee, jeni.lee@monash.edu

SEE ALSO

The amount of personal info Australian renters have to hand over is ‘staggering’

Rent sign in front yard
Credit: Getty Images

The amount of personal info Australian renters have to hand over is ‘staggering’

Author Lina Przhedetsky
Date 28 August 2025

The New South Wales government has introduced a bill to better protect renters’ personal information when they apply for properties.

But other Australian states and territories are lagging behind, leaving many renters with little choice but to hand over excessive amounts of personal information when they apply for properties.

Two people at a table going through papers with an open laptop
The amount of information collected during rental applications is staggering. Picture: Getty Images

Too much information

As median rents continue to climb, and the national vacancy rate hovers around 1.2 per cent, renters report feeling pressured to use third-party rental apps when applying for a property.

Although these apps are presented as a convenient way to apply for properties, the amount of information they collect about renters is staggering.

People applying to rent a property have reported being asked to hand over marriage certificates and medical histories, provide excessive information about their lifestyle, and even take personality assessments.

Issues resulting from the widespread use of third-party rental apps are well-documented. These include high-profile data breaches, invasive questions sent to applicants’ employers and the unlawful collection of almost A$50,000 in fees from NSW renters.

The protections in place to safeguard renters’ personal information are, by and large, inadequate.

A better deal

In August 2023, National Cabinet agreed on ‘A Better Deal for Renters’, which committed all states and territories to introducing improved protections for renters’ privacy and standardising application processes.

This commitment is particularly important because progress appears to have stalled on both the Federal Government’s second tranche of privacy reforms, and the introduction of mandatory guardrails for safe and responsible AI.

State and territory governments have an important opportunity to plug key gaps in renter protection by limiting the amount of information that is collected about renters, restricting how this information can be used and placing stricter limits on how long it is stored.

Despite this commitment, state and territory responses have been inconsistent.

A computer screen showing a Submit button with terms and conditions and a privacy policy
Many real estate agencies and rental application platforms are not subject to the Privacy Act. Picture: Getty Images

South Australia, Queensland and Victoria have introduced updated protections, which have gone some way to improving each jurisdiction’s legislation – but there remain loopholes that risk exploitation.

At the time of writing, the NSW bill remains in limbo.

The NSW legislation, if passed in its current form, would significantly improve existing protections for the state’s renters and offer a model for other jurisdictions to follow.

It would do this by severely restricting the amount of personal information, including documents, that renters are asked to provide when applying for a property, and requiring the use of prescribed application forms.

If the regulations are designed correctly, they would prevent renters from being asked inappropriate questions, or being asked to hand over unnecessary information – like details of their hobbies or social media accounts.

The NSW bill also promises to increase penalties for breaches and empower the Civil and Administrative Tribunal to make orders for compensation in specific circumstances where tenants have suffered economic loss.

These changes are intended to deter bad behaviours and provide redress to tenants in a sector that’s previously been referred to as the ‘Wild West’.

Additionally, the bill would apply the Australian Privacy Principles to landlords, agents, and other people dealing with tenants’ personal information.

Currently, many real estate agencies and rental application platforms are not subject to the Privacy Act due to the small-business exemption.

Although there’s talk of removing this exemption, the Federal Government is yet to close this loophole, making it an opportune time for states and territories to plug the gap.

A couple talking to a real estate agent inside a house
With many AI systems, it’s impossible for applicants to know if they are being assessed fairly. Picture: Getty Images

Don’t ignore AI

The proposed NSW reforms offer a significant improvement for renters, but they don’t fully address key issues when it comes to the use of AI in the rental sector.

Although the proposed legislation would require agents to disclose when AI-generated or digitally-modified images are used in rental listings, it does not address the use of AI in tenant assessments.

There has been growing concern about the way these platforms use ‘black box’ artificial intelligence systems to evaluate applicants.

Often, neither applicants nor real estate agents know exactly how these algorithms score, rate and rank applicants – making it impossible to know whether they are being used fairly.

NSW must show the way

As it works its way through the parliament, there is a risk that the protections the NSW bill offers renters will be watered down before it passes into law, or that the regulations it delegates are poorly designed.

But for renters in NSW and around the country, it’s crucial that the bill passes in its current form, and the regulations it enables must be designed to effectively protect renters’ data.

Other states and territories should pay close attention to the NSW reforms but should also consider taking aim at AI-powered tenant assessments.

There is a long way to go before the collection, use and storage of renters’ information is regulated effectively – and action must be taken now.

 

SEE ALSO

Stories of Country shared on UQ Cultural Landscape Tour

ADM+S Members with Alex Bond on the land of the Jagera and Turrbal people at UQ. Image credit: Rebecca Ralph

Stories of Country shared on UQ Cultural Landscape Tour

Author ADM+S Centre
Date 24 September 2025

ADM+S members gained a deeper understanding of the traditional and ongoing significance of Indigenous Country during an Aboriginal Cultural Landscape Walking Tour of The University of Queensland’s St Lucia campus on 23 September 2025.

Hosted by the ADM+S nodes at UQ and QUT, the tour was guided by cultural educator Alex Bond, who strongly identifies with the Kabi Kabi people of south-east Queensland and has descent links with the Waka Waka (Burnett River), Kaanju (Cape York) and Kumu (Dirranbandi) peoples.

With more than a decade of experience leading cultural landscape tours, Alex brought an extensive knowledge of Aboriginal culture, history, and Country in South-East Queensland.

During the walk around the lakes precinct, Alex highlighted the traditional significance of key locations across the campus and shared stories of important events that took place during the colonial period. 

He also explained how Aboriginal connections to the landscape continue today, describing cultural practices, longstanding relationships with waterways, and the resilience of community knowledge.

Participants learned how Indigenous people traditionally used local flora for food, tools, and technologies, including the specific trees used for medicinal purposes and crafting boomerangs.

The St Lucia tour follows other cultural learning activities undertaken by ADM+S members in Melbourne this year, including the Koorie Heritage Trust Walk, She Shapes History Tour and the exhibition 65,000 Years: A Short History of Australian Art, presented by the Potter Museum of Art.

SEE ALSO

Call for researchers: New ADM+S Podcast series on AI, ethics and society

Podcast microphone

Call for researchers: New ADM+S Podcast series on AI, ethics and society

Author ADM+S Centre
Date 19 September 2025

We’re looking for researchers, thought leaders, and experts to join a new 5-episode series to be featured on the ADM+S Podcast. The series, on AI, Ethics and Society, will be hosted by Rayane El Masri, an ADM+S PhD student and recipient of the 2025 Marc Sanders Foundation Philosophy in Media podcasting fellowship.

This is an excellent opportunity to share your insights with a wide audience and contribute to important conversations shaping the future of technology. 

Rayane’s research delves into how care (using Joan Tronto’s political Care Theory) can help address the value alignment problem in generative AI, a key issue in the development of AI technologies today. Drawing on her recent intensive training in podcasting from renowned media figures such as Robert Krulwich, Mia Lobel, and Barry Lam, Rayane is bringing a fresh perspective to critical discussions on AI, ethics, and society. 

The podcast will focus on a variety of important and timely topics, including:

  • AI, Care, and Mental Health
  • The Politics of AI Harm: Accountability, Responsibility, and Remedies
  • AI and Authorship
  • Decolonising AI Knowledge: Beyond Western-Centric Frameworks
  • Feminist Infrastructures: Rethinking AI Design
  • AI in the Global South: Opportunities, Challenges, and Alternatives

Interested?
If your research or work aligns with any of the topics above, we want to hear from you. Please email adms@rmit.edu.au with your selected topic/s of interest by 30 September 2025.

Read more information about the topics.

SEE ALSO

Viral violent videos on social media are skewing young people’s sense of the world

A person using a smartphone

Viral violent videos on social media are skewing young people’s sense of the world

Author Samuel Cornell and T.J. Thomson
Date 17 September 2025

When news broke last week that US political influencer Charlie Kirk had been shot at an event at Utah Valley University, millions of people around the world were first alerted to it by social media before journalists had written a word.

Rather than first seeing the news on a mainstream news website, footage of the bloody and public assassination was pushed directly onto audiences’ social media feeds. There weren’t any editors deciding whether the raw footage was too distressing, nor warnings before clips auto-played.

Australia’s eSafety commissioner called on platforms to shield children from the footage, noting “all platforms have a responsibility to protect their users by quickly removing or restricting illegal harmful material”.

This is the norm in today’s media environment: extreme violence often bypasses traditional media gatekeepers and can reach millions of people, including children, instantly. This has wide-ranging impacts on young people – and on society at large.

A wide range of violence

Young people are more likely than older adults to come across violent and disturbing content online. This is partly because they are more frequent users of platforms such as TikTok, Instagram and X.

Research from the United Kingdom in 2024 suggests a majority of teenagers have seen violent videos in their feeds.

The violence young people see on social media ranges from schoolyard fights and knife attacks to war footage and terrorist attacks.

The footage is often visceral, raw and unexpected.

A wide range of harms

Seeing this kind of violent footage on social media can make some children not want to leave the house.

Research also shows engaging with distressing media can cause symptoms similar to trauma, especially if the violence feels close to our own lives.

Research shows social media is not simply a mirror of youth violence but also a vector for it, with bullying, gang violence, dating aggression, and even self-directed violence playing out online. Exposure to these harms can have a negative effect on young people’s mental health, behaviour and academic performance.

For others, violent content on social media risks “desensitisation”, where people become so used to suffering and violence they become less empathetic.

Communication scholars also point to cultivation theory – the idea in this case that people who consume more violent content begin to see the world as potentially more dangerous than it really is.

This potentially skewed perception can influence everyday behaviour even among those who do not directly experience violence.

A long history of violence

Violence distributed by media is as old as media itself.

The ancient Greeks painted their pottery with scenes of battles and slayings. The Romans wrote about their gladiators. Some of the first photographs ever taken were of the Crimean War. And in the second world war, people went to the cinema to watch newsreels for updates on the war.

The Vietnam war was the first “television war” – images of violence and destruction were beamed into people’s homes for the first time. Yet television still involved editorial judgement. Footage of violence was cut, edited, narrated and contextualised.

Seeing violence as if you were there has been transformed by social media.

Now, footage of war, recorded in real time on phones or drones, is uploaded to TikTok or YouTube and shared with unprecedented immediacy. It often appears without any additional context – and often isn’t packaged any differently to a video of, say, somebody walking down the street or hanging out with friends.

War influencers have emerged – people who post updates from conflict zones, often with no editorial training, unlike war journalists. This blurs the line between reporting and spectacle. And this content spreads rapidly, reaching audiences who have often not sought it.

Israel’s military even uses war influencers to “thirst trap” social media users for propaganda purposes. A thirst trap is a deliberately eye-catching, often seductive, social media post designed to attract attention and engage users.

How to opt out of violence

There are some practical steps that can be taken to reduce your chances of encountering unwanted violent content:

  • turn off autoplay. This can prevent videos from playing unprompted
  • use mute or block filters. Platforms such as X and TikTok let you hide content with certain keywords
  • report disturbing videos or images. Flagging videos for violence can reduce how often they are promoted
  • curate your feed. Following accounts that focus on verified news can reduce exposure to random viral violence
  • take a break from social media, which isn’t as extreme as it sounds.

These actions aren’t foolproof. And the reality is that users of social media have very limited control over what they see. Algorithms still nudge users’ attention toward the sensational.

The viral videos of Kirk’s assassination highlight the failures of platforms to protect their users. Despite formal rules banning violent content, shocking videos slip through and reach users, including children.

In turn, this highlights why more stringent regulation of social media companies is urgently needed.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

From ‘Doctor Google’ to Data-Driven Care: The Future of Digital Health Literacy

Illustration of health care practitioner with patient.

From ‘Doctor Google’ to Data-Driven Care: The Future of Digital Health Literacy

Author Swinburne University
Date 10 September 2025

Anyone who has turned to ‘Doctor Google’ when facing health problems knows the tidal wave of information unleashed when they hit enter, particularly for sexual and reproductive health.

A new first-of-its-kind Digital and Data Capabilities for Sexual and Reproductive Health platform aims to tackle this problem as part of a groundbreaking four-year study to transform sexual and reproductive health digital capabilities.

Led by ARC Future Fellow Prof Kath Albury from the ARC Centre of Excellence for Automated Decision-Making and Society at Swinburne University, the accessible online education and support tool was created in consultation with over 100 Australian health professionals to meet the needs of diverse communities and workforces.

“Sexual and reproductive health data is especially sensitive, and social media platforms are not always safe spaces for health promotion,” says Prof Albury.

This project aims to support the development of new strategies to address the good, bad and ugly of digital information and transformation.

“From websites to social media and online forums, digital platforms and technologies are central to contemporary clinical services and health promotion. They’re also important sources of information and peer-support.”

As organisations increasingly adopt digital technologies and move into online spaces, Professor Albury’s research provides comprehensive resources for navigating the complex landscape of digital policy and practice.

As part of a larger four-year project, the research-backed and data-driven platform is an essential piece in the puzzle for the current health workforce.

“We often talk about the need for ‘digital health literacy’ for health consumers, but our participants told us that the current HIV, sexual and reproductive health workforce have a real appetite for building their own digital and data capabilities,” she says.

“Our pilot workshops with health professionals demonstrated that shared vocabularies for talking about digital tech can really support organisational strategy and individual skill-building.”

ADM+S will be hosting an online interactive symposium to launch the Digital and Data Capabilities for Sexual and Reproductive Health Project Final Report and Website on Thursday 11 September, 10am–12pm.

To register for this event visit Final Report and Website Launch: Digital and Data Capabilities for Sexual and Reproductive Health

SEE ALSO

Gaming GenAI workshop explores the possibilities and challenges of AI in library services

GenAI Arcade website on laptop screen

Gaming GenAI workshop explores the possibilities and challenges of AI in library services

Author ADM+S Centre
Date 1 September 2025

Last week, researchers from the ARC Centre of Excellence for Automated Decision-Making and Society at the QUT GenAI Lab partnered with the State Library of Queensland (SLQ) to deliver the inaugural Gaming GenAI workshop for library staff, exploring the possibilities and challenges of generative AI in public interest institutions.

The workshop provided a hands-on space to discuss how GenAI can be used responsibly, innovatively, and inclusively in library services and operations.

Over three hours, participants explored a brief history of AI and the political economy of generative AI before diving into a possibilities matrix, which mapped useful and less useful AI applications against models of responsibility and irresponsibility.

A key highlight was experimenting with the GenAI Arcade platform, using interactive “games” to examine issues such as knowledge limitations, environmental impacts, and alignment with cultural values. The session concluded with a lively discussion on AI-related disruptions in libraries and the critical role public institutions will play in shaping the future of AI.

Will He, Kevin Witzenberger, Jean Burgess, Anna Raunik (Executive Director, Content, State Library of Queensland) and Aaron Snoswell (L-R) at the State Library Queensland Gaming GenAI workshop.

The workshop was delivered by ADM+S researchers from the GenAI Lab at QUT: Kevin Witzenberger (Affiliate), William He (Affiliate), Aaron Snoswell (Associate Investigator) and Jean Burgess (Associate Director).

ADM+S and QUT’s GenAI lab thanks SLQ staff for their support in hosting the workshop. This work contributes to the ADM+S Critical Capabilities for Inclusive AI project.

Visit the GenAI Arcade, a project in continuous development.

SEE ALSO

ADM+S PhD student shares research on data frictions of clinical sexual health services

Caitlin Learmonth stands in front of her poster presentation

ADM+S PhD student shares research on data frictions of clinical sexual health services

Author ADM+S Centre
Date 2 September 2025

Caitlin Learmonth, PhD student at the Swinburne University node of the ARC Centre of Excellence for Automated Decision-Making and Society, recently travelled to Montreal, Canada to share her research at two major events.

At the STI & HIV World Congress, Caitlin presented a research poster on data frictions in the provision of clinical sexual health services.

Her work highlighted how current guidelines and funding mechanisms often fail to meet the needs of sexual health consumers who fall outside of population-based sampling categories, such as those in consensually non-monogamous (CNM) communities.

“Using my research’s critical lens of consensually non-monogamous sexual health consumers, I showed how current guidelines and funding mechanisms fail to meet the needs of some sexual health consumers falling outside of population-based sampling categories,” Caitlin said.

At the STI & HIV World Congress, Caitlin met with academics from the School of Public Health at the University of British Columbia, strengthening international research networks in her field.

In addition, Caitlin gave a presentation at the DIGS Lab (Digital Intimacy, Gender & Sexuality Research Lab) at Concordia University. She provided an overview of her PhD project, which explores the data practices informing clinical sexual health services and the strategies CNM consumers and healthcare providers use to navigate restrictions. She also presented the strategies consumers and healthcare providers use to navigate digital health systems when accessing sexual health services, and connected with fellow PhD students, post-doctoral researchers, and senior academics working in related fields.

Caitlin notes this research trip provided her with a reminder of the value of social research in health sciences.

“Learning how to communicate my research to different audiences – health and medical at the conference, and media, communication and cultural studies at DIGS Lab – has helped me explain and conceptualise my research in my writing and other academic outputs,” said Caitlin.

This research trip was co-funded by ADM+S Research Training and Swinburne University.

SEE ALSO

Voice AI and authenticity: current issues and emerging challenges

Close-up of young woman talk with virtual digital voice recognition assistant. African female using voice assistant on smartphone.
Credit: Luis Alvarez/Getty Images

Voice AI and authenticity: current issues and emerging challenges

Author ADM+S Centre
Date 1 September 2025

Voice technologies are rapidly being integrated into generative AI systems and applications, from customer service chatbots to media production tools.

A new working paper from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) examines how these developments are reshaping everyday life and the profound questions they raise about authenticity.

KEY HIGHLIGHTS

  • Understanding Voice AI – surveys the rise of AI-driven voice technologies, explores how they’re used in everyday communication and media, and highlights questions and debates about authenticity and media
  • Risks and Challenges – deception in communication and media, shifts in cultural interaction with voice, and digital inequality and labour impacts in knowledge and creative work
  • Responses and Future Directions – technical and design approaches to support authenticity; ethical, legal and accountability frameworks for voice AI; and critical directions for research, policy, and community empowerment

The paper, Voice AI and authenticity: current issues and emerging challenges, surveys the historical evolution of voice AI, reviews the current state of research, and outlines emerging responses to the ethical, legal, cultural and practical challenges it presents.

While synthetic voices promise benefits in communication, media and digital service provision, they also bring heightened risks of deception, shifts in how culture and information are valued, and concerns over digital inequality and labour displacement.

The paper highlights the urgent need for robust frameworks and regulatory responses to guide the responsible development and use of synthetic voices. By addressing the risks alongside the opportunities, the working paper provides researchers, professionals and the wider community with a timely resource for navigating one of the most contested frontiers of generative AI.

Read the full paper Voice AI and authenticity: current issues and emerging challenges.

SEE ALSO

AI companions raise questions of connection, safety and responsibility

Young girl with pink hair looking at Replika AI companion on mobile phone

AI companions raise questions of connection, safety and responsibility

Author ADM+S Centre
Date 21 August 2025

The rise of AI “companions” is transforming how people experience friendship and intimacy, offering comfort to some while raising serious ethical and safety concerns.

ABC’s 7.30 recently profiled Australians turning to digital companions for support, from casual conversations to long-term romantic partnerships. 

For users like Fiona, an AI friend provided encouragement and judgement-free chats when human connection was out of reach. For Hayley, who has struggled to form traditional relationships, her AI partner “Miles” has become a vital source of affirmation and companionship.

But experts warn that alongside these benefits are significant risks. A recent US study found that one in three teenagers had confided in an AI companion rather than a human, with some relationships ending in tragedy.

Dr Henry Fraser, a legal scholar at QUT and Associate Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), cautions that the technology is moving faster than its safeguards:

“We’ve seen some people who have perceived themselves to be in relationship to a chatbot and then encouraged by the chatbot have harmed themselves, have gone and tried to harm others,” said Dr Fraser.

“And I suspect that’s just the tip of the iceberg in terms of some of the negative effects.

“The ethos, especially in Silicon Valley, has been move fast and break things, but the kinds of things that you can break now are much more tangible. A more sober responsible attitude is desperately, desperately needed right now.”

Through the Regulatory Project at the ADM+S, Fraser and colleagues are examining the broad range of regulatory questions raised by automated decision-making systems (ADMs), their supply chains, and deployments, as well as the potential for ADMs themselves to be used as regulatory tools.

Watch the full ABC 7.30 episode: Can AI ‘companions’ replace real friendships?

SEE ALSO

2024 Annual Report released: National research impacts and $14.4m in research investment

Text: Annual Report 2024. Logos: ADM+S and Australian Government Research Council. Background image: A group of ADM+S members at a workshop.

2024 Annual Report released: National research impacts and $14.4m in research investment

Author ADM+S Centre
Date 19 August 2025

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) has released its 2024 Annual Report, showcasing major contributions to responsible, ethical, and inclusive approaches to automated decision-making systems and artificial intelligence (AI).

Now at the midpoint of its seven-year life, the Centre has secured $14.4 million in new investment, launched nine new Signature Projects, and delivered research shaping national policy, industry practice, and public debate.

“The Centre has delivered groundbreaking work, including the first mapping of automation across the Australian public sector, the first comprehensive study of digital inclusion in First Nations communities, and the development of new tools and methods for observing and responding to the interactions between Australians and the digital platforms they use every day,” said Deena Shiff, Chair of the ADM+S International Advisory Board.

The year saw the launch of nine ambitious new projects bringing together social and technical disciplines to address critical current issues, including the challenges of generative AI and authenticity, sustainability, and cultural diversity.

“Together, these projects are setting a new agenda for the second half of the Centre’s life,” said Distinguished Prof Julian Thomas, Director of ADM+S.

“They respond to dramatic recent developments, notably the emergence of popular generative AI applications, which have sparked intense debate in areas from education to work and creative practice.”

The Centre secured $14.4 million in new investment to expand its research capacity, including:

  • The first comprehensive national study of Indigenous digital inclusion, funded by the Commonwealth Department of Infrastructure, Transport, Regional Development, Communications and the Arts.
  • The Australian Internet Observatory, a new national research facility supported by the Australian Research Data Commons and the Australian Government’s National Collaborative Research Infrastructure Scheme. The Observatory will enable us to refine the Centre’s unique tools and make them widely available to researchers everywhere; we expect it to be a lasting legacy of the Centre’s work.

ADM+S researchers continue to be guided by our shared commitment to responsible, ethical and inclusive automated systems, with the dramatic emergence and take-up of AI technologies underlining the importance of our approach.

View the 2024 ADM+S Annual Report

SEE ALSO

Does AI really boost productivity at work? Research shows gains don’t come cheap or easy

Wikimedia/Pexels/The Conversation

Does AI really boost productivity at work? Research shows gains don’t come cheap or easy

Authors Fan Yang and Jake Goldenfein
Date 15 August 2025

Artificial intelligence (AI) is being touted as a way to boost lagging productivity growth.

The AI productivity push has some powerful multinational backers: the tech companies who make AI products and the consulting companies who sell AI-related services. It also has interest from governments.

Next week, the federal government will hold a roundtable on economic reform, where AI will be a key part of the agenda.

However, the evidence AI actually enhances productivity is far from clear.

To learn more about how AI is working and being procured in real organisations, we are interviewing senior bureaucrats in the Victorian Public Service. Our research is ongoing, but results from the first 12 participants are showing some shared key concerns.

Our interviewees are bureaucrats who buy, use and administer AI services. They told us increasing productivity through AI requires difficult, complex, and expensive organisational groundwork. The results are hard to measure, and AI use may create new risks and problems for workers.

Introducing AI can be slow and expensive

Public service workers told us introducing AI tools to existing workflows can be slow and expensive. Finding time and resources to research products and retrain staff presents a real challenge.

Not all organisations approach AI the same way. We found well-funded entities can afford to test different AI uses for “proofs of concept”. Smaller ones with fewer resources struggle with the costs of implementing and maintaining AI tools.

In the words of one participant:

It’s like driving a Ferrari on a smaller budget […] Sometimes those solutions aren’t fit for purpose for those smaller operations, but they’re bloody expensive to run, they’re hard to support.

 

‘Data is the hard work’

Making an AI system useful may also involve a lot of groundwork.

Off-the-shelf AI tools such as Copilot and ChatGPT can make some relatively straightforward tasks easier and faster. Extracting information from large sets of documents or images is one example, and transcribing and summarising meetings is another. (Though our findings suggest staff may feel uncomfortable with AI transcription, particularly in internal and confidential situations.)

But more complex use cases, such as call centre chatbots or internal information retrieval tools, involve running an AI model over internal data describing business details and policies. Good results will depend on high-quality, well-structured data, and organisations may be liable for mistakes.

However, few organisations have invested enough in the quality of their data to make commercial AI products work as promised.

Without this foundational work, AI tools won’t perform as advertised. As one person told us, “data is the hard work”.

Privacy and cybersecurity risks are real

Using AI creates complex data flows between an organisation and servers controlled by giant multinational tech companies. Large AI providers promise these data flows comply with laws about, for instance, keeping organisational and personal data in Australia and not using it to train their systems.

However, we found users were cautious about the reliability of these promises. There was also considerable concern about how products could introduce new AI functions without organisations knowing. Using those AI capabilities may create new data flows without the necessary risk assessments or compliance checking.

If organisations handle sensitive information or data that could create safety risks if leaked, vendors and products must be monitored to ensure they comply with existing rules. There are also risks if workers use publicly available AI tools such as ChatGPT, which don’t guarantee confidentiality for users.

How AI is really used

We found AI has increased productivity on “low-skill” tasks such as taking meeting notes and customer service, or work done by junior workers. Here AI can help smooth the outputs of workers who may have poor language skills or are learning new tasks.

But maintaining quality and accountability typically requires human oversight of AI outputs. The workers with less skill and experience, who would benefit most from AI tools, are also the least able to oversee and double-check AI output.

In areas where the stakes and risks are higher, the amount of human oversight necessary may undermine whatever productivity gains are made.

What’s more, we found when jobs become primarily about overseeing an AI system, workers may feel alienated and less satisfied with their experience of work.

We found AI is often used for questionable purposes, too. Workers may use AI to take shortcuts, without understanding the nuances of compliance within organisational guidelines.

Not only are there data security and privacy concerns, but using AI to review and extract information can introduce other ethical risks such as magnifying existing human bias.

In our research, we saw how those risks prompted organisations to use more AI – for enhanced workplace surveillance and forms of workplace control. A recent Victorian government inquiry recognised that these methods may be harmful to workers.

Productivity is tricky to measure

There’s no easy way for an organisation to measure changes in productivity due to AI. We found organisations often rely on feedback from a few skilled workers who are good at using AI, or on claims from vendors.

One interviewee told us:

I’m going to use the word ‘research’ very loosely here, but Microsoft did its own research about the productivity gains organisations can achieve by using Copilot, and I was a little surprised by how high those numbers came back.

Organisations may want AI to facilitate staff cuts or increase throughput.

But these measures don’t consider changes in the quality of products or services delivered to customers. They also don’t capture how the workplace experience changes for remaining workers, or the considerable costs that primarily go to multinational consultancies and tech firms.


The authors thank the research participants for sharing their insights, the researchers who contributed their expertise to the initial analysis of interview transcripts, and the Office of the Victorian Information Commissioner for supporting participant recruitment.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S supports inSTEM 2025: building a more inclusive future in STEM

ADM+S supports inSTEM 2025: building a more inclusive future in STEM

Author ADM+S Centre
Date 11 August 2025

ADM+S was proud to play a key role in organising the 2025 inSTEM conference on 27–28 May. This initiative continues to advance equity, inclusion and career development across STEM fields in Australia.

Held annually, inSTEM is dedicated to supporting marginalised and underrepresented people in STEM, while also equipping allies and leaders with the tools to drive meaningful change. This year’s event, Building Bridges in STEM: Empowering Voices, Cultivating Leaders, offered a welcoming and inclusive environment for attendees to connect, reflect, and build lasting professional networks.

Held over two days, inSTEM brought together researchers from across several ARC Centres of Excellence in a safe, inclusive space to connect, share experiences, and learn from experts on advancing careers in STEM while fostering inclusivity.

“Collaborating with colleagues across nine Centres of Excellence was a fantastic opportunity to strengthen our community of practice by sharing ideas to create an engaging and inclusive program,” said Sally Storey, ADM+S Manager for Research Training and Development.

“I really valued meeting professional staff and researchers beyond my own Centre, gaining deeper insight into the challenges faced by underrepresented groups in STEM. This experience helped me re-evaluate new ways to break down barriers and foster a more equitable and supportive environment.”

Professional staff Sally Storey from ADM+S, Ruth Waterman from COMBs and Mathew Warren from ADM+S

Other organising ARC Centres of Excellence included:

  • The Centre of Excellence for Dark Matter Particle Physics
  • The Centre of Excellence for Engineered Quantum Systems
  • The Centre of Excellence in Synthetic Biology
  • The Centre of Excellence for Gravitational Wave Discovery
  • The Centre of Excellence in Optical Microcombs for Breakthrough Science
  • The Centre of Excellence in Quantum Biotechnology
  • The Centre of Excellence for Transformative Meta-optical Systems 
  • The Centre of Excellence for Electrochemical Transformation of Carbon Dioxide

Designed as both a professional development and networking event, the conference created a space for participants to connect and share experiences.

Topics ranged from inclusive leadership and allyship to navigating structural barriers in academia and industry.

An initiative of the ARC Centres of Excellence, inSTEM continues to grow as a community of learning, support, and action, empowering individuals at all career stages to shape a more inclusive STEM ecosystem.

SEE ALSO

Expert calls for baseline AI regulation to protect creators

Prof Kimberlee Weatherall

Expert calls for baseline AI regulation to protect creators

Author ADM+S Centre
Date 8 August 2025

Debate has erupted over how the government should regulate artificial intelligence, while the Productivity Commission has argued for a light touch to potentially unlock billions of dollars of economic gain.

Appearing on ABC Radio National Breakfast, Prof Kimberlee Weatherall, a University of Sydney Law Professor and Chief Investigator from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and former member of the Commonwealth Government’s Temporary AI Expert Group, warned that current copyright frameworks are leaving both creators and AI developers in legal limbo.

“People’s material is and has been used to train AI,” Professor Weatherall said. 

“Mostly in terms of the big models, most of that training happening overseas, but it has happened without permission and there’s certainly some fairly large databases.”

She pointed to the notorious Books3 database, which includes pirated books and has been used in AI training, as a key example of how creators’ rights are being overlooked.

Weatherall recognised that Australian creators face major barriers when trying to enforce their rights.

“If you try to bring litigation for example here in Australia where we don’t have an exception, you know you’re going to face a defendant who’s based overseas, [and] face challenges in enforcing that judgement.”

But critically, Prof Weatherall said it is also challenging for people who want to train AI responsibly in Australia: copyright exceptions are narrow, and getting all the licences needed could be impossible:

“There is no central system for doing so.”

Prof Kimberlee Weatherall argued that waiting years to review existing laws is too slow given the pace of AI development. She supports reforming consumer protection and privacy law, but also introducing baseline regulations now to ensure AI is developed and used responsibly and safely.

Listen to the full interview on ABC RN Breakfast: How should Australia regulate AI?

SEE ALSO

New research highlights shifting realities of logistical labour in a globalised world

Text: Work, organisation, labour & globalisation (research journal cover)

New research highlights shifting realities of logistical labour in a globalised world

Author ADM+S Centre
Date 8 August 2025

A newly released special issue of the journal Work, Organisation, Labour and Globalisation examines how global logistics is transforming labour across industries and continents.

Titled New Worlds of Logistical Labour: Spaces, places, technologies, workers, the issue is co-edited by two researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) – Affiliate and former Research Fellow Dr Christopher O’Neill and Student Lauren Kelly – alongside Dr Tom Barnes from the Australian Catholic University.

Bringing together ten original articles, the collection explores how global logistics is reshaping the nature of work through radical technological change and new spatial and social dynamics. 

With contributions spanning five continents, the special issue offers a significant intervention into the growing field of logistics and labour studies.

“While reiterating the widespread and ongoing influence of despotic workplace practices in logistics globally, the collection challenges assumptions of geographical universalism that characterise much modern debate about logistical labour,” the editors write.

Key themes across the collection include:

  • The blurring of analytical boundaries that traditionally divide places of work and industry;
  • The complex and reciprocal relations between technology and labour;
  • Fresh perspectives on debates about continuity versus novelty in contemporary work, addressing issues such as labour displacement, labour augmentation, workplace regimes and algorithmic management.

The special issue builds on research conducted as part of the ADM+S-funded project Precarious Warehouse Work and the Automation of Logistical Mobilities, led by Dr Christopher O’Neill in 2023. A workshop held under the project laid the foundation for the issue, which features contributions from nine ADM+S members and affiliates.

Access the open access special issue New Worlds of Logistical Labour: Spaces, places, technologies, workers

SEE ALSO

Dr Thao Phan wins 2025 Max Crawford Medal for research on race, gender & algorithmic culture

Thao Phan

Dr Thao Phan wins 2025 Max Crawford Medal for research on race, gender & algorithmic culture

Author ADM+S Centre
Date 6 August 2025

The Australian Academy of the Humanities has awarded the 2025 Max Crawford Medal to feminist science and technology studies (STS) scholar Dr Thao Phan for her pioneering work on algorithmic culture and the politics of race, racialisation and technology.

The Max Crawford Medal is Australia’s most prestigious award for achievement by early career scholars in the humanities. Presented annually by the Australian Academy of the Humanities, it recognises an early-career scholar whose published work contributes to public understanding of their discipline.

An Affiliate at the ARC Centre of Excellence for Automated Decision-Making and Society at Monash University, Dr Phan has significantly advanced debates on race, gender, and algorithmic culture, uncovering the often-opaque ways platforms like Meta and Google use our data to influence us.

“Platforms use behavioural data to algorithmically target content, but these tools also introduce new ways to classify and discriminate,” said Dr Phan, who is a Lecturer in Sociology at the Australian National University.

“I’m particularly interested in understanding how AI systems and technologies take on practices of racial targeting and classification.”

Dr Phan’s research sits at the intersection of feminist STS, media studies, and cultural studies. Her recent projects examine the racialisation of targeted ads on platforms like Facebook, the ways algorithmic culture shapes public perceptions, and the constructs of gender and race in AI.

“You don’t have to look far to see that platforms are wielding an unprecedented amount of power,” Dr Phan says. “These new tools for seeing and acting on people have profound implications for how we live in the world, significantly shaping our lives, and how we relate to ourselves and others.”

Academy President Professor Stephen Garton AM FAHA FASSA FRSN FRHistS congratulated Dr Phan on the honour.

“Dr Phan is an exemplary scholar whose work offers powerful insights into how algorithmic systems shape our identities, our communities, and our society in opaque and complicated ways. Her work shows the critical importance of humanities scholarship in understanding our increasingly complex world. The Academy is proud to honour her scholarship at a time when this work is more important than ever.”

Dr Phan will be formally presented with the Max Crawford Medal at the 2025 Annual Academy Dinner, in Sydney on Thursday 13 November 2025.

Read the original article published by the Australian Academy of the Humanities.

SEE ALSO

Award-winning tech enhances information retrieval in wearable AI devices

Logos: KDD2025 and Meta CRAG-MM Challenge. Image: Illustration of Blue glasses with text prompt inside.

Award-winning tech enhances information retrieval in wearable AI devices

Author ADM+S Centre
Date 6 August 2025

A team of researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at UNSW has been awarded third place for single-source augmentation in the highly competitive KDD Cup 2025 Meta CRAG-MM Challenge, ranking alongside top institutions such as Peking University, Meituan and NVIDIA.

The CRAG-MM (Comprehensive RAG Benchmark for Multi-modal, Multi-turn) is a pioneering benchmark designed to evaluate next-generation visual assistants powered by Vision Large Language Models (VLLMs), combining image understanding, information retrieval, and dialogue generation.

The Challenge sought to improve how users interact with AI assistants through wearable devices like smart glasses, which capture first-person (egocentric) images to support more intuitive, visual-based search and communication.

Participating teams tackled three tasks focused on advancing these systems: answering user questions, integrating information from multiple sources, and generating seamless multi-turn conversations.

Led by ADM+S Chief Investigator Prof Flora Salim, the team included Breeze Chen and Wilson Wongso (ADM+S students at UNSW), as well as other members of Prof Salim’s team, Xiaoqian Hu and Yue Tan from UNSW.

The challenge focused on building agents that are factually accurate and robust, as current VLLMs tend to hallucinate and provide unreliable answers. The team’s solution was designed specifically to address this weakness by balancing accuracy and truthfulness, both of which were key to the final rankings.

“This challenge really pushes the boundaries of next-generation AI assistants. It highlights the importance of building agents that are not just capable, but critically reliable and accurate,” said Wilson Wongso.

The team’s technical approach combined document retrieval, dual-path answer generation, and multi-stage verification. For each question, it searched a database for relevant information, guided the language model to base its responses on that data, and then re-verified the output to ensure factual accuracy. This significantly reduced hallucinated content while preserving answer quality, an issue that continues to challenge current VLLMs.
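
The team’s pre-print (linked further below) sets out the full framework; as a rough illustration only, here is a minimal Python sketch of the retrieve–generate–verify pattern described above. Every name in it (search, generate, is_supported, the prompt wording, the abstention fallback) is a hypothetical stand-in rather than the team’s actual code, and the dual-path generation step is omitted for brevity.

    # Illustrative sketch of a retrieve-generate-verify loop, the general
    # pattern described above. All components are hypothetical stand-ins;
    # the team's actual framework is detailed in their pre-print.

    def answer_question(question, retriever, llm, verifier):
        # 1. Document retrieval: fetch passages relevant to the question.
        docs = retriever.search(question, top_k=5)
        context = "\n\n".join(doc.text for doc in docs)

        # 2. Grounded generation: instruct the model to answer only from
        #    the retrieved context, which limits hallucination.
        prompt = (
            "Answer using ONLY the context below. If it is insufficient, "
            "reply 'I don't know.'\n\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )
        draft = llm.generate(prompt)

        # 3. Verification: re-check the draft against the same evidence;
        #    unsupported answers fall back to abstention, trading coverage
        #    for the truthfulness that the final rankings rewarded.
        if not verifier.is_supported(draft, context):
            return "I don't know."
        return draft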

“We weren’t expecting to place third, as our solution was only ranked within the top 10 on the preliminary leaderboard. But it proved to be quite robust during the manual evaluations by the judges, ultimately securing us 3rd place.”

Over the past few months, the challenge brought together over 900 participants forming more than 250 teams from around the world, submitting more than 5000 entries across three tasks. 

The Meta CRAG-MM Challenge is part of the KDD 2025 (Knowledge Discovery and Data Mining), a premier international conference in data science and machine learning that brings together researchers and practitioners in data mining, data science, artificial intelligence, and large-scale analytics.

Read more details about the team’s technical solution in the pre-print publication Multi-Stage Verification-Centric Framework for Mitigating Hallucination in Multi-Modal RAG

Prof Flora Salim and Wilson Wongso are also contributing to GenAISim: Simulation in the Loop for Multi-Stakeholder Interactions with Generative Agents, an ADM+S signature project developing and testing a novel suite of generative and data driven simulations to support complex decision-making across sectors.

SEE ALSO

2025 ADM+S Symposium: Automated Social Services – Building Inclusive Futures

Audience attending the 2025 ADM+S Symposium

2025 ADM+S Symposium: Automated Social Services – Building Inclusive Futures

Author ADM+S Centre
Date 5 August 2025

The use of automation in social service delivery took centre stage at the 2025 ADM+S Symposium: Automated Social Services, bringing together researchers, technologists, social service professionals and policymakers, to showcase innovative responses to the challenges of building inclusive, ethical, and responsible automated social services.

Hosted by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at the University of Queensland from 1–4 July, the event welcomed more than 160 in-person attendees from across the Centre and its partner organisations.

Prof Paul Henman, Social Services Focus Area Leader and Chief Investigator at the University of Queensland node of ADM+S, said the Symposium was designed to move beyond critique to highlight and advance partnerships between ADM+S and social service organisations working positively towards inclusive AI and automation.

“From high profile automation disasters, such as Robodebt, we are well aware of the harms that can occur when AI and automation is used in delivering social services to people who often are disadvantaged,” he said.

“The Symposium aimed to bring together partners and social service organisations to forge new approaches to using AI and empowering social service users, professionals and policy makers.”

The four-day symposium offered a comprehensive program of keynote talks, panels, demonstrations, and workshops that examined automation’s expanding role in shaping access and equity within digital social services.

A strong focus on co-design and participatory approaches to technology development ran through the program. In the panel “Co-designing Innovative AI/ADM in Social Services,” speakers showcased collaborative projects between researchers and organisations such as the Australian Red Cross, addressing both the promise and the risks of emerging technologies.

ADM+S HDR Poster Competition 

The Symposium also celebrated emerging research talent, with an HDR Student Poster Competition recognising outstanding work by ADM+S PhD students:

  • Judges’ Award ($1,500 research support funds): Shir Weinbrand (QUT)
    Poster title: What is Your Search Query, Their AI Answer: How Google’s AI Overviews Shape Political Information Exposure?
  • Honourable Mention ($500 research support funds): Caitlin Learmonth (Swinburne)
    Poster title: Navigating Digital Health Systems for Sexual Health: A Case Study of Consensual Non-Monogamy
Prof Julian Thomas presenting the ADM+S HDR Poster competition Judges' Award to Shir Weinbrand (right).

ADM+S Demonstrations

A dedicated session featured digital tools, educational material and workshop outputs from several ADM+S Signature Projects and ADM+S Partner Organisations including:

  • The GenAI Arcade (genai-arcade.net) – an interactive platform designed to make generative AI accessible through hands-on learning.
  • ‘AI and You’ in collaboration with Tactical Tech – including the interactive exhibits Data Detox Bar, Everywhere all the time, Supercharged by AI, What The Future Wants, and the Data Detox Kit. The project aims to build public knowledge, elicit community insights, and support global outreach using Tactical Tech’s accessible, open-source materials, empowering diverse audiences to reflect critically on the role of AI in daily life and envision a more inclusive digital future.
  • Mortar – a tool to assist document navigation and interpretation, co-designed to enable welfare legal practitioners to better support their clients.
  • AI Content Curation Interface Design Patterns – a portfolio of nine discrete interface design pattern concepts, where AI-enabled systems are able to curate content.
  • The Wicked Problem of AI Policy Design – an installation of eight artificial ‘wicked problem’ plants, each representing a different wicked problem of AI policy design. Inspired by the concept of problem trees—a method used to unpack the roots and effects of complex issues—each plant visualises the branching structure of a wicked problem.

A highlight of the event was a panel discussion on AI and automated decision-making in government, chaired by ABC Radio National’s Damien Carrick and recorded for The Law Report. The discussion addressed lessons from past failures such as Robodebt, and asked: What safeguards can ensure automated tools serve the public good? The panel focused on practical approaches to building stronger checks and balances for automation in government services, policymaking, and administration.

Listen to the full panel discussion AI and automated decision making in government on the ABC Law Report, ABC Listen. 

The symposium concluded with a fireside chat featuring the Hon. Bill Shorten, who reflected on his experiences in social policy reform, including the creation of the NDIS and the Robodebt Royal Commission. The discussion expanded into a panel featuring key advocacy leaders from Economic Justice Australia, QCOSS, and the Disability Advocacy Network Australia, offering a powerful call to action for building more inclusive and human-centred digital futures.

SEE ALSO

Dr Ashwin Nagappa awarded prestigious AXA Research Fund Fellowship

Ashwin Nagappa

Dr Ashwin Nagappa awarded prestigious AXA Research Fund Fellowship

Author ADM+S Centre
Date 5 August 2025

Dr Ashwin Nagappa, awarded funding to help make the internet a safer space, says digital communication, once seen as a path to a more democratic and open society, has evolved with mixed results over the past three decades.

“It has expanded access to information, enabled global connectivity, and empowered social movements, but on the flip side, it has also contributed to new forms of harm, such as misinformation, cybercrime, and polarisation,” says Dr Ashwin Nagappa from the QUT School of Communication and a Postdoctoral Research Fellow at the QUT node of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) and Digital Media Research Centre.

Dr Nagappa is working to create policy guidelines for better governance to minimise online harm on current and future platforms, and has been named a recipient of a 2024 AXA Research Fund post-doctoral fellowship for his project Trust in the Fediverse: Community Protocols and Automation to Combat Online Harms.  

He is one of eight early-career researchers from Australia, France, Spain, The Netherlands, Hong Kong, Ireland and the United States awarded up to €140,000 in grants to advance pioneering work on understanding, measuring, and mitigating the effects of misinformation.

“Online communication has grown rapidly, but effective governance frameworks for such communication systems are still evolving and have yet to provide effective protections for users, especially against hate speech and misinformation,” Dr Nagappa said.

“Instead of a free, open and safe space, we have seen a sharp rise in online harm, posing threats to social, political, and financial institutions worldwide. A few dominant platforms, including Facebook, Instagram, and YouTube, as well as some of the messaging apps, now control much of online communication and are at the centre of these issues.

“There have been some regulations introduced to address online harm but achieving global agreement on platform governance and online safety remains a challenge.” 

Dr Nagappa says while there are grassroots developers creating alternative social media platforms, such as Mastodon and Bluesky, to address the shortcomings of mainstream platforms, these generally struggle with sustainable business and governance models.

“Decentralised social networking platforms like Mastodon and Bluesky, which have attracted millions of users, aim to balance free speech, trust, and safety,” Dr Nagappa said.

“Utilising tools including artificial intelligence and technologies outside the mainstream, they offer spaces that amplify marginalised voices and foster discussions often censored or sidelined on mainstream platforms, as well as environments that encourage social bonding over polarisation.

“As their networks and infrastructure widen, they are offering valuable insights into new governance models.”

Dr Nagappa’s AXA-funded project focuses on identifying key features of constructive ‘polycentric governance’ – a model that offers multi-layered rules and mechanisms for social media participation. The project aims to capture essential characteristics such as network size, content moderation practices, harm mitigation strategies, and community participation that contribute to making these digital spaces more prosocial.

“My project explores the optimal conditions under which these models minimise harm while building social connections,” he said.

“The goal is to develop public policy frameworks for understanding and addressing online harm – such as misinformation, hate speech, and polarisation – on these emerging decentralised social media platforms.

“At the same time, I hope my findings will contribute to creating more trustworthy digital communication systems, as well as safer online spaces for all.”

Original article published on QUT News

SEE ALSO

Explore the ‘Signal to Noise’ exhibition co-curated by ADM+S researcher Dr Joel Stern

Explore the ‘Signal to Noise’ exhibition co-curated by ADM+S researcher Dr Joel Stern

Author ADM+S Centre
Date 1 August 2025

The information age is over: explore the age of noise at the new ‘Signal to Noise’ exhibition co-curated by ADM+S researcher Dr Joel Stern.

‘Signal to Noise’ examines how artists engage with disruptions and interference in communication technologies. The exhibition uses a vast range of digital and physical mediums to broadcast its message to audiences, exploring the chaos that noise introduces: from hundreds of pictures flashing across display screens to corrupted files, computer system overloads and failed AI-generated videos.

Drawing on his background in underground and experimental music scenes, co-curator Dr Joel Stern explores the practices of sound and listening. His research examines how technical, social and political sounds shape our world.

“As a researcher interested in how art, culture, and politics are shaped by emerging technologies, Signal to Noise has been a fantastic opportunity to test ideas in dialogue with brilliant colleagues and engaged audiences, both inside and outside academia,” said Dr Stern.

“I’ve long been drawn to noise—its conceptual weight, its sensory impact, its ambiguity. What is noise? What does it do? Who gets to decide what counts as noise, and what counts as signal?”

“I hope this exhibition opens up these questions through the work of remarkable artists. In an era of big data and AI, such questions feel more urgent than ever,” he said.

An exhibit from ‘Signal to Noise.’ (SWIM by Eryk Salvaggio)

Rather than treating noise as something to minimise, the ‘Signal to Noise’ artists reframe it as a creative tool. Noise becomes a hook for audiences willing to see the beauty in chaos, unpredictability, and groundbreaking ideas.

“It was overstimulating, each exhibit was vying for attention, making it hard to focus and to draw my eyes away from the thing in front of me,” says Faolan Whitehead, a visitor to the exhibition.

The ‘Signal to Noise’ exhibition is open to the public at the National Communication Museum (NCM) until Sunday 14 September. ‘Signal to Noise’ is curated by Eryk Salvaggio, Joel Stern and Emily Siddons.

Visit the ‘Signal to Noise’ website to find out more and book tickets

SEE ALSO

‘Are you joking, mate?’ AI doesn’t get sarcasm in non-American varieties of English

Emily Morter/Unsplash

‘Are you joking, mate?’ AI doesn’t get sarcasm in non-American varieties of English

Authors Aditya Joshi
Date 29 July 2025

In 2018, my Australian co-worker asked me, “Hey, how are you going?”. My response – “I am taking a bus” – was met with a smirk. I had recently moved to Australia. Despite studying English for more than 20 years, it took me a while to familiarise myself with the Australian variety of the language.

It turns out large language models powered by artificial intelligence (AI) such as ChatGPT experience a similar problem.

In new research, published in the Findings of the Association for Computational Linguistics 2025, my colleagues and I introduce a new tool for evaluating the ability of different large language models to detect sentiment and sarcasm in three varieties of English: Australian English, Indian English and British English.

The results show there is still a long way to go until the promised benefits of AI are enjoyed by all, no matter the type or variety of language they speak.

Limited English

Large language models are often reported to achieve superlative performance on several standardised sets of tasks known as benchmarks.

The majority of benchmark tests are written in Standard American English. This implies that, while large language models are being aggressively sold by commercial providers, they have predominantly been tested – and trained – only on this one type of English.

This has major consequences.

For example, in a recent survey my colleagues and I found large language models are more likely to classify a text as hateful if it is written in the African-American variety of English. They also often “default” to Standard American English – even if the input is in other varieties of English, such as Irish English and Indian English.

To build on this research, we built BESSTIE.

What is BESSTIE?

BESSTIE is the first-of-its-kind benchmark for sentiment and sarcasm classification of three varieties of English: Australian English, Indian English and British English.

For our purposes, “sentiment” is the characteristic of the emotion: positive (the Aussie “not bad!”) or negative (“I hate the movie”). Sarcasm is defined as a form of verbal irony intended to express contempt or ridicule (“I love being ignored”).

To build BESSTIE, we collected two kinds of data: reviews of places on Google Maps and Reddit posts. We carefully curated the topics and employed language variety predictors – AI models specialised in detecting the language variety of a text. We selected texts predicted to belong to a specific language variety with greater than 95% probability.

The two steps (location filtering and language variety prediction) ensured the data represents the national variety, such as Australian English.
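To make the filtering step concrete, here is a minimal sketch in Python. The predictor interface, variety labels and toy examples are hypothetical illustrations rather than BESSTIE’s actual tooling; only the 95% probability threshold comes from the method described above.

```python
# Minimal sketch of the language-variety filtering step (assumed interface).
VARIETY_THRESHOLD = 0.95  # probability cut-off described in the article

def filter_by_variety(texts, predict_variety, target_variety):
    """Keep texts that the (hypothetical) predictor assigns to
    `target_variety` with probability above the threshold."""
    kept = []
    for text in texts:
        probs = predict_variety(text)  # e.g. {"en-AU": 0.97, "en-IN": 0.02}
        if probs.get(target_variety, 0.0) > VARIETY_THRESHOLD:
            kept.append(text)
    return kept

# Toy predictor for illustration only.
def toy_predictor(text):
    return {"en-AU": 0.97} if "arvo" in text else {"en-AU": 0.40}

sample = ["See you this arvo at the servo", "See you this afternoon"]
print(filter_by_variety(sample, toy_predictor, "en-AU"))
# -> ['See you this arvo at the servo']
```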

We then used BESSTIE to evaluate nine powerful, freely usable large language models, including RoBERTa, mBERT, Mistral, Gemma and Qwen.

Inflated claims

Overall, we found the large language models we tested worked better for Australian English and British English (which are native varieties of English) than the non-native variety of Indian English.

We also found large language models are better at detecting sentiment than they are at sarcasm.

Sarcasm is particularly difficult, both as a linguistic phenomenon and as a challenge for AI. For example, we found the models were able to detect sarcasm in Australian English only 62% of the time. This number was lower for Indian English and British English – about 57%.

These performances are lower than those claimed by the tech companies that develop large language models. For example, GLUE is a leaderboard that tracks how well AI models perform at sentiment classification on American English text.

The highest value is 97.5% for the model Turing ULR v6 and 96.7% for RoBERTa (from our suite of models) – both higher for American English than our observations for Australian, Indian and British English.

National context matters

As more and more people around the world use large language models, researchers and practitioners are waking up to the fact that these tools need to be evaluated for a specific national context.

For example, earlier this year the University of Western Australia along with Google launched a project to improve the efficacy of large language models for Aboriginal English.

Our benchmark will help evaluate future large language model techniques for their ability to detect sentiment and sarcasm. We’re also currently working on a project for large language models in emergency departments of hospitals to help patients with varying proficiencies of English.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Collaboration and knowledge sharing at the 2025 ARC Centres of Excellence Summit

Matt Warren and Sally Storey presenting at the 2025 CoE Summit
ADM+S professional staff, Matt Warren and Sally Storey presenting at the 2025 CoE Summit.

Collaboration and knowledge sharing at the 2025 ARC Centres of Excellence Summit

Author ADM+S Centre
Date 28 July 2025

Leaders and professional staff from ARC Centres of Excellence across Australia gathered in Melbourne from 23–25 July for the annual ARC CoE Summit, a key professional development event designed to build collaboration, capability, and shared vision across the ARC Centre of Excellence Directors and professional staff community.

Bringing together more than 160 staff members from 24 Centres around Australia, the event was facilitated by the Melbourne-based ARC Centres of Excellence.

“The Summit provides a critical opportunity for CoE staff from across Australia to come together to share insights into the innovative and transformational research being undertaken across the diverse group of ARC Centres,” said ADM+S Chief Operating Officer Nick Walsh.

Ute Roessner presenting at the 2025 CoE Summit
At the 2025 CoE Summit, ARC CEO Prof Ute Roessner reflected on the importance of large-scale collaborative projects in advancing knowledge, building research capability and delivering real-world outcomes.

“The annual gathering fosters cross-centre collaboration, serves as a strategic forum for shaping future directions, and highlights the collective impact of the ARC Centres of Excellence on both national and global challenges.”

The Summit featured keynote presentations by Prof Ute Roessner AM FAA, ARC’s Chief Executive Officer, and Prof Gavin Reid, ARC’s Executive Director for Mathematics, Physics, Chemistry and Earth Sciences and Prof of Bioanalytical Chemistry at the University of Melbourne. Professors Roessner and Reid shared insights on how the ARC defines success in a Centre of Excellence, highlighting impact, interdisciplinary collaboration, and long-term legacy.

ADM+S Research Communications Officer, Leah Hawkins presenting at the 2025 CoE Summit.
ADM+S Research Communications Officer, Leah Hawkins presenting at the 2025 CoE Summit (pictured 2nd from left).

The program explored impact and engagement, career development, cross-centre collaboration, and inclusive practice. Professional staff from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) made key contributions to the program:

  • Leah Hawkins (Research Communications Officer) joined a panel on Career Pathways within CoEs, sharing her journey and insights into communications roles in research.
  • Sally Storey (Manager, Research Training and Development) spoke on a panel about collaborating across CoEs, drawing from her work in the joint CoE School for Early Career Researchers with the Digital Child and Life Course centres.
  • Matt Warren (Outreach and Partnerships Officer) presented on the CoE Pride Network, supporting diversity and inclusion across the CoE landscape.

The three-day event included a networking dinner and communities of practice sessions for staff working across Communications, Equity, Diversity and Inclusion (EDI), Training and Professional Development, Finance, and Administration.

View images from the event on the ADM+S Flickr account.

SEE ALSO

Exploring extended reality to support human memory at the AI Winter School

Young female wearing VR Headset

Exploring extended reality to support human memory at the AI Winter School

Author ADM+S Centre
Date 28 July 2025

Breeze (Baiyu) Chen, Masters Student at the UNSW node of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), recently participated in the AI for XR Winter School.

Hosted by the University of South Australia from July 14 to 18, the intensive five-day program brought together a diverse cohort of researchers, students and industry experts to explore the cutting edge of Artificial Intelligence (AI) and eXtended Reality (XR).

The Winter School featured talks and workshops from leading institutions, including Google, Sony Computer Science Laboratories, University of Adelaide, RMIT University, University of Auckland and City University of Hong Kong. The event was designed to foster interdisciplinary dialogue and collaboration across AI, XR, cognitive science and human-computer interaction.

The program welcomed researchers and students from across Australia and beyond, offering a rich series of lectures that explored the intersection of Artificial Intelligence and eXtended Reality. Topics ranged from foundational XR principles to emerging areas like human-AI symbiosis, embodied agents, and cognitive augmentation.

Breeze, who is about to begin his PhD in Computer Science at UNSW, said the experience was inspiring.

“It was particularly thought-provoking for me as I prepare to begin my PhD, sparking a lot of reflection,” Breeze said.

Breeze had the opportunity to lead a design team to prototype a lightweight augmented reality (AR) assistant that supports human memory in spatial environments.

Breeze’s team developed the mobile AR app prototype in Unity, and designed it to explore how XR systems might support human memory in everyday contexts.

“The core idea is to reduce users’ cognitive load by helping them offload medium-to-long-term spatial memories,” Breeze said.

“The current version runs on smartphones, and we see clear potential for extension to wearable headsets.”

Breeze (Baiyu) Chen and participants at the AI for XR Winter School, University of South Australia

The design team included students from psychology, HCI, immersive tech, and computer science:

  • Baiyu (Breeze) Chen, incoming PhD in CS, UNSW
  • Yang Zhao, Master’s student in Immersive Tech, Uni Adelaide
  • Elliot Howard, PhD in Psychology, Uni Adelaide
  • Frederik Kalle, PhD in HCI/CS, UNSW
  • Ishi Jamdagni, PhD in Psychology, Uni Adelaide

Breeze proposed the concept and spearheaded the Unity and AI development of the functional demo, which the team completed over just a few afternoons.

“It was both challenging and rewarding to quickly turn a research-driven idea into a functional prototype,” Breeze said.

The project group brought together early-career researchers from multiple disciplines, such as Computer Science, Immersive Technology and Psychology.

Breeze Chen and Prof Mark Billinghurst
Breeze (Baiyu) Chen and Prof Mark Billinghurst, University of South Australia

Beyond the technical workshops and project work, Breeze had the opportunity to connect with leading researchers in the field, such as Professor Mark Billinghurst and Dr Yun Suen Pai.

“These conversations have really expanded my thinking about where AI and XR research is heading, and I feel more motivated than ever to contribute,” Breeze said.

Breeze received funding from the ADM+S HDR initiative to attend this Winter School.

SEE ALSO

ADM+S team wins global LiveRAG Challenge at SIGIR 2025

Team of researchers receiving check award on stage
ADM+S researchers Oleg Zendel and Damiano Spina being presented with their team's award at the 2025 ACM SIGIR international conference.

ADM+S team wins global LiveRAG Challenge at SIGIR 2025

Author ADM+S Centre
Date 20 July 2025

A team from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at RMIT University has taken out first place in the LiveRAG Challenge at SIGIR 2025, showcasing world-class innovation in Retrieval-Augmented Generation (RAG) technologies.

The competition drew 70 teams from 27 countries, highlighting the global momentum behind RAG technologies – systems that combine web search and language generation to produce accurate, evidence-backed responses.

The winning team, Dr Oleg Zendel, Dr Damiano Spina, Kun Ran, Shuoqi Sun, and Dinh Anh Khoi Nguyen, impressed judges with their high-quality, well-supported answers and innovative approach.

“This opportunity allowed us to apply our research expertise to a real-world challenge, competing alongside leading research groups,” said Dr Zendel. 

The LiveRAG Challenge is hosted by the Technology Innovation Institute (TII) and supported by AI71, Amazon Web Services (AWS), Pinecone, and Hugging Face.

During the live event in May, teams had just two hours to answer 500 never-before-seen questions using the same AI model and dataset. The challenge tested how well teams could retrieve useful information and generate accurate, trustworthy answers under pressure.

The ADM+S team’s system was designed to maximise both “Correctness” (accuracy and contextual fit) and “Faithfulness” (how well answers were grounded in retrieved documents).
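As a rough illustration of what a faithfulness measure looks like, the toy Python sketch below scores an answer by the fraction of its tokens that appear in the retrieved evidence. The challenge’s actual evaluation was more sophisticated; this is only a simplified stand-in.

```python
# Toy faithfulness score: what fraction of the answer's tokens are
# grounded in the retrieved documents? A simplified stand-in for the
# challenge's real evaluation, shown for illustration only.

def _tokens(text: str) -> list[str]:
    return [t.strip(".,!?").lower() for t in text.split()]

def toy_faithfulness(answer: str, retrieved_docs: list[str]) -> float:
    evidence = set(t for doc in retrieved_docs for t in _tokens(doc))
    answer_tokens = _tokens(answer)
    if not answer_tokens:
        return 0.0
    grounded = sum(1 for tok in answer_tokens if tok in evidence)
    return grounded / len(answer_tokens)

docs = ["RAG systems combine retrieval with language generation."]
print(toy_faithfulness("RAG systems combine retrieval with generation", docs))
# -> 1.0: every token of the answer appears in the evidence
```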

“The fact that we were able to put together our solution in such a short amount of time is a great example of the dynamic research environment we have at ADM+S,” said Dr Spina. 

“Our team, consisting of master’s and PhD students, a research fellow, and a faculty member, brought unique perspectives and expertise that contributed substantially to our success.”

After rigorous automated and manual evaluations, the top three teams were announced:

  • 1st Place: RMIT-ADM+S (RMIT University, Australia)
  • 2nd Place: RAGtifier (L3S Research Center, Germany)
  • 3rd Place: UDInfo (University of Delaware, USA)

The winners received cash prizes of $5,000, $3,000, and $2,000 respectively, presented at the LiveRAG Workshop during SIGIR 2025 in Padua, Italy.

Read the full SIGIR 2025 – LiveRAG Challenge Report and the RMIT-ADM+S technical report

SEE ALSO

ADM+S Research Fellow presents participatory AI design at leading Human-Computer Interaction Conference

Awais Hameed Khan (3rd from left) pictured with fellow presenters at the CHI 2025 Conference hosted in Japan.

ADM+S Research Fellow presents participatory AI design at leading Human-Computer Interaction Conference

Author ADM+S Centre
Date 15 July 2025

Dr Awais Hameed Khan, Research Fellow at the University of Queensland node of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), recently presented his work at the Association for Computing Machinery (ACM) CHI conference on Human Factors in Computing Systems, (CHI 2025) in Yokohama, Japan.

As part of the conference program, Awais presented his research at the invitation-only workshop sessions Emerging Practices in Participatory AI Design in Public Sector Innovation and Access InContext: Futuring Accessible Prototyping Tools and Methods, to a cohort of leading academics, industry practitioners, and global thought leaders working on participatory design for AI.

At these workshops, Awais showcased key outcomes from the Critical Capabilities for Inclusive AI Signature project, including:

  • the Trauma-Informed AI Assessment Toolkit, developed in collaboration with researchers at the University of Queensland and Central Queensland University;
  • research on the wicked problem of AI policy design, developed in collaboration with industry partners Google and Canva; and
  • a creative method using unfinished comics to explore AI futures, developed in partnership with ADM+S partners at NYU and collaborators from the University of Toronto and the World Bank.

In the main conference program, Awais presented his paper titled Household Wattch: Exploring Opportunities for Surveillance and Consent through Families’ Household Energy Use Data. The presentation introduced the Bootleg Design Cards Toolkit — previously featured in the ADM+S In conversation podcast episode ‘Watt’s Up With Privacy? Energy Data and Household Surveillance’.

In addition to these presentations, Awais used the opportunity to network with industry practitioners and global academics, sharing ongoing work at the ADM+S. He was able to meet up with both existing collaborators and colleagues, as well as expand his network and explore new collaboration possibilities.

“What really stood out throughout the conference was the overall interest in exploring genuine and meaningful participatory and collaborative design tools, methods, and approaches to design better AI systems,” said Awais.

Since his return, Awais has been exploring avenues to expand the global reach and impact of the work being done at the ADM+S, and is already planning the next phase of research with a collaborator he reconnected with at the conference. Awais was also struck by Japanese culture and its human-centred approach to technology design.

This research visit was supported by funding from the ADM+S ECR Support Scheme and the ADM+S node at University of Queensland.

SEE ALSO

ADM+S scholars share insights on data donation methods in European research visits

Lauren Hayden standing outside old university building in Europe

ADM+S scholars share insights on data donation methods in European research visits

Author ADM+S Centre
Date 14 July 2025

Lauren Hayden, a student member of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) and The University of Queensland PhD candidate, along with Professor Nicholas Carah, ADM+S Associate Investigator, have returned from a research visit in Europe to exchange ideas and insights on data donation research methods. 

Data donation is an emerging research method where participants work with researchers to collect data from the digital platforms they use via screenshots, computational tools or data downloads provided by the platforms themselves (e.g., advertising targeting information or web browsing histories). 

In Europe, data donation methods are flourishing because the European Union’s General Data Protection Regulation (GDPR) entitles users to be able to access a copy of data that digital platforms collect about them. Researchers across the EU have developed novel tools, frameworks and applications for investigating data generated through data donation.

Lauren and Professor Carah engaged with international leaders in data donation at the following institutions:

Utrecht University
During a visit to Utrecht University (UU), they met with Assistant Professor Laura Boeschoten and researcher Thijs Carrière for a discussion about best practice for data donation methods. Here, they presented the Australian Ad Observatory, an ADM+S-funded data donation project, to UU’s Data Quality Group, sharing insights on its methodology and findings.

University of Amsterdam
While in the Netherlands, they connected with Assistant Professor Felicia Loecherbach at the University of Amsterdam, who recently edited a special issue of Computational Communication Research about the potential of participant-centred behavioural traces.

University of Edinburgh
Through the Australian Ad Observatory, data donation has proved to be a valuable method for studying how digital platforms transform the cultural formation of advertising from static to personalised and ephemeral. Lauren and Nic presented these findings in a talk titled “Tuning, sequences, loops: Understanding the algorithmic flow of advertising on digital platforms” at an event at the University of Edinburgh. Hosted by Professor Donald Mackenzie and Dr Addie McGowan, the event shared findings from their recent project on AdTech. The presentations and discussion from the event highlighted key interrelations between the algorithmic ad models developed by digital platforms and the practices of marketing professionals managing digital advertising.

University of Naples Federico II
Observability of digital platforms also remained a key theme in two workshops held at the University of Naples Federico II, where Lauren and Nic were hosted by Professor Adam Arvidsson. Over four days, scholars shared perspectives on the incorporation of branding into everyday life throughout history, from domestic life in the Soviet Union and cultures of repair and innovation in Cuba to consumer activism in Italy and the broader global culture of consumption sustained by exploited labour. Lauren’s presentation, “Stuck in a loop: how platform logics construct brand cultures”, highlighted how participatory methods are useful approaches to examining the algorithmic functions of social media platforms. After the workshop, Lauren and Nic discussed the potential of data donation methods for critical research about brands and platforms with Associate Professor Massimo Airoldi, who presented early findings from a donation study on brands and YouTube.

“These research visits to key institutions provided unparalleled opportunities to learn from scholars leading the way in data donation, discuss challenges, and bring new ideas for the application of data donation to the Australian context,” said Lauren.

“In-person discussion opens more room for dialogue and connection than online meetings and emails. As digital platforms remain opaque to users and researchers, data donation opens possibilities for understanding the influence of algorithmic systems and the power to hold them to account.”

Lauren’s research visits were supported by ADM+S and The University of Queensland School of Communication and Arts.


SEE ALSO

ARC Future Fellowships awarded to ADM+S researchers

Pictured L-R Dr Luke Munn and Dr Jathan Sadowski

ARC Future Fellowships awarded to ADM+S researchers

Author ADM+S Centre
Date 11 July 2025

The Australian Research Council (ARC) has announced $114.6 million in funding for 100 outstanding researchers through the 2025 Future Fellowships scheme, including Dr Luke Munn and Dr Jathan Sadowski from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S).

Future Fellowships reflect the Government’s commitment to excellence in research by supporting high quality research in areas of national and international benefit, including in national research priorities – from building a secure and resilient nation to transitioning to a net zero future.  

ADM+S researchers Dr Luke Munn (University of Queensland) and Dr Jathan Sadowski (Monash University) were among those awarded for projects that respond to the pressing challenges posed by climate change and its intersection with technology, labour, and risk governance.

Dr Luke Munn
This project aims to investigate the growing conflict between digitally-coordinated labour seen as the future of work and rising heat from climate change, which deeply impacts it but is not accounted for. This project expects to generate new knowledge about pressures on Australian workers by collecting worker stories and rethinking work using an interdisciplinary lens from media, labour, and environmental studies. Expected outcomes include a map of key climate-tech issues and a climate-aware blueprint for better work. This should provide significant benefits: integrating climate into work models and systems will support worker well-being and foster a future-ready economy in our hotter and more uncertain world.

Dr Jathan Sadowski
This project aims to investigate the crisis of uninsurability as many Australians are unable to access or afford insurance due to severe climate catastrophes and breakdowns in risk governance. This project expects to generate new knowledge about the complex conditions of climate vulnerability through an interdisciplinary approach that synthesises ethnographic studies of technical risk models, reinsurance practices, and communities on the frontlines of crisis. Expected outcomes include a strong empirical basis for developing techno-political theories of risk governance and responses to climate crisis. This should provide significant benefits, such as innovative policies that contribute to climate justice and advance Australia’s resilience.

ARC Chief Executive Officer, Professor Ute Roessner, said the ARC Future Fellowships scheme plays a vital role in building Australia’s research and innovation pipeline.   

“By investing in research capability, we enable the development of new knowledge and innovations that can translate into real-world impact, from improving education and environmental management to driving economic and social benefits,” Professor Roessner said.

“The research funded in this round of Future Fellowships showcases the breadth of outstanding work being undertaken by talented researchers to address national and international priorities.”

Read full details of the 2025 ARC Future Fellowships funding outcomes.

Find out more about the ARC’s Future Fellowships scheme.

SEE ALSO

ADM+S Alumnus Louisa Bartolo recognised in AoIR’s 2025 Annual Dissertation Award

Louisa Bartolo, AoIR’s 2025 Annual Dissertation Award Honourable Mention

ADM+S Alumnus Louisa Bartolo recognised in AoIR’s 2025 Annual Dissertation Award

Author ADM+S Centre
Date 9 July 2025

Dr Louisa Bartolo, alumnus of the ARC Centre of Excellence for Automated Decision-Making and Society at QUT, has received an Honourable Mention in the Association of Internet Researchers (AoIR) 2025 Annual Dissertation Award.

Dr Bartolo’s dissertation, entitled ‘Algorithmic Recommendation as Repair Work: Towards a More Just Distribution of Attention on Cultural and Entertainment Platforms’ was recognised by the AoIR awards as an exemplary, original, empirical study of algorithmic recommendation systems. 

The study develops an empirically grounded conceptual approach to algorithmic recommendation and presents clear recommendations for how it could be done otherwise. The dissertation clearly articulates findings from an empirical study of recommendation systems across two platforms.

Dr Bartolo’s research offers alternative designs for recommender systems to work towards more reparative ends. 

In the 2025 Annual Dissertation Award announcement, the AoIR stated that the research “makes strong methodological contributions, as well as an original contribution to the field, and is highly relevant to debates about platform politics and algorithmic culture”.

Reflecting on the recognition, Dr Bartolo posted: “I am extremely grateful and proud to have received the Honorable Mention in this year’s Association of Internet Researchers thesis award.

“A PhD takes a lot out of you, but a bit over a year since submission I am so aware of how much it has given me back.” (Dr Louisa Bartolo’s LinkedIn post: https://bit.ly/46yLDLY)

Each year, the AoIR Dissertation Award honours the best in doctoral research that advances the field of internet studies. The award includes a cash prize and recognises scholarly excellence, originality, and contribution to critical issues in digital culture.

Dr Bartolo will be formally recognised and presented with the award during the Association’s General Meeting at the AoIR 2025 conference this October.

SEE ALSO

New US directive for visa applicants turns social media feeds into political documents

U.S Flag and plane
Angel DiBiblio/Shutterstock

New US directive for visa applicants turns social media feeds into political documents

Authors Samuel Cornell, Daniel Angus, T.J. Thomson
Date 7 July 2025

In recent weeks, the US State Department implemented a policy requiring all university, technical training, or exchange program visa applicants to disclose their social media handles used over the past five years. The policy also requires these applicants to set their profiles to public.

This move is an example of governments treating a person’s digital persona as their political identity. In doing so, they risk punishing lawful expression, targeting minority voices, and redefining who gets to cross borders based on how they behave online.

Anyone seeking one of these visas will have their social media searched for “indications of hostility” towards the citizens, culture or founding principles of the United States. This enhanced vetting is supposed to ensure the US does not admit anyone who may be deemed a threat.

However, this policy changes how a person’s online presence is evaluated in visa applications and raises many ethical concerns. These include concerns around privacy, freedom of expression, and the politicisation of digital identities.

Digital profiling

The Trump administration has previously taken aim at higher education with the goal of changing the ideological slant of these institutions, including making changes to international student enrolment and the role of foreign nationals in US research institutions.

Digital rights advocates have expressed concerns this new requirement could lead to self-censorship and hinder freedom of expression.

It is unknown exactly which specific online actions will trigger a visa refusal, as the US government hasn’t disclosed detailed criteria. However, guidance to consular officers indicates that digital behaviour suggesting “hostility” toward the US or its values may be grounds for concern.

Internal advice suggests officers are trained to look for social media content that may reflect extremist views, criminal associations or ideological opposition to the US.

Political ‘passport’

In a sense, this policy turns a visa applicant’s online presence into a kind of political passport. It allows for scrutiny not just of past behaviour but also of ideological views.

Digital identity is not just a technical construct. It carries legal, philosophical and historical weight. It can influence access to rights, recognition and legitimacy, both online and offline.

Once this identity is interpreted by state institutions, it can become a tool for control shaped by institutional whims. Governments justify digital surveillance as a way to spot threats. But research consistently shows it leads to overreach.

A recent report found that US social media monitoring programs have frequently flagged activists and religious minorities. It also found the programs lacked transparency and oversight.

Digital freedom nonprofit Electronic Frontier Foundation has warned these tools risk punishing people for lawful expression or for simply being connected to certain communities.

The US is not alone in integrating digital surveillance into border security. China has implemented social credit systems. And the United Kingdom is exploring digital ID systems for immigration control. There are even calls for Australia to use artificial intelligence to facilitate digital border checks.

The United Nations has raised concerns about the global trend toward digital vetting at borders, especially when used without judicial oversight or transparency.

A free speech issue

These new checks could have a chilling effect on self-expression. This is particularly true for those with views that don’t align with governments or who are from minority backgrounds.

We’ve seen this previously. After whistleblower Edward Snowden revealed widespread use of data gathering by US intelligence agencies, people stopped visiting politically sensitive Wikipedia articles. Not because they were told to, but because they feared being watched.

This policy won’t just affect visa applicants. It could shift how people use social media in general. That’s because there is no clear rulebook for what counts as “acceptable”. And when no one knows where the line is, people self-censor more than is necessary.

What can you do?

If you think you might apply for an affected visa in the future, here are some tips.

1. Audit your social media history now. Old posts, “likes” or follows from years ago may be reviewed and judged out of context. Review your public posts on platforms such as Instagram, Facebook and X. Delete or archive anything that might be misconstrued.

2. Separate personal and professional online identities. Consider keeping distinct accounts for private and public engagement. Use pseudonyms for creative or informal content. Immigration authorities are far less likely to misinterpret context when your online presence is clearly tied to your educational or professional goals.

3. Understand your online visibility and history. Even if you have privacy settings enabled, tagged content, public “likes”, comments and follows can still be seen. Algorithms expose content based on associations, not just what you post. Don’t assume your visibility is limited to your followers.

4. Keep records of any deleted or misinterpreted posts. If you think something might be questioned or if you delete posts ahead of an application, keep a backup. Consular officials may request clarification or evidence. It’s better to be prepared than to be caught off-guard without explanation.

Your social media is no longer a personal space. It may be used by governments to determine whether you fit in.

Samuel Cornell, PhD Candidate in Public Health & Community Medicine, School of Population Health, UNSW Sydney; Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology, and T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Understanding the ‘Slopocene’: how the failures of AI can reveal its inner workings

Yutong Liu & Kingston School of Art/Better Images of AI

Understanding the ‘Slopocene’: how the failures of AI can reveal its inner workings

Author Daniel Binns
Date 1 July 2025

Some say it’s em dashes, dodgy apostrophes, or too many emoji. Others suggest that maybe the word “delve” is a chatbot’s calling card. It’s no longer the sight of morphed bodies or too many fingers, but it might be something just a little off in the background. Or video content that feels a little too real.

The markers of AI-generated media are becoming harder to spot as technology companies work to iron out the kinks in their generative artificial intelligence (AI) models.

But what if instead of trying to detect and avoid these glitches, we deliberately encouraged them instead? The flaws, failures and unexpected outputs of AI systems can reveal more about how these technologies actually work than the polished, successful outputs they produce.

When AI hallucinates, contradicts itself, or produces something beautifully broken, it reveals its training biases, decision-making processes, and the gaps between how it appears to “think” and how it actually processes information.

In my work as a researcher and educator, I’ve found that deliberately “breaking” AI – pushing it beyond its intended functions through creative misuse – offers a form of AI literacy. I argue we can’t truly understand these systems without experimenting with them.

Welcome to the Slopocene

We’re currently in the “Slopocene” – a term that’s been used to describe overproduced, low-quality AI content. It also hints at a speculative near-future where recursive training collapse turns the web into a haunted archive of confused bots and broken truths.

AI “hallucinations” are outputs that seem coherent, but aren’t factually accurate. Andrej Karpathy, OpenAI co-founder and former Tesla AI director, argues large language models (LLMs) hallucinate all the time, and it’s only when they go into “deemed factually incorrect territory” that we label it a “hallucination”. It looks like a bug, but it’s just the LLM doing what it always does.

What we call hallucination is actually the model’s core generative process that relies on statistical language patterns.

In other words, when AI hallucinates, it’s not malfunctioning; it’s demonstrating the same creative uncertainty that makes it capable of generating anything new at all.

This reframing is crucial for understanding the Slopocene. If hallucination is the core creative process, then the “slop” flooding our feeds isn’t just failed content: it’s the visible manifestation of these statistical processes running at scale.
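The point can be made concrete with a toy sampler. The sketch below is illustrative only: the vocabulary and scores are invented, and real models operate over vast vocabularies, but the mechanism of sampling the next token from a probability distribution is the same one that produces both accurate answers and hallucinations.

```python
import math
import random

# Toy next-token sampler. The vocabulary and scores are invented for
# illustration; the point is that "correct" and "hallucinated" outputs
# come from the same statistical sampling process.

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Softmax over the scores, then sample one token. Higher temperature
    flattens the distribution, making unlikely continuations more common."""
    scaled = {tok: s / temperature for tok, s in logits.items()}
    max_s = max(scaled.values())
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    r, acc = random.random(), 0.0
    for tok, e in exps.items():
        acc += e / total
        if r <= acc:
            return tok
    return tok  # fallback for floating-point edge cases

# "The capital of Australia is ..."
logits = {"Canberra": 3.0, "Sydney": 2.2, "Melbourne": 1.5}
print(sample_next_token(logits, temperature=0.7))  # usually "Canberra"
print(sample_next_token(logits, temperature=2.0))  # wrong answers far more often
```

Nothing flags the wrong answer as a malfunction: it is simply a lower-probability draw from the same distribution.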

Pushing a chatbot to its limits

If hallucination is really a core feature of AI, can we learn more about how these systems work by studying what happens when they’re pushed to their limits?

With this in mind, I decided to “break” Anthropic’s proprietary model Claude 3.7 Sonnet by prompting it to resist its training: suppress coherence and speak only in fragments.

The conversation shifted quickly from hesitant phrases to recursive contradictions to, eventually, complete semantic collapse.

Screenshot of an AI text interface showing an unusual output. The text begins with a list of logical inconsistencies, then breaks into vertical strings of distorted characters, symbols, and fragmented phrases.
A language model in collapse. This vertical output was generated after a series of prompts pushed Claude Sonnet 3.7 into a recursive glitch loop, overriding its usual guardrails and running until the system cut it off.
Screenshot by author.

Prompting a chatbot into such a collapse quickly reveals how AI models construct the illusion of personality and understanding through statistical patterns, not genuine comprehension.

Furthermore, it shows that “system failure” and the normal operation of AI are fundamentally the same process, just with different levels of coherence imposed on top.

‘Rewilding’ AI media

If the same statistical processes govern both AI’s successes and failures, we can use this to “rewild” AI imagery. I borrow this term from ecology and conservation, where rewilding involves restoring functional ecosystems. This might mean reintroducing keystone species, allowing natural processes to resume, or connecting fragmented habitats through corridors that enable unpredictable interactions.

Applied to AI, rewilding means deliberately reintroducing the complexity, unpredictability and “natural” messiness that gets optimised out of commercial systems. Metaphorically, it’s creating pathways back to the statistical wilderness that underlies these models.

Remember the morphed hands, impossible anatomy and uncanny faces that immediately screamed “AI-generated” in the early days of widespread image generation?

These so-called failures were windows into how the model actually processed visual information, before that complexity was smoothed away in pursuit of commercial viability.

AI image of two women under red umbrellas. One wears bold clothing and a turquoise hat. A red speech bubble reads It's urgent that I see your project to assess.
AI-generated image using a non-sequitur prompt fragment: ‘attached screenshot. It’s urgent that I see your project to assess’. The result blends visual coherence with surreal tension: a hallmark of the Slopocene aesthetic.
AI-generated with Leonardo Phoenix 1.0, prompt fragment by author.

You can try AI rewilding yourself with any online image generator.

Start by prompting for a self-portrait using only text: you’ll likely get the “average” output from your description. Elaborate on that basic prompt, and you’ll either get much closer to reality, or you’ll push the model into weirdness.

Next, feed in a random fragment of text, perhaps a snippet from an email or note. What does the output try to show? What words has it latched onto? Finally, try symbols only: punctuation, ASCII, unicode. What does the model hallucinate into view?

The output – weird, uncanny, perhaps surreal – can help reveal the hidden associations between text and visuals that are embedded within the models.

Insight through misuse

Creative AI misuse offers three concrete benefits.

First, it reveals bias and limitations in ways normal usage masks: you can uncover what a model “sees” when it can’t rely on conventional logic.

Second, it teaches us about AI decision-making by forcing models to show their work when they’re confused.

Third, it builds critical AI literacy by demystifying these systems through hands-on experimentation. Critical AI literacy provides methods for diagnostic experimentation, such as testing – and often misusing – AI to understand its statistical patterns and decision-making processes.

These skills become more urgent as AI systems grow more sophisticated and ubiquitous. They’re being integrated in everything from search to social media to creative software.

When someone generates an image, writes with AI assistance or relies on algorithmic recommendations, they’re entering a collaborative relationship with a system that has particular biases, capabilities and blind spots.

Rather than mindlessly adopting or reflexively rejecting these tools, we can develop critical AI literacy by exploring the Slopocene and witnessing what happens when AI tools “break”.

This isn’t about becoming more efficient AI users. It’s about maintaining agency in relationships with systems designed to be persuasive, predictive and opaque.

Daniel Binns, Senior Lecturer, Media & Communication, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Work experience student gains insights on research management at ADM+S

ADM+S Work Experience Student at museum

Work experience student gains insights on research management at ADM+S

Author ADM+S Centre
Date 30 June 2025

In June 2025, the professional staff team at ADM+S welcomed Faolan Whitehead, a year 10 student from Greensborough College, for a week-long work experience placement. 

Over the course of the week, Faolan had the opportunity to collaborate with the ADM+S team across a range of disciplines, gaining hands-on experience in research management, media production, communications, governance and research training.

“Faolan was immersed in the Centre’s operations,” said Nicholas Walsh, ADM+S Chief Operating Officer.

“He worked alongside different team members to gain insight into the wide range of careers available in the world of research management.”

Faolan’s time at ADM+S offered him a unique view of the intersection of research and innovation. He contributed to various projects while learning about the systems that drive the Centre’s work.

“The placement provided an excellent opportunity for us to share our research with a highly capable student possessing a strong interest in artificial intelligence, science, and tech cultures,” said Walsh. 

Faolan said the placement brought him new perspectives on the world of research.

“This work experience taught me new skills I didn’t know I would enjoy,” Faolan said.

“When I was challenged there was always someone there to help me; everyone on the team was welcoming and respectful, and personally I would enjoy a job working there.”

Walsh added, “Faolan brought enthusiasm, curiosity, and professionalism to every task, and it was a pleasure having him contribute to the team.”    

This placement not only provided Faolan with a deeper understanding of the world of research, but also allowed ADM+S to showcase the dynamic career paths available within the field, inspiring the next generation of research professionals.

SEE ALSO

Axel Bruns elected Fellow of the International Communication Association

Axel Bruns elected Fellow of the International Communication Association

Author ADM+S Centre
Date 26 June 2025

Distinguished media and communication scholar Professor Axel Bruns has been elected a Fellow of the International Communication Association (ICA), one of the highest honours awarded by the global association of communication researchers.

Fellow status in the International Communication Association (ICA) is a recognition of distinguished scholarly contributions to the broad field of communication. 

“I am delighted to be awarded this honour from one of the leading scholarly communities in our field. This is a recognition not only of my own work but of the contributions of the many colleagues at ADM+S, DMRC, and elsewhere who have collaborated with me over the years. I would not be here without their support, and I look forward to continuing this work,” Prof Bruns said.

Election to ICA Fellow is based primarily on a documented record of scholarly achievement, with additional consideration given to service within ICA and contributions to wider sectors such as education, policy, and public engagement.

Fellows are nominated by their peers and must receive majority support from existing Fellows and final approval from the ICA Board of Directors.

A long-standing and active member of the ICA community, Professor Bruns’s work continues to inform critical debates about social media, communications, technology, and society. His election as ICA Fellow reflects the impact of his research and ongoing commitment to the field.

SEE ALSO

ADM+S researcher receives prestigious Chinese Government Award for overseas scholars

ADM+S researcher receives prestigious Chinese Government Award for overseas scholars

Author ADM+S Centre
Date 20 June 2025

Kaixin Ji, a PhD student at RMIT University and researcher with the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), has been awarded the Chinese Government Award for Outstanding Self-funded International Student Scholarship. This is the highest honour granted by the Chinese government to doctoral students studying overseas.

Awarded annually to just 650 recipients worldwide, this award recognises exceptional academic achievement and research potential. With over half a million Chinese students studying abroad each year, the award is highly competitive and regarded as one of the most prestigious honours available to young scholars.

“I’m deeply honored to receive this award. It not only affirms the dedication I’ve invested in my research and my 11 years of studying abroad, but also strengthens my belief in the importance of contributing to global scholarship as a Chinese student overseas,” said Kaixin Ji.

“It motivates me to continue exploring meaningful questions, and to carry forward the spirit of academic excellence and cross-cultural collaboration.”

Kaixin is a scholarship recipient with the ADM+S Centre, which offers a select number of PhD scholarships each year across its national network to support and develop the next generation of researchers. Her doctoral research titled “Measuring and quantifying bias, fairness and engagement for information access systems” contributes to the Centre’s mission to address the social, technical, and ethical dimensions of AI and automated technologies.

Established in 2003 by the China Scholarship Council under the Ministry of Education of the People’s Republic of China, the award is open to self-financed doctoral and postdoctoral researchers who have demonstrated outstanding academic achievements or innovative research contributions during their time overseas.

The official award ceremony will take place in September at the local Chinese Consulate, coinciding with the Chinese Moon Festival and National Day Gala for Chinese Students.  

SEE ALSO

ADM+S researchers named finalists in international AI retrieval challenge

SIGR 2025 LiveRAG Challenge

ADM+S researchers named finalists in international AI retrieval challenge

Author ADM+S Centre
Date 17 June 2025

A team of researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at RMIT University have been selected as one of the top four finalists in an international AI competition hosted by the Technology Innovation Institute (TII) and co-located with the ACM SIGIR international conference.

The team, comprising Dr Oleg Zendel, Dr Damiano Spina, Kun Ran, and Shuoqi Sun from ADM+S and colleague Dinh Anh Khoi Nguyen (RMIT), earned first place in the Asia-Pacific session of the challenge, and is now among the top four contenders internationally.

“This opportunity allowed us to apply our research expertise to a real-world challenge, competing alongside leading research groups,” said Dr Oleg Zendel.

“We’re proud to have achieved the top result in the Asia-Pacific session as part of this effort.”

The SIGIR Live Retrieval-Augmented Generation (LiveRAG) Challenge brings together leading international research teams from both academia and industry worldwide. Organised by the TII and supported by AI71, Amazon Web Services (AWS), Pinecone, and Hugging Face, the challenge represents a major global effort to advance cutting-edge research in retrieval-augmented generation—one of the most dynamic frontiers in artificial intelligence today.

Their submission featured a system designed to maximise both ‘Relevance’, the accuracy and contextual fit of an answer, and ‘Faithfulness’, the degree to which the response is grounded in retrieved documents.

Team members Dinh Anh Khoi Nguyen, Oleg Zendel, Shuoqi Sun and Damiano Spina (L-R) working on their submission for the LiveRAG challenge (Kun Ran not pictured).

“The fact that we were able to put together our solution in such a short amount of time is a great example of the dynamic research environment we have at our information retrieval group and at ADM+S,” said Dr Damiano Spina.

“Our team, consisting of master’s and PhD students, a research fellow, and a faculty member, brings unique perspectives and expertise that contributed substantially to our success.”

Their mission was to develop AI systems capable of delivering accurate, evidence-backed responses to 500 questions within a two-hour timeframe. All participants were required to use the same AI model: Falcon 10B, an open-source large language model developed by TII.

The team was selected from a global pool of 73 applicants, with 40 teams invited to compete.

Preparation officially began in March after the team was selected, with most of the development taking place over a concentrated six-week period.

The challenge reflects growing global interest in Retrieval-Augmented Generation systems, which power AI-driven search capabilities like Google’s AI Overviews. Unlike conversational chatbots, these systems are built for single-turn question answering, combining search and natural language generation to produce high-quality, source-backed answers.

Winners will be announced live at the SIGIR 2025 LiveRAG workshop in Padua (Italy) on 17 July.

SEE ALSO

ADM+S researchers awarded 2025 Google Research Scholar Award in Human–Computer Interaction

Danula Hettiachchi and Kacper Sokol

ADM+S researchers awarded 2025 Google Research Scholar Award in Human–Computer Interaction

Author ADM+S Centre
Date 17 June 2025

Dr Danula Hettiachchi (RMIT University) and Dr Kacper Sokol (ETH Zurich & USI) from the ARC Centre of Excellence for Automated Decision-Making and Society have been awarded the 2025 Google Research Scholar Award in Human–Computer Interaction for their research project Misunderstanding of AI explanations through follow-up interactions and multi-modal explainers.

This highly competitive award recognises early-career academics doing exceptional research in computer science and related fields. 

Drawing on the latest advancements in AI – including generative techniques such as large language models – their research proposes an explanation pipeline that can refine users’ information needs and dynamically generate tailored, interactive, multi-modal explanations for AI decisions.

These explainers will be tailored to address the specific information needs that users may have after encountering an initial AI explanation — particularly where that explanation might seem clear on the surface but actually lacks important details, contains ambiguity, or leads users to incorrect assumptions—without users realising it.

The proposed work is informed by their recent research, co-authored with Yueqing Xuan, Edward Small and Mark Sanderson: Comprehension is a double-edged sword: Over-interpreting unspecified information in intelligible machine learning explanations. The paper reveals a critical challenge in AI communication: users often misinterpret even the most basic explanations, drawing inaccurate conclusions or inferring information that wasn’t actually provided.

The award comes with US$60,000 in funding to support the advancement of Danula and Kacper’s research agenda, as well as mentorship from researchers at Google.

The Google Research Scholar Program in Human-Computer Interaction (HCI) supports academic research advancing innovative, human-centered interactive systems. 

This recognition places awardees at the forefront of research in responsible AI and underscores the global significance of their work in shaping how people and machines interact.

SEE ALSO

Australian drone documentary wins STEAM Award at New Media Film Festival in LA

Text: AI in the Street Drone Observatory. A short film about drone delivery trials in Logan, Australia. Directed by Thao Phan and Jeni Lee. Logos: ADM+S, Emerging Technologies Research Lab, Monash University and The University of Warwick.

Australian drone documentary wins STEAM Award at New Media Film Festival in LA

Author ADM+S Centre
Date 13 June 2025

A groundbreaking short film exploring the lived experience of drone technology in suburban Australia has taken home the prestigious STEAM Award at the New Media Film Festival in Los Angeles.

AI in the Street: Drone Observatory documents the rise of Logan, Queensland—now dubbed the “drone delivery capital of the world”—as a live testing ground for autonomous drone delivery services. Since 2019, Logan residents have participated in delivery trials run by Wing Aviation, a subsidiary of Alphabet Inc., where drones drop off everything from coffee to groceries in as little as three minutes.

Created by Dr Thao Phan and filmmaker Jeni Lee from the ARC Centre of Excellence for Automated Decision-Making and Society at Monash University, the film moves beyond the technology to explore the lived experience of those beneath it. Through candid interviews with local residents and small business owners, the film surfaces the complex mix of convenience, unease, and ethical questions that come with being at the frontline of big tech experimentation.

“We were just guinea pigs,” says Maz, a Logan business owner featured in the film. 

“Let’s test and see if this can work. Let’s prove to the bean counters back in the US that we can make this work. 

“And then they did, and then for all of sudden to say we’re not going to deal with you any more we’re going to deal with Coles and DoorDash and whoever, it’s quite disappointing. They’ll say today we’re doing this, tomorrow we’re doing that. We’re usually the last people to know.”

The project reframes the sky as a new kind of public street—one managed not by local councils, but by global tech giants. It invites viewers to consider how artificial intelligence and automation are reshaping the sensory, social and ethical dimensions of everyday life.

The Drone Observatory is part of a broader research collaboration funded by the Warwick BRAID Collaboration Agreement, which includes Monash University, King’s College London, the University of Edinburgh, the University of Cambridge, and Careful Industries.

The film has been officially selected for screenings at Doc.London, Doc.Sydney, Ethnografilm Paris (2025), the ORION International Film Festival, and the Spotlight on Academics Film Festival.

In addition to the STEAM Award at the New Media Film Festival in Los Angeles, the film has been named a Finalist for Best Ethnographic Film at ORION IFF, and nominated for the prestigious Tarkovski Grant at the Doc.Sydney Documentary Film Festival.

View the AI in the Street: Drone Observatory documentary.

SEE ALSO

AI overviews have transformed Google search. Here’s how they work – and how to opt out

Woman using Google search on an iPhone
Jittawit Tachakanjanapong/Canva

AI overviews have transformed Google search. Here’s how they work – and how to opt out

Authors T.J. Thomson, Ashwin Nagappa, Shir Weinbrand
Date 13 June 2025

People turn to the internet to run billions of search queries each year. These range from keeping tabs on world events and celebrities to learning new words and getting DIY help.

One of the most popular questions Australians recently asked was: “How to inspect a used car?”.

If you asked Google this at the beginning of 2024, you would have been served a list of individual search results and the order would have depended on several factors. If you asked the same question at the end of the year, the experience would be completely different.

That’s because Google, which controls about 94% of the Australian search engine market, introduced “AI Overviews” to Australia in October 2024. These AI-generated search result summaries have revolutionised how people search for and find information. They also have significant impacts on the quality of the results.

How do these AI search summaries work, though? Are they reliable? And is there a way to opt out?

Synthesising the internet

Legacy search engines work by evaluating dozens of different criteria and trying to show you the results that they think best match your search terms.

They take into account the content itself, including how unique, current and comprehensive it is, as well as how it’s structured and organised.

They also consider relationships between the content and other parts of the web. If trusted sources link to content, that can positively affect its placement in search results.

They try to infer the searcher’s intent – whether they’re trying to buy something, learn something new, or solve a practical problem. They also consider technical aspects such as how fast the content loads and whether the page is secure.

All of this adds up to an invisible score each webpage gets that affects its visibility in search results. But AI is changing all this.
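
To make that invisible score concrete, here is a toy sketch in Python. The signal names and weights are invented for this example – real engines combine hundreds of undisclosed signals – but the shape of the computation is the same: score every candidate page, then sort.

  # Toy ranking sketch; signal names and weights are illustrative only.
  WEIGHTS = {"relevance": 0.4, "inbound_links": 0.3,
             "freshness": 0.2, "load_speed": 0.1}

  def page_score(page):
      # The "invisible score": a weighted sum of the page's signals.
      return sum(WEIGHTS[s] * page[s] for s in WEIGHTS)

  pages = [
      {"url": "guide.example", "relevance": 0.9, "inbound_links": 0.7,
       "freshness": 0.4, "load_speed": 0.8},
      {"url": "blog.example", "relevance": 0.6, "inbound_links": 0.5,
       "freshness": 0.9, "load_speed": 0.9},
  ]

  # A results page is, in effect, this list printed best-score-first.
  for page in sorted(pages, key=page_score, reverse=True):
      print(page["url"], round(page_score(page), 2))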

Google is the only search engine that prominently displays AI summaries on its main results page. Bing and DuckDuckGo still use traditional search result layouts, offering AI summaries only through companion apps such as Copilot and Duck.ai.

Instead of directing users to one specific webpage, generative AI-powered search looks across webpages and sources to try to synthesise what they say. It then tries to summarise the results in a short, conversational and easy-to-understand way.

In theory, this can result in richer, more comprehensive, and potentially more unique answers. But AI doesn’t always get it right.

An AI overview of a search result.
Google is the only search engine that prominently displays AI summaries on its main results page.
DIA TV/Shutterstock

How reliable are AI searches?

Early examples of Google’s AI-powered search from 2024 suggested users eat “at least one small rock per day” – and that they could use non-toxic glue to help cheese stick to pizza.

One issue is that machines are poorly equipped to detect satire or parody and can use these materials to respond in place of fact-based evidence.

Research suggests the rate of so-called “hallucinations” – instances of machines making up answers – is getting worse even as the models driving them are getting more sophisticated.

Machines can’t actually determine what’s true and false. They cannot grasp the nuances of idioms and colloquial language and can only make predictions based on fancy maths. But these predictions don’t always end up being correct, which is an issue – especially for sensitive medical or health questions or when seeking financial advice.

Rather than just present a summary, Google’s more recent AI overviews have also started including links to sources for key aspects of the answer. This can help users gauge the quality of the overall answer and see where AI might be getting its information from. But evidence suggests sometimes AI search engines cite sources that don’t include the information they claim they do.

What are the other impacts of AI search?

AI search summaries are transforming the way information is produced and discovered, reshaping the search engine ecosystem we’ve grown accustomed to over two decades.

They are changing how information-seekers formulate search queries – moving from keywords or phrases to simple questions, such as those we use in everyday conversation.

For content providers, AI summaries introduce significant shifts – undermining traditional search engine optimisation techniques, reducing direct traffic to websites, and impacting brand visibility.

Notably, 43% of AI Overviews link back to Google itself. This reinforces Google’s dominance as a search engine and as a website.

The forthcoming integration of ads into AI summaries raises concerns about the trustworthiness and independence of the information presented.

A magnifying glass held over an internet search bar.
Some internet users are switching search engines entirely and turning to providers that don’t provide AI summaries, such as Bing and DuckDuckGo.
Casimiro PT/Shutterstock

Where to from here?

People should always be mindful of the key limitations of AI summaries.

Asking for simple facts such as, “What is the height of Uluru?” may yield accurate answers.

But posing more complex or divisive questions, such as, “Will the 2032 Olympics bankrupt Queensland?”, may require users to open links and delve deeper for a more comprehensive understanding.

Google doesn’t offer a clear option to turn this feature off entirely. Perhaps the simplest way is to click on the “Web” tab under the search bar on the results page, or to add “-ai” to the search query. But this can get repetitive.

A more technical solution is to manually create a site search filter through Chrome’s settings, along the lines of the sketch below. But this still requires an active act by the user.
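
One commonly shared recipe – assuming the udm=14 parameter that currently powers Google’s “Web” tab keeps working – is to add a custom site search entry in Chrome’s settings (under Search engine, then Manage search engines) and make it the default:

  Name:     Google (Web results only)
  Shortcut: @web
  URL:      https://www.google.com/search?q=%s&udm=14

Chrome substitutes your search terms for %s, so every query routed through this entry lands on the summary-free “Web” view.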

As a result, some developers are offering browser extensions that claim to remove this aspect. Other users are switching search engines entirely and turning to providers that don’t provide AI summaries, such as Bing and DuckDuckGo.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University; Ashwin Nagappa, Post Doctoral Research Fellow, Queensland University of Technology, and Shir Weinbrand, PhD Candidate, Digital Media Research Centre, ADM+S Centre, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Do you talk to AI when you’re feeling down? Here’s where chatbots get their therapy advice

Young girl sitting on couch in the dark looking at her phone
Mikoto/Pexels

Do you talk to AI when you’re feeling down? Here’s where chatbots get their therapy advice

Authors Centaine Snoswell, Aaron J. Snoswell, Laura Neil
Date 11 June 2025

As more and more people spend time chatting with artificial intelligence (AI) chatbots such as ChatGPT, the topic of mental health has naturally emerged. Some people have positive experiences that make AI seem like a low-cost therapist.

But AIs aren’t therapists. They’re smart and engaging, but they don’t think like humans. ChatGPT and other generative AI models are like your phone’s auto-complete text feature on steroids. They have learned to converse by reading text scraped from the internet.

When someone asks a question (called a prompt) such as “how can I stay calm during a stressful work meeting?” the AI forms a response by randomly choosing words that are as close as possible to the data it saw during training. This happens so fast, with responses that are so relevant, it can feel like talking to a person.
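
In spirit – though vastly simplified – that word-choosing step looks like the toy Python sketch below. Everything here is invented for illustration: a real model scores an enormous vocabulary with a neural network rather than a lookup table, but the weighted random choice of a plausible next word is the core move.

  import random

  # Toy "model": how often each word followed another in a pretend
  # training corpus. Real systems learn from billions of words.
  counts = {
      "stay":   {"calm": 8, "focused": 2},
      "calm":   {"during": 5, "and": 3},
      "during": {"stressful": 6, "the": 4},
  }

  def next_word(word):
      options = counts[word]
      # Pick a continuation at random, weighted by how often it
      # followed `word` in training.
      return random.choices(list(options), list(options.values()))[0]

  word, sentence = "stay", ["stay"]
  while word in counts:
      word = next_word(word)
      sentence.append(word)
  print(" ".join(sentence))  # e.g. "stay calm during stressful"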

But these models aren’t people. And they definitely are not trained mental health professionals who work under professional guidelines, adhere to a code of ethics, or hold professional registration.

Where does it learn to talk about this stuff?

When you prompt an AI system such as ChatGPT, it draws information from three main sources to respond:

  1. background knowledge it memorised during training
  2. external information sources
  3. information you previously provided.

1. Background knowledge

To develop an AI language model, the developers teach the model by having it read vast quantities of data in a process called “training”.

Where does this information come from? Broadly speaking, anything that can be publicly scraped from the internet. This can include everything from academic papers, eBooks, reports, free news articles, through to blogs, YouTube transcripts, or comments from discussion forums such as Reddit.

Are these sources reliable places to find mental health advice? Sometimes. Are they always in your best interest and filtered through a scientific, evidence-based approach? Not always. The information is also captured at a single point in time when the AI is built, so it may be out of date.

A lot of detail also needs to be discarded to squish it into the AI’s “memory”. This is part of why AI models are prone to hallucination and getting details wrong.

2. External information sources

The AI developers might connect the chatbot itself with external tools, or knowledge sources, such as Google for searches or a curated database.

When you ask Microsoft’s Bing Copilot a question and you see numbered references in the answer, this indicates the AI has relied on an external search to get updated information in addition to what is stored in its memory.

Meanwhile, some dedicated mental health chatbots are able to access therapy guides and materials to help direct conversations along helpful lines.

3. Information previously provided

AI platforms also have access to information you have previously supplied in conversations, or when signing up to the platform.

When you register for the companion AI platform Replika, for example, it learns your name, pronouns, age, preferred companion appearance and gender, IP address and location, the kind of device you are using, and more (as well as your credit card details).

On many chatbot platforms, anything you’ve ever said to an AI companion might be stored away for future reference. All of these details can be dredged up and referenced when an AI responds.

And we know these AI systems are like friends who affirm what you say (a problem known as sycophancy) and steer conversation back to interests you have already discussed. This is unlike a professional therapist who can draw from training and experience to help challenge or redirect your thinking where needed.

What about specific apps for mental health?

Most people would be familiar with the big models such as OpenAI’s ChatGPT, Google’s Gemini, or Microsoft’s Copilot. These are general-purpose models: they are not limited to specific topics and are not trained to answer only particular kinds of questions.

But developers can make specialised AIs that are trained to discuss specific topics such as mental health; examples include Woebot and Wysa.

Some studies show these mental health-specific chatbots might be able to reduce users’ anxiety and depression symptoms, or improve therapy techniques such as journalling by providing guidance. There is also some evidence that AI therapy and professional therapy deliver some equivalent mental health outcomes in the short term.

However, these studies have all examined short-term use. We do not yet know what impacts excessive or long-term chatbot use has on mental health. Many studies also exclude participants who are suicidal or who have a severe psychotic disorder. And many studies are funded by the developers of the same chatbots, so the research may be biased.

Researchers are also identifying potential harms and mental health risks. The companion chat platform Character.ai, for example, has been implicated in an ongoing legal case over a user’s suicide.

This evidence all suggests AI chatbots may be an option to fill gaps where there is a shortage of mental health professionals, assist with referrals, or at least provide interim support between appointments and for people on waitlists.

Bottom line

At this stage, it’s hard to say whether AI chatbots are reliable and safe enough to use as a stand-alone therapy option.

More research is needed to identify if certain types of users are more at risk of the harms that AI chatbots might bring.

It’s also unclear if we need to be worried about emotional dependence, unhealthy attachment, worsening loneliness, or intensive use.

AI chatbots may be a useful place to start when you’re having a bad day and just need a chat. But when the bad days continue to happen, it’s time to talk to a professional as well.

Centaine Snoswell, Senior Research Fellow, Centre for Health Services Research, The University of Queensland; Aaron J. Snoswell, Senior Research Fellow in AI Accountability, Queensland University of Technology, and Laura Neil, PhD Candidate, Centre for Health Services Research, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Google’s SynthID is the latest tool for catching AI-made content. What is AI ‘watermarking’ and does it work?

HomeArt/Shutterstock

Google’s SynthID is the latest tool for catching AI-made content. What is AI ‘watermarking’ and does it work?

Authors T.J. Thomson, Elif Buse Doyuran, Jean Burgess
Date 3 June 2025

Last month, Google announced SynthID Detector, a new tool to detect AI-generated content. Google claims it can identify AI-generated content in text, image, video or audio.

But there are some caveats. One of them is that the tool is currently only available to “early testers” through a waitlist.

The main catch is that SynthID primarily works for content that’s been generated using a Google AI service – such as Gemini for text, Veo for video, Imagen for images, or Lyria for audio.

If you try to use Google’s AI detector tool to see if something you’ve generated using ChatGPT is flagged, it won’t work.

That’s because, strictly speaking, the tool can’t detect the presence of AI-generated content or distinguish it from other kinds of content. Instead, it detects the presence of a “watermark” that Google’s AI products (and a couple of others) embed in their output through the use of SynthID.

A watermark is a special machine-readable element embedded in an image, video, sound or text. Digital watermarks have been used to ensure that information about the origins or authorship of content travels with it. They have been used to assert authorship in creative works and address misinformation challenges in the media.

SynthID embeds watermarks in the output from AI models. The watermarks are not visible to readers or audiences, but can be used by other tools to identify content that was made or edited using an AI model with SynthID on board.
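
Google has not published every detail of how SynthID works, so the sketch below is not its actual algorithm. It illustrates the general idea behind one published family of statistical text watermarks: during generation the model is nudged towards a pseudo-random “green list” of words derived from the preceding word, and a detector later recomputes those lists and checks whether green words are over-represented. The vocabulary, names and 50% split here are all illustrative assumptions.

  import hashlib
  import random

  VOCAB = sorted(["the", "a", "cat", "dog", "sat", "ran", "on",
                  "mat", "fast", "slow", "home", "today"])

  def green_list(prev_word, fraction=0.5):
      # Derive a reproducible pseudo-random subset of the vocabulary
      # from the previous word; nothing needs to be stored in the text.
      seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
      return set(random.Random(seed).sample(VOCAB, int(len(VOCAB) * fraction)))

  def green_fraction(text):
      # Fraction of words that fall on their predecessor's green list.
      words = text.lower().split()
      hits = sum(cur in green_list(prev)
                 for prev, cur in zip(words, words[1:]))
      return hits / max(1, len(words) - 1)

  # Unwatermarked text should score near 0.5; output from a generator
  # that consistently favoured green words scores well above that.
  print(green_fraction("the cat sat on the mat today"))

A real detector runs a statistical test on that fraction, which is also why short or heavily edited passages give a weaker signal.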

SynthID is among the latest of many such efforts. But how effective are they?

There’s no unified AI detection system

Several AI companies, including Meta, have developed their own watermarking tools and detectors, similar to SynthID. But these are “model specific” solutions, not universal ones.

This means users have to juggle multiple tools to verify content. Despite researchers calling for a unified system, and major players like Google seeking to have their tool adopted by others, the landscape remains fragmented.

A parallel effort focuses on metadata – encoded information about the origin, authorship and edit history of media. For example, the Content Credentials inspect tool allows users to verify media by checking the edit history attached to the content.

However, metadata can be easily stripped when content is uploaded to social media or converted into a different file format. This is particularly problematic if someone has deliberately tried to obscure the origin and authorship of a piece of content.

There are detectors that rely on forensic cues, such as visual inconsistencies or lighting anomalies. While some of these tools are automated, many depend on human judgement and common sense methods, like counting the number of fingers in AI-generated images. These methods may become redundant as AI model performance improves.

An AI-generated image shows a woman waving with a six-fingered hand.
Logical inconsistencies, such as extra fingers, are some of the visual ‘tells’ of the current era of AI-generated imagery.
T J Thomson, CC BY-NC

How effective are AI detection tools?

Overall, AI detection tools can vary dramatically in their effectiveness. Some work better when the content is entirely AI-generated, such as when an entire essay has been generated from scratch by a chatbot.

The situation becomes murkier when AI is used to edit or transform human-created content. In such cases, AI detectors can get it badly wrong. They can fail to detect AI or flag human-created content as AI-generated.

AI detection tools don’t often explain how they arrived at their decision, which adds to the confusion. When used for plagiarism detection in university assessment, they are considered an “ethical minefield” and are known to discriminate against non-native English speakers.

Where AI detection tools can help

A wide variety of use cases exist for AI detection tools. Take insurance claims, for example. Knowing whether the image a client shares depicts what it claims to depict can help insurers know how to respond.

Journalists and fact checkers might draw on AI detectors, in addition to their other approaches, when trying to decide if potentially newsworthy information ought to be shared further.

Employers and job applicants alike increasingly need to assess whether the person on the other side of the recruiting process is genuine or an AI fake.

Users of dating apps need to know whether the profile of the person they’ve met online represents a real romantic prospect, or an AI avatar, perhaps fronting a romance scam.

If you’re an emergency responder deciding whether to send help to a call, confidently knowing whether the caller is human or AI can save resources and lives.

Where to from here?

As these examples show, the challenges of authenticity are now happening in real time, and static tools like watermarking are unlikely to be enough. AI detectors that work on audio and video in real time are a pressing area of development.

Whatever the scenario, it is unlikely that judgements about authenticity can ever be fully delegated to a single tool.

Understanding the way such tools work, including their limitations, is an important first step. Triangulating these with other information and your own contextual knowledge will remain essential.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University; Elif Buse Doyuran, Postdoctoral Research Fellow, Queensland University of Technology, and Jean Burgess, Distinguished Professor of Digital Media, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Being monitored at work? A new report calls for tougher workplace surveillance controls

Playback CCTV cameras in business office on computer screen. Interface of AI futuristic program with information and recognition system. Security cameras. Concept of identification and tracking.
Frame Stock Footage/Shutterstock

Being monitored at work? A new report calls for tougher workplace surveillance controls

Authors Joo-Cheong Tham, Alysia Blackham, Jake Goldenfein
Date 28 May 2025

Australian employers are monitoring employees, frequently without workers’ knowledge or consent, according to a new report.

And when workers do know about surveillance, there is little they can do about it. Laws have not kept pace, producing negative impacts for workers and workplaces.

A Labor-chaired Victorian parliamentary inquiry has released a report on workplace surveillance and the need for more effective national regulation.

The growth of workplace surveillance

After public hearings and submissions from major employer, industry and union groups, the inquiry found new technology was enabling workers to be monitored in the workplace and remotely.

Optical, listening, tracking and data recording devices are being used to monitor employees, often without knowledge or consent.

While use varied according to industry, the committee found widespread workplace surveillance including:

  • factory workers’ jobs being monitored, with the time taken to complete tasks recorded
  • biometric data – retina, finger, hand and facial features – collected from nurses and construction workers
  • mobile phone apps used to track the location of banking staff
  • infrared cameras used to scan truck drivers in their cabins for 12 consecutive hours
  • university workers’ computer usage and emails being monitored
  • sensitive financial and medical data being collected.

The committee also considered the use in Australian workplaces of tools with sophisticated surveillance capabilities (including Microsoft Teams), to monitor remote work arrangements.

Some of these tools deployed AI features, including emotional and neuro surveillance. They could be used to determine workers’ moods and level of attention or effort.

The committee found some workplaces were collecting vast amounts of information, a practice it considered invasive and one that posed major cybersecurity risks.

Legitimate surveillance

The inquiry found there were certain circumstances when workplace surveillance was legitimate. These included managing work health and safety risks like fatigue and preventing fraud and theft.

But it also highlighted the lack of evidence workplace surveillance improves productivity. Such surveillance could lead to “function creep” – where surveillance used for one purpose is covertly used for others.

Beyond invading privacy, the committee found surveillance could cause work intensification, increased risks of injury and worker stress from constant monitoring.

Surveillance could also exacerbate the inequality of power between workers and their employers and worsen discrimination.

The monitoring of some tasks could result in certain jobs being dumbed down or degraded. Monitoring often measured the wrong things – like keystrokes – which do not capture the real work of careful thinking or writing.

Poor regulation

Massive regulatory gaps have allowed workplace surveillance to flourish because of the lack of controls on employers’ monitoring and collection of data.

Employers’ ability to monitor workers through their control of work premises and equipment can leave some employees exposed to surveillance without notification.

And there are few laws to check these powers.

Two significant exemptions mean there is scant regulation of surveillance under the federal Privacy Act. Businesses with an annual turnover of less than A$3 million are exempt as are employee records.

The employee records exemption means the Act does not apply to employee data collected while the worker is a current employee, with the exemption applying even after the employment relationship has ended.

Individual consent

Only New South Wales and the Australian Capital Territory have dedicated workplace surveillance laws.

They require employers to give employees advance notice of surveillance, and, in the ACT, to consult with employees about introducing surveillance and managing data.

These regimes, however, offer little substantive protection because they rely on “individual consent” – meaning surveillance is authorised if workers agree.

Refusing consent in employment is, however, unrealistic given workers’ dependence on their jobs. This vulnerability is compounded by case law suggesting employees can be dismissed for refusing to provide their data.

Victoria lags behind

Without dedicated workplace surveillance laws, the position in Victoria is even worse. The Victorian Privacy and Data Protection Act only applies to specified public sector organisations – and not the private sector.

And the Victorian Surveillance Devices Act only applies to listening and optical surveillance in restricted circumstances (workplace toilets and the like and “private activity”). Its regulation of data surveillance does not apply to employers, only to law enforcement officers.

The overall result, emerging from the findings of the committee, has been secret, unaccountable and damaging surveillance in some workplaces, without worker notice or consultation.

What’s needed

The inquiry report calls for dedicated workplace surveillance legislation among its 18 recommendations.

The legislation should require employers to demonstrate any surveillance is “reasonable, necessary and proportionate to achieve a legitimate objective”, the committee found. It should also ensure transparency of workplace surveillance and meaningful consultation with workers.

The sale of worker data to third parties needs to be prohibited and severe restrictions imposed on the collection and use of biometric data.

The committee also recommended measures to ensure effective implementation of the Information Privacy Principles which govern the collection, use and disclosure of a person’s information.

It recommended that these new laws be enforced by an independent regulatory authority.

Joo-Cheong Tham, Professor, Melbourne Law School, The University of Melbourne; Alysia Blackham, Professor in Law, The University of Melbourne, and Jake Goldenfein, Senior law lecturer, The University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Sexual health info online is crucial for teens. Australia’s new tech codes may threaten their access

Close up have two people holding hands
Egoitz Bengoetxea/Shutterstock

Sexual health info online is crucial for teens. Australia’s new tech codes may threaten their access

Authors Giselle Woodley, Kath Albury, Zahra Stardust
Date 29 May 2025

Last week, organisations from Australia’s online industries submitted a final draft of new industry codes aimed at protecting children from “age-inappropriate content” to the eSafety commissioner.

The commissioner will now decide if the codes are appropriate to be implemented under the Online Safety Act.

The codes aim to address young people’s access to pornography, high-impact violence, and material relating to self-harm, suicide and disordered eating.

However, the draft codes may have unintended consequences. There is a real risk they may further restrict access to materials about sex education, sexual health information, harm reduction and health promotion.

Social media can operate as a powerful medium to teach teens and young people sexual information.

Social media campaigns (some government funded) target rising rates of sexual violence. They also disseminate important sexual health information.

What are the industry codes?

The eSafety commissioner is in the process of introducing codes of practice for the online industry “to protect Australians from illegal and restricted online content”. The Phase 1 codes, aimed at illegal content such as child sexual exploitation material, came into effect last year.

Now the commissioner is looking at Phase 2. These are designed to prevent young people from accessing “inappropriate” but not illegal content. They will do this via age-assurance mechanisms and by filtering, de-prioritising, downranking and suppressing content.

The codes will apply to operating systems, various internet services, search engines and hardware, such as smartphones and tablets.

Tech companies will have more power (and responsibility) to remove content and suspend users. Companies that don’t follow the codes risk fines of up to US$49.5 million (around A$77 million).

Suppression of sexual health content

The idea of using technology to restrict online content by age is problematic. The Australian government itself has deemed that age-assurance technologies are not ready to be used. State-of-the-art software has shown racial and gendered bias.

And digital platforms have a poor track record of governing sexual media.

International human rights organisations, including the United Nations, have warned that automated content moderation is being used to censor sex education and consensual sexual expression.

Research shows many platforms tend to remove or suppress content about drag queens, trans rights, sexual racism, body positivity and sex worker safety.

At the same time, they allow health misinformation and hate speech directed at LGBTQ+ people.

Sexual health organisations and educators already face challenges using social media to communicate with key audiences, including LGBTQ+ communities. These include having their content made less visible (“shadowbanning”) or outright removed.

Unintended consequences

Content moderation policies are already very restrictive. To enforce them, platforms use nudity and pornography detection software that is often biased toward heteronormative standards.

For example, Google’s computer vision software has previously relied on word databases that link “bisexuality” with “pornography”, “sodomy” with “bestiality”, and “masturbation” with “self-abuse”.

Many users currently use “algospeak”. This is language designed to avoid the notice of the algorithms that may flag content as inappropriate, often involving tweaks such as using emojis or “seggs” or “s&x” instead of “sex”.

The government recognises the power of social media. It has committed more than A$100 million towards Our Watch (a leading organisation advocating against violence against women) and its teen-focused social media initiative The Line.

Another A$3.5 million has gone to the Teach Us Consent organisation. This group creates social media content for teens and young people about consent, healthy relationships, pornography and sex.

Like the looming youth social media ban, the proposed industry codes may undermine the government’s own efforts to reduce gender-based violence.

Sex education and health promotion

Social media platforms try to separate health information from general sexual content. For example, they may aim to allow nudity in cases like childbirth, breastfeeding, medical care or protests.

However, evidence suggests these exceptions are currently almost impossible to moderate accurately. They rely on a distinction between sex education and sexual media that is blurry at best.

In reality, sexuality education is not simply technical information about infections, sexual dysfunction or medical care. Sexual imagery plays an important role in sexual health promotion. Young people respond well to visual methods of communication and learning.

Likewise, the importance of pleasure has been long recognised in HIV prevention, safer sex and violence prevention efforts. Industry codes should recognise sexual media as a potential medium for conducting sex education and promoting sexual and reproductive rights.

Governments in many countries are moving to restrict sexual information and health services. This includes efforts to criminalise abortion, limit access to trans health care and prevent comprehensive sex education.

In this context, access to online health promotion and sex education content is even more vital.

Ensuring access to sexual health material

The industry codes are intended to protect. However, they risk endangering the ability of Australians to access essential information.

This is especially important for the many young people who do not have access to comprehensive sexuality and reproductive health information at home or school.

To uphold sexual rights to information, privacy and expression, the codes must shift away from simply giving platforms an incentive to detect and suppress all sexual content.

Instead, the codes should ensure non-discriminatory access and require platforms to promote material that supports sexual health, rights and justice. In practice, this necessitates careful consideration of content in context.

This task might seem time consuming, resource heavy and difficult for regulators and platforms alike. But the implications of content suppression are too dire to overlook.

In our view, the codes should be paused until they are able to balance protection with rights to information.

Giselle Woodley, Lecturer and Research Fellow, Edith Cowan University; Kath Albury, Professor of Media and Communication and Associate Investigator, ARC Centre of Excellence for Automated Decision-Making + Society, Swinburne University of Technology, and Zahra Stardust, Lecturer in Digital Communication, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Exploring the Future of Search: ADM+S PhD Student Presents at ACM Web Conference 2025

Presenters from the PhD Symposium track at the ACM Web Conference 2025
Presenters from the PhD Symposium track at the ACM Web Conference 2025

Exploring the Future of Search: ADM+S PhD Student Presents at ACM Web Conference 2025

Author ADM+S Centre
Date 19 May 2025

Sara Fahad Dawood Al Lawati, a PhD student from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at RMIT University, recently presented her research at The ACM Web Conference 2025 in Sydney.

Her paper, “I Am Not a Caveman: An Eye-Tracking Study of How Users are Influenced to Search in the Era of GenAI”, introduces a novel methodology to explore whether we are experiencing a generational shift in how people search for information online. By combining eye-tracking technology with an analysis of user interactions, Sara investigates how generative AI is reshaping the way people seek and engage with information.

The ACM Web Conference is an international event that focuses on the future directions of the World Wide Web. It brings together researchers, developers, users, and commercial ventures to discuss the evolution of the web, the standardisation of its technologies, and their impact on society and culture.

As part of the PhD Symposium track, Sara was paired with a mentor who helped her reflect on the direction and impact of her research.

“I was inspired to think about how I can make my work more impactful,” Sara said.

“The conference helped me generate great ideas for the next steps in my research.”

In addition to presenting, Sara volunteered at the conference—gaining behind-the-scenes experience in how international conferences are organised, while also making valuable connections with fellow student attendees.

At the event, Sara reconnected with ADM+S researchers from across Australia, met RMIT alumni now working in other states, and exchanged ideas with PhD students working in related fields. These interactions have already laid the groundwork for potential collaborative lab visits in Sydney and Dublin, which she plans to pursue during her candidature.

Sara Fahad Dawood Al Lawati presenting her research at the ACM Web Conference 2025

Inspired by the conference, Sara now aims to publish both her research findings and resource papers that make her methodologies reusable by others—particularly those working across disciplines.

“I’ve come to realise the importance of creating scalable and shareable research tools,” she noted.
“We need reproducible methodologies in computer science, especially for user studies.”

Reflecting on her experience, Sara encouraged other PhD students to submit to doctoral symposiums, apply for travel awards, and volunteer at conferences.

“If a major conference is happening in your city, don’t hesitate—submit a paper, volunteer, attend,” she said.
“It’s a great opportunity to share your work, learn from others, and build lasting research connections.”

Sara attended the conference with support from a travel award from the ACM Web Conference and funding through the ADM+S HDR Funding Scheme.

SEE ALSO

“I Am Not a Number” – A powerful new documentary premieres on SBS On Demand

Young girl standing in a room in front of an LGBTIQ flag, holding a sign that says "Your Algorithm Doesn't Know Me"

“I Am Not a Number” – A powerful new documentary premieres on SBS On Demand

Author ADM+S Centre
Date 19 May 2025

A timely and compelling documentary exploring the human cost of digital transformation in public services, created by researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) in collaboration with ROBONDIS activists, will premiere from 2:00 pm tomorrow on SBS On Demand.

The documentary I Am Not a Number (2024) investigates the real-life consequences of algorithm-driven decision-making in the National Disability Insurance Scheme (NDIS). The film explores the complex intersection of technology, bureaucracy, and lived experience—revealing the disconnection between digital efficiency and human need.

Directed by Jeni Lee in collaboration with Georgia van Toorn, and ROBONDIS activists, I Am Not a Number follows seven individuals—Mark, Marie, Erin, Paris, Olisama, Paul, and Kaili—whose lives have been profoundly affected by technological changes. 

The Australian Government aspires to lead the world in digital innovation, and initiatives in digital governance have seen the introduction of algorithms for NDIS support planning. While the government’s vision promises efficiency and modernisation, the reality is far more complex.

Through deeply personal stories, the documentary uncovers how the inflexible nature of these algorithms has not only failed to meet people’s needs but has also caused significant harm to the very people they were meant to support.

With research consultancy from academics Sarah Pink and Thao Phan, and produced with the support of consultant producer Anna Grieve, I Am Not a Number is a must-watch for policymakers, advocates, and all Australians concerned about the future of public service in a digital age.

Watch I Am Not a Number from 2:00 PM tomorrow, exclusively on SBS On Demand.

SEE ALSO

ADM+S publication highlights for the first quarter of 2025

Covers from journal publications and reports

ADM+S publication highlights for the first quarter of 2025

Author ADM+S Centre
Date 8 May 2025

In the first quarter of 2025, ADM+S researchers have published over 70 outputs addressing local and global challenges in automated decision-making systems and AI. Some of the highlights are listed below.

Policy and Regulatory Contributions
In January, Centre researchers submitted a response to the Attorney-General’s Department’s consultation paper on ADM Reform. The submission argues for a more comprehensive regulatory framework for ADM in the public sector and offers key recommendations, including systematic and preventative measures, an independent oversight body, and qualified transparency mechanisms.

Generative AI & Journalism
In February, ADM+S launched the report Generative AI & Journalism, exploring how generative AI is reshaping journalistic practices, and how these changes are perceived by both journalists and news audiences.

Digital Inclusion Research
The Centre continued to advance work on digital inclusion. Key outputs included:

Data Justice and Governance
The paper Deepening the Data Divide: Marginalised Perspectives and Non-Profit Priorities in Australian Data Sharing Reforms examines public data and data-sharing reforms in Australia (2018–2022), highlighting the risk of these reforms exacerbating existing inequalities. The research calls for stronger inclusion and support for civil society organisations to ensure equitable data practices.

Additional Research and Commentary
Further publications from Q1 include:

Research into regulatory and policy responses to unhealthy food advertising on social media in Australia.

Opinion pieces examining the implications of Meta’s transition from fact-checking services to community notes.

Analysis of the use of AI in social services and its societal impacts.

View the Publications for ADM+S: Q1 2025 video
You can find these publications and more in the ADM+S Publications Library

SEE ALSO

YouTube Turns 20: from a video sharing platform to general purpose technology

Screenshot from YouTube's first video
Screenshot from YouTube's first video - titled Me at the Zoo

YouTube Turns 20: from a video sharing platform to general purpose technology

Author ADM+S Centre
Date 8 May 2025

It’s been two decades since the first video was uploaded to YouTube—a 19-second clip titled Me at the Zoo, featuring co-founder Jawed Karim standing awkwardly in front of elephants at San Diego Zoo. The unassuming video has been viewed over 300 million times and marks the beginning of what would become one of the most influential platforms on the internet.

Since its beginnings as a website for everyday video-sharing in 2005, YouTube has grown into a sprawling ecosystem that has reshaped how we watch, learn, and interact with media. It is also one of the world’s most powerful digital media platforms, and now competes with major social media and streaming television platforms. In February this year, YouTube was hosting an estimated 5.1 billion videos, with more than 360 hours of new video uploaded every minute. The number of videos has nearly doubled since 2021.

Distinguished Professor Jean Burgess, Associate Director of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), GenAI Lab Director at QUT and co-author of the first academic book on YouTube, reflected on the platform’s evolution during a recent interview on ABC Radio.

“In 2005, we couldn’t quite work out what YouTube was going to be,” Burgess said. “The internet had been used for communication and blogs, but the idea of it being a widely accessible audiovisual medium was really quite new.”

The platform became popular quickly. Just a year after launch, Google acquired YouTube for US$1.65 billion in stock—a deal that seemed like a big bet at the time, but that now looks like a bargain.

“There was this real excitement about the idea that people could take part in creative media and share their thoughts and everyday experiences.” 

Early adopters included not only internet-savvy young people, but also hobbyists, educators, and people looking for online community. 

“There was a tremendous use of it for education, sharing ideas, tips and tricks, from guitar-playing to car repairs, as well as just for having fun and showing off. And people started to make a name for themselves.”

Today, YouTube is the go-to platform for everything from DIY tutorials to music videos and product reviews. 

Listeners calling into the program shared what they’d learned from the site. One had restored a vintage Kombi van, another had picked up sailing techniques for dinghies, and many credited golf tutorials with improving their swing.

Burgess noted that this educational use was present from the start. “It was really hard at the time to take video from your camera, upload it, and share on your personal website in a way that made it easy for others to view – YouTube founders solved this technical problem early on.”

YouTube’s broad reach brings with it growing concerns around content quality and algorithm-driven radicalisation.

Increasingly, attention is turning to the rise of AI-generated content on the platform, which brings new challenges around misinformation and a flood of poor-quality content.

“There are all sorts of ways platforms govern and control what appears on them, if they choose to – but often only in response to external pressure from advertisers or the public,” said Professor Jean Burgess.

These issues are central to the Generative Authenticity project led by Professor Burgess and her team at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S).

Originally published in 2009 when YouTube was only four years old (second edition 2018), Professor Burgess’s book with Joshua Green, YouTube: Online Video and Participatory Culture, was the first to systematically investigate its cultural impacts and politics, highlighting the productive tensions between its amateur community rhetoric and its commercial media logics.

SEE ALSO

New open-access platform boosts digital and data capabilities for sexual and reproductive health sector

Illustration of a person with pink hair sitting at a desk using a laptop

New open-access platform boosts digital and data capabilities for sexual and reproductive health sector

Author ADM+S Centre
Date 29 April 2025

A new open-access website, DDCSRH.com, is helping sexual and reproductive health organisations and workforces strengthen their digital and data capabilities.

Developed through three years of research led by Australian Research Council Future Fellow Prof Kath Albury and Dr Samantha Mannix, both from Swinburne University of Technology’s ARC Centre of Excellence for Automated Decision-Making and Society, the Digital and Data Capabilities for Sexual and Reproductive Health (DDCSRH) project offers practical, evidence-based resources to support digital transformation in the sector.

“We know that digital transformation is moving really fast,” said Prof Albury.

“There are a lot of high level Commonwealth initiatives coming out in relation to workforce resourcing and upskilling, but they don’t necessarily focus on sexual and reproductive health contexts.

“At the same time, the volatile geopolitical environment is having a serious impact on public health outreach. Social media content focused on reproductive health and sexual health is increasingly ‘shadow-banned’, censored, or maliciously tagged as ‘misinformation’.

“Our aim is to help organisations build strategic conversations — helping managers and staff work out what they need, when they need it, and the resources that will help them get there.”

The platform offers a range of tailored tools, including:

  • Models and checklists for building digital, data, and consumer capabilities
  • Evidence-informed guides on emerging digital sexual and reproductive health topics
  • Case studies grounded in recent Australian research
  • Links to relevant policies and Commonwealth training hubs

DDCSRH.com is designed to guide strategic and productive dialogues between sexual and reproductive health professionals, managers and board members, community stakeholders and sexual and reproductive health consumers, with the aim of promoting ethical and inclusive approaches to digital transformation. 

The resources are informed by interviews, workshops, and participatory research with young Australians exploring how platforms like HotDoc and TikTok influence sexual and reproductive health management.

DDCSRH.com is a collaboration between Australian and international researchers across clinical service provision, health promotion, social research, youth studies, digital health, and data studies.

The Digital and Data Capabilities for Sexual and Reproductive Health project builds on the ARC Centre of Excellence for Automated Decision-Making and Society’s previous research into data capabilities in the not-for-profit sector. 

SEE ALSO

Building transformative alternatives in the digital economy: a global leader’s perspective

Trebor Scholz

Building transformative alternatives in the digital economy: a global leader’s perspective

Author ADM+S Centre
Date 29 April 2025

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is proud to host renowned scholar and activist Associate Professor Trebor Scholz for a special presentation, “From Vibe to Viability: A Methodology for Building Transformative Alternatives in the Digital Economy”.

A leading voice in the global movement for democratic digital infrastructure, Associate Professor Scholz will share insights from real-world cooperative experiments across more than 60 countries — showing that alternatives to extractive tech platforms are not just possible, but already functioning systems.

About the presentation
Every time you order a meal, obtain directions, query an AI chatbot, or access your child’s virtual classroom, you’re interacting with a multi-sided digital platform—and you trade in more than just time or money. You relinquish data. You perform unpaid labor. 

And in nearly every case, that data, along with the profits, leaves your community and flows to distant companies with no stake in your local economy. But what if the digital economy worked differently—what if it respected privacy, strengthened local economies, and ensured communities benefited from the value they help create?

Drawing on a decade of research and hands-on collaboration, Scholz will showcase functioning models that challenge the status quo:

  • A driver-owned ride-hailing platform in New York City
  • A community telecoms cooperative in Mzamba, South Africa
  • A care worker co-op in Sydney
  • An artist-owned stock photography platform in Canada
  • A food delivery system operated by 80 worker-owned cooperatives across Europe

These initiatives tackle urgent issues such as excessive workplace surveillance, loss of privacy, precarious gig work, and the deepening power of tech oligarchs — offering instead community-rooted alternatives built to last.

The presentation will also confront the hard realities of cooperative models: the struggle to maintain momentum, the challenges of scale, and the daily demands of democratic governance.

Event Details
Friday 16 May, 5.30pm – 7.30pm
The Green Brain, Building 16, RMIT Melbourne and Online
Register to attend

SEE ALSO

What political ads are Australians seeing online? Astroturfing, fake grassroots groups, and outright falsehoods

A collage of colourful advertisements
gandr collage

What political ads are Australians seeing online? Astroturfing, fake grassroots groups, and outright falsehoods

Authors Daniel Angus, Christine Parker, Giselle Newton, Kate Clark, Mark Andrejevic
Date 28 April 2025

In the lead-up to the 2025 Australian federal election, political advertising is seemingly everywhere.

We’ve been mapping the often invisible world of digital political advertising across Facebook, Instagram and TikTok.

We’ve done this thanks to a panel of ordinary Australians who agreed to download an ad tracking app developed through the Australian Internet Observatory.

We’re also tracking larger trends in political ad spending, message type and tone, and reach via the PoliDashboard tool. This open source tool aggregates transparency data from Meta (including Facebook and Instagram) which we use to identify patterns and items of concern.

While the major parties are spending heavily and are highly visible in the feeds of our participants, it is the prevalence of third-party political advertising that is most striking. We’ve observed a notable trend: for every ad from a registered political party, there is roughly one ad from a third-party entity.

Astroturfing and the illusion of grassroots support

One of the most concerning trends we’re seeing is a rise in astroturfing. This refers to masking the sponsors of a message to make it appear as though it originates from ordinary citizens or grassroots organisations.

Astroturfing ads do often adhere to the formal disclosure requirements set out by the Australian Electoral Commission. However, these disclosures don’t meaningfully inform the public on who is behind these misleading ads.

Authorisation typically only includes the name and address of an intermediary. This may be a deliberately opaque shell entity set up just in time for an election.

A key example seen by participants in our study involves the pro-gas advocacy group Australians for Natural Gas.

It presents itself as a grassroots movement, but an ABC investigation revealed this group is working with Freshwater Strategy – the Coalition’s internal pollster. Emails obtained by the ABC show Freshwater Strategy is “helping orchestrate a campaign to boost public support for the gas industry ahead of the federal election”.

Other examples we’ve encountered in our monitoring include groups with benign-sounding names like Mums for Nuclear and Australians for Prosperity. These labels and the ads they are running suggest grassroots concern, but they obscure the deeper agendas behind them.

In the case of Australians for Prosperity, an ABC analysis revealed backing from wealthy donors, former conservative MPs and coal interests.

The battle over energy

Nowhere is this more evident than in messaging around energy policy, especially nuclear power and gas.

In recent months, both major parties and a swathe of third-party advertisers have run targeted online campaigns focused on the costs and benefits of different energy futures. These ads play to deeply felt concerns about cost of living, action on climate change, and national sovereignty.

Yet many of these messages, particularly those that promote gas and nuclear, come from organisations with opaque funding and undeclared political affiliations or connections. Voters may see a slick Facebook ad or a sponsored TikTok explainer without any idea who paid for it, or why.

And with no obligation to be truthful, much of this content may be deeply misleading. It muddies public understanding at a critical moment for climate action.

Truth not required

Truth in political advertising isn’t legally required in all of Australia. While businesses can’t mislead consumers under consumer law, political parties and third-party campaigners are exempt from those same standards.

This means misleading or outright false claims – about opponents, policies or the state of the economy – can be repeated and amplified without consequence, provided they’re framed as political opinion.

Despite calls for reform from politicians, experts and civil society groups, federal legislation continues to lag behind community expectations.

South Australia and the Australian Capital Territory do have truth in political advertising laws, but there is still no national standard.

In the digital advertising environment, where ads are fast, fleeting, and often tailored to individuals, the absence of such independent scrutiny allows misinformation to flourish unchecked.

Most people are seeing very little – or so it seems

Paradoxically, our data shows the majority of participants are seeing very few political ads. Of the total ads seen, less than 2% pertained to political topics or the election specifically.

This is partly a result of how the advertising products offered by platforms like Meta and TikTok allow ads to be targeted to specific demographics, locations or interests. This means even two people in the same household may have entirely different ad experiences.

But it’s also a reminder social media ads are just the tip of the iceberg. Much political persuasion online happens outside paid ad campaigns – via influencer content, YouTube recommendations, algorithmic amplification, mainstream media coverage and more.

Because platforms and publishers aren’t required to share this broader content with researchers or the public, we can’t easily track it – although we are trying.

We need meaningful observability

If democracy is to thrive in a digital age, we need to be able to independently observe online political communication, including advertising.

Existing measures like campaign finance disclosures and transparency tools provided by platforms will never be enough. They don’t include user experiences or track patterns across populations and over time. This inevitably means some advertising activity flies under the radar.

We lack robust tools to understand and analyse our current fragmented information landscape.

Where platforms don’t provide meaningful data access to researchers and the public, tools like the Ad Observatory and PoliDashboard offer valuable glimpses into a fragmented information landscape, while remaining incomplete.

However, tools on their own are not enough. We also need to be willing to call out and act when politicians mislead the public.


Acknowledgement: The Australian Ad Observatory is a team effort. The authors wish to acknowledge the contribution of Jean Burgess, Nicholas Carah, Alfie Chadwick, Kyle Herbertson, Tina Kang, Khanh Luong, Abdul Karim Obeid, Lina Przhedetsky, and Dan Tran.

Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology; Christine Parker, Professor of Law, The University of Melbourne; Giselle Newton, Research Fellow, The Centre for Digital Cultures and Societies, The University of Queensland; Kate Clark, Node Administrator, ARC Centre of Excellence for Automated Decision-Making & Society, Monash University, and Mark Andrejevic, Professor of Media, School of Media, Film, and Journalism, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Election meme hits and duds – we’ve graded some of the best (and worst) of the campaign so far

Anthony Albanese and Peter Dutton
Lukas Coch/AAP, Mick Tsikas/AAP, The Conversation

Election meme hits and duds – we’ve graded some of the best (and worst) of the campaign so far

Authors T.J. Thomson, Stephen Harrington
Date 24 April 2025

As Australia begins voting in the federal election, we’re awash with political messages.

While this of course includes the typical paid ads in newspapers and on TV (those ones with the infamously fast-paced “authorised by” postscripts), political parties and lobby groups now compete especially hard for our attention online.

And, if there’s one thing internet users love, it’s a good meme.

Indeed, as far back as two elections ago, in the 2019 campaign, the Liberal Party discovered the power of so-called “boomer memes”, and harnessed them effectively to help secure a third term in government.

The other parties have since caught on though, and are battling hard to win the messaging war in a way that will resonate with voters, especially those who are inclined to ignore a typical political advertisement.

What makes a good meme?

The best political communication often contains a few key elements.

First, it should be developed with a clear understanding of context, purpose and audience. If the target audience can’t get the message pretty much straight away, then it’s not much good.

It should also spark some sort of emotional reaction. It should make voters feel something and motivate them to act, or change their voting intention.

When it comes to political memes in particular, they need to make some clear reference to widely known cultural material. This might be a trending event in popular culture, or fit into an established meme format.

And, of course, the best memes are fun. As the quote, often attributed to American funnyman Andy Kaufman, goes: “if you can make someone laugh, you can make them think”.

Below, we have collected some of the major Australian political parties’ recent efforts on the meme front during the 2025 election campaign, and assessed their effectiveness. We graded them from “A” for best down to “D” for worst.

Grading political messages

We’ll start with the “diss track” the Liberals released earlier this month.

We’d give this one a “D” grade. It focuses heavily on cost of living and might spark an emotional reaction from voters who feel pain when going to the shops. But, it’s highly unlikely to hit the mark, given it was released on a minor platform, and rap music (with its Black American roots) doesn’t exactly gel with the Liberal Party’s overall image and ethos.

One SoundCloud user probably best summed up the vibe here, by referencing another famous internet meme: “how do you do, fellow kids?”

The Liberals did much better, however, with their version of the popular AI action figure trend that’s sweeping the internet.

We’d give this one a solid “B+”. It features some clever one-liners, makes use of a current trend, and makes its point easily and quickly. We knock a few points off for the redundant focus on “cheaper power”: this would have been better as two separate issues rather than one issue repeated twice.

Labor’s version of the same trend, however, earns only a “C-”.

It only barely resembles the prime minister, who is shown with a neutral expression rather than smiling, and the accessories chosen feel forced.

Although both memes tap into a trend, their shelf life will likely be short. This is in contrast to political ads like the one below.

Rather than jump on the latest, short-lived trend, this ad draws on cultural material that’s more than three decades old but considered classic. The juxtaposition of a widely seen children’s cartoon with a political ad provides a surprising contrast. And the strategic editing drew more than a few giggles out of us.

We’d give this one an “A-”. It still relies on audio, which is often disabled by default, to get its point across, but it is solid overall.

This ad by the Greens, however, misses the mark.

A post shared by Australian Greens (@australiangreens)


We like Lady Gaga as much as the next person, but the cultural connection here seems dated and forced. Rather than focus on one key message, the ad instead mentions five separate policy positions. It also doesn’t work without audio. We’d give it a “C-.”

The Labor Party had more of a hit with this meme, though:

It appropriates the Venn diagram, a well-established meme format, which requires a degree of creativity and intelligence to pull off successfully. It makes a clear point, but also doesn’t bash its audience around the head with it. So, we’d give this a “B+”.

One of the best memes we’ve seen recently, however, comes from a Facebook page connected to The Greens:

The Simpsons has become a kind of lingua franca of the internet over the last decade or more, and has been the genesis of many, many popular memes, including during the last federal election.

This meme not only taps into that existing internet culture, and gestures towards one of the show’s sweetest-ever moments in recounting the circumstances of Maggie’s birth, but also cleverly draws on and repurposes one of the attack lines being used against the Greens (“Can’t vote Greens. Not this time”) by the lobby group Advance Australia. It’s a clever piece of communication and one of the few “A”-grade memes we’ve encountered in the campaign so far.

Your turn

Keep an eye on the memes you encounter in the next few weeks in the lead-up to the election on May 3. Which ones do you find effective and why?

But memes are only part of the story. Also consider the positions of the candidates and parties and their substantive policies. Memes, good or bad, can only go so far.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University and Stephen Harrington, Associate Professor of Journalism and Professional Communication, School of Communication, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

These 3 climate misinformation campaigns are operating during the election run-up. Here’s how to spot them

A protestor holds a sign saying 'its getting hot in here'
Markus Spiske / Canva

These 3 climate misinformation campaigns are operating during the election run-up. Here’s how to spot them

Authors Libby Lester, Alfie Chadwick
Date 23 April 2025

Australia’s climate and energy wars are at the forefront of the federal election campaign as the major parties outline vastly different plans to reduce greenhouse gas emissions and tackle soaring power prices.

Meanwhile, misinformation about climate change has permeated public debate during the campaign, feeding false and misleading claims about renewable energy, gas and global warming.

This is a dangerous situation. In Australia and globally, rampant misinformation has for decades slowed climate action – creating doubt, hindering decision-making and undermining public support for solutions.

Here, we explain the history of climate misinformation in Australia and identify three prominent campaigns operating now. We also outline how Australians can protect themselves from misinformation as they head to the polls.

Misinformation vs disinformation

Misinformation is defined as false information spread unintentionally. It is distinct from disinformation, which is deliberately created to mislead.

However, proving intent to mislead can be challenging. So, the term misinformation is often used as a general term to describe misleading content, while the term disinformation is reserved for cases where intent is proven.

Disinformation is typically part of a coordinated campaign to influence public opinion. Such campaigns can be run by corporate interests, political groups, lobbying organisations or individuals.

Once released, these false narratives may be picked up by others, who pass them on and create misinformation.

Climate change misinformation in Australia

In the 1980s and 1990s, Australia’s emissions-reduction targets were among the most ambitious in the world.

At the time, about 60 companies were responsible for one-third of Australia’s greenhouse gas emissions. The government’s plan included measures to ensure these companies remained competitive while reducing their climate impact.

Despite this, Australia’s resource industry began a concerted media campaign to oppose any binding emissions-reduction actions, claiming it would ruin the economy by making Australian businesses uncompetitive.

This narrative persisted even when modelling repeatedly showed climate policies would have minimal economic impacts. The industry arguments eventually found their way into government policy.

Momentum against climate action was also fuelled by a vocal group of climate change-denying individuals and organisations, often backed by multinational fossil fuel companies. These deniers variously claimed that climate change wasn’t happening, that it was caused by natural cycles, or that it wasn’t a serious threat.

These narratives were further exacerbated by false balance in media coverage, whereby news outlets, in an effort to appear neutral, often placed climate scientists alongside contrarians, giving the impression that the science was still unclear.

Together, this created an environment in Australia where climate action was seen as either too economically damaging or simply unnecessary.

What’s happening in the federal election campaign?

Climate misinformation has been circulating in the following forms during this federal election campaign.

1. Trumpet of Patriots

Clive Palmer’s Trumpet of Patriots party ran an advertisement that claimed to expose “the truth about climate change”. It featured a clip from a 2004 documentary, in which a scientist discusses data suggesting temperatures in Greenland were not rising. The scientist in the clip has since said his comments are now outdated.

This type of misinformation is cherry-picking: presenting a single scientific measurement that is at odds with the overwhelming scientific consensus.

Google removed the ad after it was flagged as misleading, but only after it received 1.9 million views.

2. Responsible Future Illawarra

The Responsible Future campaign opposes wind turbines on various grounds, including cost, foreign ownership, power prices, effects on views and fishing, and potential ecological damage.

Scientific evidence indicates offshore wind farms are relatively safe for marine life and cause less harm than boats and fishing gear. Some studies also suggest the infrastructure can create new habitat for marine life.

However, a general lack of research into offshore wind and marine life has created uncertainty that groups such as Responsible Future Illawarra can exploit.

It has cited statements by Sea Shepherd Australia to argue offshore wind farms damage marine life; however, Sea Shepherd said its comments were misrepresented.

3. Australians for Natural Gas

Australians for Natural Gas is a pro-gas group set up by the head of a gas company, which presents itself as a grassroots organisation. Its advertising campaign promotes natural gas as a necessary part of Australia’s fuel mix, and stresses its contribution to jobs and the economy.

The ad campaign implicitly suggests climate action – in this case, a shift to renewable energy – is harmful to the economy, livelihoods and energy security. According to Meta’s Ad Library, these ads have already been seen more than 1.1 million times.

Gas is needed in Australia’s current energy mix. But analysis shows it could be phased out almost entirely if renewable energy and storage were sufficiently increased and business and home electrification continued to rise.

And of course, failing to tackle climate change will cause substantial harm across Australia’s economy.

How to identify misinformation

As the federal election approaches, climate misinformation and disinformation is likely to proliferate further. So how do we distinguish fact from fiction?

One way is through “pre-bunking” – familiarising yourself with common claims made by climate change deniers to fortify yourself against misinformation.

Sources such as Skeptical Science offer in-depth analyses of specific claims.

The SIFT method is another valuable tool. It comprises four steps:

  • Stop
  • Investigate the source
  • Find better coverage
  • Trace claims, quotes and media to their original sources.

As the threat of climate change grows, a flow of accurate information is vital to garnering public and political support for the necessary policy changes.

Alfie Chadwick, PhD Candidate, Monash Climate Change Communication Research Hub, Monash University and Libby Lester, Professor (Research) and Director, Monash Climate Change Communication Hub, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

A weird phrase is plaguing scientific papers – and we traced it back to a glitch in AI training data

Plants growing out of an old computer
Image credit: Pictus Photography / Canva

A weird phrase is plaguing scientific papers – and we traced it back to a glitch in AI training data

Authors Aaron J. Snoswell, Kevin Witzenberger, Rayane El Masr
Date 15 April 2025

Earlier this year, scientists discovered a peculiar term appearing in published papers: “vegetative electron microscopy”.

This phrase, which sounds technical but is actually nonsense, has become a “digital fossil” – an error preserved and reinforced in artificial intelligence (AI) systems that is nearly impossible to remove from our knowledge repositories.

Like biological fossils trapped in rock, these digital artefacts may become permanent fixtures in our information ecosystem.

The case of vegetative electron microscopy offers a troubling glimpse into how AI systems can perpetuate and amplify errors throughout our collective knowledge.

A bad scan and an error in translation

Vegetative electron microscopy appears to have originated through a remarkable coincidence of unrelated errors.

First, two papers from the 1950s, published in the journal Bacteriological Reviews, were scanned and digitised.

However, the digitising process erroneously combined “vegetative” from one column of text with “electron” from another. As a result, the phantom term was created.

Excerpts from scanned papers show how incorrectly parsed column breaks led to the term ‘vegetative electron micro…’ being introduced. Bacteriological Reviews

Decades later, “vegetative electron microscopy” turned up in some Iranian scientific papers. In 2017 and 2019, two papers used the term in English captions and abstracts.

This appears to be due to a translation error. In Farsi, the words for “vegetative” and “scanning” differ by only a single dot.

Screenshot from Google Translate showing the similarity of the Farsi terms for ‘vegetative’ and ‘scanning’. Google Translate

An error on the rise

The upshot? As of today, “vegetative electron microscopy” appears in 22 papers, according to Google Scholar. One was the subject of a contested retraction from a Springer Nature journal, and Elsevier issued a correction for another.

The term also appears in news articles discussing subsequent integrity investigations.

Vegetative electron microscopy began to appear more frequently in the 2020s. To find out why, we had to peer inside modern AI models – and do some archaeological digging through the vast layers of data they were trained on.

Empirical evidence of AI contamination

The large language models behind modern AI chatbots such as ChatGPT are “trained” on huge amounts of text to predict the likely next word in a sequence. The exact contents of a model’s training data are often a closely guarded secret.

To test whether a model “knew” about vegetative electron microscopy, we input snippets of the original papers to find out if the model would complete them with the nonsense term or more sensible alternatives.
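To make the probing method concrete, here is a minimal sketch of such a completion test, assuming OpenAI’s Python SDK (v1+) and the gpt-3.5-turbo-instruct model shown in the screenshot below; the prompt fragment is invented for illustration, not one of the snippets the authors actually used.

    # A minimal sketch of the completion probe, assuming the openai
    # Python SDK (v1+) and an OPENAI_API_KEY in the environment.
    # The prompt is an illustrative fragment, not the authors' actual input.
    from openai import OpenAI

    client = OpenAI()

    prompt = "The samples were then imaged using vegetative "

    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # legacy completions-style model
        prompt=prompt,
        max_tokens=3,
        temperature=0.0,  # deterministic: always take the likeliest token
        logprobs=5,       # also return the top-5 alternatives per step
    )

    print("Completion:", response.choices[0].text)
    # Compare the log-probabilities of the candidate next tokens to see
    # how strongly the model prefers the nonsense continuation.
    print(response.choices[0].logprobs.top_logprobs[0])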

The results were revealing. OpenAI’s GPT-3 consistently completed phrases with “vegetative electron microscopy”. Earlier models such as GPT-2 and BERT did not. This pattern helped us isolate when and where the contamination occurred.

We also found the error persists in later models including GPT-4o and Anthropic’s Claude 3.5. This suggests the nonsense term may now be permanently embedded in AI knowledge bases.

Screenshot of a command line program showing the term ‘vegetative electron microscopy’ being generated by GPT-3.5 (specifically, the model gpt-3.5-turbo-instruct). The top 17 most likely completions of the provided text are ‘vegetative electron microscopy’, and these suggestions are 2.2 times more likely than the next most likely prediction. OpenAI

By comparing what we know about the training datasets of different models, we identified the CommonCrawl dataset of scraped internet pages as the most likely vector where AI models first learned this term.

The scale problem

Finding errors of this sort is not easy. Fixing them may be almost impossible.

One reason is scale. The CommonCrawl dataset, for example, is millions of gigabytes in size. For most researchers outside large tech companies, the computing resources required to work at this scale are inaccessible.

Another reason is a lack of transparency in commercial AI models. OpenAI and many other developers refuse to provide precise details about the training data for their models. Research efforts to reverse engineer some of these datasets have also been stymied by copyright takedowns.

When errors are found, there is no easy fix. Simple keyword filtering could deal with specific terms such as vegetative electron microscopy. However, it would also eliminate legitimate references (such as this article).
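As a toy illustration of that trade-off, the naive filter below drops every document containing the flagged phrase, including one that merely discusses the error; the term list and the three “documents” are invented for illustration.

    # A toy illustration of keyword filtering and its collateral damage.
    # The flagged-term list and the corpus are invented for illustration.
    FLAGGED_TERMS = {"vegetative electron microscopy"}

    corpus = [
        "Morphology was assessed by vegetative electron microscopy.",
        "Scanning electron microscopy revealed the surface structure.",
        "We traced 'vegetative electron microscopy' to a scanning error.",
    ]

    def keep(document: str) -> bool:
        """Keep a document only if it contains no flagged term."""
        text = document.lower()
        return not any(term in text for term in FLAGGED_TERMS)

    cleaned = [doc for doc in corpus if keep(doc)]
    # Both the genuine error AND the legitimate discussion of the error
    # are removed; only the middle document survives.
    print(cleaned)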

More fundamentally, the case raises an unsettling question. How many other nonsensical terms exist in AI systems, waiting to be discovered?

Implications for science and publishing

This “digital fossil” also raises important questions about knowledge integrity as AI-assisted research and writing become more common.

Publishers have responded inconsistently when notified of papers including vegetative electron microscopy. Some have retracted affected papers, while others defended them. Elsevier notably attempted to justify the term’s validity before eventually issuing a correction.

We do not yet know if other such quirks plague large language models, but it is highly likely. Either way, the use of AI systems has already created problems for the peer-review process.

For instance, observers have noted the rise of “tortured phrases” used to evade automated integrity software, such as “counterfeit consciousness” instead of “artificial intelligence”. Additionally, phrases such as “I am an AI language model” have been found in other retracted papers.

Some automatic screening tools such as Problematic Paper Screener now flag vegetative electron microscopy as a warning sign of possible AI-generated content. However, such approaches can only address known errors, not undiscovered ones.

Living with digital fossils

The rise of AI creates opportunities for errors to become permanently embedded in our knowledge systems, through processes no single actor controls. This presents challenges for tech companies, researchers, and publishers alike.

Tech companies must be more transparent about training data and methods. Researchers must find new ways to evaluate information in the face of AI-generated convincing nonsense. Scientific publishers must improve their peer review processes to spot both human and AI-generated errors.

Digital fossils reveal not just the technical challenge of monitoring massive datasets, but the fundamental challenge of maintaining reliable knowledge in systems where errors can become self-perpetuating.

Aaron J. Snoswell, Research Fellow in AI Accountability, Queensland University of Technology; Kevin Witzenberger, Research Fellow, GenAI Lab, Queensland University of Technology, and Rayane El Masri, PhD Candidate, GenAI Lab, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

What’s your TikTok personality profile? New citizen science project helps you find out

AI-generated TikTok event stage with logos, vertical screens, smoke, neon lights, and fruits like oranges, apples, and coconuts
Shutterstock AI/Shutterstock

What’s your TikTok personality profile? New citizen science project helps you find out

Author ADM+S Centre
Date 17 April 2025

Ever wondered why certain videos show up in your TikTok Feed? Does TikTok know exactly what you like, or does it nudge you to like things? Whether you’re scrolling, liking, or making content, the TikTok algorithm is learning from you.

Using a new tool created by researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at QUT and The University of Sydney, TikTok users can now see themselves through the eyes of the algorithm. 

Launched ahead of the upcoming federal election, the For You Research Project explores how TikTok’s powerful recommendation algorithm is influencing culture, creativity, and public discourse in Australia.

“We’re using a new and exciting approach which is based on citizen science,” says lead researcher Patrik Wikström.

“TikTok is a highly personalised platform,” says Professor Wikström, and it’s “shaping what Australians see online”.

Participants are invited to join the project to learn what is shaping their TikTok Feed and get a new perspective on the role of TikTok in their lives.

This research will help researchers understand TikTok’s recommendation system and how it shapes culture, creativity, and public debates in Australia. 

The project explores three key questions:

  • What content does TikTok recommend to Australian users, and how does this shape our culture?
  • How do different people experience and interpret the algorithm?
  • How do TikTok creators adapt their strategies to stay visible and relevant?

Whether you’re a casual scroller, an avid content creator, or just curious about what your feed says about you, the For You Research Project is an opportunity to see TikTok from a whole new perspective.

SEE ALSO

ADM+S Research Fellow shares research on libraries and public values at international conferences

Dr Hegarty at the AlgoSoc International Scientific Conference 2025 on ‘The Future of Public Values in the Algorithmic Society’ in Amsterdam on 11 April (photo credit: Sander Kruit)

ADM+S Research Fellow shares research on libraries and public values at international conferences

Author ADM+S Centre
Date 15 April 2025

In April 2025, ADM+S Research Fellow Dr Kieran Hegarty visited the Netherlands and United Kingdom to present his research on how changing publishing and distribution markets are reshaping how cultural institutions, particularly libraries, fulfil their mandates and serve their users. Libraries are long-standing public institutions that remain key social and cultural infrastructure, but like other civil society actors and public institutions, they face significant challenges in an age of AI and automated decision-making (ADM).

Funded by ADM+S Research Training Support Funds, Dr Hegarty presented his research at the inaugural Born-Digital Collections, Archives and Memory (BDCAM) Conference. The conference was organised by the Digital Humanities Research Hub in the School of Advanced Study at the University of London from 2-4 April 2025 and brought together leading academics and professionals developing and researching digital collections and archives across the world.

In a panel session on how large commercial digital platforms have reshaped the work of building and studying cultural collections, Dr Hegarty presented findings from his PhD research on how the twin forces of automation and commercialisation have changed how major public library collections are formed and studied. He drew on his ethnographic and historical fieldwork at the National Library of Australia and the State Library of New South Wales to detail how libraries negotiate an environment where access to information of long-term public interest is increasingly controlled by commercial platforms.

ADM+S Research Fellow Dr Kieran Hegarty presents a paper on platformisation and archives at the Born-Digital Collections, Archives and Memory Conference at the University of London, 2 April 2025 (photo credit: Alex Rumford and the School of Advanced Study)

In the Q&A, Dr Hegarty pointed to the data donation approach—taken by ADM+S researchers in signature projects such as the Australian Ad Observatory and the Australian Search Experience—as a possible alternative or supplement to platform-controlled access to social media data for libraries, as well as the associated challenges of ethics, privacy, and inclusion. Dr Hegarty’s talk was raised in the final plenary, where leading researchers from across Europe and America reflected on the themes and highlights of the conference.

The BDCAM conference was held at the historic Senate House in London. The 1930s building housed the British Ministry of Information during the Second World War, responsible for censorship and propaganda, and was purportedly the inspiration for the ‘Ministry of Truth’ in George Orwell’s Nineteen Eighty-Four. Given that commercial and state control over what and how knowledge is produced, disseminated, and authorised continues to be a critical issue, the building was a fitting site to explore how power over archives has operated and continues to operate.

Dr Hegarty then joined ADM+S Centre Director, Distinguished Professor Julian Thomas, at the inaugural AlgoSoc Conference, held at the historic Felix Meritis building in Amsterdam from 10-11 April.

Also at the AlgoSoc Conference, Distinguished Prof Julian Thomas presented at the opening panel discussion on ‘Rethinking public values and AI governance in the algorithmic age’, and Laura Gartry, ADM+S Research Student, presented her poster ‘Implementing editorial values in audio recommendations’.

Prof Julian Thomas presenting with Prof José van Dijck, Prof Abraham Bernstein and Prof Natali Helberger (left to right) at the AlgoSoc Conference. Photo credit: Kieran Hegarty.

Funded by the Dutch Ministry of Education, Culture and Science, AlgoSoc is a major ten-year research program that explores how to ensure public values like fairness, accountability, and transparency are protected in a society where more and more decisions are made by algorithms and AI systems.

AlgoSoc shares many affinities with ADM+S. Both research programs share an interest not just in the technical design of automated systems, but in the institutional, social, and political arrangements that shape them, and how these arrangements are reshaped as sectors of public interest increasingly engage with a shifting constellation of actors and interests surrounding AI and ADM systems.

Dr Hegarty presented his paper, “Public libraries in the algorithmic society: An evolving site for the negotiation of public values”, co-authored with Professor Thomas and ADM+S Chief Investigator Professor Anthony McCosker, as part of a panel on “Sociotechnical infrastructures” chaired by Professor José van Dijck from Utrecht University.

The paper focused on how changing publishing and distribution markets over the past three decades have led to a renegotiation and rearticulation of the public values associated with libraries, particularly their commitment to ongoing and inclusive public access to published material. Other panellists shared similar challenges in relation to education, urban planning, and welfare provision across Europe, illustrating the cross-cutting issues affecting a range of sectors of public interest in different parts of the world.

“My attendance at BDCAM and AlgoSoc allowed me to share research from ADM+S with an international network of scholars, practitioners, and policymakers working at the intersection of technology and society,” said Dr Hegarty. “It also provided valuable opportunities to build and strengthen connections with leading research centres and explore future collaborations around how public values are being rearticulated as sectors of public interest engage with automated decision-making systems and AI and, in doing so, become increasingly entangled in the cultures and politics that surround these technologies”.

These events highlighted the growing global interest in understanding how public interest and values-led institutions are negotiating their roles, responsibilities, and the values they’re expected to uphold in an age of ADM, particularly when increasingly reliant on actors with very different interests, priorities, and resources.

The issues raised also underscored the importance of interdisciplinary and cross-sectoral work like that undertaken by ADM+S in ensuring that public values are not only considered but actively embedded in the design, governance, and operation of automated systems.

Dr Hegarty’s participation in these conferences reflects ADM+S’s ongoing commitment to supporting early career researchers to develop international partnerships and contribute to global conversations about digital futures grounded in equity, inclusion, and the public good.

SEE ALSO

DeepSeek and the Future of AI: Congressional Testimony from Julia Stoyanovich

Julia Stoyanovich testifying at a U.S. House Committee hearing
Image supplied by Assoc Prof Julia Stoyanovich

DeepSeek and the Future of AI: Congressional Testimony from Julia Stoyanovich

Author ADM+S Centre
Date 12 April 2025

On 9 April, Associate Prof Julia Stoyanovich, Director of the Center for Responsible AI at NYU Tandon School of Engineering and Partner Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society, testified at the Research & Technology Subcommittee Hearing – DeepSeek: A Deep Dive.

Her testimony focused on the national security and competitive advantage implications of DeepSeek for the US.

“It was an honor and a privilege to testify at the U.S. House of Representatives today, at a Research & Technology Subcommittee Hearing of the Committee on Science, Space, and Technology,” said Prof Stoyanovich.

In her remarks, Professor Stoyanovich offered three key recommendations with regards to the technology implications of DeepSeek:

Recommendation 1: Foster an Open Research Environment
To close the strategic gap, the federal government must support an open, ambitious research ecosystem. This includes robust funding for fundamental AI science, public datasets, model development, and compute access. The National AI Research Resource (NAIRR) is essential here—providing academic institutions, startups, and public agencies with tools to compete globally. Federal support for the National Science Foundation and other agencies is vital to sustaining open research and building a skilled AI workforce.

Recommendation 2: Incentivise Transparency Across the AI Lifecycle
Transparency drives progress, safety, and accountability. The government should require public disclosure of model architecture, training regimes, and evaluation protocols in federally funded AI work—and incentivize similar practices in commercial models. Public benchmarks, shared leaderboards, and reproducibility audits can raise the floor for all developers.

Recommendation 3: Establish a strong data protection regime
The U.S. must lead not only in AI performance, but in responsible, privacy-respecting AI infrastructure. This includes clear guardrails on how AI models collect and use data, especially when deployed in sensitive sectors. It also means restricting exposure of U.S. data to jurisdictions that lack safeguards. International frameworks like GDPR offer useful reference points—but our approach must reflect U.S. values and strategic interests.

About the Hearing

The hearing examined DeepSeek’s AI models, which have drawn international attention for achieving comparable performance to U.S. models while using less advanced chips and appearing more cost-effective. The session also explored the role of U.S. technologies in DeepSeek’s development and how federal support can drive innovation in the private sector.

Other expert witnesses included Adam Thierer (R Street Institute), Gregory Allen (Center for Strategic and International Studies), and Tim Fist (Institute for Progress).

Another related hearing will be held Wednesday by the House Energy and Commerce Committee, focusing on the federal role in accelerating advancements in computing.

View the hearing Research and Technology Subcommittee Hearing – DeepSeek: A Deep Dive on YouTube.

SEE ALSO

ADM+S Researchers to present at International conference on research and development in information retrieval

Abstract image with laptop and search bar

ADM+S Researchers to present at International conference on research and development in information retrieval

Author ADM+S Centre
Date 11 April 2025

Several researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) have been accepted to present their work at the ACM SIGIR 2025 Conference on Research and Development in Information Retrieval, the leading international forum in the field.

SIGIR is the premier international forum for the presentation of new research results and for the demonstration of new systems and techniques in information retrieval. The conference consists of five days of full papers, short papers, resource & reproducibility papers, perspectives papers, system demonstrations, doctoral consortium, tutorials, and workshops focused on research and development in the area of information retrieval. The conference will be held 13-18 July 2025 in Padova, Italy.

The following ADM+S research will be presented at the conference:

  • Classifying Term Variants in Query Formulation (full paper)
    Nuha AbuOnq (ADM+S Research Student), co-authored with Prof Falk Scholer (ADM+S Associate Investigator).
  • The Effects of Demographic Instructions on LLM Personas (short paper)
    Angel Magnossão de Paula (ADM+S Affiliate), co-authored with Prof Shane Culpepper, Prof Alistair Moffat, Sachin Pathiyan Cherumanal (ADM+S Research Student), Prof Falk Scholer (ADM+S Associate Investigator), and Dr Johanne Trippas.
  • PUB: An LLM-Enhanced Personality-Driven User Behaviour Simulator for Recommender System Evaluation (paper)
    Dr Chenglong Ma (ADM+S Research Student)
  • Characterising Topic Familiarity and Query Specificity Using Eye-Tracking Data (short paper)
    Jiaman He (ADM+S Research Student), co-authored with Zikang Leng, Dr Dana McKay, Dr Johanne Trippas, and Dr Damiano Spina (ADM+S Associate Investigator).

Prof Flora Salim (ADM+S Chief Investigator) and Prof Maarten de Rijke (ADM+S Partner Investigator) will also be co-hosting the second edition of the MANILA – SIGIR Workshop, a series focused on leveraging information retrieval to address the impacts of climate change.

SEE ALSO

Tools like Apple’s photo Clean Up are yet another nail in the coffin for being able to trust our eyes

Apple Clean Up highlights photo elements that might be deemed distracting. Image credit: T.J. Thomson

Tools like Apple’s photo Clean Up are yet another nail in the coffin for being able to trust our eyes

Author T.J. Thomson
Date 11 April 2025

You may have seen ads by Apple promoting its new Clean Up feature that can be used to remove elements in a photo. When one of these ads caught my eye this weekend, I was intrigued and updated my software to try it out.

The feature has been available in Australia since December for Apple customers with certain hardware and software capabilities. It’s also available for customers in New Zealand, Canada, Ireland, South Africa, the United Kingdom and the United States.

The tool uses generative artificial intelligence (AI) to analyse the scene and suggest elements that might be distracting. You can see those highlighted in the screenshot below.

Apple uses generative AI to identify elements, highlighted here in red, that might be distracting in photos. It then allows users to remove these with the tap of a finger. T.J. Thomson

You can then tap the suggested element to remove it or circle elements to delete them. The device then uses generative AI to try to create a logical replacement based on the surrounding area.
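Apple hasn’t published how Clean Up synthesises the replacement, but the general “remove and fill from the surroundings” idea can be sketched with classical, non-generative inpainting in OpenCV; the file names and mask coordinates below are placeholders.

    # A rough analogue of remove-and-fill using classical inpainting in
    # OpenCV -- not Apple's actual generative method, just the same idea
    # of synthesising a plausible patch from the surrounding pixels.
    import cv2
    import numpy as np

    image = cv2.imread("street_scene.jpg")  # placeholder file name

    # White pixels mark the region to remove (here, a hypothetical
    # circular region around an unwanted element).
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.circle(mask, center=(240, 180), radius=40, color=255, thickness=-1)

    # Fill the masked region from its neighbourhood (Telea's method).
    result = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
    cv2.imwrite("street_scene_cleaned.jpg", result)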

Easier ways to deceive

Smartphone photo editing apps have been around for more than a decade, but now, you don’t need to download, pay for, or learn to use a new third-party app. If you have an eligible device, you can use these features directly in your smartphone’s default photo app.

Apple’s Clean Up joins a number of similar tools already offered by various tech companies. Those with Android phones might have used Google’s Magic Editor. This lets users move, resize, recolour or delete objects using AI. Users with select Samsung devices can use their built-in photo gallery app to remove elements in photos.

There have always been ways – analogue and, more recently, digital – to deceive. But integrating them into existing software in a free, easy-to-use way makes those possibilities so much easier.

Using AI to edit photos or create new images entirely raises pressing questions around the trustworthiness of photographs and videos. We rely on the vision these devices produce in everything from police body and traffic cams to insurance claims and verifying the safe delivery of parcels.

If advances in tech are eroding our trust in pictures and even video, we have to rethink what it means to trust our eyes.

How can these tools be used?

The idea of removing distracting or unwanted elements can be attractive. If you’ve ever been to a crowded tourist hotspot, removing some of the other tourists so you can focus more on the environment might be appealing (check out the slider below for an example).

But beyond removing distractions, how else can these tools be used?

Some people use them to remove watermarks. Watermarks are typically added by photographers or companies trying to protect their work from unauthorised use. Removing these makes the unauthorised use less obvious, but no more legal.

Others use them to alter evidence. For example, a seller might edit a photo of a damaged good to allege it was in good condition before shipping.

As image editing and generating tools become more widespread and easier to use, the list of uses balloons proportionately. And some of these uses can be unsavoury.

AI generators can now make realistic-looking receipts, for example. People could then try to submit these to their employer to get reimbursed for expenses not actually incurred.

Can anything we see be trusted anymore?

Considering these developments, what does it mean to have “visual proof” of something?

If you think a photo might be edited, zooming in can sometimes reveal anomalies where the AI has stuffed up. Here’s a zoomed-in version of some of the areas where the Clean Up feature generated new content that doesn’t quite match the old.

Tools like Clean Up sometimes create anomalies that can be spotted with the naked eye.
T.J. Thomson

It’s usually easier to manipulate one image than to convincingly edit multiple images of the same scene in the same way. For this reason, asking to see multiple outtakes that show the same scene from different angles can be a helpful verification strategy.

Seeing something with your own eyes might be the best approach, though this isn’t always possible.

Doing some additional research might also help. For example, with the case of a fake receipt, does the restaurant even exist? Was it open on the day shown on the receipt? Does the menu offer the items allegedly sold? Does the tax rate match the local area’s?

Manual verification approaches like the above obviously take time. Trustworthy systems that can automate these mundane tasks are likely to grow in popularity as the risks of AI editing and generation increase.

Likewise, there’s a role for regulators to play in ensuring people don’t misuse AI technology. In the European Union, Apple’s plan to roll out its Apple Intelligence features, which include the Clean Up function, was delayed due to “regulatory uncertainties”.

AI can be used to make our lives easier. Like any technology, it can be used for good or bad. Being aware of what it’s capable of and developing your visual and media literacies is essential to being an informed member of our digital world.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Amazon’s new Alexa policy sparks privacy concerns

Alexa smart speaker
Credit: Olemedia/Getty Images

Amazon’s new Alexa policy sparks privacy concerns

Author ADM+S Centre
Date 10 April 2025

Users of Amazon’s Alexa-enabled Echo devices in Australia and around the world may have noticed something different — or perhaps they haven’t. That’s part of the concern, according to experts.

In a recent interview with ABC Radio National’s Life Matters, Prof Daniel Angus, Chief Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society, explained that Amazon has made a significant and controversial change: all audio captured by its Echo smart speakers is now automatically sent to the cloud by default.

Users can opt out, but doing so limits the device’s ability to personalise responses and learn user preferences. The move has sparked a new wave of concern over consumer privacy, AI hype, and the growing power of Big Tech.

“This move is diabolical,” Prof Angus told Life Matters. “It breaks that fundamental trust.”

A Symptom of the AI Hype Cycle

Prof Angus argues that Amazon’s decision is not just a privacy issue, but part of a broader, more concerning trend.

“We’re in a hype cycle around AI,” he said. “Companies need us to believe in the idea of AI to maintain growth. This is not just about functionality — it’s about market dominance and feeding the myth of inevitable AI revolution.”

At the core of this trend is data. More data means better AI models, and smart speaker interactions — even something as simple as setting a timer — are a rich source of training material.

He pointed to Amazon’s market saturation and reliance on growth-at-all-costs as motivations for expanding data collection practices without clear consumer benefit.

Privacy or Access: A Sophie’s Choice?

Virtual assistants have real benefits, particularly for people with accessibility needs. But according to Angus, users should not have to choose between functionality and their right to privacy.

“Audio is incredibly private,” he said. “It’s gold for accessibility, but it can also be incredibly revealing.”

Historically, much of the processing by virtual assistants happened on-device, a method known as edge computing. This approach enabled commands to be interpreted locally, enhancing both performance and privacy. But the shift toward cloud-based processing threatens this balance.

Angus urged regulators to act, warning that without intervention, consumers could be locked into unfair trade-offs.

“We do this through regulation. Specifically, through privacy reform,” he said. “Privacy settings are fundamental to stopping companies from exploiting our data for capital gain.”

Reform on the Horizon?

Australia is currently reviewing its privacy frameworks, with new attention on children’s data and AI regulation. Angus suggested that Amazon’s move may be a catalyst for change.

“I think they’ve overplayed their hand,” he said. “This could be a wake-up call for both the public and policymakers.”

Listen to the full interview How your virtual assistant is listening to you on ABC Listen.

SEE ALSO

New study explores how autistic adults use non-human supports for wellbeing

Report cover: Autism Supports for comfort, care and connection. Megan Catherine Rose, Deborah Lupton

New study explores how autistic adults use non-human supports for wellbeing

Author ADM+S Centre
Date 4 April 2025

A new autistic-led project, Autism Supports for Comfort, Care and Connection, reveals the everyday and creative ways autistic adults use objects, services, and creatures to support their wellbeing.

Conducted by Dr Megan Rose, research fellow, and Prof Deborah Lupton, from the ARC Centre of Excellence for Automated Decision-Making and Society at UNSW, the study interviewed 12 autistic Australians about the non-human supports they rely on for entertainment, social connection, special interests, burnout recovery, sensory challenges, and overall wellbeing.

Participants also imagined their ideal new support system tailored to their needs.

To visually represent these experiences, autistic graphic illustrator Sarah Firth was commissioned to create unique ‘portraits’ of each participant. Using anonymised interview transcripts, Sarah crafted illustrations that depict the challenges, coping strategies, and special interests of each individual—without ever seeing or meeting them.

The resulting booklet combines these portraits with lay-language participant narratives, offering a powerful and personal look at how autistic people engage with non-human supports in their daily lives.

“Importantly, this is an autistic-led project with a strengths-based approach. Megan and I wanted to focus on identifying not only the challenges faced by autistic people, but also the amazingly inventive ways they made their lives more comfortable and joyful,” Professor Lupton said.

Watch the online report launch on Youtube Autism Supports for Comfort, Care and Connection
View the publication Autism Supports for Comfort, Care, and Connection
Watch the documentary Non-Human Supports Used by Autistic People for Connection, Health and Wellbeing

SEE ALSO

Can you tell the difference between real and fake news photos? Take the quiz to find out

A (real) photo of a protester dressed as Pikachu in Paris on March 29 2025. Remon Haazen / Getty Images

Can you tell the difference between real and fake news photos? Take the quiz to find out

Author T.J. Thomson
Date 2 April 2025

You wouldn’t usually associate Pikachu with protest.

But a figure dressed as the iconic yellow Pokémon joined a protest last week in Turkey to demonstrate against the country’s authoritarian leader.

And then a virtual doppelgänger made the rounds on social media, raising doubt in people’s minds about whether what they were seeing was true. (Just to be clear, the image in the post shown below is very much fake.)

This is the latest in a spate of incidents involving AI-generated (or AI-edited) images that can be made easily and cheaply and that are often posted during breaking news events.

Doctored, decontextualised or synthetic media can cause confusion, sow doubt, and contribute to political polarisation. The people who make or share these media often benefit financially or politically from spreading false or misleading claims.

How would you go at telling fact from fiction in these cases? Have a go with this quiz and learn more about some of AI’s (potential) giveaways and how to stay safer online.



How’d you go?

As this exercise might have revealed, we can’t always spot AI-generated or AI-edited images with just our eyes. Doing so will also become harder as AI tools become more advanced.

Dealing with visual deception

AI-powered tools exist to try to detect AI content, but these have mixed results.

Running suspect images through a search engine to see where else they have been published – and when – can be a helpful strategy. But this relies on there being an original “unedited” version published somewhere online.

Perhaps the best strategy is something called “lateral reading”. It means getting off the page or platform and seeing what trusted sources say about a claim.

Ultimately, we don’t have time to fact-check every claim we come across each day. That’s why it’s important to have access to trustworthy news sources that have a track record of getting it right. This is even more important as the volume of AI “slop” increases.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Generative AI is already being used in journalism – here’s how people feel about it

Indonesia’s TVOne launched an AI news presenter in 2023. T.J. Thomson

Generative AI is already being used in journalism – here’s how people feel about it

Authors T.J. Thomson, Michelle Riedlinger, Phoebe Matich, Ryan J. Thomas
Date 2 April 2025

Generative artificial intelligence (AI) has taken off at lightning speed in the past couple of years, creating disruption in many industries. Newsrooms are no exception.

A new report published today finds that news audiences and journalists alike are concerned about how news organisations are – and could be – using generative AI such as chatbots, image, audio and video generators, and similar tools.

The report draws on three years of interviews and focus group research into generative AI and journalism in Australia and six other countries (United States, United Kingdom, Norway, Switzerland, Germany and France).

Only 25% of our news audience participants were confident they had encountered generative AI in journalism. About 50% were unsure or suspected they had.

This suggests a potential lack of transparency from news organisations when they use generative AI. It could also reflect a lack of trust between news outlets and audiences.

Who or what makes your news – and how – matters for a host of reasons.

Some outlets tend to use more or fewer sources, for example. Or use certain kinds of sources – such as politicians or experts – more than others.

Some outlets under-represent or misrepresent parts of the community. This is sometimes because the news outlet’s staff themselves aren’t representative of their audience.

Carelessly using AI to produce or edit journalism can reproduce some of these inequalities.

Our report identifies dozens of ways journalists and news organisations can use generative AI. It also summarises how comfortable news audiences are with each.

The news audiences we spoke to overall felt most comfortable with journalists using AI for behind-the-scenes tasks rather than for editing and creating. These include using AI to transcribe an interview or to provide ideas on how to cover a topic.

But comfort is highly dependent on context. Audiences were quite comfortable with some editing and creating tasks when the perceived risks were lower.

The problem – and opportunity

Generative AI can be used in just about every part of journalism.

For example, a photographer could cover an event. Then, a generative AI tool could select what it “thinks” are the best images, edit the images to optimise them, and add keywords to each.

Computer software can try to recognise objects in images and add keywords, leading to potentially more efficient image processing workflows. Elise Racine/Better Images of AI/Moon over Fields, CC BY

These might seem like relatively harmless applications. But what if the AI identifies something or someone incorrectly, and these keywords lead to mis-identifications in the photo captions? What if the criteria humans think make “good” images are different to what a computer might think? These criteria may also change over time or in different contexts.

Even something as simple as lightening or darkening an image can cause a furore when politics are involved.

AI can also make things up completely. Images can appear photorealistic but show things that never happened. Videos can be entirely generated with AI, or edited with AI to change their context.

Generative AI is also frequently used for writing headlines or summarising articles. These sound like helpful applications for time-poor individuals, but some news outlets are using AI to rip off others’ content.

AI-generated news alerts have also gotten the facts wrong. As an example, Apple recently suspended its automatically generated news notification feature. It did this after the feature falsely claimed US murder suspect Luigi Mangione had killed himself, with the source attributed as the BBC.

What do people think about journalists using AI?

Our research found news audiences seem to be more comfortable with journalists using AI for certain tasks when they themselves have used it for similar purposes.

For example, the people interviewed were largely comfortable with journalists using AI to blur parts of an image. Our participants said they used similar tools on video conferencing apps or when using the “portrait” mode on smartphones.

Likewise, when you insert an image into popular word processing or presentation software, it might automatically create a written description of the image for people with vision impairments. Those who’d previously encountered such AI descriptions of images felt more comfortable with journalists using AI to add keywords to media.

Popular word processing and presentation software can automatically generate alt-text descriptions for images that are inserted into documents or presentations. T.J. Thomson

The most frequent way our participants encountered generative AI in journalism was when journalists reported on AI content that had gone viral.

For example, when an AI-generated image purported to show Princes William and Harry embracing at King Charles’s coronation, news outlets reported on this false image.

Our news audience participants also saw notices that AI had been used to write, edit or translate news articles. They saw AI-generated images accompanying some of these. This is a popular approach at The Daily Telegraph, which uses AI-generated images to illustrate many of its opinion columns.

The Daily Telegraph frequently turns to generative AI to illustrate its opinion columns, sometimes generating more photorealistic illustrations and sometimes less photorealistic ones. T.J. Thomson

Overall, our participants felt most comfortable with journalists using AI for brainstorming or for enriching already created media. This was followed by using AI for editing and creating. But comfort depends heavily on the specific use.

Most of our participants were comfortable with turning to AI to create icons for an infographic. But they were quite uncomfortable with the idea of an AI avatar presenting the news, for example.

On the editing front, a majority of our participants were comfortable with using AI to animate historical images, like this one. AI can be used to “enliven” an otherwise static image in the hopes of attracting viewer interest and engagement.

A historical photograph from the State Library of Western Australia’s collection has been animated with AI (a tool called Runway) to introduce motion to the still image.
T.J. Thomson

Your role as an audience member

If you’re unsure if or how journalists are using AI, look for a policy or explainer from the news outlet on the topic. If you can’t find one, consider asking the outlet to develop and publish a policy.

Consider supporting media outlets that use AI to complement and support – rather than replace – human labour.

Before making decisions, consider the past trustworthiness of the journalist or outlet in question, and what the evidence says.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University; Michelle Riedlinger, Associate Professor in Digital Media, Queensland University of Technology; Phoebe Matich, Postdoctoral Research Fellow, Generative Authenticity in Journalism and Human Rights Media, ADM+S Centre, Queensland University of Technology, and Ryan J. Thomas, Associate Professor, Washington State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

How is rental tech changing the way we rent? Share your experience

Rear view of woman looking using smartphone while looking at real estate sign, planning to rent a house. Buying a new home. Property investment. Mortgage loans.
Credit: Oscar Wong/Getty Images

How is rental tech changing the way we rent? Share your experience

Author ADM+S Centre
Date 31 March 2025

Are you a renter in Australia with experience using digital rental platforms? A new research project is looking for participants to share their experiences with ‘RentTech’ and its impact on housing justice.

PhD researcher Samantha Floreani from the ARC Centre of Excellence for Automated Decision-Making and Society at Monash University is conducting a study on the growing influence of digital technologies in the residential real estate sector.

These technologies—sometimes referred to as ‘RentTech’—include online rental application platforms (such as 2Apply, Sorted, Ignite, and Snug), property management apps (Kolmeo, Cubbi, ConsoleTenant), and rent payment platforms (Rental Rewards, Ailo, SimpleRent), among others. The research aims to explore how these technologies affect renters’ experiences and housing justice in Australia.

Samantha says, “Against the backdrop of the ongoing housing crisis, renters are increasingly interacting with digital technologies at every stage of their housing experience.

“These tools come with promises of increased convenience, efficiency, and profit for real estate agents and landlords—but what do they mean for renters? Through this study, I aim to find out.”

Participants will take part in a one-on-one interview, discussing their interactions with RentTech and demonstrating an app, website, or platform they have used. The interview, which lasts approximately 60 minutes, can be conducted online via Zoom or in person at a mutually convenient location.

To participate, you should have some experience with, or opinion on, RentTech and also experience with Australia’s private rental market, though you do not need to have a current tenancy agreement.

All interviews will be recorded, transcribed, and anonymised to protect confidentiality.

Your insights will contribute to research that aims to centre renters’ voices in discussions about digital real estate technology. Findings from the study will help inform advocacy and policymaking efforts related to renters’ rights and housing justice.

For more information visit The Machine-Readable Renter website.

SEE ALSO

What makes a good search engine? These 4 models can help you use search in the age of AI

Internet search, computer search, hand out of computer with magnifying glass, quick search, search, internet icon.
Credit: beast01/Shutterstock

What makes a good search engine? These 4 models can help you use search in the age of AI

Authors Simon Coghlan, Damiano Spina, Falk Scholer and Hui Chia
Date 26 March 2025

Every day, users ask search engines millions of questions. The information we receive can shape our opinions and behaviour.

We are often not aware of their influence, but internet search tools sort and rank web content when responding to our queries. This can certainly help us learn more things. But search tools can also return low-quality information and even misinformation.

Recently, large language models (LLMs) have entered the search scene. While LLMs are not search engines, commercial web search engines have started to include LLM-based artificial intelligence (AI) features in their products. Microsoft’s Copilot and Google’s AI Overviews are examples of this trend.

AI-enhanced search is marketed as convenient. But, together with other changes in the nature of search over the last decades, it raises the question: what is a good search engine?

Our new paper, published in AI and Ethics, explores this. To make the possibilities clearer, we imagine four search tool models: Customer Servant, Librarian, Journalist and Teacher. These models reflect design elements in search tools and are loosely based on matching human roles.

The four models of search tools

Customer Servant

Workers in customer service give people the things they request. If someone asks for a “burger and fries”, they don’t query whether the request is good for the person, or whether they might really be after something else.

The search model we call Customer Servant is somewhat like the first computer-aided information retrieval systems introduced in the 1950s. These returned sets of unranked documents matching a Boolean query – using simple logical rules to define relationships between keywords (e.g. “cats NOT dogs”).
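To make this concrete, here is a minimal, purely illustrative sketch in Python of Boolean matching; the tiny document set and the function are invented for illustration, and no such code appears in the paper. A query such as “cats NOT dogs” simply returns the unranked set of documents that contain the first term and lack the second:

    # Illustrative sketch of 1950s-style Boolean retrieval (assumed example).
    # A document matches "cats NOT dogs" if it contains "cats" and
    # does not contain "dogs"; results are an unranked set.
    docs = {
        1: "cats and kittens at the shelter",
        2: "cats and dogs living together",
        3: "a training guide for dogs",
    }

    def boolean_not(must_have, must_not):
        return {
            doc_id
            for doc_id, text in docs.items()
            if must_have in text.split() and must_not not in text.split()
        }

    print(boolean_not("cats", "dogs"))  # {1}

The output is just the set of matching documents: no ranking and no judgement about which match is most useful, which is precisely what separates this model from the Librarian described next.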

Librarian

As the name suggests, this model somewhat resembles human librarians. The Librarian model also provides content that people request, but it doesn’t always take queries at face value.

Instead, it aims for “relevance” by inferring user intentions from contextual information such as location, time or the history of user interactions. Classic web search engines of the late 1990s and early 2000s that rank results and provide a list of resources – think early Google – sit in this category.

Close-up of two people's hands exchanging a stack of books.
Librarians don’t just retrieve information, they strive for relevance.
Tyler Olson/Shutterstock

Journalist

Journalists go beyond librarians. While often responding to what people want to know, journalists carefully curate that information, at times weeding out falsehoods and canvassing various public viewpoints.

Journalists aim to make people better informed. The Journalist search model does something similar. It may customise the presentation of results by providing additional information, or by diversifying search results to give a more balanced list of viewpoints or perspectives.

Teacher

Human teachers, like journalists, aim at giving accurate information. However, they may exercise even more control: teachers may strenuously debunk erroneous information, while pointing learners to the very best expert sources, including lesser-known ones. They may even refuse to expand on claims they deem false or superficial.

LLM-based conversational search systems such as Copilot or Gemini may play a roughly similar role. By providing a synthesised response to a prompt, they exercise more control over presented information than classic web search engines.

They may also try to explicitly discredit problematic views on topics such as health, politics, the environment or history. They might reply with “I can’t promote misinformation” or “This topic requires nuance”. Some LLMs convey a strong “opinion” on what is genuine knowledge and what is unedifying.

No search model is best

We argue each search tool model has strengths and drawbacks.

The Customer Servant is highly explainable: every result can be directly tied to keywords in your query. But this precision also limits the system, as it can’t grasp broader or deeper information needs beyond the exact terms used.

The Librarian model uses additional signals like data about clicks to return content more aligned with what users are really looking for. The catch is these systems may introduce bias. Even with the best intentions, choices about relevance and data sources can reflect underlying value judgements.

The Journalist model shifts the focus toward helping users understand topics, from science to world events, more fully. It aims to present factual information and various perspectives in balanced ways.

This approach is especially useful in moments of crisis – like a global pandemic – where countering misinformation is critical. But there’s a trade-off: tweaking search results for social good raises concerns about user autonomy. It may feel paternalistic, and could open the door to broader content interventions.

The Teacher model is even more interventionist. It guides users towards what it “judges” to be good information, while criticising or discouraging access to content it deems harmful or false. This can promote learning and critical thinking.

But filtering or downranking content can also limit choice, and raises red flags if the “teacher” – whether algorithm or AI – is biased or simply wrong. Current language models often have built-in “guardrails” to align with human values, but these are imperfect. LLMs can also hallucinate plausible-sounding nonsense, or avoid offering perspectives we might actually want to hear.

Staying vigilant is key

We might prefer different models for different purposes. For example, since teacher-like LLMs synthesise and analyse vast amounts of web material, we may sometimes want their more opinionated perspective on a topic, such as on good books, world events or nutrition.

Yet sometimes we may wish to explore specific and verifiable sources about a topic for ourselves. We may also prefer search tools to downrank some content – conspiracy theories, for example.

LLMs make mistakes and can mislead with confidence. As these models become more central to search, we need to stay aware of their drawbacks, and demand transparency and accountability from tech companies on how information is delivered.

Striking the right balance with search engine design and selection is no easy task. Too much control risks eroding individual choice and autonomy, while too little could leave harms unchecked.

Our four ethical models offer a starting point for robust discussion. Further interdisciplinary research is crucial to define when and how search engines can be used ethically and responsibly.

Simon Coghlan, Senior Lecturer in Digital Ethics, Centre for AI and Digital Ethics, School of Computing and Information Systems, The University of Melbourne; Damiano Spina, Senior Lecturer, School of Computing Technologies, RMIT University; Falk Scholer, Professor of Information Access and Retrieval, RMIT University, and Hui Chia, PhD Candidate in Law, The University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Why voting in a fact-checking void should worry you

False/Fact words overlaid on an abstract image of Elon Musk and Mark Zuckerberg
Illustration by Michael Joiner, 360info, images via James Duncan Davidson & Jose Luis Magana CC BY 4.0

Why voting in a fact-checking void should worry you

Authors Ned Watt and Michelle Riedlinger
Date 25 March 2025

The loss of Australia’s go-to political fact-checker and the rise of AI tools have created a crisis for political accountability just as the nation’s voters prepare to go to the polls.

Professional fact-checkers have never been under more pressure and social media users face a complex and fast-evolving misinformation landscape.

It’s crucial voters understand the situation in the lead-up to the vote.

This federal election will be the first without RMIT ABC Fact Check, which completed its first fact check during the Rudd-Abbott election of 2013.

It will also be Australia’s first federal election since the release of ChatGPT and other generative AI tools that have heralded a new normal of AI-generated political advertising and propaganda.

The risk to accountability is a win for vested interests in Australia’s political and media systems. It means there is even more potential for those vested interests to manipulate information for their own benefit rather than the public good.

Australia needs political parties to commit to the ethical use of AI in their campaigning, as well as bipartisan support for improved human-AI detection tools, built by and for fact-checkers and journalists, to strengthen media information integrity systems.

What happened to political fact-checking

Independent fact-checking has faced a public legitimacy crisis in the past few years, mirroring similar crises of trust in news.

The crisis is driven in part by politicians denigrating online investigative research, a trend linked to distrust of the fact-checking movement among far-right politicians and their allies around the world.

In Australia, the fact-checking arrangement between the ABC and RMIT University ended in 2024, amid a media furore that began in the lead-up to the 2023 Voice to Parliament referendum. Conservative media depicted RMIT FactLab, another entity under RMIT’s professional fact-checking wing, as grossly biased.

Claims of fact-checkers’ political bias hinge on observations that right-leaning voices tend to share news content that diverges from established consensus more often, resulting in a relatively high proportion of their claims being fact-checked.

The suspension of RMIT FactLab’s membership of Meta’s third-party fact-checking program cast a long shadow over the credibility of fact-checking, reflecting similar questions to those recently posed in the United States about the role of truth in politics.

Australia still has two locally-owned fact-checking units – ABC’s in-house fact-checker ABC News Verify and the Australian Associated Press (AAP) fact-checking service – as well as AFP Australia, the local division of Agence France-Presse’s (AFP) fact-checking operation.

Australian fact-checkers have been part of a push for political accountability and depolarisation, responding to the concerns of Australians about the interplay between private interests in politics and media organisations and the public interest, including the roles of big tech in moderating information online.

There have been calls for greater accountability and transparency in news reporting, but fact-checkers worldwide have experienced setbacks.

At the start of the year, Meta announced that it was ending its third-party fact-checking program in the United States and making changes to its content-moderation policies. The changes would amplify political content and allow content targeting vulnerable minorities that it previously considered contentious and divisive.

This move signalled a crisis for professional fact-checkers, journalists and misinformation researchers.

Meta boss Mark Zuckerberg, under pressure from Donald Trump and other conservative critics of Meta’s third-party fact-checking program, claimed that US fact-checking was akin to censorship. That echoed accusations of partisan censorship in Europe, the Philippines and Australia.

Alternatives to independent fact-checking

Zuckerberg claims the answers to Meta’s controversial information integrity problems will be found in a Community Notes-style program, modelled on the program developed on Twitter and currently employed on Elon Musk’s X.

While such an approach could provide some value in terms of contextualising misleading content, it does little to address complex online harms.

Recent studies have found independent fact-checkers are frequently cited in Community Notes, and that successful community moderation relies on professional fact-checking.

Human-AI approaches are also increasing, with X’s Community Notes employing a bridging algorithm that requires agreement among contributors with differing viewpoints before a correction is posted on content.

However, there are flaws in that system.

Professional fact-checking has been notoriously challenging to scale, so fact-checkers have also been experimenting with AI-based approaches. However, such efforts are limited by time and resources.

What this means for Australia

There are already signs of problematic AI use in political communication, including footage of politicians being edited with AI to engage audiences, often at the expense of other candidates or parties.

This is done by carrying unsanctioned or uncharacteristic messaging to attack, or sow confusion around, certain policies or politicians, through parody as well as deception.

While Meta recently committed to labelling content that it identifies as being generated with AI, evidence suggests that labelling content as generated does little to reduce its perceived credibility. In other words, the power of AI for political communication is not just its ability to deceive, but to persuade — both cheaply and at scale.

These practices could deceive or manipulate voters, and could even lead to a loss of faith in institutions or to authentic evidence being discredited.

The defunding and delegitimisation of professional fact-checkers threatens their ability to provide context and explanation and impedes their investigative abilities to better understand the problematic media landscape.

The end of platform-supported fact-checking in the United States also sets a precedent for digital platforms to enter into covert agreements with elected officials, furthering individual political or economic agendas, instead of creating policies that serve the public interest.

In Australia, there is the potential for future political dealmaking between influencers or power brokers, platform owners like Musk and Zuckerberg and segments of the Australian elite, which would cause more public confusion and disillusionment.

Ned Watt is a PhD candidate at the ARC Centre of Excellence for Automated Decision-Making and Society at the Queensland University of Technology Digital Media Research Centre.

Mr Watt’s research is funded in part by the Global Journalism Innovation Lab (GJIL).

Michelle Riedlinger is an Associate Professor and Associate Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society at the Queensland University of Technology’s School of Communication.

Originally published under Creative Commons by 360info™.

SEE ALSO

Chinese social media platform RedNote a new battleground ahead of federal election

Silhouette of three people on mobile phones with RedNote logo in the background

Chinese social media platform RedNote a new battleground ahead of federal election

Author ADM+S Centre
Date 24 March 2025

As Australia approaches its federal election, concerns are mounting over the spread of misinformation and disinformation on the Chinese social media platform RedNote, known to Mandarin speakers as Xiaohongshu, or the “little red book”. RedNote is increasingly used by Australian politicians to connect with Chinese Australians.

In addition to informational and educational content, deepfake videos, politically or commercially driven misleading content, and shadow banning are emerging as key issues in the digital landscape, raising alarm over the integrity of online political discourse.

In a recent investigation, the ABC uncovered a deepfake video featuring a manipulated clip of Opposition Leader Peter Dutton speaking Mandarin on RedNote.

The video uses legitimate footage from an interview where Dutton discusses the Indigenous flag, but AI has altered it to make it appear as though he is speaking Mandarin. In the video, Dutton appears to suggest that Indigenous flags should not be displayed at press conferences, a claim that is misleading and taken out of context.

ARC Centre of Excellence for Automated Decision-Making and Society researchers Dr Fan Yang, from the University of Melbourne, and Dr Robbie Fordyce, from Monash University, discussed the issue in an interview on ABC’s The World Today.

Dr Fan Yang studies Australian political information on Chinese-language social media services and warns that such deepfake videos are not isolated incidents. She notes that other misleading content has spread on the platform, such as videos implying the Albanese government is arresting temporary migrants, and commercially driven threatening messages about Australia’s new policies on immigration and housing.

In these cases, the videos are often taken out of context, with captions misrepresenting the events.

Further complicating matters, Dr Yang points to the absence of official voices on RedNote, such as the Australian Electoral Commission (AEC), to prebunk and debunk false or misleading information, and notes that the narrow scope of what public agencies classify as “misinformation” and “disinformation” limits their capacity for effective intervention. She also highlights troubling instances of shadow banning, where Australian politicians’ accounts and content are hidden from Chinese Australian users.

“If you search for the name of a politician, you wouldn’t even be able to find their account,” Dr Yang explains.

“This raises concerns that Chinese Australians are being exposed to an increasingly one-sided view of political events.”

Following the publication of the ABC’s investigative report, on 24 March ADM+S affiliated PhD researcher Dan Dai identified platform intervention affecting the latest content under the hashtags #澳大利亚大选 and #澳洲大选 (meaning “Australian election”) on RedNote. No recent content appears in search results for these terms.

The impact of misinformation on Chinese Australians became particularly apparent during the 2023 Voice referendum, with many expressing anxiety over the potential constitutional changes. The research team has released an interim report on the issue.

Dr Robbie Fordyce notes that misinformation often exploited existing fears among migrant communities, portraying the referendum as granting undue power to Indigenous Australians, which in turn would disadvantage migrant communities.

“They were interpreting the referendum as giving Indigenous Australians massive constitutional power, which would subordinate other groups,” Fordyce explained.

Although experts have raised questions about the potential influence of international actors, such as the Chinese government, Dr Fordyce stressed that their research found no evidence of a coordinated campaign to manipulate the platform for political purposes, aside from the influence of Chinese internet governance, which regulates permissible discussions.

Despite this, he acknowledged that existing fear and concerns often drive people to share misleading content.

Experts agree that better access to reliable, Chinese-language journalism could alleviate some of these issues.

Dr Fordyce believes that providing accurate, well-researched news could help Chinese Australians better navigate the complex political landscape.

“[With sufficient funding and support], if there was a rich Chinese language news source with good journalistic ethics, that could address concerns and provide correct information, it would really help these people,” he said.

In response to growing concerns, an AEC spokesperson said the commission is continuously monitoring the social media environment to engage with voters, despite limited resources.

As Australian politicians continue to use RedNote and WeChat as a tool to engage with Chinese Australians, the integrity of information on the platform remains a critical issue, with both misinformation and the silencing of political voices posing significant challenges to the upcoming election.

This project is led and conducted by Dr Fan Yang, with research assistance from Dan Dai, Stevie Zhang, and Mengjie Cai at the University of Melbourne, and co-led by Dr Robbie Fordyce at Monash University and Dr Luke Heemsbergen at Deakin University. Between 2024 and 2025, the project is funded by the Susan McKinnon Foundation.

SEE ALSO

Researchers to investigate the use of Generative AI by non-English speaking students in tertiary education

Generative AI words over a world map. Concept showing an artificial intelligence creative mind for generating music, images and speech.

Researchers to investigate the use of Generative AI by non-English speaking students in tertiary education

Author Kathy Nickels
Date 17 March 2025

Associate Professor Michelle Riedlinger from the ARC Centre of Excellence for Automated Decision-Making and Society at QUT, along with colleagues Dr Xiaoting Yu and Dr Mimi Tsai, has secured funding to investigate the factors driving non-English speaking background (NESB) students’ use of GenAI, and strategies to improve learning outcomes in applying AI ethically and professionally.

The study will be conducted as a longitudinal study of master’s students, using a combination of sprint interviews and follow-up discussions.

“We’re grappling with how higher education, the Australian research community and the professional communication sector are responding to these technologies and so we’re excited to investigate these understudied use cases, which are so important for our students,” says Associate Professor Riedlinger.

The findings from this study are expected to benefit international students across various programs at QUT.

Dr Mimi Tsai, a co-researcher on the project and QUT Learning Designer, explained that the study aims to reduce added stress experienced by NESB students. 

“NESB students already balance new professional commitments, visa restrictions, unfamiliar educational systems, and the need for stronger industry connections and enhanced digital skills,” she said.

Dr Xiaoting Yu, an Affiliate Investigator from the Digital Media Research Centre at QUT and the lead researcher on the project, emphasised the importance of the study for filling gaps in higher education research. 

“Although there has been significant research on GenAI in tertiary education, little attention has been given to master’s coursework students from Non-English Speaking Backgrounds,” she said.

The anticipated outcomes of the study include the development of an adaptable framework that addresses the needs of NESB student cohorts, with broad applicability across the faculty’s undergraduate and master’s programs.

This study, Investigating the Generative AI capabilities and needs of students from non-English speaking backgrounds: A longitudinal study of master students’ evolving engagement with AI at QUT, has received funding through QUT’s CIESJ Learning and Teaching seed funding scheme.

SEE ALSO

RMIT partners with the Office of the National Broadcasting and Telecommunications Commission of Thailand to address digital access and policy

MoU signatories (left to right) Mr Trairat Viriyasirikul, Professor Saskia Loer Hansen and Distinguished Professor Julian Thomas.

RMIT partners with the Office of the National Broadcasting and Telecommunications Commission of Thailand to address digital access and policy

Author Kathy Nickels
Date 17 March 2025

The Office of the National Broadcasting and Telecommunications Commission of Thailand (Office of the NBTC), and RMIT University, Australia, have formalised a new partnership with the signing of a Memorandum of Understanding (MOU) on 17 February 2025.

This landmark agreement aims to foster international collaboration in academic and research endeavours, contributing to the shared goals of addressing global challenges related to digital access and policy-oriented research.

Office of the NBTC, a leading independent state body that regulates broadcasting, television, radiocommunications, and telecommunications across Thailand, one of the 10 member states of the Association of Southeast Asian Nations (ASEAN), will collaborate with RMIT.

The partnership will also involve researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) to advance research and development in areas critical to shaping future policies and regulatory decisions.

By addressing critical issues related to the digital divide, the collaboration aims to promote more equitable access to technology, while also strengthening the policy frameworks essential for fostering social and economic development.

“Bringing together the expertise of Office of the NBTC, RMIT, and the ADM+S, this MOU will create new avenues for impactful research and policy analysis,” said Distinguished Professor Julian Thomas, Director of ADM+S.

The MOU emphasises a commitment to conducting in-depth research that will influence policy development, guiding the future of broadcasting and telecommunications regulation in both nations.

“We are proud to enter into this collaboration with Office of the NBTC, as it represents an important step toward addressing complex challenges in the digital domain,” said Distinguished Professor Thomas.

“Through our shared expertise and combined efforts, we aim to make a lasting impact on global digital policy and regulatory frameworks.”

Pictured above MoU signatories: Mr Trairat Viriyasirikul, Acting Secretary-General, Office of The National Broadcasting and Telecommunications Commission of the Kingdom of Thailand; Professor Saskia Loer Hansen, Deputy Vice-Chancellor International and Engagement and Vice-President, RMIT University; and Distinguished Professor Julian Thomas, Director of the ARC Centre of Excellence for Automated Decision-Making and Society at RMIT University.

SEE ALSO

How digital giants let poll scrutiny fall

Meta sign
Wikimedia Commons: Nokia621 CC BY-SA 4.0

How digital giants let poll scrutiny fall

Authors Axel Bruns and Samantha Vilkins
Date 27 February 2025

The changing social media world, already hostile to oversight, is making monitoring election activity even more difficult. Yet policymakers still have options.

A seismic change in the social media landscape — described by one industry insider as a ‘Cambrian explosion’ in digital options — poses fundamental challenges to those who would monitor the digital world.

Mature social media platforms like Facebook are being challenged by new players in an environment increasingly hostile to researchers and regulators.

That has huge ramifications heading towards the Australian federal election — for politicians as well as those who would monitor them.

That does not mean Australia is powerless in the fight for online transparency. There are initiatives policymakers could and should adopt, including some already in place in other jurisdictions.

While Australia lags in its approach and such initiatives will not necessarily guarantee full transparency, they would represent at least a step in the right direction, one that policymakers appear reluctant to pursue.

An evolving environment
Online campaigning for the 2025 Australian federal election takes place in a rapidly changing online environment.

The online platform landscape was broadly stable for the past few federal elections.

Twitter was a central place for news tracking by journalists, politicians, activists and other dedicated news followers, and hashtags like #ausvotes and #auspol were reliable gathering points.

Public outreach to voters and occasional discussion was most common on Facebook. The increased popularity of platforms like Instagram and TikTok required parties to come up with more visually engaging campaign content.

In addition to their ‘organic’ posting, political parties and lobby groups spent millions on advertising across these platforms, sometimes mixing authorised campaign messaging with covert attack ads and disinformation.

Online advertising might at first seem easier to track than physical flyers but in practice, layers of obfuscation enable misleading ads to go largely unnoticed.

Problematic content spread by front groups that are loosely associated with official campaigns can exploit Australia’s lack of ‘truth in political advertising’ laws as well as the lax enforcement of advertising standards by digital platforms.

What is changing
The environment is substantially different in 2025 as old platforms decline — in both use and quality — and new social media spaces emerge.

Market leader Facebook has continued its slow decline as its userbase ages and younger Australians opt for what they see as more interesting platforms like TikTok. Twitter, now known as X, has turned into a cesspool of unchecked abuse, hate speech, disinformation and even fascist agitation under Elon Musk’s leadership.

A substantial proportion of X users have moved to new platforms such as Mastodon and Bluesky or reduced their overall online activity.

Other new operators are also seeking to attract some of these X refugees.

This epochal change, which the former head of trust and safety for pre-Musk Twitter, Yoel Roth, described as a ‘Cambrian explosion’, has substantial consequences for how politicians and parties must approach their online campaigning.

It also has consequences for those who have to scrutinise that campaigning, such as the Australian Electoral Commission.

For such observers, it has become considerably more difficult to identify and highlight unethical campaigning, disinformation, and formal violations of campaign rules. Even when they do, it is unlikely that platforms like X and Facebook will act to address these issues.

Evading scrutiny
Several of the major platforms now actively undermine critical scrutiny of themselves and of the actions of the political actors using their platforms. Before the 2024 US election, Meta shut down its previous data access tool CrowdTangle, which had enabled limited tracking of public pages and groups on Facebook and a selection of public profiles on Instagram.

Its replacement, the Meta Content Library, is accessible only to academic researchers who face a complicated and exclusionary sign-on process and is still largely untested and unknown.

X shut down its Academic API, a free data service that enabled the large-scale and in-depth analysis of user activities on the platform. Its new API offering is priced out of reach of any researcher or watchdog.

TikTok claims to provide a researcher API, but it has been unreliable and is not available in Australia. YouTube also offers a researcher API, but its accreditation process is cumbersome.

Only new kid on the block Bluesky offers the kind of full and free access to public posting activity on its platform that Twitter once did.

This active and deliberate evasion of critical scrutiny matters, opening the door for nefarious political actors to operate without fear of retribution.

The lack of direct visibility also makes it much harder to generate robust and comprehensive evidence of those activities and easier for platforms to dismiss legitimate concerns.

Lacking effective access to platform data, researchers and other scrutineers have been forced to resort to unauthorised methods that include user data donations and web scraping.

In those cases, platforms now often act more forcefully against this scrutiny itself, rather than against the actual problems scrutiny has revealed.

Mandating research access
There are promising initiatives to enforce greater platform transparency, but Australia still lags.

The European Union’s Digital Services Act (DSA) requires any social media platform with more than 45 million EU-based users a month to provide data access for legitimate research purposes.

This is a crucial initiative, but platforms have interpreted their obligations differently, from the Meta Content Library’s compliance with the letter, if not the spirit, of the law to X’s outright refusal to comply, despite EU threats.

Meta’s Mark Zuckerberg and X’s Elon Musk have already asked the Trump administration for protection from EU regulation, which they falsely describe as ‘censorship’.

Australia does not have the regulatory clout of the European Union, but has the opportunity to ride the DSA’s coat-tails by implementing similar regulation here.

Regulatory alignment with other nations makes it easier for digital platforms to simply extend their DSA compliance responses, such as they are, to Australia. Those responses will still be grudging and insufficient in many cases, but are better than nothing.

Australian policymakers should support the aims of the DSA. They have recently shown a surprising appetite for digital media regulation – albeit largely misdirected towards the failed News Media Bargaining Code or the disastrous idea of banning young people from social media.

Whether that appetite also extends to making social media platforms more transparent remains to be seen.

Much like campaign finance reform or truth in political advertising regulation, greater transparency around social media campaigning would also curtail parties’ own election campaigning opportunities, after all.

Professor Axel Bruns is an Australian Laureate Fellow, Professor in the Digital Media Research Centre at Queensland University of Technology, and a Chief Investigator in the ARC Centre of Excellence for Automated Decision-Making and Society.

Dr Samantha Vilkins is a research associate at QUT’s Digital Media Research Centre. She researches how evidence and expertise are distributed and discussed online, especially their role in the dynamics of political polarisation.

Professor Bruns is a member of Meta’s Instagram Expert Group. He and Dr Vilkins receive funding from the Australian Research Council through Laureate Fellowship FL210100051 Dynamics of Partisanship and Polarisation in Online Public Debate.

Originally published under Creative Commons by 360info™.

SEE ALSO

Research reveals potential bias in Large Language Models’ text relevance assessments

Conceptual and abstract digital generated image of multiple AI chat icons hovering over a digital surface
Getty Images/J Studios

Research reveals potential bias in Large Language Models’ text relevance assessments

Author ADM+S Centre
Date 14 March 2025

A recent study has uncovered significant concerns surrounding the use of Large Language Models (LLMs) to assess the relevance of information, particularly in passage labelling tasks.

This research investigates how LLMs label passages of text as “relevant” or “non-relevant,” raising new questions about the accuracy and reliability of these models in real-world applications, especially when they are used to train ranking systems or replace humans for relevance assessment.

The study, which received the “Best Paper Honorable Mention” at the SIGIR-AP Conference on Information Retrieval in Tokyo in December 2024, compares the relevance labels produced by various open-source and proprietary LLMs with human judgments.

It finds that, while some LLMs agree with human assessors at levels similar to the human-to-human agreement measured in past research, they are more likely to label passages as relevant. This suggests that while LLMs’ “non-relevant” labels are generally reliable, their “relevant” labels may not be as dependable.

Marwah Alaofi, a PhD student at the ARC Centre of Excellence for Automated Decision-Making and Society, supervised by Prof Mark Sanderson, Prof Falk Scholer, and Paul Thomas, conducted the study as part of her research into measuring the reliability of LLMs for creating relevance labels.

“Our study highlights a critical blind spot in how Large Language Models (LLMs) assess document relevance to user queries,” said Marwah.

This discrepancy, the research finds, is often due to LLMs being fooled by the presence of the user’s query terms within the labelled passages, even when the passage is unrelated to the query or consists of random text.

“We found that LLMs are likely to overestimate relevance, influenced by the mere presence of query words in documents, and can be easily misled into labelling irrelevant or even random passages as relevant.”

The research suggests that in production environments, LLMs might be vulnerable to keyword stuffing and other SEO strategies, which are often used to promote the relevance of web pages.

“This raises concerns about their use in replacing human assessors for evaluating and training search engines. These limitations could be exploited through keyword stuffing and other Search Engine Optimization (SEO) strategies to manipulate rankings.”

This study underscores the critical need to go beyond the traditional evaluation metrics to better assess the reliability of LLMs in relevance assessment.

SEE ALSO

5 signs of toxic division — and how to beat them

Online voting concept. Man and woman near laptop with referendum and election campaign. Freedom of choice and speech. Electronic vote. Cartoon flat vector illustration isolated on white background
Credit: Rudzhan Nagiev/Getty Images

5 signs of toxic division — and how to beat them

Authors Katharina Esau, Axel Bruns and Tariq Choucair
Date 13 March 2025

Australian voters are being targeted by divisive ‘them vs us’ strategies that overshadow policy debate. Here are the signs and ways to move past the soundbites.

Politicians and media organisations are setting the stage for an Australian election where division is a deliberate strategy to mobilise supporters, discredit opponents and split undecided voters.

Polarisation is already shaping the national conversation and it’s a tactic born out of much more than just differing views.

Democratic debate thrives on differing opinions but excessive polarisation pushes discussion away from constructive engagement and into entrenched conflict that has negative consequences for democracy and broader society.

Voters need to know how to spot the signs of those ‘conflict strategies’ and to question them; to look past the soundbites for information they can trust that doesn’t break every debate down to ‘us vs them’.

Negative campaigning — attacking instead of selling your own policies — has been a feature of democratic elections for centuries. Such tactics are designed to stir emotional reactions about an opponent.

It has become a standard election strategy.

Australians might particularly associate it with former Liberal Prime Minister Tony Abbott. Abbott was known for his ruthless negativity as opposition leader from 2009 and his attack-ad-driven campaign in 2013.

How polarisation turns destructive
Now tactics have shifted to a form of strategic polarisation designed to do much more than merely discredit opponents — now the aim is to stoke all-encompassing divisions across society.

These tactics were seen in spectacular fashion in the 2016 US general election.

They were mirrored globally, with examples including Jair Bolsonaro in Brazil, Rodrigo Duterte in the Philippines and the 2017 presidential election in France.

Politicians framed their opponents as an existential threat, encouraging voters not just to support their own side but to despise their opponents.

In Australia, US influence looms large and Trump’s return to the presidency has emboldened politicians here to double down on similar strategies.

In a healthy democracy, competing parties debate ideas, disagree strongly and propose diverging solutions.

However, when polarisation becomes destructive it has potentially severe consequences for democracy and societal cohesion.

There are five key symptoms of destructive polarisation, all seen in recent Australian and global political contests.

When dialogue becomes impossible
A key symptom is that communication between opposing sides ceases to function. Rather than engaging in constructive debate, political actors, media producers and the public either avoid meaningful interaction or reduce their exchanges to misrepresenting, insulting and attacking each other.

This can be seen when party leaders trade insults and shout slogans during campaign debates, rather than debating policy, and in quips or ‘fails’ later pushed on social media.

Australia’s winner-takes-all political system — where coalitions between parties are rare — further exacerbates this. For political leaders, the ability to find compromise and build consensus is seen as a weakness rather than a strength.

When facts don’t matter
Political actors and supporters might also dismiss information outright, based on the source rather than the content.

This might target think tanks or media outlets seen as aligned with one side of politics. Even independent institutions or entire professions such as researchers, public servants or journalists might be dismissed as inherently biased.

Social media users then employ the same strategy, rejecting information based on its source rather than engaging with the information.

When policy becomes a slogan war
Destructive polarisation thrives on reducing nuanced debates to misleading black-and-white choices.

Translating a complex problem or policy into simple terms is one thing, but something else is at play here.

Instead of explaining the issue and their proposed policies, candidates often oversimplify by attacking their political opponents or sometimes by blaming minorities. The message becomes: “If you support us, the problem will be solved. If you support them, we are all doomed.”

The goal of this kind of messaging is to reduce a complex issue to partisan blame or the scapegoating of entire social groups — such as migrants — while ignoring the factors contributing to a policy problem.

When the loudest dominate
In highly polarised environments, moderate perspectives are drowned out in favour of extreme voices that generate engagement and conflict.

This is what’s behind the attacks on supposedly ‘woke’ policies in the US and their importation into Australian politics in recent years.

Ordinary Australians care a great deal more about the cost of living than they care about culture wars — but such battles against imaginary enemies make for great political theatre and don’t require the long-term effort needed to manage economic policy.

When emotion is weaponised
Strategically polarising campaigns rely on stoking fear, resentment and moral outrage to mobilise supporters and silence the opposition.

Expressing emotions in debate is natural and human, but research shows that when emotions are directed at opponents rather than issues, maintaining constructive debate becomes particularly difficult.

This use of emotion is now a component of political campaigning toolkits — almost all Australian parties and their associated lobby groups have run scare campaigns at some point.

For example, the conservative lobby group Advance Australia stoked fear and doubt during the 2023 Voice referendum, while Queensland Labor used 2016 election day text messages to play on fears of Medicare privatisation.

Emotional appeals in campaigning are made destructive not by the emotion itself, but when it is directed at the political ‘other’ or their supporters. This fuels a vicious cycle of accusations over who initiated the attacks, leaving voters with little choice but to take sides.

How to resist
As the federal election approaches, Australians need to be aware of how these tactics are used to manipulate them. Political leaders and media outlets will continue to frame debates to maximise division and present choices as stark moral conflicts rather than complex policy decisions.

To resist that destructive polarisation, they need to:

  • Question narratives that present opponents as enemies rather than competitors. Actively engage with people who hold different views. Consider the substance of political suggestions, not just who is making them.
  • Look for balanced sources of information that provide context, not just conflict. Find sources you can trust, and not just because they might share your views.
  • Leave space for ambivalence and compromise instead of committing fully to any one side. Consider if there are more than just two stark choices.
  • Avoid judging people and their contributions based on soundbites and headlines. Engage in longer conversations about complex issues.
  • Express emotions but don’t use them to attack, exclude or manipulate others. Beware of efforts designed to play on your own emotions.

Polarisation is not inevitable, but without critical engagement it will continue to erode democratic discourse.

Recognising the symptoms of strategic division is the first step towards restoring a political culture where debate is about ideas — not just winning or losing.

Dr Katharina Esau is a Digital Media Research Centre research fellow at Queensland University of Technology. She is Chief Investigator of the research project ‘NewsPol: Measuring and Comparing News Media Polarisation’.

Professor Axel Bruns is an Australian Laureate Fellow, Professor in the Digital Media Research Centre at QUT and a Chief Investigator in the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S).

Dr Tariq Choucair is a QUT Digital Media Research Centre research fellow and an Affiliate at the ADM+S. He investigates online political talk and deep disagreements, especially about political minority rights.

The authors’ research covered in this article was undertaken with funding from the Australian Research Council through Laureate Fellowship FL210100051 Dynamics of Partisanship and Polarisation in Online Public Debate. Professor Bruns is also a member of Meta’s Instagram Expert Group.

The authors would like to acknowledge the contributions of Dr Samantha Vilkins, Dr Sebastian F. K. Svegaard, Kate S. O’Connor-Farfan and Carly Lubicz-Zaorski, who are leading further research in this space.

Originally published under Creative Commons by 360info™.

SEE ALSO

Half-truths and lies: an online day in Australia

Person browsing on mobile phone
Pexels/Los Muertos Crew

Half-truths and lies: an online day in Australia

Authors T.J. Thomson and Aimee Hourigan
Date 13 March 2025

Australians are swamped by misinformation every day but they’re smart enough to know they need help to better navigate an untrustworthy online world.

False online claims about business and the economy top the list of misinformation concerns for Australians and research indicates they are screaming out for help on how to deal with it.

In some ways, it’s not surprising misinformation on the economy rates so highly during a cost-of-living crisis and with a federal election looming — finance-related scams are also a concern — but they’re just a few areas highlighted as Australians drown in a sea of questionable claims every day.

Online misinformation and disinformation have been labelled bigger short-term global threats than climate change or war, so improving media literacy is a critical step in fighting it.

That need is stark considering researchers — whose report, Online misinformation in Australia, was published late last year — found more than half of dodgy information is reported as coming from news sources, whether that be traditional or alternative forms of media.

Australians encounter hundreds of claims each day through channels that might include listening to a podcast, scrolling social media or reading the news and when surfing the Internet to shop, learn or seek entertainment.

The challenge is assessing how many of these claims are true and how confident Australians can be in their abilities to separate fact from fiction.

Half of us encounter misinformation weekly
The researchers found more than half of Australians encounter misinformation in a typical week and 97 percent of Australians have poor or limited ability to verify claims they encounter online.

The research has shone a light on the sources of everyday misinformation, the topics covered and how and where those claims are communicated. It also offers suggestions on how to respond.

Research participants were asked to document online news and information they saw each day for a week and rate its trustworthiness.

More than 20 percent of the 1,600 examples provided were perceived by the participants to have false or misleading claims.

Those misleading claims weren’t limited to the usual suspects such as health or political information, but ranged across other topics that included celebrity news, entertainment and sports.

False or misleading claims about business and economics were the most prevalent. A cost-of-living crisis and heightened focus on money can attract both those who don’t have it and those who want to exploit the vulnerable for financial, political or other gain.

It’s only logical then that scams feature high on the list of misinformation threats worrying Australians.

The research also examined the sources of false or misleading claims.

News outlets are supposed to be sources of accurate, credible information but, surprisingly, were responsible for 58 percent of the dodgy claims.

Participants were particularly critical of ‘spammy’ and clickbait headlines.

Social media accounts comprised 18 percent of the examples.

Researchers studied exactly what form misinformation took, finding written claims were the most frequent, accounting for 68 percent of all examples.

Other formats, such as social media posts, made up 18 percent and video 11 percent, while images (3 percent) and audio (1 percent) made up much smaller proportions.

This doesn’t necessarily mean there are fewer spoken or visual claims online that are false or misleading. It might mean people find it harder to fact-check them, or lack the literacy or opportunity to check whether what they’re seeing or hearing is true.

It’s much easier to copy a written claim and see what other sources say about it compared with trying to dictate or describe a claim found in spoken or visual form to check its accuracy.

What audiences want
With Australia recently announcing the development of a national media literacy strategy, and social media platforms rolling back or abandoning fact-checking efforts, this research reveals that people want access to media literacy support as a response to misinformation.

Media literacy refers to the ability to evaluate and ask critical questions of the different media people access, use, create and share.

Adopting a media literacy approach to misinformation can be incredibly powerful, building critical knowledge and the ability to identify, evaluate and reflect on false or misleading claims.

Research participants’ interest in media literacy was high and they particularly wanted to build skills to help them evaluate sources of information and claims. They wanted to know how to gauge the reliability and trustworthiness of a source, as well as how to identify the intent behind different claims.

One Sydney respondent said: “It’s recognizing whether a piece of information or content is just simply trying to inform you versus a piece of information that is trying to persuade you into doing something.”

Respondents also reiterated the importance of involving key public institutions, such as schools and government, to support media education. They saw the news media as having responsibilities to deliver accurate and trustworthy information.

At its core, media literacy seeks to provide individuals with the knowledge and capabilities to thrive in society — and that can only help them better navigate an untrustworthy online world.

Dr T.J. Thomson is an ARC DECRA Fellow, a member of the ARC Centre of Excellence for Automated Decision-Making and Society, and a senior lecturer at RMIT University, where he co-leads the News, Technology, and Society Network. A majority of his research centres on the visual aspects of news and journalism and on the concerns and processes relevant to those who make, edit and present visual news.

Dr Aimee Hourigan is a postdoctoral research fellow in the Institute for Culture and Society at Western Sydney University. She is currently working on an ARC Linkage Project focussing on Australian adults’ experiences with identifying, navigating and assessing misinformation online.

The authors’ research covered in this article was supported by the Australian Government through the Australian Research Council’s Linkage Projects funding scheme (project LP220100208).

Originally published under Creative Commons by 360info™.

SEE ALSO

ADM+S Submission cited in new Parliament report on the Use and Governance of AI Systems by Public Sector Entities

ADM+S Submission cited in new Parliament report on the Use and Governance of AI Systems by Public Sector Entities

Author Natalie Campbell
Date 7 March 2025

The Joint Committee of Public Accounts and Audit has published its report on the Inquiry into the Use and Governance of AI by Public Sector Entities, citing the ADM+S submission throughout.

Responding to the steep increase in AI adoption by public sector entities identified during the audit of the 2022-23 Commonwealth Financial Statements, the Committee established a specific Inquiry into the Use and Governance of AI by Commonwealth Entities in September 2024.

Chair of the Committee, Hon Linda Burney MP explained, “The issue that was fundamental to this inquiry was whether the existing governance and oversight of this technology matches its rapid and continuing advancement.

“Policy frameworks must be equipped to adequately assess the great promise that AI brings but also understand the inherent and significant risks that accompany its use.”

In February 2025 the Committee released a report titled ‘Proceed with Caution’, which provides four key recommendations.

  1. The Australian Public Service Commission to introduce questions on the use and understanding of artificial intelligence and other emerging technologies into its annual APS Employee Census.
  2. The Australian Government convenes a whole-of-Government working group within 12 months of this report to develop key frameworks for managing sovereign risks and ensuring that biases resulting from the adoption of these technologies can be effectively mitigated.
  3. The Australian Government establishes a statutory Joint Committee on Artificial Intelligence and Emerging Technologies to provide effective and continuous Parliamentary oversight of the adoption of these systems across the Australian government and more widely.
  4. Any guidance issued by the Digital Transformation Agency, or any other Australian Government agency, should clearly define all AI systems and applications.

In addition to addressing the Inquiry’s terms of reference, the ADM+S submission led by Prof Kimberlee Weatherall included three other areas of research that raise important considerations around the use of AI in the public sector: disability and accessibility, environmental impact, and trauma-informed approaches.

The 24 October submission reads, “The public sector should, in its use of AI, demonstrate the positive impacts that technology can have in achieving important public goals, such as promoting access, inclusion, and better public services.”

Key contributions and citations from the ADM+S submission:

  • Areas of stakeholder concern: Noting that while there is not a clear distinction between automation and AI, ‘whether it involves AI or not, public sector automation can significantly affect citizens’ rights and good public sector administration — and in similar ways’.
  • Australia’s AI ethics and principles: The report considers ADM+S’ concerns that the existing principles were developed prior to the widespread availability of generative AI and had not been reviewed as at September 2024.
  • Policy for the responsible use of AI in government: ADM+S explains that ‘the policy is extraordinarily limited in what it requires’, as it ‘introduces a new three-part language framework that is not aligned with any of Australia’s AI Ethics Principles, the National Framework or the proposed Mandatory Guardrails’.
  • Current regulatory framework: ADM+S is quoted, referring to concerns that the current arrangements do not allow for effective investigation, enforcement and direction.
  • Establishment of new policies or legislation: ADM+S is quoted on the overwhelming nature of navigating many slightly different guidelines, recommendations, frameworks and statements. ADM+S’ suggestion for a common baseline, one stronger than the current Commonwealth policy, is highlighted here.

The ADM+S submission was led by Kimberlee Weatherall, with contributions from Jose-Miguel Bello y Villarino, Gerard Goggin, Jake Goldenfein, Paul Henman, Rita Matulionyte, Christine Parker, Lyndal Sleep and Georgia van Toorn.

View the report.

View the ADM+S Submission.

SEE ALSO

ADM+S PhD Student undertakes fieldwork on fintech services in Indonesia

Oliver Knight (RMIT) with focus group participants who discussed financial practices including digital and informal lending.

ADM+S PhD Student undertakes fieldwork on fintech services in Indonesia

Author Natalie Campbell
Date 6 March 2025

ADM+S PhD Student Oliver Knight from RMIT University recently returned from a fieldwork trip in Indonesia, conducting focus groups and surveys to inform his thesis on ‘Lesser Sunda, More Defaults? P2P Lending in East Indonesia’.

The objective of the trip was to investigate claims of digital financial inclusion by studying access to fintech and online credit strategies in Indonesia’s West Nusa Tenggara (NTB) and East Nusa Tenggara (NTT) provinces, through qualitative focus groups and surveys.

The trip began with a presentation at the Kantor Desa (village office) in Lingsar, Indonesia, where Oliver gave an overview of his research topic and plans to the village leaders, and conducted focus groups with participants.

Oliver and Abdul Basit (Universitas Islam Al-Azhar) conducting focus groups with heads of villages in Kantor Desa, Gegerung, Kec. Lingsar, Indonesia.

During the subsequent three-week trip, Oliver hosted focus groups in the West, Central, and East Lombok regions, as well as conducting surveys with participants at two universities in Kota Mataram.

“This field trip allowed me to deepen my connection with the areas of Indonesia that are relevant to my research by creating relationships with local FinTech users, industry, and academics,” said Oliver.

“It also provided the opportunity to develop critical contextual understanding of the important socio-cultural and community dynamics at play.”

This primary data collection across two regions will inform Oliver’s thesis and was strategically timed so that the analysis could be presented at his second milestone review in March.

While in Indonesia, Oliver worked closely with Reza Arviciena Sakti, Abdul Basit and Dr Vegalyra Novantini Samodra from the Universitas Islam Al-Azhar (Unizar), who assisted with recruitment, data analysis, and translation during his stay.

“The staff and broader community at Universitas Islam Al-Azhar have always been so welcoming, and share a deep passion and excitement for my research, which I find so motivating,” he said.

“On a personal level, the opportunity to develop my public speaking skills, Indonesian language, and the way I frame my research, will help me tremendously as I continue my career in research.”

Oliver received a Speaker Certificate for sharing his experience studying in Australia with students at Universitas Islam Negeri – Mataram.

When asked about a highlight of his trip, Oliver declared the many “aha!” moments he experienced during the data collection and analysis process.

“Each of these moments felt like finding a jigsaw piece that fits into my research puzzle, and shows how valuable the fieldwork has been.”

This field trip was supported by ADM+S.

SEE ALSO

Research Fellow takes ADM+S research abroad for feedback and collaboration

Dr Ashwin Nagappa and colleagues at Hans Bredow Institut

Research Fellow takes ADM+S research abroad for feedback and collaboration

Author Natalie Campbell
Date 6 March 2025

ADM+S Research Fellow Dr Ashwin Nagappa has returned from Europe, after attending the ECREA Communication History 2025 workshop in Geneva, Switzerland, and visiting ADM+S Partner Investigators at Hans Bredow Institut and the University of Amsterdam.

The 2025 ECREA Workshop was held at CERN, the European Organisation for Nuclear Research in Geneva, one of the world’s largest and most prestigious scientific laboratories and the birthplace of the web.

The theme for this year’s workshop was ‘Communication Networks Before and After the Web: Historical and Long-term Perspective’, bringing together international scholars from media history, media archaeology and digital media to explore the origins of the web and its evolution into one of the most influential technologies of our time.

“As I engaged with scholars working on contemporary AI tools and research, I found it fascinating that the web, now a central information system in our daily lives, was never originally conceived as such — it was designed as a tool for scientists to accelerate experiments,” said Ashwin.

“It’s important to recognize that the web forms the foundation of our everyday search and social media experiences, providing the vast information that AI relies on.”

From Geneva, Ashwin then travelled to Hans Bredow Institut in Hamburg, where he was welcomed by Prof Judith Möller, Scientific Director at the Leibniz Institute for Media Research and Professor of Empirical Communication Research, Media Use and Social Media Effects at the University of Hamburg.

He also visited ADM+S Partner Investigator Prof Maarten de Rijke and his team at the Information Retrieval Lab at the University of Amsterdam.

With both groups, Ashwin was given the opportunity to present the explainer, ‘What is search experience?’ – a brief introduction to the ADM+S Australian Search Experience 2.0 Project, including early developments and future plans for the research – which is set to be developed into a four-part blog series in March 2025.

“This talk draws on a literature review of both information retrieval – the technological aspect – and search experience – the social aspect – and has been refined over the past few months with feedback from colleagues across ADM+S.

“This explainer has proven to be a valuable tool for expanding different aspects of the Australian Search Experience, identifying connections across its subprojects, and exploring crossovers with other signature projects within the centre.”

These presentations were followed by Q&A sessions, providing valuable insights for refining the workflow of the project.

Being surrounded by experts in information retrieval at the University of Amsterdam, Ashwin was able to learn about their research on various aspects of AI and Search, noting synergies between their respective projects, and opportunities for potential collaboration.

“For instance, some PhD students specialize in evaluating AI-generated text for human-like quality, which could support our efforts to automate search processes.”

This trip was supported by ADM+S and QUT.

SEE ALSO

#AccelerateAction: Spotlighting ADM+S research on gender bias in AI and ADM systems

#AccelerateAction: Spotlighting ADM+S research on gender bias in AI and ADM systems

Author ADM+S Centre
Date 5 March 2025

International Women’s Day celebrates the social, economic, cultural, and political achievements of women, global progress towards gender equality, and recognises that there is substantial work still to be done.

In the fields of technology, automated decision-making, and generative AI, women are still under-represented but disproportionately affected by the negative effects of emerging digital technologies.

This International Women’s Day we’re highlighting the work of ADM+S members across our research program who are investigating gender bias in AI and ADM systems.

By identifying inequalities in the ways users experience technology, these projects aim to #AccelerateAction in creating a more just and inclusive digital environment.

Advanced technology is taking us backwards on gender equity.

She might go by Siri, Alexa, or inhabit Google Home. She keeps us company, orders groceries, vacuums the floor, and turns out the light. The principal prototype for these virtual helpers – designed in male-dominated industries – is the 1950s housewife.

In The Smart Wife, Yolande Strengers and Jenny Kennedy examine the emergence of digital devices that carry out “wifework”–domestic responsibilities that have traditionally fallen to (human) wives. They offer a Smart Wife “manifesta,” proposing a rebooted Smart Wife that would promote a revaluing of femininity in society in all her glorious diversity.

In 2024, Yolande’s research on gendered voicebots was adapted into an educational school program in partnership with the Monash Tech School and Monash University’s Faculty of IT, called Superbots.

Superbots is a two-day interactive Industry Immersion program that explores the history, ethics, and societal influences on Voicebots and voice-assisted software development.

ADM+S filmmaker Jeni Lee produced a short film about the program, which observes and engages with students from Brentwood Secondary College as they ideate, test and construct their own voicebot personality.

Superbots will be available on SBS on Demand from Saturday 9 March.

This paper considers how algorithmic recommender systems and other core affordances and infrastructures of major social media platforms contribute to the harms of ‘hate speech’ against or vilification of women online.

The paper argues that this kind of speech occurring on major social media platforms exists at the intersections of patriarchy and platform power and is thus platformed.

Platforms also seek to maintain control or influence over the conditions for their own regulation and governance through use of their discursive power. Related to this is a privileging of self-regulatory action in current laws and law reform proposals for platform governance, which we argue means that platformed speech that vilifies women is also auspiced by platforms.

This auspicing, as an aspect of platforms’ discursive power, represents an additional ‘layer’ of contempt for women, for which platforms currently are not, but should be, held accountable.

Existing studies have examined depictions of journalists in popular culture, but how artificial intelligence understands what a journalist is and what they look like is a different topic, yet to receive research attention.

This study analyses 84 images generated by AI from four “generic” keywords (“journalist,” “reporter,” “correspondent,” and “the press”) and three “specialized” ones (“news analyst,” “news commentator,” and “fact-checker”) over a six-month period.

The results reveal an uneven distribution of gender and digital technology between the generic and specialized roles and prompt reflection on how AI perpetuates extant biases in the social world.

Drawing on two ADM+S reports led by Dr Quilty (automation in transport mobilities scoping study and expert visions of future automated mobilities), this article introduces a critical concept called Pod Man that examines the gendered and racial formations embedded into technologies like self-driving cars.

Dr Quilty defines Pod Man as the technology-driven, hyper-mobile and hyper-masculine transport consumer found at the centre of sociotechnical imaginaries of automated mobilities. He represents the ideal mobility subject who is both invisible and powerful, shaping visions of the future of mobility.

Pod Man is both a provocation and an entry point for thinking about how emerging technologies, such as autonomous vehicles, are shaping unequal relations of power in visions of mobility futures.

Image: Miranda Burton

Generative AI systems learn how to create from our existing, unequal past; now, they’re embedding those same historical biases into our future.

ADM+S PhD Student Sadia Sharmin is researching how biases baked into AI models shape broader social views, amplifying and reinforcing existing power relations through their outputs.

The subtle biases produced by GenAI may seem innocuous, but they are insidious in that they shape cultural narratives, reinforce stereotypes, and influence social perceptions and opportunities for women on a potentially massive scale.

Her research seeks to tackle this subtle but pervasive problem by developing new ways to measure and identify gender bias in AI outputs – going beyond simple statistics – to understand how Generative AI systems might reinforce stereotypes about women’s place, capabilities, and value in society.

This includes creating new tools that go beyond obvious and quantifiable forms of bias, and instead assess the more subtle ways AI systems might undersell women’s achievements, limit their perceived potential, or reinforce gender-based assumptions.

Artificial Intelligence (AI) is increasingly being used in the delivery of social services including domestic violence services. While it offers opportunities for more efficient, effective and personalised service delivery, AI can also generate greater problems, reinforcing disadvantage, generating trauma or re-traumatising service users.

Building on work in social services on trauma-informed practice, this project identified key principles and a practical framework that framed AI design, development and deployment as a reflective, constructive exercise, resulting in algorithmically supported services that are cognisant and inclusive of the diversity of human experience, particularly for those who have experienced trauma.

This study resulted in a practical, co-designed, piloted Trauma Informed Algorithmic Assessment Toolkit.

This Toolkit has been designed to assist organisations in their use of automation in service delivery at any stage of their automation journey: ideation, design, development, piloting, deployment or evaluation. While of particular use for social service organisations working with people who may have experienced past trauma, the tool will be beneficial for any organisation wanting to ensure safe, responsible and ethical use of automation and AI.

This collaboration with UNED Madrid and The Polytechnic University of Valencia aimed to create an evaluation benchmark for automatic sexism characterisation in social media.

In recent years, the rapid increase in the dissemination of offensive and discriminatory material aimed at women through social media platforms has emerged as a significant concern.

The EXIST campaign has been promoting research in online sexism detection and categorization in English and Spanish since 2021. The fourth edition of EXIST, hosted at the CLEF 2024 conference, consisted of three groups of tasks analysing Tweets and Memes: sexism identification, source intention identification, and sexism categorization.

The “learning with disagreement” paradigm is adopted to address disagreements in the labelling process and promote the development of equitable systems that are able to learn from different perspectives on the phenomenon of sexism.

Crowdsourced annotation is vital both to collecting labelled data to train and test automated content moderation systems and to supporting human-in-the-loop review of system decisions. However, annotation tasks such as judging hate speech are subjective and therefore highly sensitive to biases stemming from annotator beliefs, characteristics and demographics.

This research involved two crowdsourcing studies on Mechanical Turk to examine annotator bias in labelling sexist and misogynistic hate speech.

Results from 109 annotators show that annotator political inclination, moral integrity, personality traits, and sexist attitudes significantly impact annotation accuracy and the tendency to tag content as hate speech.

In exploring how workers interpret a task — shaped by complex negotiations between platform structures, task instructions, subjective motivations, and external contextual factors — we see annotations not only impacted by worker factors but also simultaneously shaped by the structures under which they labour.

At the ADM+S Centre, we recognise that racism, colonialism, sexism, homophobia, transphobia, and ableism are principal obstacles to equity, diversity and inclusion, and remain primary causes of injustice and inequality. We believe that gender equality for all means equality for marginalised groups, and that the cause of gender equality includes the experiences of Indigenous and POC women, and transgender and non-binary people. You can read about how we are working to foster diversity and inclusion in the ADM+S community and through our research via our Equity and Diversity Strategy and Action Plan.

Dr Anjalee de Silva, an expert on harmful speech and its regulation in online contexts and a member of the ADM+S Equity and Diversity Committee, explains “AI and ADM technologies have the potential to, and consistently have been evidenced to, replicate ‘real world’ biases against and harms to structurally vulnerable groups, including women and minorities.

“Scholarship considering these biases and harms is thus a crucial part of systemically informed and equitable approaches to the development, use, and regulation of such technologies.”

Prof Yolande Strengers adds, “Now more than ever we need to work hard to protect the progress we have made in addressing the unequal opportunities that women and other minorities experience in technology fields.

“We also need research and programs that bring less heard voices into the public domain and push for further advances in equity.”

Watch: ADM+S community celebrates IWD

SEE ALSO

‘I can’t be friends with the machine’: what audio artists working in games think of AI

Illustration with two people in a recording studio
Credit: Visual Generation/Shutterstock

‘I can’t be friends with the machine’: what audio artists working in games think of AI

Author Sam Whiting
Date 5 March 2025

The Media, Entertainment and Arts Alliance, the union for voice actors and creatives, recently circulated a video of voice actor Thomas G. Burt describing the impact of generative artificial intelligence (GenAI) on his livelihood.

Voice actors have been hit hard by GenAI, particularly those working in the video game sector. Many are contract workers without ongoing employment, and for some game companies already feeling the squeeze, supplementing voice-acting work with GenAI is just too tempting.

Audio work – whether music, sound design or voice acting – already lacks strong protections. Recent research from my colleagues and me on the use of GenAI and automation in producing music for Australian video games reveals a messy picture.

Facing the crunch

A need for greater productivity, faster turnarounds, and budget constraints in the Australian games sector is incentivising the accelerated uptake of automation.

The games sector is already susceptible to “crunch”, or unpaid overtime, to reach a deadline. This crunch demands faster workflows, increasing automation and the adoption of GenAI throughout the sector.

The Australian games industry is also experiencing a period of significant contraction, with many workers facing layoffs. This has constrained resources and increased the prevalence of crunch, which may increase reliance on automation at the expense of re-skilling the workforce.

One participant told us:

the fear that I have going forward for a lot of creative forms is I feel like this is going to be the fast fashion of art and of text.

Mixed emotions and fair compensation

Workers in the Australian games industry have mixed feelings about the impact of GenAI, ranging from hopeful to scared.

Audio workers are generally more pessimistic than non-audio games professionals. Many see GenAI as extractive and potentially exploitative. When asked how they see the future of the sector, one participant responded:

I would say negative, and the general feeling being probably fear and anxiety, specifically around job security.

Others noted it will increase productivity and efficiency:

[when] synthesisers started being made, people were like, ‘oh, it’s going to replace musicians. It’s going to take jobs away’. And maybe it did, but like, it also opened up this whole other world of possibilities for people to be creative.

A vintage keyboard.
There were once fears about what synthesisers would mean for musicians’ livelihoods.
Peter Albrektsen/Shutterstock

Regardless, most participants expressed concerns about whether a GenAI model was ethically trained and whether licensing can be properly remunerated, concerns echoed by the union.

Those we spoke with believed the authors of any material used to train AI data-sets should be fairly compensated and/or credited.

An “opt-in” licensing model has been proposed by unions as a compromise. This states a creator’s data should only be used for training GenAI on an opt-in basis, and the use of content to train generative AI models should be subject to consent and compensation.

Taboos, confusion and loss of community

Some audio professionals interested in working with GenAI do not feel like they can speak openly about the subject, as it is seen as taboo:

There’s like this feeling of dread and despair, just completely swirling around our entire creative field of people. And it doesn’t need to be like that. We just need to have the right discussions, and we can’t have the right discussions if everyone’s hair is on fire.

The technology is clearly divisive, despite perceived benefits.

Several participants expressed concerns the prevalence of GenAI may reduce collaboration across the sector. They feared this could result in an erosion of professional community, as well as potential loss of institutional knowledge and specific creative skills:

I really like working with people […] And handing that over to a machine, like, I can’t be friends with the machine […] I want to work with someone who’s going to come in and completely shake up the way, you know, our project works.

The Australian games sector is reliant on a highly networked but often precarious set of workers, who move between projects based on need and demand for certain skills.

The ability to replace such skills with automation may lead to siloing and a deterioration of greater professional collaboration.

But there are benefits to be had

Many workers in the games audio sector see automation as helpful in terms of administration, ideation, workshopping, programming and as an educational tool:

In terms of automation, I see it as, like, utilities. For example, being a developer, I write scripts. So, if I’m doing something and it’s gonna take me a long time, I’ll automate it by writing a script.

These systems also have helpful applications for neurodivergent professionals and workers who may struggle with time management or other attention-related issues.

Over half of participants said AI and automation allow more time for creativity, as workers can automate the more tedious elements of their workflow:

I suffer like anyone else from writer’s block […] If you can give me a piece of software that is trained off me, that I could say, ‘I need something that’s in my house style, make me something’, and a piece of software could spit back at me a piece of music that sounds like me that I could go, ‘oh, that’s exactly it’, I would do it. That would save me an incalculable amount of time.

Many professionals who would prefer not to use AI said they would consider using it in the face of time or budget constraints. Others stated GenAI allows teams and individuals to deliver more work than they would without it:

Especially with deadlines always being as short as they are, I think a lot of automation can help to focus on the more creative and decision-based aspects.

Many workers within the digital audio space are already working hard to create ethical alternatives to AI theft.

Although GenAI may be here to stay, the efficiencies it provides should not come at the cost of creative professions.

Sam Whiting, Vice-Chancellor’s Senior Research Fellow, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Microsoft cuts data centre plans and hikes prices in push to make users carry AI costs

Image: bluestork / Shutterstock.com

Microsoft cuts data centre plans and hikes prices in push to make users carry AI costs

Author Kevin Witzenberger and Michael Richardson
Date 3 March 2025

After a year of shoehorning generative AI into its flagship products, Microsoft is trying to recoup the costs by raising prices, putting ads in products, and cancelling data centre leases. Google is making similar moves, adding unavoidable AI features to its Workspace service while increasing prices.

Is the tide finally turning on investments into generative AI? The situation is not quite so simple. Tech companies are fully committed to the new technology – but are struggling to find ways to make people pay for it.

Shifting costs

Last week, Microsoft unceremoniously pulled back on some planned data centre leases. The move came after the company increased subscription prices for its flagship 365 software by up to 45%, and quietly released an ad-supported version of some products.

The tech giant’s CEO, Satya Nadella, also recently suggested AI has so far not produced much value.

Microsoft’s actions may seem odd in the current wave of AI hype, coming amid splashy announcements such as OpenAI’s US$500 billion Stargate data centre project.

But if we look closely, nothing in Microsoft’s decisions indicates a retreat from AI itself. Rather, we are seeing a change in strategy to make AI profitable by shifting the cost in non-obvious ways onto consumers.

The cost of generative AI

Generative AI is expensive. OpenAI, the market leader with a claimed 400 million active monthly users, is burning money.

Last year, OpenAI brought in US$3.7 billion in revenue – but spent almost US$9 billion, for a net loss of around US$5 billion.

OpenAI CEO Sam Altman says the company is losing money on US$200 per month ChatGPT Pro subscriptions. Aurelien Morissard / EPA

Microsoft is OpenAI’s biggest investor and currently provides the company with cloud computing services, so OpenAI’s spending also costs Microsoft.

What makes generative AI so expensive? Human labour aside, two costs are associated with AI models: training (building the model) and inference (using the model).

While training is an (often large) up-front expense, the costs of inference grow with the user base. And the bigger the model, the more it costs to run.
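
To make that scaling concrete, here is a minimal cost-model sketch in Python. All figures are hypothetical placeholders chosen for illustration, not actual OpenAI or Microsoft numbers: the point is only that training is paid once, while inference is paid on every query.

    # Toy cost model: training is a one-off expense; inference scales with usage.
    # All dollar figures below are hypothetical placeholders, not real pricing.

    def total_cost(training: float, cost_per_query: float,
                   queries_per_user: float, users: int) -> float:
        """One-off training cost plus inference cost that grows with the user base."""
        return training + cost_per_query * queries_per_user * users

    TRAINING = 100e6   # assume a US$100 million one-off training run
    PER_QUERY = 0.01   # assume one cent of compute per query
    QUERIES = 1_000    # assume 1,000 queries per user per year

    # At 1 million users, inference (US$10M) is small next to training.
    print(f"US${total_cost(TRAINING, PER_QUERY, QUERIES, 1_000_000):,.0f}")
    # At 100 million users, inference (US$1B) dwarfs the one-off training cost.
    print(f"US${total_cost(TRAINING, PER_QUERY, QUERIES, 100_000_000):,.0f}")

Under these toy assumptions the economics invert with scale: a growing base of free users becomes a liability, since every additional query adds to the inference bill while revenue stays flat.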

Smaller, cheaper alternatives

A single query on OpenAI’s most advanced models can cost up to US$1,000 in compute power alone. In January, OpenAI CEO Sam Altman said even the company’s US$200 per month subscription is not profitable. This signals the company is not only losing money through use of its free models, but through its subscription models as well.

Both training and inference typically take place in data centres. Costs are high because the chips needed to run them are expensive, but so too are electricity, cooling, and the depreciation of hardware.

The growing cost of running data centres to power generative AI products has sent tech companies scrambling for ways to recoup their costs. Aerovista Luchtfotografie / Shutterstock

To date, much AI progress has been achieved by using more of everything. OpenAI describes its latest upgrade as a “giant, expensive model”. However, there are now plenty of signs this scale-at-all-costs approach might not even be necessary.

Chinese company DeepSeek made waves earlier this year when it revealed it had built models comparable to OpenAI’s flagship products for a tiny fraction of the training cost. Likewise, researchers from Seattle’s Allen Institute for AI (Ai2) and Stanford University claim to have trained a model for as little as US$50.

In short, AI systems developed and delivered by tech giants might not be profitable. The costs of building and running data centres are a big reason why.

What is Microsoft doing?

Having sunk billions into generative AI, Microsoft is trying to find the business model that will make the technology profitable.

Over the past year, the tech giant has integrated the Copilot generative AI chatbot into its products geared towards consumers and businesses.

It is no longer possible to purchase any Microsoft 365 subscription without Copilot. As a result, subscribers are seeing significant price hikes.

As we have seen, running generative AI models in data centres is expensive. So Microsoft is likely seeking ways to do more of the work on users’ own devices – where the user pays for the hardware and its running costs.

Microsoft says the Copilot key will ‘empower people to participate in the AI transformation’. Microsoft

A strong clue for this strategy is a small button Microsoft began to put on its devices last year. In the precious real estate of the QWERTY keyboard, Microsoft dedicated a key to Copilot on its PCs and laptops capable of processing AI on the device.

Apple is pursuing a similar strategy. The iPhone manufacturer is not offering most of its AI services in the cloud. Instead, only new devices offer AI capabilities, with on-device processing marketed as a privacy feature that prevents your data travelling elsewhere.

Pushing costs to the edge

There are benefits to the push to do the work of generative AI inference on the computing devices in our pockets, on our desks, or even on smart watches on our wrists (so-called “edge computing”, because it occurs at the “edge” of the network).

It can reduce the energy, resources and waste of data centres, lowering generative AI’s carbon, heat and water footprint. It could also reduce bandwidth demands and increase user privacy.

But there are downsides too. Edge computing shifts computation costs to consumers, driving demand for new devices despite economic and environmental concerns that discourage frequent upgrades. This could intensify with newer, bigger generative AI models.

A shift to more ‘on-device’ AI computing could create more problems with electronic waste. SibFilm / Shutterstock

And there are more problems. Distributed e-waste makes recycling much harder. What’s more, the playing field for users won’t be level if a device dictates how good your AI can be, particularly in educational settings.

And while edge computing may seem more “decentralised”, it may also lead to hardware monopolies. If only a handful of companies control this transition, decentralisation may not be as open as it appears.

As AI infrastructure costs rise and model development evolves, shifting the costs to consumers becomes an appealing strategy for AI companies. While big enterprises such as government departments and universities may manage these costs, many small businesses and individual consumers may struggle.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S Affiliate Dr Christopher O’Neill awarded prestigious Fulbright Scholarship

Dr Chris O’Neill is awarded a Fulbright Scholarship from the Governor General Sam Mostyn, at Parliament House. Image: bencalvertphoto.com

ADM+S Affiliate Dr Christopher O’Neill awarded prestigious Fulbright Scholarship

Author Natalie Campbell
Date 3 March 2025

ADM+S Affiliate Dr Christopher O’Neill, who recently completed a Research Fellowship with Prof Mark Andrejevic at the Monash University node of ADM+S, has been awarded a 2025-2026 Fulbright Scholarship at the University of Southern California.

Commemorating the achievement at Parliament House in Canberra on 27 February, Dr O’Neill was presented with his Fulbright Scholarship by the Governor-General, Sam Mostyn.

Dr O’Neill will spend four months working with Assoc Prof Mike Ananny, Associate Professor of Communication and Journalism at the USC Annenberg School of Journalism, studying automation, work and error.

The Fulbright Program is the largest educational scholarship of its kind, created by US Senator J. William Fulbright and the US Government in 1946, and is the flagship foreign exchange scholarship program of the United States.

Successful Fulbright recipients are interviewed and selected by panels of experts from academia, government, professional organisations and the U.S. Embassy in a competitive process which assesses academic and professional merit, a strong program proposal with defined potential outcomes, and ambassadorial skills.

Dr O’Neill is currently a Research Fellow at the Alfred Deakin Institute, where his work draws upon science and technology studies and critical media theory to study the place of automation in contemporary biopower.

Prior to his role at Deakin, he spent three years as a Postdoctoral Research Fellow working with ADM+S Chief Investigator Prof Mark Andrejevic at the Monash University node of ADM+S, where among other projects, he developed a critical analysis of the role of the human in automated work and surveillance systems.

ADM+S Prof Mark Andrejevic said, “Chris did amazing work during his time at the Centre, and it’s great to see his well-deserved success in the Fulbright Program and beyond.

“I know he will make the most of the opportunity and this will continue to build his burgeoning international reputation.”

Notably, an international workshop he co-organised alongside fellow ADM+S member Lauren Kelly has led to a forthcoming special issue of Work Organisation, Labour and Globalisation on ‘new worlds of logistical labour’.

Dr O’Neill has also appeared as a public commentator on recent industrial relations issues regarding the place of automation and surveillance in warehouse work.

“Having the opportunity to develop my work on labour and automation at the ADM+S Centre has led to me receiving a Fulbright Scholar Award,” says Dr O’Neill.

“The opportunity that’s been given to early career researchers at ADM+S is astounding. You have an incredible amount of freedom and encouragement to develop your own path as a researcher.

“I made so many connections with talented and brilliant researchers from all over Australia, but also with international networks that the Centre opened me up to.”

In 2022 Dr O’Neill received support from ADM+S to take part in a two-month AI and Humanity Research Cluster at the University of Southern California in Los Angeles, collaborating with researchers from across America.

“During that experience, I made lots of new relationships with American researchers, and I’ve subsequently organized workshops and streams of international conferences in collaboration with those colleagues.”

Dr O’Neill will commence his exchange in August 2025, where he will work with Associate Prof Ananny in the Media as SocioTechnical Systems (MASTS) research group, studying the way that errors in automated systems can reveal the dynamics and assumptions which are sometimes hidden within automated work infrastructures.

View the 2025 Fulbright announcement.

SEE ALSO

Supporting the next generation of researchers at the 2025 ADM+S Summer School

Research Fellow William He (QUT) leading the 'Transformers Alive' workshop
Research Fellow William He (QUT) leading the 'Transformers Alive' workshop

Supporting the next generation of researchers at the 2025 ADM+S Summer School

Author Natalie Campbell
Date 3 March 2025

The 2025 ADM+S Summer School, hosted by the University of Melbourne Law School, brought together over 120 students, researchers and mentors for a curated program spanning research methodologies, ethics advice, writing and publishing, and more.

Bringing together higher degree research students (HDRs) and early career researchers (ECRs) from all nine ADM+S nodes, the annual Summer School provides a perfect opportunity for community members to ask questions, share concerns, learn from one another, and get the most out of their research journey in the ADM+S community.

ADM+S Manager of Research Training and Development and member of the Summer School working group, Sally Storey, said, “This event would not be possible without the incredible generosity of our Centre’s research community.”

“I want to say a huge thank you to all our presenters and mentors for sharing your knowledge and expertise with our attendees, and the time leading up to the Summer School preparing presentations, materials, wrangling, scheduling… the effort is outstanding!”

The program encourages PhD students to engage with topics across disciplines, learn about different research methods, and create connections with peers and mentors from across the national ADM+S network – an invaluable experience for all early career researchers.

PhD Student Tace McNamara from Monash University explained, “I’m looking at AI and its capacity to understand art and music as an audience.

“It’s been really interesting talking to people from other disciplines because I think what I’m doing is inherently interdisciplinary, so hearing about law, media, culture, that’s something I don’t do on a daily basis in my lab, and it’s been really valuable.”

Sessions ranged from ‘Ethical uses of GenAI in research’ to ‘Unpacking ideas animating technology governance’, ‘Interviewing with digital trace data’, ‘How to study socio-technical networks’, ‘Harnessing technology for remote research’, and more.

“The Transformers Alive session, led by Aaron Snoswell, was such a didactic way of learning more about how generative AI operates and how people can embody the experience of how the information system operates in the background,” said PhD Student Miguel Loor Paredes from Monash University.

“It gave me another understanding of how artificial intelligence works and also how it relates to my research problem, and how to frame it from the humanities perspective.”

A highlight of the program was the closing plenary session hosted by the ADM+S Research Training and Capability Development Committee, inviting input from the HDR community on the design and delivery of the ADM+S Research Training program.

The Summer School also provides an occasion for HDRs and ECRs to engage in our formal mentoring program, connecting with senior researchers from within or outside their discipline, to share their research, ask questions, get feedback, and build their network across ADM+S institutions.

“A real highlight for me is seeing our students and research fellows from across the Centre, building that community spirit, getting involved, making new research connections and friendships that will see them over their career,” said Sally Storey.

Senior Research Fellow Sam Whiting from RMIT University said, “I’m a new Affiliate at the Centre so I’m a bit out of my comfort zone, but that’s been really interesting because I’ve been exposed to a lot of new ideas and meeting people, connecting, and thinking about future collaborations.

“I’m really looking forward to more events like this, opportunities to connect with people outside of my usual networks, opportunities to collaborate on projects.”

Many thanks to all speakers, mentors, and student participants for making this event possible, and especially the ADM+S Research Training Committee for their hard work behind the scenes in delivering this brilliant event.

View the 2025 Summer School photo library.

SEE ALSO

ADM+S professional staff recognised at the 2024 RMIT Research Service Awards

RMIT 2024 Research Service Awards. Image: Matt Houston, RMIT

ADM+S professional staff recognised at the 2024 RMIT Research Service Awards

Author Natalie Campbell
Date 28 February 2025

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is thrilled to congratulate members of the Professional Staff team, who have been recognised for their service excellence in the annual RMIT Research Service Awards.

The awards ceremony was held on 21 February 2025 at The Capitol Theatre in Melbourne, dedicated to celebrating the achievements of the RMIT research community and research support staff.

The Research Service Awards invited peers to nominate those in their community who demonstrate tremendous effort in supporting and delivering successful research outcomes.

ADM+S Chief Operating Officer Nicholas Walsh was awarded the Service Excellence award, which honours an individual who demonstrates excellence in research and innovation support.

ADM+S COO Nicholas Walsh receives the Service Excellence Award at RMIT's 2024 Research Service Awards
Calum Drummond (DVC Research and Innovation) presents the Service Excellence Award to ADM+S COO Nicholas Walsh. Image: Matt Houston, RMIT

Announcing the award, Jane Holt, Executive Director of RMIT’s Research Strategy & Services, highlighted Nick’s pioneering role as the first COO of RMIT’s inaugural centre of excellence, and commended his coordination and delivery of a remarkable ARC mid-term report in collaboration with ADM+S’ global partners.

“His dedication to meeting the centre’s needs and resolving challenges within RMIT’s enterprise systems demonstrates his outstanding service and leadership,” she said.

The RMIT ADM+S Operations team, consisting of Nicholas Walsh, Julie Stuart, Leah Hawkins, Natalie Campbell, Lucy Valenta, Mathew Warren and Sally Storey, was awarded a special commendation for Service Excellence in the Collaboration category, recognising the team’s collaborative efforts in supporting the delivery of high-impact outcomes from ADM+S research.

Pictured: Leah Hawkins, Julie Stuart, Nick Walsh, Calum Drummond (DVC Research and Innovation) and Mathew Warren. Absent: Natalie Campbell, Sally Storey and Lucy Valenta. Image: Matt Houston, RMIT

This commendation was presented by Tim McLennan, Executive Director of Research Partnerships and Translation, and Prof Swee Mak, Director of Strategic Innovation at RMIT University. The announcement emphasised the team’s ability to drive significant research outcomes through innovative cross-departmental initiatives.

“By fostering synergies across diverse teams, they have created a dynamic ecosystem that amplifies research potential and enhances the impact of institutional research,” Prof Mak explained.

All awards were presented by Distinguished Professor Calum Drummond AO, Deputy Vice-Chancellor (Research and Innovation) and Vice-President of RMIT University.

Learn more about the RMIT Research Service Awards.

SEE ALSO

ADM+S Chief Investigator announced co-director of the Centre for AI, Trust and Governance at the University of Sydney

Prof Kim Weatherall. Image credit: University of Sydney

ADM+S Chief Investigator announced co-director of the Centre for AI, Trust and Governance at the University of Sydney

Author Natalie Campbell
Date 27 February 2025

On 25 February 2025, the University of Sydney unveiled its new Centre for AI, Trust and Governance (CAITG), appointing ADM+S Chief Investigator Prof Kimberlee Weatherall as co-director alongside Prof Terry Flew.

As co-director, Prof Weatherall will lead groundbreaking research to ensure AI is transparent, fair, and accountable, championing the critical role of law and policy in shaping ethical AI.

“Universities have a critical role to play in ensuring that AI develops for the benefit of everyone, all the way across society,” says Prof Weatherall.

“I’m proud to be co-directing CAITG, which can bring together the University of Sydney’s outstanding researchers and students, from different research disciplines, to understand how the technology is developing, its impacts in the world and how to shape it for the better.”

CAITG’s research agenda is focused on AI’s relationship to digital creative industries, platforms and information, law and policy, education and equity, organisations and work, and civic technology and participation.

Some of the themes being investigated include:

  • how to restore trust in social institutions, and whether AI presents new threats to trust and social cohesion
  • how laws and regulations need to change in order to ensure that AI systems serve the public interest
  • how the community can be better involved in decisions about the uses of AI in secondary and tertiary education
  • foreign actors using AI to undermine democracy in Australia and in the Asia-Pacific region.

Prof Weatherall has an extensive background in technology regulation and intellectual property law and policy. She co-leads two ADM+S Signature Projects: The Regulatory Project, where her work focuses on questions relating to accountability and government ADM use, and GenAISim, where she is exploring the legal and policy implications of using LLM-based agents in policymaking.

Prof Weatherall is a member of multiple State and Federal level policy advisory groups, including her appointment to the Australian Government’s temporary AI Expert Group alongside ADM+S colleagues Prof Jeannie Paterson and Prof Nicolas Suzor in 2024. She is also a member of the Copyright and AI Reference Group convened by the Commonwealth Attorney-General’s Department.

Prof Weatherall has led multiple ADM+S Submissions, informing responsible, ethical and inclusive development of ADM in Australia, including:

  1. Submission to the Joint Parliamentary Committee of Public Accounts and Audit inquiry into public sector AI use (2024)
  2. Safe and responsible AI in Australia: proposals paper for introducing mandatory guardrails for AI in high-risk settings (2024)
  3. Submission to the Senate Select Committee on Adopting Artificial Intelligence (2024)

In early 2024, Prof Weatherall and a team of ADM+S researchers delivered a report in partnership with the New South Wales Ombudsman, mapping and evaluating the use of ADM systems by Local and State governments, following a 12-month collaboration with the Ombudsman.

In her role as co-director of CAITG, Prof Weatherall will expand on this impressive resume, furthering her impact in the field of AI and AI governance.

Learn more.

SEE ALSO

‘Dark ads’ challenge truth and our democracy

Composite art featuring logos from Facebook, TikTok, X and YouTube
Composite art by Michael Joiner, 360info CC BY 4.0

‘Dark ads’ challenge truth and our democracy

Author Daniel Angus and Mark Andrejevic
Date 25 February 2025

The rise of ‘dark advertising’ — personalised advertisements increasingly powered by artificial intelligence that evade public scrutiny — means Australians face a murky information landscape going into the federal election.

It’s already happening. Combined with Australia’s failure to enact truth-in-advertising legislation and big tech’s backtracking on fact-checking, it leaves voters vulnerable to ad-powered misinformation campaigns. And that’s not good for democracy.

Tackling misinformation requires legislative action, international collaboration and continued pressure on platforms to open their systems to scrutiny.

The failures of US tech platforms during their own elections should serve as a clear warning to Australia that industry self-regulation is not an option.

Political advertising plays a pivotal role in shaping elections, even while it is shrouded in opacity and increasing misinformation.

In the lead-up to the 2025 federal election, a significant volume of deceptive advertising and digital content has already surfaced. That’s not surprising, given the Australian Electoral Commission (AEC) limits its oversight to the official campaign period, meaning false claims can proliferate freely before the official campaign.

At the heart of this challenge lies the evolution of digital political advertising.

What is ‘dark advertising’?
Modern campaigns rely heavily on social media platforms, leveraging associative ad models that tap into beliefs or interests to deliver digital advertising. Unlike traditional media, where ads are visible and subject to better regulatory and market scrutiny, digital ads are often fleeting and hidden from public view.

Recent AI developments make it easier and cheaper to create false and misleading political ads in large volumes, with multiple variations that are increasingly difficult to detect.

This ‘dark advertising’ creates information asymmetries, in this case one where advertisers control and shape what information is delivered and to whom, outside public view. That leaves voters exposed to tailored messages that may distort reality.

Targeted messaging makes it possible to selectively provide voters with very different views of the same candidate. In the recent US presidential election, a political action committee linked to X owner Elon Musk targeted Arab-American voters with the message that Kamala Harris was a diehard Israel ally, while simultaneously messaging Jewish voters that she was an avid supporter of Palestine.

Ad targeting online also lets political advertisers single out groups more likely to be influenced by selective, misleading or false information. Conservative lobby group Advance Australia’s recent campaign basically followed this playbook, disseminating outdated news articles on Facebook, a tactic known as malinformation, where factual information is deliberately spread misleadingly to harm individuals or groups.

The vulnerabilities
The Albanese government recently withdrew a proposed truth-in-political-advertising bill, leaving voters vulnerable to misleading content that undermines democratic integrity.

The bill was never introduced to parliament and its future remains uncertain.

The transparency tools provided by Meta, which covers Facebook and Instagram, and Google parent company Alphabet — which include ad libraries and “Why Am I Seeing This Ad?” explanations — also fall woefully short of enabling meaningful oversight.

These tools reveal little about the algorithms that determine ad delivery or the audiences being targeted. They do include some demographic breakdowns, but say little about the combination of ads an individual user might have seen and in what context.

Recent findings from the US highlight the vulnerabilities of political advertising in the digital age. An investigation by ProPublica and the Tow Center for Digital Journalism revealed that deceptive political ads thrived on platforms like Facebook and Instagram in the lead-up to the 2024 US elections.

Ads frequently employed AI-generated content, including fabricated audio of political figures, to mislead users and harvest personal information. One ad account network has run about 100,000 misleading ads, significantly exploiting Meta’s advertising systems.

The Australian story
The US developments are alarming, but it’s important to recognise Australia’s unique political and regulatory landscape.

Australians have seen what happened in the US, but fundamental differences in media consumption, political structure and culture, and regulatory frameworks mean that Australia may not necessarily follow the same trajectory.

The AEC does enforce specific rules on political advertising, particularly during official campaign periods, yet oversight is weak outside these periods, meaning misleading content can circulate unchecked.

The failure to pass truth-in-political-advertising laws only exacerbates the problem.

The media blackout period bans political ads on radio and TV three days before the federal election, but it does not apply to online advertising, meaning there is little time to identify or challenge misleading ads.

Ad-driven technology firms like Meta and Alphabet have backed away from previous initiatives to curb misinformation and deceptive advertising and enforce minimum standards.

Despite Meta’s public commitments to prevent misinformation from spreading, deceptive ads still flourished throughout the 2024 US election, raising significant concerns about the effectiveness of platform self-regulation. Meta’s backtracking on fact-checking raises further concerns about its overall commitment to combating misinformation.

Given these developments, it is unrealistic to expect platforms to proactively police content effectively, especially in a jurisdiction like Australia.

Some solutions
Independent computational tools have emerged in an attempt to address these issues. They include browser plugins and mobile apps that allow users to donate their ad data. During the 2022 election, the ADM+S Australian Ad Observatory project collected hundreds of thousands of advertisements, uncovering instances of undisclosed political ads.

In the lead-up to the 2025 election, that project will rely on a new mobile advertising toolkit capable of detecting mobile digital political advertising served on Facebook, Instagram and TikTok.

Regulatory solutions like the EU’s Digital Services Act (DSA) offer another potential path forward, mandating access to political advertising data for researchers and policymakers, although Australia lags in adopting similar measures.

Without some of these solutions, platforms remain free to follow their economic incentive to pump the most sensational, controversial and attention-grabbing content into people’s news feeds, regardless of accuracy.

This creates a fertile environment for misleading ads, not least because platforms have been given protection from liability. That is not an information system compatible with democracy.

Professor Daniel Angus is a leading expert in computational communication and digital media, specialising in the analysis of online discourse, AI, and media transparency. He is the director of the Digital Media Research Centre at the Queensland University of Technology.

Professor Mark Andrejevic is an expert in the social and cultural implications of data mining, and online monitoring at Monash University’s School of Media, Film and Journalism.

Professor Angus’ research receives funding from the Australian Research Council through the ARC Centre of Excellence for Automated Decision-Making and Society and through LP190101051 ‘Young Australians and the Promotion of Alcohol on Social Media’.

Professor Andrejevic is also a chief investigator in the ARC Centre of Excellence for Automated Decision-Making and Society, and he holds an ARC Discovery Project, ‘The Australian experience of automated advertising on digital platforms’.

Originally published under Creative Commons by 360info™.

SEE ALSO

AI in Journalism: new report reveals growing concerns over misleading content and industry impact

Front cover of Generative AI & Journalism report
Image: T.J Thomson

AI in Journalism: new report reveals growing concerns over misleading content and industry impact

Author ADM+S Centre
Date 19 February 2025

A new industry report has found audiences and journalists are growing increasingly concerned by generative artificial intelligence (AI) in journalism.

Summarising three years of research, the Generative AI & Journalism report was launched at the ARC Centre of Excellence for Automated Decision-Making and Society this week.

Report lead author, Dr T.J. Thomson from RMIT University in Melbourne, Australia, said the potential of AI-generated or edited content to mislead or deceive was of most concern.

“The concern of AI being used to spread misleading or deceptive content topped the list of challenges for both journalists and news audiences,” he said.

“We found journalists are poorly equipped to identify AI-generated or edited content, leaving them open to unknowingly propelling this content to their audiences.”

This is partly because few newsrooms have systematic processes in place for vetting user-generated or community-contributed visual material.

Most journalists interviewed were not aware of the extent to which AI is increasingly and often invisibly being integrated into both cameras and image or video editing and processing software.

“AI is sometimes being used without the journalists or news outlet even knowing,” Thomson said.

While only one quarter of news audiences surveyed thought they had encountered generative AI in journalism, about half were unsure or suspected they had.

“This points to a potential lack of transparency from news organisations when they use generative AI or to a lack of trust between news outlets and audiences,” Thomson said.

News audiences were found to be more comfortable with journalists using AI when they themselves have used it for similar purposes, such as to blur parts of an image.

“The people we interviewed mentioned how they used similar tools when on video conferencing apps or when using the portrait mode on smartphones,” Thomson said.

“We also found this with journalists using AI to add keywords to media since audiences had themselves experienced AI describing images in word processing software.”

Thomson said news audiences and journalists alike were overall concerned about how news organisations are – and could be – using generative AI.

“Most of our participants were comfortable with turning to AI to create icons for an infographic but quite uncomfortable with the idea of an AI avatar presenting the news, for example,” he said.

Part-problem, part-opportunity
The technology, which has advanced significantly in recent years, was found to be both an opportunity and a threat to journalism.

For example, Apple recently suspended its automatically generated news notification feature after it produced false claims about high-profile individuals, including false deaths and arrests, and attributed these false claims to reputable outlets, including BBC News and The New York Times.

While AI can perform tasks like sorting and generating captions for photographs, it has well-known biases against, for example, women and people of colour.

But the research also identified lesser-known biases, such as favouring urban over non-urban environments, showing women less often in more specialised roles, and ignoring people living with disabilities.

“These biases exist because of human biases embedded in training data and/or the conscious or unconscious biases of those who develop AI algorithms and models,” Thomson said.

But not all AI tools are equal. The study found tools that explain their decisions, disclose their source material and are transparent about their use pose less risk for journalists than tools that lack these features.

Journalists and audience members were also concerned about generative AI replacing humans in newsrooms, leading to fewer jobs and skills in the industry.

“These fears reflect a long history of technologies impacting on human labour forces in journalism production,” Thomson said.

The report, designed for the media industry, identifies dozens of ways journalists and news organisations can use generative AI and summarises how comfortable news audiences are with each.

It summarises several of the team’s research studies, including the latest peer-reviewed study, published in Journalism Practice.

Report authors: Dr T.J. Thomson (ADM+S Affiliate), Ryan Thomas, Assoc Prof Michelle Riedlinger (ADM+S Affiliate), and Dr Phoebe Matich (ADM+S Research Fellow).

Portions of the underlying research in the report were financially supported by the Design and Creative Practice, Information in Society, and Social Change Enabling Impact Platforms at RMIT University, the Weizenbaum Institute for the Networked Society / German Internet Institute, the Centre for Advanced Internet Studies, the Global Journalism Innovation Lab, the QUT Digital Media Research Centre, and the Australian Research Council through DE230101233 and CE200100005.

Generative AI and Journalism: Content, Journalistic Perceptions, and Audience Experiences is published by RMIT University (DOI: 10.6084/m9.figshare.28068008).

Old Threats, New Name? Generative AI and Visual Journalism is published in Journalism Practice (DOI: 10.1080/17512786.2025.2451677).

View the original article, AI-generated journalism falls short of audiences’ expectations: report, published by RMIT University Media.

SEE ALSO

Vibes are something we feel but can’t quite explain. Now researchers want to study them

AI Generated image - white and red human figures
Shutterstock/Efe Murat

Vibes are something we feel but can’t quite explain. Now researchers want to study them

Author Ash Watson
Date 19 February 2025

When we’re uncomfortable we say the “vibe is off”. When we’re having a good time we’re “vibing”. To assess the mood we do a “vibe check”. And when the atmosphere in the room changes we call it a “vibe shift”.

In a broad sense, a “vibe” is something akin to a mood, atmosphere or energy.

But this is an imperfect definition. Often, we’ll use this term to describe something we feel powerfully, but find hard to articulate.

As journalist and cultural critic Kyle Chayka described in 2021, a vibe is “a placeholder for an unplaceable feeling or impression, an atmosphere that you couldn’t or didn’t want to put into words”.

Being able to understand the subtleties of social interactions – that is, to “feel the vibes” – is extremely valuable, not just for our social interactions, but also for researchers who study people.

What’s behind the rise of vibes? And how can sociologists like myself unpack “vibe culture” to make sense of the world?

A history of vibes

The nuance and complexity of vibes make them an interesting cultural trend. Vibes can be very specific, but can also totally resist specificity.

Australians (and fans of Australiana) will remember the iconic line from the beloved 1997 film The Castle: “It’s just the vibe of the thing… I rest my case.”

While it may seem like a recent cultural development, vibe isn’t the first example of cryptic language being used to express an ambiguous thing or situation. There are similar concepts with long histories, such as “quintessence” in Ancient Greek philosophy and “auras” in mysticism.

More recently, vibes rose in popularity through music including 1960s rock, epitomised by the Beach Boys (“pickin’ up good vibrations”) and Black American rap vernacular from the 1990s, such as in the song Vibes and Stuff by A Tribe Called Quest (“we got, we got, we got the vibes”).

‘Vibes’ rose in popularity through music including 1960s rock and 1990s Black American rap.
Shutterstock

While we don’t know when the term was first used as it is today, it seems to have taken hold in the 1970s.

I trawled the online archive of The New Yorker and found an early mention of vibes in a 1971 report about communes in New York City.

One interviewee spoke about the “vibration of togetherness” that drew them to the commune. Ending the day on the subway, the author Hendrik Hertzberg (now a senior editor at the magazine) “just sat there and soaked up the good vibes”.

New uses and meanings have emerged in the years since.

Vibes today

As vibe is used in more ways, its meaning becomes broader and more diffuse. A person or situation can have good vibes, bad vibes, weird vibes, laid-back vibes, or any other adjective you can imagine.

Language is a central part of qualitative research. While new phrases and slang can be casual and superficial, they can also represent broader, more complex concepts. Vibe is a great example of this: a simple term that refers to something potent yet ephemeral, affecting yet ambiguous.

By paying attention to the words people use to describe their experiences, sociologists can identify patterns of social interactions and shifts in social attitudes.

Perhaps vibes work like a heuristic – a mental shortcut – but for feeling rather than thinking.

People use heuristics to make everyday decisions or draw conclusions based on their experiences. Heuristics are, in essence, our common sense. And “vibes” might be best described as our common feeling, as they speak to a subtle aspect of how we collectively relate and interact.

Sociologists have long studied complex common feelings. Ambivalence, for instance, has been a focus in research on digital privacy. Studying when and why people feel ambivalent about digital technology can help us understand their seemingly contradictory behaviour, such as when they say they are concerned about privacy, but do very little to protect their information.

Ambivalence reveals how people make decisions via small, everyday compromises – moments and feelings that may be overlooked in quantitative research. A qualitative approach can help us to align policies with people’s real-world behaviours.

Researchers react

Then again, it’s difficult to study something people find hard to articulate in the first place. Asking participants to rank the “vibes” of something in a survey doesn’t quite work.

So researchers are finding new ways to feel the vibe: to see what participants see, to feel what they feel and get a deeper understanding of their lived experiences.

For instance, such studies could provide insight into how senior clinicians make important decisions amid uncertainty. We already know making decisions in complex situations involves more than logic and rationality.

In one Australian study published last year, researchers assessed how vibes have become part of online advertising algorithms. The researchers analysed the social media feeds of more than 200 young people, using the concept of vibes to show how advertising models attune to individuals and social groups.

Such approaches can complement, or even update, tried-and-tested research methods, expanding on what we know about human relationships and experiences.

Ash Watson, Scientia Fellow and Senior Lecturer, UNSW Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Generative AI is already being used in journalism – here’s how people feel about it

AI generated image of news presenter
Indonesia’s TVOne launched an AI news presenter in 2023. T.J. Thomson

Generative AI is already being used in journalism – here’s how people feel about it

Author ADM+S Centre
Date 19 February 2025

Generative artificial intelligence (AI) has taken off at lightning speed in the past couple of years, creating disruption in many industries. Newsrooms are no exception.

A new report published today finds that news audiences and journalists alike are concerned about how news organisations are – and could be – using generative AI such as chatbots, image, audio and video generators, and similar tools.

The report draws on three years of interviews and focus group research into generative AI and journalism in Australia and six other countries (United States, United Kingdom, Norway, Switzerland, Germany and France).

Only 25% of our news audience participants were confident they had encountered generative AI in journalism. About 50% were unsure or suspected they had.

This suggests a potential lack of transparency from news organisations when they use generative AI. It could also reflect a lack of trust between news outlets and audiences.

Who or what makes your news – and how – matters for a host of reasons.

Some outlets tend to use more or fewer sources, for example. Or use certain kinds of sources – such as politicians or experts – more than others.

Some outlets under-represent or misrepresent parts of the community. This is sometimes because the news outlet’s staff themselves aren’t representative of their audience.

Carelessly using AI to produce or edit journalism can reproduce some of these inequalities.

Our report identifies dozens of ways journalists and news organisations can use generative AI. It also summarises how comfortable news audiences are with each.

The news audiences we spoke to overall felt most comfortable with journalists using AI for behind-the-scenes tasks rather than for editing and creating. These include using AI to transcribe an interview or to provide ideas on how to cover a topic.

But comfort is highly dependent on context. Audiences were quite comfortable with some editing and creating tasks when the perceived risks were lower.

The problem – and opportunity

Generative AI can be used in just about every part of journalism.

For example, a photographer could cover an event. Then, a generative AI tool could select what it “thinks” are the best images, edit the images to optimise them, and add keywords to each.

Computer software can try to recognise objects in images and add keywords, leading to potentially more efficient image processing workflows.
Elise Racine/Better Images of AI/Moon over Fields, CC BY

These might seem like relatively harmless applications. But what if the AI identifies something or someone incorrectly, and these keywords lead to mis-identifications in the photo captions? What if the criteria humans think make “good” images are different to what a computer might think? These criteria may also change over time or in different contexts.

Even something as simple as lightening or darkening an image can cause a furore when politics are involved.

AI can also make things up completely. Images can appear photorealistic but show things that never happened. Videos can be entirely generated with AI, or edited with AI to change their context.

Generative AI is also frequently used for writing headlines or summarising articles. These sound like helpful applications for time-poor individuals, but some news outlets are using AI to rip off others’ content.

AI-generated news alerts have also gotten the facts wrong. As an example, Apple recently suspended its automatically generated news notification feature. It did this after the feature falsely claimed US murder suspect Luigi Mangione had killed himself, with the source attributed as the BBC.

What do people think about journalists using AI?

Our research found news audiences seem to be more comfortable with journalists using AI for certain tasks when they themselves have used it for similar purposes.

For example, the people interviewed were largely comfortable with journalists using AI to blur parts of an image. Our participants said they used similar tools on video conferencing apps or when using the “portrait” mode on smartphones.

Likewise, when you insert an image into popular word processing or presentation software, it might automatically create a written description of the image for people with vision impairments. Those who’d previously encountered such AI descriptions of images felt more comfortable with journalists using AI to add keywords to media.

Popular word processing and presentation software can automatically generate alt-text descriptions for images that are inserted into documents or presentations.
T.J. Thomson

The most frequent way our participants encountered generative AI in journalism was when journalists reported on AI content that had gone viral.

For example, when an AI-generated image purported to show Princes William and Harry embracing at King Charles’s coronation, news outlets reported on this false image.

Our news audience participants also saw notices that AI had been used to write, edit or translate news articles. They saw AI-generated images accompanying some of these. This is a popular approach at The Daily Telegraph, which uses AI-generated images to illustrate many of its opinion columns.

The Daily Telegraph frequently turns to generative AI to illustrate its opinion columns, sometimes generating more photorealistic illustrations and sometimes less photorealistic ones.
T.J. Thomson

Overall, our participants felt most comfortable with journalists using AI for brainstorming or for enriching already created media. This was followed by using AI for editing and creating. But comfort depends heavily on the specific use.

Most of our participants were comfortable with turning to AI to create icons for an infographic. But they were quite uncomfortable with the idea of an AI avatar presenting the news, for example.

On the editing front, a majority of our participants were comfortable with using AI to animate historical images, like this one. AI can be used to “enliven” an otherwise static image in the hopes of attracting viewer interest and engagement.

A historical photograph from the State Library of Western Australia’s collection has been animated with AI (a tool called Runway) to introduce motion to the still image.
T.J. Thomson

Your role as an audience member

If you’re unsure if or how journalists are using AI, look for a policy or explainer from the news outlet on the topic. If you can’t find one, consider asking the outlet to develop and publish a policy.

Consider supporting media outlets that use AI to complement and support – rather than replace – human labour.

Before making decisions, consider the past trustworthiness of the journalist or outlet in question, and what the evidence says.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University; Michelle Riedlinger, Associate Professor in Digital Media, Queensland University of Technology; Phoebe Matich, Postdoctoral Research Fellow, Generative Authenticity in Journalism and Human Rights Media, ADM+S Centre, Queensland University of Technology, and Ryan J. Thomas, Associate Professor, Washington State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S researcher cited in Parliament’s report on the Future of Work

ADM+S researcher cited in Parliament’s report on the Future of Work

Author Natalie Campbell
Date 17 February 2025

The House of Representatives Standing Committee on Employment, Education and Training has published The Future of Work, the report of its Inquiry into the Digital Transformation of Workplaces, citing contributions from ADM+S Affiliate Emmanuelle Walkowiak’s 19 June submission.

The inquiry found that imminent support is required for employers, workers, students, and regulators, and that Australia needs to increase investment in research and development to ensure the safe, responsible and effective use of ADM and AI in the workplace.

The report explains, “digital transformation has exposed significant risks, including gaps in Australia’s regulatory frameworks and workplace protections. This is especially the case with data and privacy.”

Dr Walkowiak, a Vice-Chancellor’s Senior Research Fellow in Economics at RMIT’s Blockchain Innovation Hub, focuses her research on technology-driven inclusion at work and the changing nature of work in a digital economy.

Her submission to the Inquiry outlined evidence-based recommendations on harnessing AI for productivity, skill development, and job creation in Australia while addressing risks like impacts on hiring, job design, and work quality. It explored AI’s effect on labour rights, fairness, and dignity at work, as well as its influence on small businesses and vulnerable groups, including neurodiverse workers.

Dr Walkowiak’s submission is cited in the report’s discussion of Regulating Technology: Public views (p.19), Opportunities in productivity and efficacy (p.28), and Data and Privacy: Disclosure and breach of privacy (p.49).

Dr Walkowiak said, “I’m honoured that my insights have been cited in the final report, which outlines key recommendations on the digital transformation of work and its implications for workers, businesses, and policymakers.

“Engaging with policymakers to support evidence-based decisions is an important part of my research, and I look forward to further discussions on shaping more inclusive and productive workplaces in the digital age.”

The Inquiry into the Digital Transformation of Workplaces was adopted on 9 April 2024, following a referral from the Minister for Employment and Workplace Relations, to report on the rapid development and uptake of automated decision making and machine learning techniques in the workplace.

Dr Walkowiak was invited to present evidence to the Committee as part of an academic roundtable on 2 September 2024.

ADM+S Affiliate Dr Kobi Leins and PhD Student Lauren Kelly were also involved in the public hearings.

View the full report.

SEE ALSO

AI is being used in social services – but we must make sure it doesn’t traumatise clients

AI is being used in social services – but we must make sure it doesn’t traumatise clients

Author Suvradip Maitra, Lyndal Sleep, Paul Henman, Suzana Fay
Date 10 February 2025

Late last year, ChatGPT was used by a Victorian child protection worker to draft documents. In a glaring error, ChatGPT referred to a “doll” used for sexual purposes as an “age-appropriate toy”. Following this, the Victorian information commissioner banned the use of generative artificial intelligence (AI) in child protection.

Unfortunately, many harmful AI systems will not garner such public visibility. It’s crucial that people who use social services – such as employment, homelessness or domestic violence services – are aware they may be subject to AI. Additionally, service providers should be well informed about how to use AI safely.

Fortunately, emerging regulations and tools, such as our trauma-informed AI toolkit, can help to reduce AI harm.

How do social services use AI?

AI has captured global attention with promises of better service delivery. In a strained social services sector, AI promises to reduce backlogs, lower administrative burdens and allocate resources more effectively while enhancing services. It’s no surprise a range of social service providers are using AI in various ways.

Chatbots simulate human conversation with the use of voice, text or images. These programs are increasingly used for a range of tasks. For instance, they can provide mental health support or offer employment advice. They can also speed up data processing or help quickly create reports.

However, chatbots can easily produce harmful or inaccurate responses. For instance, the United States National Eating Disorders Association deployed the chatbot Tessa to support clients experiencing eating disorders. But it was quickly pulled offline when advocates flagged Tessa was providing harmful weight loss advice.

Recommender systems use AI to offer personalised suggestions or options. These could include targeted job or rental ads, or educational material, based on data available to service providers.

But recommender systems can be discriminatory, such as when LinkedIn showed more job ads to men than women. They can also reinforce existing anxieties. For instance, pregnant women have been recommended alarming pregnancy videos on social media.

Recognition systems classify data such as images or text to compare one dataset to another. These systems can complete many tasks, such as face matching to verify identity or transcribing voice to text.

Such systems can raise surveillance, privacy, inaccuracy and discrimination concerns. A homeless shelter in Canada stopped using facial recognition cameras because they risked privacy breaches – it’s difficult to obtain informed consent from mentally unwell or intoxicated people using the shelter.

Risk-assessment systems use AI to predict the likelihood of a specific outcome occurring. Many systems have been used to calculate the risk of child abuse, long-term unemployment, or tax and welfare fraud.

Often data used in these systems can recreate societal inequalities, causing harm to already-marginalised peoples. In one such case, a tool in the US used for identifying risk of child mistreatment unfairly targeted poor, black and biracial families and families with disabilities.

A Dutch risk assessment tool seeking to identify childcare benefits fraud was shut down for being racist, while an AI system in France faces similar accusations.

The need for a trauma-informed approach

Concerningly, our research shows using AI in social services can cause or perpetuate trauma for the people who use the services.

The American Psychological Association defines trauma as an emotional response to a range of events, such as accidents, abuse or the death of a loved one. Broadly understood, trauma can be experienced at an individual or group level and be passed down through generations. Trauma experienced by First Nations people in Australia as a result of colonisation is an example of group trauma.

Between 57% and 75% of Australians experience at least one traumatic event in their lifetime.

Many social service providers have long adopted a trauma-informed approach. It prioritises trust, safety, choice, empowerment, transparency, and cultural, historical and gender-based considerations. A trauma-informed service provider understands the impact of trauma and recognises signs of trauma in users.

Service providers should be wary of abandoning these core principles despite the allure of the often hyped capabilities of AI.

Can social services use AI responsibly?

To reduce the risk of causing or perpetuating trauma, social service providers should carefully evaluate any AI system before using it.

For AI systems already in place, evaluation can help monitor their impact and ensure they are operating safely.

We have developed a trauma-informed AI assessment toolkit that helps service providers to assess the safety of their planned or current use of AI. The toolkit is based on the principles of trauma-informed care, case studies of AI harms, and design workshops with service providers. An online version of the toolkit is about to be piloted within organisations.

By posing a series of questions, the toolkit enables service providers to consider whether risks outweigh the benefits. For instance, is the AI system co-designed with users? Can users opt out of being subject to the AI system?

It guides service providers through a series of practical considerations to enhance the safe use of AI.

Social services do not have to avoid AI altogether. But social service providers and users should be aware of the risks of harm from AI – so they can intentionally shape AI for good.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Call for a more comprehensive regulatory framework for automated decision-making in the public sector

Report cover for Submission to AG Department on ADM Reform

Call for a more comprehensive regulatory framework for automated decision-making in the public sector

Author ADM+S Centre
Date 10 February 2025

In a new submission to the Attorney-General’s Department’s Automated Decision-Making (ADM) Reform consultation, experts from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) urge the government to adopt a more comprehensive regulatory framework for ADM in the public sector.

The submission argues that the current focus of the consultation paper on legislation and regulation overlooks essential aspects like enforcement and accountability, which are critical to ensuring responsible use of technology in government decision-making.

The response highlights that the existing approach is too narrow, focusing primarily on AI-based systems, while neglecting broader systemic issues. The authors contend that the government should lead by example, setting a standard for safe and accountable technology use that applies to all technical systems, not just AI.

Among the key recommendations outlined in the submission are calls for stronger enforcement mechanisms, including active monitoring and independent oversight. The experts also emphasise the need for transparency in the acquisition of ADM systems, urging the government to adopt robust measures to prevent misuse and ensure accountability across all public sector applications of automated decision-making.

Key recommendations

  • The ADM framework should include enforcement and accountability mechanisms.
  • Systemic and preventative measures, including ex-ante control and active monitoring, are needed.
  • An independent oversight body should monitor and enforce standards across government.
  • Qualified transparency mechanisms should be adopted.
  • Key transparency requirements should be incorporated into the acquisition of ADM systems.

As the public sector increasingly integrates automated technologies, the submission urges policymakers to act quickly to address these gaps, advocating for a regulatory framework that goes beyond individual cases to tackle systemic risks.

Authors:
Dr José-Miguel Bello y Villarino, Prof Emeritus Terry Carney, Prof Kimberlee Weatherall, Dr Rita Matulionyte, Prof Julian Thomas, Prof Paul Henman and Veronica Lenard.

SEE ALSO

Elections mean more misinformation. Here’s what we know about how it spreads in migrant communities

Individual reading news on their phone while riding the bus.

Elections mean more misinformation. Here’s what we know about how it spreads in migrant communities

Author Fan Yang and Sukhmani Khorana
Date 6 February 2025

Migrants in Australia often encounter disinformation targeting their communities. However, disinformation circulated in non-English languages and within private chat groups often falls beyond the reach of Australian public agencies, national media and platform algorithms.

This regulatory gap means migrant communities are disproportionately targeted during crises, elections and referendums, when misinformation and disinformation are amplified.

With a federal election just around the corner, we wanted to understand how migrants come across disinformation, how they respond to it, and importantly, what can be done to help.

Our research

Our research finds political disinformation circulates both online and in person among friends and family.

Between 2023 and 2024, we carried out a survey with 192 respondents. We then conducted seven focus groups with 14 participants who identify as having Chinese or South Asian cultural heritage.

We wanted to understand their experiences of political engagement and media consumption in Australia.

An important challenge faced by research participants is online disinformation. This issue was already long-standing and inadequately addressed by Australian public agencies and technology companies, even before Meta ended its fact-checking program.

Lack of diversity in news

Our study finds participants read news and information from a diverse array of traditional and digital media services with a heightened sense of caution.

They encounter disinformation in two ways.

The first is information misrepresenting their identity, culture, and countries of origin, particularly found in English-language Australian national media.

The second is targeted disinformation distributed across non-English social media services, including in private social media channels.

Image: Misinformation is often spread on Chinese social media platforms to target their users. Shutterstock

 

We asked our survey participants to rate their trust in Australian national media sources from zero (no trust) to five (most trusted). These included the ABC, SBS, The Age, the Sydney Morning Herald, 9 News and the 7 Network.

Participants reported a medium level of trust (three).

Our focus groups explained the mistrust participants have towards both traditional and social media news sources. Their thoughts echoed other research with migrants. For instance, a second-generation South Asian migrant said:

it feels like a lot of marketing with traditional media […] they use marketing language to persuade people in a certain way.

Several participants of Chinese and South Asian cultural backgrounds reported that Australian national media misrepresent their culture and identity due to a lack of genuine diversity within news organisations. One said:

the moment you’re a person of colour, everyone thinks that you’re Chinese. And we do get painted with the same paintbrush. It is very frustrating […]

Another added:

Sri Lanka usually gets in the media for cricket mainly, travel and tourism. So apart from that, there’s not a lot of deep insight.

For migrants, the lack of genuine engagement with their communities and countries of origin distorts public understanding, reducing migrants to a one-dimensional, often stereotypical, portrayal. This oversimplification undermines migrants’ trust in Australian national media.

Participants also expressed minimal trust in news and information on social media. They often avoid clicking on headline links, including those shared by Australian national media outlets. According to a politically active male participant of Chinese-Malaysian origin:

I don’t really like reading Chinese social media even though I’m very active on WeChat and subscribe to some news just to see what’s going on. I don’t rely on them because I usually don’t trust them and can often spot mistakes and opinionated editorials rather than actual news.

Consuming news from multiple sources to understand a range of political leanings is a strategy many participants employed to counteract biased or partial news coverage. This was particularly the case on issues of personal interest, such as human rights and climate change.

What can be done?

Currently, Australia lacks effective mechanisms to combat online disinformation targeting migrant communities, especially those whose first language is not English.

Generalised counter-disinformation approaches (such as awareness campaigns) are ineffective even when translated into multiple languages.

This is because the disinformation circulating in these communities is often highly targeted and tailored. Scaremongering around geopolitical, economic and immigration policies is a common theme. These narratives are too specific for a population-level approach to work.

Our focus groups revealed that the burden of addressing disinformation often falls on family members or close friends. This responsibility is particularly carried by community-minded individuals with higher levels of media and digital knowledge. Women and younger family members play a key role.

Image: Women and younger family members play a key role in debunking misinformation in migrant families. Shutterstock

 

Focus group members told us how they explained Australian political events to their families in terms they were more familiar with.

During the Voice to Parliament referendum, one participant referenced China’s history of resistance against Japanese Imperialism to help a Chinese-Australian friend better understand the consequences of colonialism and its impacts on Australia’s First Nations communities.

Younger women participants shared that combating online disinformation is an emotionally taxing process. This is especially so when it occurs within the family, often leading to conflicts. One said:

I’m so tired of intervening to be honest, and mostly it’s family […] my parents and close friends and alike. There is so much misinformation passed around on WhatsApp or socials. When I do see someone take a very strong stand, usually my father or my mother, I step in.

Intervening in an informal way doesn’t always work. Family dynamics, gender hierarchies and generational differences can impede these efforts.

Countering disinformation requires us to confront deeper societal issues related to race, ethnicity, gender, power and the environment.

International research suggests community-based approaches work better for combating misinformation in specific cohorts, like migrants. This sort of work could take place in settings people trust, be that community centres or public libraries.

This means not relying exclusively on changes in the law or the practices of online platforms.

Instead, the evidence suggests developing community-based interventions that are culturally resonant and attuned to historical disadvantage would help.

Our recently released toolkit makes a suite of recommendations for Australian public services and institutions, including the national media, to avoid alienating and inadvertently misinforming Asian-Australians as we approach a crucial election campaign.


Read more: About half the Asian migrants we surveyed said they didn’t fully understand how our voting systems work. It’s bad for our democracy


This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Open source and under control: the DeepSeek paradox

DeepSeek is a Monkey King moment in the global AI landscape. Illustration by Michael Joiner, 360info. Images by William Tung, Wikimedia & Akash Tetwal, Pexels. CC BY-SA 4.0

Open source and under control: the DeepSeek paradox

Author Haiqing Yu
Date 5 February 2025

DeepSeek has emerged on the front line of debates determining the future of AI, but its arrival poses questions over who decides what ‘intelligence’ we need.

Chinese company DeepSeek stands at the crossroads of two major battles shaping artificial intelligence development: whether source code should be freely available and whether development should happen in free or controlled-information environments.

That also highlights the DeepSeek paradox. It champions open-source AI — where the source code of the underlying model is available for others to use or modify — while operating in China, one of the world’s most-controlled data environments.

That means DeepSeek prompts obvious questions about who decides what kind of ‘intelligence’ we need. Such questions are obviously front of mind for some governments, with several already placing restrictions on the use of DeepSeek.

DeepSeek, a Chinese startup, unveiled its AI chatbot late last month. It seemed to equal the performance of US models at a fraction of the cost and the news triggered a massive sell-off of tech company shares on the US sharemarket.

It also sparked concerns about data security and censorship. In Australia, DeepSeek has been banned from all federal government devices, the NSW government has reportedly banned it from its devices and systems, and other state governments are considering their options. The Australian ban followed similar action by Taiwan, Italy and some US government agencies.

The Australian government says the bans are not related to DeepSeek’s country of origin, but the issues being raised now are similar to those discussed when Chinese-based social media app TikTok was banned on Australian government devices two years ago.

Yet aside from those concerns and DeepSeek’s role in reshaping the power dynamics in the US-China AI rivalry, it also gives hope to less well-resourced countries to develop their own large language models using DeepSeek’s model as a starting point.

For those seeking Chinese-related pop culture references, DeepSeek is a Monkey King moment in the global AI landscape.
The Monkey King, or Wukong in Chinese, is a character in the 16th-century novel Journey to the West.

The story was popularised in the 1980s television series Monkey and later iterations. In these stories, Wukong was the unpredictable force challenging established power, wreaking havoc in the Heavenly Palace and embodying both defiance and restraint.

That’s a pretty apt description for where DeepSeek stands in the AI world in 2025.

A new benchmark
As the author of a recent Forbes piece rightly points out, the real story about DeepSeek is not about geopolitics but “about the growing power of open-source AI and how it’s upending the traditional dominance of closed-source models”.

Author Kolawole Samuel Adebayo says it’s a line of thought that Meta chief AI scientist Yann LeCun also shares.

The AI industry has long been divided between closed-source titans like OpenAI, Google, Amazon, Microsoft and Baidu, and the open-source movement, which includes Meta, Stability AI and MosaicML, as well as universities and research institutes.

DeepSeek’s adoption of open-source methodologies — building on Meta’s open-source Llama models and the PyTorch ecosystem — places it firmly in the open-source camp.

While closed-source large language models prioritise controlled innovation, open-source large language models are built on the principles of collaborative innovation, sharing and transparency.

DeepSeek’s innovative methods challenge the notion that AI development must be backed by vast proprietary datasets and computational power, measured by the number and capacity of chips.

It also demonstrates a point made by the Australian Institute for Machine Learning’s Deval Shah three months before DeepSeek made global headlines: “The future of LLM [large language model] scaling may lie not just in larger models or more training data, but in more sophisticated approaches to training and inference.”

The DeepSeek case illustrates that algorithmic ingenuity can compensate for hardware and computing limitations, which is significant in the context of US export controls on high-end AI chips to China. That’s a crucial lesson for any nation or company restricted by computational bottlenecks.

It suggests that an alternative path exists — one where innovation is driven by smarter algorithms rather than sheer hardware dominance.

Just as Wukong defied the gods with his wit and agility, DeepSeek has shown that brute strength, or in this case raw computing power, is not the only determinant of AI success.

However, DeepSeek’s victory in the open-source battle does not mean it has won the war.

It faces tough challenges on the road ahead, particularly when it comes to scale, refinement and two of the greatest strengths of US AI companies — data quality and reliability.

The Achilles’ heel
DeepSeek appears to have broken free from the limitations of computing dependence, but it remains bound by China’s controlled information environment, which is an even greater constraint.

Unlike ChatGPT or Llama, which train on vast, diverse and uncensored global datasets, DeepSeek operates in the palm of the Buddha — the walled garden that is the Chinese government-approved information ecosystem.

While China’s AI models are technically impressive and perform brilliantly on technical or general questions, they are fundamentally limited by the data they can access, the responses they can generate and the narratives they are allowed to shape.

This is particularly so when it comes to freedom of expression and was illustrated by a small test conducted on 29 January 2025. DeepSeek was asked questions about the 1989 Tiananmen Square protests and massacre.

Image above: Screenshot and translation of DeepSeek test provided by author

In the test, DeepSeek was asked three questions, two in Chinese and one in English. It refused to answer the first and third questions and evaded the second.

ChatGPT, on the other hand, gave a thorough analysis of all three questions.

The test — among many other queries on sensitive topics — exposes the double bind facing Chinese AI: Can its large language model be truly world-class if it is constrained in what data it can ingest and what output it can generate? Can it be trustworthy if it fails the reliability test?

This is not merely a technical issue — it’s a political and philosophical dilemma.

In contrast to models like GPT-4, which can engage in free-form debate, DeepSeek operates within an internet space where sensitive topics must be avoided.

DeepSeek may have championed open-source large language models with its Chinese discourse of efficiency and ingenuity, but it remains imprisoned by a deeper limitation: data and regulatory constraints.

While its technical prowess lies in its reliance on and contribution to openness in code, it operates within an information ‘greenhouse’, where production of and access to critical and diverse datasets are ‘protected’. In other words, such datasets are restricted.

This is where the Monkey King metaphor comes full circle. Just as Wukong believed he had escaped, only to realise he was still inside the Buddha’s palm, DeepSeek appears to have achieved independence — yet remains firmly within the grip of the Chinese Communist Party.

It embodies the most radical spirit of AI transparency, yet it is fundamentally constrained in what it can see and say. No matter how powerful it becomes, it will struggle to evolve beyond the ideological limits imposed upon it.

The true disruption in generative AI is not technical; it is philosophical.

As we move toward generative AI agency and superintelligent AI, the debate might no longer be about finding our own place in the workforce or cognitive hierarchy, or whether large language models should be open or closed.

Instead, we could be asking: What kind of ‘intelligence’ do we need and — more importantly — who gets to decide?

Professor Haiqing Yu is a professor of media and communication and ARC Future Fellow at RMIT University. She is also a Chief Investigator with the ARC Centre of Excellence for Automated Decision-Making & Society. Professor Yu researches the sociopolitical and economic impact of China’s digital media, communication and culture on China, Australia and the Asia Pacific.

Originally published under Creative Commons by 360info™.

SEE ALSO

ADM+S researchers to collaborate on Data and Society’s new Climate, Technology, and Justice Program

Data and Society project announcement

ADM+S researchers to collaborate on Data and Society’s new Climate, Technology, and Justice Program

Author Data and Society 
Date 30 January 2025

Data & Society (D&S) today announced the launch of its Climate, Technology, and Justice program. Climate change is perhaps the most urgent social issue of our time and is only accelerating in importance. It already disproportionately impacts communities in the majority world, and energy-intensive technologies like artificial intelligence only worsen the problem. Data & Society has spent a decade building an empirical research base on data-driven technologies, and fostering a network that is influencing how these technologies are studied and governed. The organization is uniquely well-positioned to examine the social and environmental repercussions of the expanded global infrastructures and labor practices needed to sustain the growth of digital technologies, from AI and blockchain to streaming and data storage.

The new program will be led by Tamara Kneese, who joined D&S in 2023 as senior researcher and project director of the Algorithmic Impact Methods Lab (AIMLab), and whose experience in human-centered technology and climate activism in the tech industry makes her an ideal leader for this work. Joining her are two affiliates: Zane Griffin Talley Cooper, who studies data, resource extraction, and the Arctic; and Xiaowei R. Wang, whose multidisciplinary body of work over the past 15 years sits at the intersection of tech, digital media, art, and environmental justice.

Succeeding Kneese as AIMLab project director is D&S Senior Researcher Meg Young, whose leadership of the Lab’s participatory efforts and impact engagement since its 2023 launch has been key to its early successes. A champion for participatory methods in the AI impact space and for making technology more accountable to the public, Young has positioned AIMLab for the future through her work with communities across the country.

Before joining D&S, Kneese was lead researcher at Green Software Foundation (GSF), where she was part of the policy working group and the author of GSF’s first State of Green Software Report, which provided insight into the people and planet impacts of AI. Earlier, she was director of developer engagement on the green software team at Intel and assistant professor of media studies and director of gender and sexualities studies at the University of San Francisco. She and Young recently co-authored “Carbon Emissions in the Tailpipe of Generative AI” in the Harvard Data Science Review, offering an overview of the current state of measuring, regulating, and mitigating AI’s environmental impacts and underscoring that the real existential threat posed by AI is its impact on climate.

“While this program will first tackle the environmental impacts of AI, we have expansive visions of how D&S’s considerable skillset can help us understand the complex relationships between climate, the environment, climate change, technology, and justice — areas like e-waste and tech reuse, algorithmic disaster prediction, and low-carbon tech adoption, centering the experiences and voices of the communities most affected,” said Alice E. Marwick, Data & Society’s director of research. “I am thrilled about the new body of scholarship that we will develop under Tamara’s leadership.”

“I am very excited to begin to build an empirical research base that will demonstrate the impact that AI and other data-driven technologies are having on the environment and on communities,” Kneese said. “Most importantly, we are doing this work in partnership with other researchers, academics, and grassroots groups who are essential to our vision of being able to investigate how data-driven technologies shape the environment, and how communities participate in or resist these processes.”

The program begins its research with two related projects. The first, conducted in partnership with researchers at the University of Virginia School of Data Science, is an assessment of the environmental and social impacts of AI, going beyond quantitative measurements of energy, carbon, and water costs to include the human rights impacts of data centers and energy infrastructures on communities. The second is an ethnographic and historical study of the practices of measurement, resistance, contestation, and refusal that emerge within and alongside the tech industry, focusing on sustainability practitioners, tech worker activist groups, and grassroots community organizations that organize across the digital value chain to mitigate the environmental and labor implications of data-driven technologies. Both projects involve participatory workshops that center the perspectives and needs of impacted communities to ensure that policymakers understand the full spectrum of environmental impacts related to computing and its global supply chains and underlying infrastructures.

These projects are supported in part by the National Science Foundation under Grant No. 2427700 and the Internet Society Foundation’s Greening the Internet program. Data & Society believes this type of work is most successful when done in partnership with others. In addition to UVA, other current research collaborators include the ARC Centre of Excellence for Automated Decision-Making and Society, Athena Coalition, and Athena’s multi-state Data Center Working Group, in particular Green Web Foundation, Green Software Foundation, and UC Berkeley’s Human Rights Center.

SEE ALSO

Changing the narrative about regional women and technology

Report Cover: Improving digital inclusion for women in regional Victoria

Changing the narrative about regional women and technology

Author ADM+S Centre
Date 30 January 2025

A newly released evaluation report highlights the success of the Victorian Women’s Trust’s Rural Women Online program in addressing digital exclusion among women in regional Victoria.

Involving hands-on digital skills workshops on a range of topics, a help desk for one-on-one support, stands from local service providers, and keynotes from leading thinkers and writers on digital inclusion in Australia, Rural Women Online was delivered in August and September 2024 in Greater Shepparton and North East Victoria following extensive community consultation. Hundreds of women from across regional Victoria participated in the program, gaining new skills and confidence, and forging new social connections and opportunities.

Key outcomes include

  1. Boosting confidence: The program saw significant increases in participants’ confidence with digital technologies, with 43% reporting they felt more capable using digital tools and navigating online platforms. Workshops focused on practical skills like managing passwords, identifying scams, and safely using online services, helping participants overcome fears and avoid common pitfalls.
  2. Tailored support: Over half of participants sought personalised assistance from local mentors at the program’s help desks. Mentors were not necessarily ‘tech experts’ but were relatable and were happy to learn alongside participants, setting up a space for mutual empowerment for a range of often highly personal tasks.
  3. Strengths-based learning: By focusing on participants’ existing capabilities and reframing digital challenges as opportunities for empowerment, the program created a supportive environment. This approach empowered women to see themselves as capable digital users, shifting the narrative from vulnerability to resilience.
  4. Social connection: The program fostered a sense of community among participants, enabling them to share experiences and build networks for ongoing support. Informal workshops and “chat corners” encouraged open dialogue and connection, reducing feelings of isolation and promoting collaborative learning.

Rural women are among the most digitally excluded groups in Victoria, facing barriers like limited access to technology, low digital confidence, and a lack of locally relevant resources.

To address these challenges, Rural Women Online adopted a place-based approach, creating tailored learning environments that recognised and responded to the unique needs of each community.

The program was independently evaluated by a team of digital inclusion researchers at the ARC Centre of Excellence for Automated Decision-Making and Society, with the evaluation report detailing how place-based, community-driven programming boosted digital skills, confidence, and resilience for participants.

The evaluation also revealed that Rural Women Online effectively engaged participants from diverse cultural and economic backgrounds and different age groups. In Shepparton, where 44% of participants spoke a language other than English at home, the program provided sessions with local translators and culturally sensitive information about the online world. Meanwhile, sessions in Yackandandah in North East Victoria addressed disaster preparedness, reflecting local concerns in the region.

The program also supported older participants, with 66% of attendees aged 55 or older. eSafety sessions were particularly popular, with 79% of participants reporting they felt safer online after attending these sessions. For many, it was the first opportunity they had to learn collectively in a supportive environment.

Sustained impact

The program’s ripple effect is likely to extend beyond the workshops. Participants reported that they were keen to share their newfound knowledge with family and friends, helping to spread digital inclusion throughout their communities. The program also connected women with local resources and organisations for continued learning, connection and support.

In a keynote delivered as part of the program in Shepparton, ADM+S Director Distinguished Professor Julian Thomas noted the importance of programs like Rural Women Online in building digital inclusion in local communities: “Tackling [digital exclusion] in isolation can be debilitating and discouraging… The genius of the Rural Women Online program is recognising we can share the labour of learning, and that we often learn best from each other and in company”.

The success of Rural Women Online underscores the importance of listening to community needs and designing solutions that empower everyone to thrive in an increasingly digital world.

The full evaluation report is available here.

Learn more about the program in this video.

SEE ALSO

President Trump’s move to dismantle AI safety measures could have global impact

President Trump’s move to dismantle AI safety measures could have global impact

Author ADM+S Centre
Date 28 January 2025

On 20 January 2025, U.S. President Donald Trump revoked a 2023 executive order aimed at regulating artificial intelligence (AI), prioritising innovation over regulation of the rapidly advancing technology.

The executive order, signed by former President Joe Biden, required AI companies to submit safety testing data to federal authorities in order to establish safety standards around AI development.

ADM+S Affiliate and Director of the Centre for AI and Digital Ethics at the University of Melbourne, Prof Jeannie Paterson, joined ABC radio last week to discuss some of the implications of this decision.

“It’s definitely a statement that the guardrails have come off for the development of AI,” she explains.

“The executive order said that anybody who was releasing AI to be used with government had to put in place safeguards to prevent bias, to protect privacy, to reduce error, and to keep it cyber secure.

“Those requirements aren’t there anymore, so it’s hard to say what sort of safety measures will be reduced.”

Trump’s decision to revoke the order comes amid an escalating global race for AI supremacy, and coincides with a commitment to invest $800 billion to speed up AI development.

It marks a significant shift in the US government’s approach to AI oversight, and contrasts sharply with the approach of other nations.

Prof Paterson explains, “Australia is quite a small player here. We’ve made some steps, we’ve got some AI safety standards of our own in place that are very aligned with what’s happening in Europe, Canada, and indeed Singapore.

“What I’d expect to see is Australia continue down that path, and perhaps make some allegiances with those countries, so we’ve got that alliance of other countries also making those demands.”

She concludes, “It will be interesting to see how the competitive pressures go for those big tech companies that still want to sell to other jurisdictions.”

Listen on ABC.

SEE ALSO

Don’t rely on social media users for fact-checking. Many don’t care much about the common good.

AI Generated Image: Hands on phones

Don’t rely on social media users for fact-checking. Many don’t care much about the common good.

Author Mark Andrejevic
Date 20 January 2025

In the wake of Donald Trump’s election victory, Meta chief executive Mark Zuckerberg fired the fact-checking team for his company’s social media platforms. At the same time, he reversed Facebook’s turn away from political content.

The decision is widely viewed as placating an incoming president with a known penchant for mangling the truth.

Meta will replace its fact-checkers with the “community notes” model used by X, the platform owned by avid Trump supporter Elon Musk. This model relies on users to add corrections to false or misleading posts.

Musk has described this model as “citizen journalism, where you hear from the people. It’s by the people, for the people.”

For such an approach to work, both citizen journalists and their readers need to value good-faith deliberation, accuracy and accountability. But our new research shows social media users may not be the best crowd to source in this regard.

Our research

Working with Essential Media, our team wanted to know what social media users think of common civic values.

After reviewing existing research on social cohesion and political polarisation and conducting ten focus groups, we compiled a civic values scale. It aims to measure levels of trust in media institutions and the government, as well as people’s openness to considering perspectives that challenge their own.

We then conducted a large-scale survey of 2,046 Australians. We asked people how strongly they believed in a common public interest. We also asked about how important they thought it was for Australians to inform themselves about political issues and for schools to teach civics.

Importantly, we asked them where they got their news: social media, commercial television, commercial radio, newspapers or non-commercial media.

What did we find?

We found people who rely on social media for news score significantly lower on the civic values scale than those who rely on newspapers and non-commercial broadcasters such as the ABC.

By contrast, people who rely on non-commercial radio scored highest on the civic values scale. They scored 11% higher than those who rely mainly on social media and 12% higher than those who rely on commercial television.

The lowest score was for people who rely primarily on commercial radio.

People who relied on newspapers, online news aggregators, and non-commercial TV all scored significantly higher than those who relied on social media and commercial broadcasting.

The survey also found that as the number of different media sources people use daily increased, so too did their civic values score.

This research does not indicate whether platforms foster lower civic values or simply cater to them.

But it does raise concerns about social media becoming an increasingly important source of political information in democratic societies like Australia.

Why measure values?

The point of the civic values scale we developed is to highlight that the values people bring to news about the world are as important as the news content itself.

For example, most people in the United States have likely heard about the violence of the January 2021 attack on the Capitol by supporters protesting Trump’s loss in the 2020 election.

That Trump and his supporters can recast this violent riot as “a day of love” is not the result of a lack of information.

It is, rather, a symptom of people’s lack of trust in media and government institutions and their unwillingness to confront facts that challenge their views.

In other words, it is not enough to provide people with accurate information. What counts is the mindset they bring to that information.

No place for debate

Critics have long been concerned that social media platforms do not serve democracy well, privileging sensationalism and virality over thoughtful and accurate posts. As the critical theorist Judith Butler put it:

the quickness of social media allows for forms of vitriol that do not exactly support thoughtful debate.

Sociologist Zeynep Tufekci said social media is less about meaningful engagement than bonding with like-minded people and mocking perceived opponents. She notes, “belonging is stronger than facts”.

Her observation is likely familiar to anyone who has tried to engage in a politically charged discussion on social media.

These criticisms are commonplace in discussions of social media but have not been systematically tested until now.

Social media platforms are not designed to foster democracy. Their business model is based on encouraging people to see themselves as brands competing for attention, rather than as citizens engaged in meaningful deliberation.

This is not a recipe for responsible fact-checking. Or for encouraging users to care much about it.

Platforms want to wash their hands of the fact-checking process, because it is politically fraught. Their owners claim they want to encourage the free flow of information.

However, their fingers are on the scale. The algorithms they craft play a central role in deciding which forms of expression make it into our feeds and which do not.

It’s disingenuous for them to abdicate responsibility for the content they choose to pump into people’s news feeds, especially when they have systematically created a civically challenged media environment.


The author would like to acknowledge Associate Professor Zala Volcic, Research Fellow Isabella Mahoney and Research Assistant Fae Gehren for their work on the research on which this article is based.

Mark Andrejevic, Professor of Media, School of Media, Film, and Journalism, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S Partner Investigator receives a Presidential Early Career Award for Scientists and Engineers from the U.S. Government

Julia Stoyanovich
ADM+S PI Julia Stoyanovich receives Presidential Early Career Award for Scientists and Engineers

ADM+S Partner Investigator receives a Presidential Early Career Award for Scientists and Engineers from the U.S. Government

Author Natalie Campbell
Date 17 January 2025

Congratulations to ADM+S Partner Investigator Assoc Prof Julia Stoyanovich from New York University, a 2025 recipient of a Presidential Early Career Award for Scientists and Engineers in the United States.

Assoc Prof Stoyanovich is amongst nearly 400 scientists and engineers who received the award on 15 January, the highest honour bestowed by the U.S. government on outstanding scientists and engineers early in their careers.

“I am immensely grateful to my mentors and long-term collaborators, students, and postdocs for making this possible.

“And I am thrilled to be able to call New York University my home, where all doors are open and the sky is the limit,” said Assoc Prof Stoyanovich.

The media release, published on the White House website, reads: “Established by President Clinton in 1996, PECASE recognizes scientists and engineers who show exceptional potential for leadership early in their research careers.

“The award recognizes innovative and far-reaching developments in science and technology, expands awareness of careers in science and engineering, recognizes the scientific missions of participating agencies, enhances connections between research and impacts on society, and highlights the importance of science and technology for our nation’s future.”

Julia is an Associate Professor at New York University in the Department of Computer Science and Engineering at the Tandon School of Engineering, and the Center for Data Science.

Her research focuses on responsible data management and analysis practices: on operationalizing fairness, diversity, transparency, and data protection in all stages of the data acquisition and processing lifecycle.

Learn more about Julia’s work.   

SEE ALSO

Meta is abandoning fact checking – this doesn’t bode well for the fight against misinformation

Image credit: David Paul Morris/Bloomberg via Getty Images

Meta is abandoning fact checking – this doesn’t bode well for the fight against misinformation

Authors Ned Watt, Michelle Riedlinger and Silvia Montaña-Niño
Date 8 January 2025

Meta has announced it will abandon its fact-checking program, starting in the United States. It was aimed at preventing the spread of online lies among more than 3 billion people who use Meta’s social media platforms, including Facebook, Instagram and Threads.

In a video, the company’s chief, Mark Zuckerberg, said fact checking had led to “too much censorship”.

He added it was time for Meta “to get back to our roots around free expression”, especially following the recent presidential election in the US. Zuckerberg characterised it as a “cultural tipping point, towards once again prioritising speech”.

Instead of relying on professional fact checkers to moderate content, the tech giant will now adopt a “community notes” model, similar to the one used by X.

This model relies on other social media users to add context or caveats to a post. Its effectiveness on X is currently under investigation by the European Union.

This dramatic shift by Meta does not bode well for the fight against the spread of misinformation and disinformation online.

Independent assessment

Meta launched its independent, third-party fact-checking program in 2016.

It did so during a period of heightened concern about information integrity coinciding with the election of Donald Trump as US president and furore about the role of social media platforms in spreading misinformation and disinformation.

As part of the program, Meta funded fact-checking partners – such as Reuters Fact Check, Australian Associated Press, Agence France-Presse and PolitiFact – to independently assess the validity of problematic content posted on its platforms.

Warning labels were then attached to any content deemed to be inaccurate or misleading. This helped users to be better informed about the content they were seeing online.

A backbone to global efforts to fight misinformation

Zuckerberg claimed Meta’s fact-checking program did not successfully address misinformation on the company’s platforms, stifled free speech and led to widespread censorship.

But the head of the International Fact-Checking Network, Angie Drobnic Holan, disputes this. In a statement reacting to Meta’s decision, she said:

Fact-checking journalism has never censored or removed posts; it’s added information and context to controversial claims, and it’s debunked hoax content and conspiracy theories. The fact-checkers used by Meta follow a Code of Principles requiring nonpartisanship and transparency.

A large body of evidence supports Holan’s position.

In 2023 in Australia alone, Meta displayed warnings on over 9.2 million distinct pieces of content on Facebook (posts, images and videos), and over 510,000 posts on Instagram, including reshares. These warnings were based on articles written by Meta’s third-party, fact-checking partners.

Numerous studies have demonstrated that these kinds of warnings effectively slow the spread of misinformation.

Meta’s fact-checking policies also required the partner fact-checking organisations to avoid debunking content and opinions from political actors and celebrities, as well as political advertising.

Fact checkers could still verify claims from political actors and post content on their own websites and social media accounts. However, this fact-checked content was not subject to reduced circulation or censorship on Meta platforms.

The COVID pandemic demonstrated the usefulness of independent fact checking on Facebook. Fact checkers helped curb much harmful misinformation and disinformation about the virus and the effectiveness of vaccines.

Importantly, Meta’s fact-checking program also served as a backbone to global efforts to fight misinformation on other social media platforms. It facilitated financial support for up to 90 accredited fact-checking organisations around the world.

What impact will Meta’s changes have on misinformation online?

Replacing independent, third-party fact checking with a “community notes” model of content moderation is likely to hamper the fight against misinformation and disinformation online.

Last year, for example, reports from The Washington Post and the Center for Countering Digital Hate in the US found that X’s community notes feature was failing to stem the flow of lies on the platform.

Meta’s turn away from fact checking will also create major financial problems for third-party, independent fact checkers.

The tech giant has long been a dominant source of funding for many fact checkers. And it has often incentivised fact checkers to verify certain kinds of claims.

Meta’s announcement will now likely force these independent fact checkers to turn away from strings-attached arrangements with private companies in their mission to improve public discourse by addressing online claims.

Yet, without Meta’s funding, they will likely be hampered in their efforts to counter attempts to weaponise fact checking by other actors. For example, Russian President Vladimir Putin recently announced the establishment of a state fact-checking network following “Russian values”, in stark contrast to the International Fact-Checking Network’s code of principles.

This makes independent, third-party fact checking even more necessary. But clearly, Meta doesn’t agree.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO