A weird phrase is plaguing scientific papers – and we traced it back to a glitch in AI training data

Plants growing out of an old computer
Image credit: Pictus Photography / Canva


Authors Aaron J. Snoswell, Kevin Witzenberger, Rayane El Masri
Date 15 April 2025

Earlier this year, scientists discovered a peculiar term appearing in published papers: “vegetative electron microscopy”.

This phrase, which sounds technical but is actually nonsense, has become a “digital fossil” – an error preserved and reinforced in artificial intelligence (AI) systems that is nearly impossible to remove from our knowledge repositories.

Like biological fossils trapped in rock, these digital artefacts may become permanent fixtures in our information ecosystem.

The case of vegetative electron microscopy offers a troubling glimpse into how AI systems can perpetuate and amplify errors throughout our collective knowledge.

A bad scan and an error in translation

Vegetative electron microscopy appears to have originated through a remarkable coincidence of unrelated errors.

First, two papers from the 1950s, published in the journal Bacteriological Reviews, were scanned and digitised.

However, the digitising process erroneously combined “vegetative” from one column of text with “electron” from another, creating the phantom term.

Excerpts from scanned papers show how incorrectly parsed column breaks led to the term ‘vegetative electron micro…’ being introduced. Bacteriological Reviews

 

Decades later, “vegetative electron microscopy” turned up in some Iranian scientific papers. In 2017 and 2019, two papers used the term in English captions and abstracts.

This appears to be due to a translation error. In Farsi, the words for “vegetative” and “scanning” differ by only a single dot.

Screenshot from Google Translate showing the similarity of the Farsi terms for ‘vegetative’ and ‘scanning’. Google Translate 

An error on the rise

The upshot? As of today, “vegetative electron microscopy” appears in 22 papers, according to Google Scholar. One was the subject of a contested retraction from a Springer Nature journal, and Elsevier issued a correction for another.

The term also appears in news articles discussing subsequent integrity investigations.

Vegetative electron microscopy began to appear more frequently in the 2020s. To find out why, we had to peer inside modern AI models – and do some archaeological digging through the vast layers of data they were trained on.

Empirical evidence of AI contamination

The large language models behind modern AI chatbots such as ChatGPT are “trained” on huge amounts of text to predict the likely next word in a sequence. The exact contents of a model’s training data are often a closely guarded secret.

To test whether a model “knew” about vegetative electron microscopy, we input snippets of the original papers to find out if the model would complete them with the nonsense term or more sensible alternatives.

The results were revealing. OpenAI’s GPT-3 consistently completed phrases with “vegetative electron microscopy”. Earlier models such as GPT-2 and BERT did not. This pattern helped us isolate when and where the contamination occurred.

We also found the error persists in later models including GPT-4o and Anthropic’s Claude 3.5. This suggests the nonsense term may now be permanently embedded in AI knowledge bases.

Screenshot of a command line program showing the term ‘vegetative electron microscopy’ being generated by GPT-3.5 (specifically, the model gpt-3.5-turbo-instruct). The top 17 most likely completions of the provided text are ‘vegetative electron microscopy’, and these suggestions are 2.2 times more likely than the next most likely prediction. OpenAI
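Readers who want to try a similar probe can do so with a short script. The sketch below is a minimal, illustrative version of the approach, assuming access to OpenAI's legacy completions endpoint (the gpt-3.5-turbo-instruct model shown in the screenshot); the prompt text, sample size and comparison are placeholders, not the exact excerpts used in our tests.

```python
# Minimal sketch: probe whether a model prefers the nonsense term when
# completing text resembling the original 1950s papers. Illustrative only;
# the prompt below is a stand-in, not the exact excerpt used in the study.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "The fine structure of the cell wall was examined using "

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=prompt,
    max_tokens=6,
    n=20,             # sample 20 independent completions
    temperature=1.0,
    logprobs=5,       # also return the top 5 alternatives per token
)

completions = Counter(choice.text.strip().lower() for choice in response.choices)
for text, count in completions.most_common():
    print(f"{count:2d}x  {text}")

# If 'vegetative electron microscopy' dominates over sensible alternatives
# such as 'scanning electron microscopy', the term is present in the model.
```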

By comparing what we know about the training datasets of different models, we identified the CommonCrawl dataset of scraped internet pages as the most likely vector where AI models first learned this term.

The scale problem

Finding errors of this sort is not easy. Fixing them may be almost impossible.

One reason is scale. The CommonCrawl dataset, for example, is millions of gigabytes in size. For most researchers outside large tech companies, the computing resources required to work at this scale are inaccessible.

Another reason is a lack of transparency in commercial AI models. OpenAI and many other developers refuse to provide precise details about the training data for their models. Research efforts to reverse engineer some of these datasets have also been stymied by copyright takedowns.

When errors are found, there is no easy fix. Simple keyword filtering could deal with specific terms such as vegetative electron microscopy. However, it would also eliminate legitimate references (such as this article).
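To make that trade-off concrete, here is a toy sketch of naive keyword filtering (the documents are invented for illustration): the filter removes the contaminated text, but it also removes a passage that legitimately discusses the error.

```python
# Toy illustration of the blunt-instrument problem with keyword filtering.
# Documents here are made up for the example.
documents = [
    "The sample was imaged with vegetative electron microscopy at 20 kV.",      # contaminated
    "Scanning electron microscopy revealed the spore surface structure.",       # fine
    "'Vegetative electron microscopy' is a nonsense term born of a bad scan.",  # legitimate discussion
]

BANNED_PHRASE = "vegetative electron microscopy"

kept = [doc for doc in documents if BANNED_PHRASE not in doc.lower()]

print(f"Kept {len(kept)} of {len(documents)} documents")
# The filter drops the contaminated paper, but it also drops the passage that
# correctly explains the error -- exactly the collateral damage described above.
```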

More fundamentally, the case raises an unsettling question. How many other nonsensical terms exist in AI systems, waiting to be discovered?

Implications for science and publishing

This “digital fossil” also raises important questions about knowledge integrity as AI-assisted research and writing become more common.

Publishers have responded inconsistently when notified of papers including vegetative electron microscopy. Some have retracted affected papers, while others defended them. Elsevier notably attempted to justify the term’s validity before eventually issuing a correction.

We do not yet know if other such quirks plague large language models, but it is highly likely. Either way, the use of AI systems has already created problems for the peer-review process.

For instance, observers have noted the rise of “tortured phrases” used to evade automated integrity software, such as “counterfeit consciousness” instead of “artificial intelligence”. Additionally, phrases such as “I am an AI language model” have been found in other retracted papers.

Some automatic screening tools such as Problematic Paper Screener now flag vegetative electron microscopy as a warning sign of possible AI-generated content. However, such approaches can only address known errors, not undiscovered ones.

Living with digital fossils

The rise of AI creates opportunities for errors to become permanently embedded in our knowledge systems, through processes no single actor controls. This presents challenges for tech companies, researchers, and publishers alike.

Tech companies must be more transparent about training data and methods. Researchers must find new ways to evaluate information in the face of AI-generated convincing nonsense. Scientific publishers must improve their peer review processes to spot both human and AI-generated errors.

Digital fossils reveal not just the technical challenge of monitoring massive datasets, but the fundamental challenge of maintaining reliable knowledge in systems where errors can become self-perpetuating.

Aaron J. Snoswell, Research Fellow in AI Accountability, Queensland University of Technology; Kevin Witzenberger, Research Fellow, GenAI Lab, Queensland University of Technology, and Rayane El Masri, PhD Candidate, GenAI Lab, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

What’s your TikTok personality profile? New citizen science project helps you find out

AI Generated" TikTok event stage with logos, vertical screens, smoke, neon lights, and fruits like oranges, apples, and coconuts
Shutterstock AI/Shutterstock


Author ADM+S Centre
Date 17 April 2025

Ever wondered why certain videos show up in your TikTok Feed? Does TikTok know exactly what you like, or does it nudge you to like things? Whether you’re scrolling, liking, or making content, the TikTok algorithm is learning from you.

Using a new tool created by researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at QUT and The University of Sydney, TikTok users can now see themselves through the eyes of the algorithm. 

Launched ahead of the upcoming federal election, the For You Research Project explores how TikTok’s powerful recommendation algorithm is influencing culture, creativity, and public discourse in Australia.

“We’re using a new and exciting approach which is based on citizen science,” says lead researcher Patrik Wikström.

“TikTok is a highly personalised platform,” says Professor Wikström, and it’s “shaping what Australians see online”.

Participants are invited to join the project to learn what is shaping their TikTok Feed and get a new perspective on the role of TikTok in their lives.

This research will help researchers understand TikTok’s recommendation system and how it shapes culture, creativity, and public debates in Australia. 

The project explores three key questions:

  • What content does TikTok recommend to Australian users, and how does this shape our culture?
  • How do different people experience and interpret the algorithm?
  • How do TikTok creators adapt their strategies to stay visible and relevant?

Whether you’re a casual scroller, an avid content creator, or just curious about what your feed says about you, the For You Research Project is an opportunity to see TikTok from a whole new perspective.

SEE ALSO

ADM+S Research Fellow shares research on libraries and public values at international conferences

Dr Hegarty at the AlgoSoc International Scientific Conference 2025 on ‘The Future of Public Values in the Algorithmic Society’ in Amsterdam on 11 April (photo credit: Sander Kruit)


Author ADM+S Centre
Date 15 April 2025

In April 2025, ADM+S Research Fellow Dr Kieran Hegarty visited the Netherlands and United Kingdom to present his research on how changing publishing and distribution markets are reshaping how cultural institutions, particularly libraries, fulfil their mandates and serve their users. Libraries are long-standing public institutions that remain key social and cultural infrastructure, but like other civil society actors and public institutions, they face significant challenges in an age of AI and automated decision-making (ADM).

Funded by ADM+S Research Training Support Funds, Dr Hegarty presented his research at the inaugural Born-Digital Collections, Archives and Memory (BDCAM) Conference. The conference was organised by the Digital Humanities Research Hub in the School of Advanced Study at the University of London from 2-4 April 2025 and brought together leading academics and professionals developing and researching digital collections and archives across the world.

In a panel session on how large commercial digital platforms have reshaped the work of building and studying cultural collections, Dr Hegarty presented findings from his PhD research on how the twin forces of automation and commercialisation have changed how major public library collections are formed and studied. He drew on his ethnographic and historical fieldwork at the National Library of Australia and the State Library of New South Wales to detail how libraries negotiate an environment where access to information of long-term public interest is increasingly controlled by commercial platforms.

ADM+S Research Fellow Dr Kieran Hegarty presents a paper on platformisation and archives at the Born-Digital Collections, Archives and Memory Conference at the University of London, 2 April 2025 (photo credit: Alex Rumford and the School of Advanced Study)

In the Q&A, Dr Hegarty signalled to the data donation approach—taken by ADM+S researchers in signature projects such as the Australian Ad Observatory and the Australian Search Experience—as a possible alternative or supplement to platform-controlled access to social media data for libraries, as well the associated challenges of ethics, privacy, and inclusion. Dr Hegarty’s talk was raised in the final plenary, where leading researchers from across Europe and America reflected on the themes and highlights of the conference.

The BDCAM conference was held at the historic Senate House in London. The 1930s building housed the British Ministry of Information during the Second World War, responsible for censorship and propaganda, and was purportedly the inspiration for the ‘Ministry of Truth’ in George Orwell’s Nineteen Eighty-Four. Given that ongoing commercial and state control over what and how knowledge is produced, disseminated, and authorised continues to be a critical issue, the building was a fitting site to explore how power over archives has operated and continues to operate.

Dr Hegarty then joined ADM+S Centre Director, Distinguished Professor Julian Thomas, at the inaugural AlgoSoc Conference, held at the historic Felix Meritis building in Amsterdam from 10-11 April.

Also at the AlgoSoc Conference, Distinguished Prof Julian Thomas presented at the opening panel discussion on ‘Rethinking public values and AI governance in the algorithmic age’, and Laura Gartry, an ADM+S Research Student, presented her poster ‘Implementing editorial values in audio recommendations’.

Prof Julian Thomas presenting with Prof José van Dijck, Prof Abraham Bernstein and Prof Natali Helberger (left to right) at the AlgoSoc Conference. Photo credit: Kieran Hegarty.

Funded by the Dutch Ministry of Education, Culture and Science, AlgoSoc is a major ten-year research program that explores how to ensure public values like fairness, accountability, and transparency are protected in a society where more and more decisions are made by algorithms and AI systems.

AlgoSoc shares many affinities with ADM+S. Both research programs have a mutual interest not just in the technical design of automated systems, but in the institutional, social, and political arrangements that shape them, and how these arrangements are reshaped as sectors of public interest increasingly engage with a shifting constellation of actors and interests surrounding AI and ADM systems.

Dr Hegarty presented his paper, “Public libraries in the algorithmic society: An evolving site for the negotiation of public values”, co-authored with Professor Thomas and ADM+S Chief Investigator Professor Anthony McCosker, as part of a panel on “Sociotechnical infrastructures” chaired by Professor José van Dijck from Utrecht University.

The paper focused on how changing publishing and distribution markets over the past three decades have led to a renegotiation and rearticulation of the public values associated with libraries, particularly their commitment to ongoing and inclusive public access to published material. Other panellists shared similar challenges in relation to education, urban planning, and welfare provision across Europe, illustrating the cross-cutting issues affecting a range of sectors of public interest in different parts of the world.

“My attendance at BDCAM and AlgoSoc allowed me to share research from ADM+S with an international network of scholars, practitioners, and policymakers working at the intersection of technology and society,” said Dr Hegarty. “It also provided valuable opportunities to build and strengthen connections with leading research centres and explore future collaborations around how public values are being rearticulated as sectors of public interest engage with automated decision-making systems and AI and, in doing so, become increasingly entangled in the cultures and politics that surround these technologies”.

These events highlighted the growing global interest in understanding how public interest and values-led institutions are negotiating their roles, responsibilities, and the values they’re expected to uphold in an age of ADM, particularly when increasingly reliant on actors with very different interests, priorities, and resources.

The issues raised also underscored the importance of interdisciplinary and cross-sectoral work like that undertaken by ADM+S in ensuring that public values are not only considered but actively embedded in the design, governance, and operation of automated systems.

Dr Hegarty’s participation in these conferences reflects the ADM+S Centre’s ongoing commitment to supporting early career researchers to develop international partnerships and contribute to global conversations about digital futures grounded in equity, inclusion, and the public good.

SEE ALSO

DeepSeek and the Future of AI: Congressional Testimony from Julia Stoyanovich

Julia Stoyanovich testifying at a U.S. House Committee hearing
Image supplied by Assoc Prof Julia Stoyanovich


Author ADM+S Centre
Date 12 April 2025

On 9 April, Associate Prof Julia Stoyanovich, Director of the Center for Responsible AI at NYU Tandon School of Engineering and Partner Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society, testified at the Research & Technology Subcommittee Hearing – DeepSeek: A Deep Dive.

Her testimony focused on the national security and competitive advantage implications of DeepSeek for the US.

“It was an honor and a privilege to testify at the U.S. House of Representatives today, at a Research & Technology Subcommittee Hearing of the Committee on Science, Space, and Technology,” said Prof Stoyanovich.

In her remarks, Professor Stoyanovich offered three key recommendations with regard to the technology implications of DeepSeek:

Recommendation 1: Foster an Open Research Environment
To close the strategic gap, the federal government must support an open, ambitious research ecosystem. This includes robust funding for fundamental AI science, public datasets, model development, and compute access. The National AI Research Resource (NAIRR) is essential here—providing academic institutions, startups, and public agencies with tools to compete globally. Federal support for the National Science Foundation and other agencies is vital to sustaining open research and building a skilled AI workforce.

Recommendation 2: Incentivise Transparency Across the AI Lifecycle
Transparency drives progress, safety, and accountability. The government should require public disclosure of model architecture, training regimes, and evaluation protocols in federally funded AI work—and incentivize similar practices in commercial models. Public benchmarks, shared leaderboards, and reproducibility audits can raise the floor for all developers.

Recommendation 3: Establish a strong data protection regime
The U.S. must lead not only in AI performance, but in responsible, privacy-respecting AI infrastructure. This includes clear guardrails on how AI models collect and use data, especially when deployed in sensitive sectors. It also means restricting exposure of U.S. data to jurisdictions that lack safeguards. International frameworks like GDPR offer useful reference points—but our approach must reflect U.S. values and strategic interests.

About the Hearing

The hearing examined DeepSeek’s AI models, which have drawn international attention for achieving comparable performance to U.S. models while using less advanced chips and appearing more cost-effective. The session also explored the role of U.S. technologies in DeepSeek’s development and how federal support can drive innovation in the private sector.

Other expert witnesses included Adam Thierer (R Street Institute), Gregory Allen (Center for Strategic and International Studies), and Tim Fist (Institute for Progress).

Another related hearing will be held Wednesday by the House Energy and Commerce Committee, focusing on the federal role in accelerating advancements in computing.

View the hearing Research and Technology Subcommittee Hearing – DeepSeek: A Deep Dive on YouTube.

SEE ALSO

ADM+S Researchers to present at International conference on research and development in information retrieval

Abstract image with laptop and search bar


Author ADM+S Centre
Date 11 April 2025

Several researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) have been accepted to present their work at the ACM SIGIR 2025 Conference on Research and Development in Information Retrieval, the leading international forum in the field.

SIGIR is the premier international forum for the presentation of new research results and for the demonstration of new systems and techniques in information retrieval. The conference consists of five days of full papers, short papers, resource & reproducibility papers, perspectives papers, system demonstrations, doctoral consortium, tutorials, and workshops focused on research and development in the area of information retrieval. The conference will be held 13-18 July 2025 in Padova, Italy.

The following ADM+S research will be presented at the conference:

  • Classifying Term Variants in Query Formulation (full paper)
    Nuha AbuOnq (ADM+S Research Student), co-authored with Prof Falk Scholer (ADM+S Associate Investigator).
  • The Effects of Demographic Instructions on LLM Personas (short paper)
    Angel Magnossão de Paula (ADM+S Affiliate), co-authored with Prof Shane Culpepper, Prof Alistair Moffat, Sachin Pathiyan Cherumanal (ADM+S Research Student), Prof Falk Scholer (ADM+S Associate Investigator), and Dr Johanne Trippas.
  • PUB: An LLM-Enhanced Personality-Driven User Behaviour Simulator for Recommender System Evaluation (paper)
    Dr Chenglong Ma (ADM+S Research Student)
  • Characterising Topic Familiarity and Query Specificity Using Eye-Tracking Data (short paper)
    Jiaman He (ADM+S Research Student), co-authored with Zikang Leng, Dr Dana McKay, Dr Johanne Trippas, and Dr Damiano Spina (ADM+S Associate Investigator).

Prof Flora Salim (ADM+S Chief Investigator) and Prof Maarten de Rijke (ADM+S Partner Investigator) will also be co-hosting the second edition of the MANILA – SIGIR Workshop, a series focused on leveraging information retrieval to address the impacts of climate change.

SEE ALSO

Tools like Apple’s photo Clean Up are yet another nail in the coffin for being able to trust our eyes

AI Generated image of coloured silhouette figures in a city street
Apple Clean Up highlights photo elements that might be deemed distracting. Image credit: T.J. Thomson


Author T.J. Thomson
Date 11 April 2025

You may have seen ads by Apple promoting its new Clean Up feature that can be used to remove elements in a photo. When one of these ads caught my eye this weekend, I was intrigued and updated my software to try it out.

The feature has been available in Australia since December for Apple customers with certain hardware and software capabilities. It’s also available for customers in New Zealand, Canada, Ireland, South Africa, the United Kingdom and the United States.

The tool uses generative artificial intelligence (AI) to analyse the scene and suggest elements that might be distracting. You can see those highlighted in the screenshot below.

Screenshot of a photo in editing software, a city square with various people highlighted in red.
Apple uses generative AI to identify elements, highlighted here in red, that might be distracting in photos. It then allows users to remove these with the tap of a finger.
T.J. Thomson

You can then tap the suggested element to remove it or circle elements to delete them. The device then uses generative AI to try to create a logical replacement based on the surrounding area.

Easier ways to deceive

Smartphone photo editing apps have been around for more than a decade, but now, you don’t need to download, pay for, or learn to use a new third-party app. If you have an eligible device, you can use these features directly in your smartphone’s default photo app.

Apple’s Clean Up joins a number of similar tools already offered by various tech companies. Those with Android phones might have used Google’s Magic Editor. This lets users move, resize, recolour or delete objects using AI. Users with select Samsung devices can use their built-in photo gallery app to remove elements in photos.

There have always been ways – analogue and, more recently, digital – to deceive. But integrating them into existing software in a free, easy-to-use way makes those possibilities so much easier.

Using AI to edit photos or create new images entirely raises pressing questions around the trustworthiness of photographs and videos. We rely on the vision these devices produce in everything from police body and traffic cams to insurance claims and verifying the safe delivery of parcels.

If advances in tech are eroding our trust in pictures and even video, we have to rethink what it means to trust our eyes.

How can these tools be used?

The idea of removing distracting or unwanted elements can be attractive. If you’ve ever been to a crowded tourist hotspot, removing some of the other tourists so you can focus more on the environment might be appealing (check out the slider below for an example).

But beyond removing distractions, how else can these tools be used?

Some people use them to remove watermarks. Watermarks are typically added by photographers or companies trying to protect their work from unauthorised use. Removing them makes the unauthorised use less obvious, but no more legal.

Others use them to alter evidence. For example, a seller might edit a photo of a damaged good to allege it was in good condition before shipping.

As image editing and generating tools become more widespread and easier to use, the list of uses balloons proportionately. And some of these uses can be unsavoury.

AI generators can now make realistic-looking receipts, for example. People could then try to submit these to their employer to get reimbursed for expenses not actually incurred.

Can anything we see be trusted anymore?

Considering these developments, what does it mean to have “visual proof” of something?

If you think a photo might be edited, zooming in can sometimes reveal anomalies where the AI has stuffed up. Here’s a zoomed-in version of some of the areas where the Clean Up feature generated new content that doesn’t quite match the old.

Tools like Clean Up sometimes create anomalies that can be spotted with the naked eye.
T.J. Thomson

It’s usually easier to manipulate one image than to convincingly edit multiple images of the same scene in the same way. For this reason, asking to see multiple outtakes that show the same scene from different angles can be a helpful verification strategy.

Seeing something with your own eyes might be the best approach, though this isn’t always possible.

Doing some additional research might also help. For example, with the case of a fake receipt, does the restaurant even exist? Was it open on the day shown on the receipt? Does the menu offer the items allegedly sold? Does the tax rate match the local area’s?

Manual verification approaches like the above obviously take time. Trustworthy systems that can automate these mundane tasks are likely to grow in popularity as the risks of AI editing and generation increase.

Likewise, there’s a role for regulators to play in ensuring people don’t misuse AI technology. In the European Union, Apple’s plan to roll out its Apple Intelligence features, which include the Clean Up function, was delayed due to “regulatory uncertainties”.

AI can be used to make our lives easier. Like any technology, it can be used for good or bad. Being aware of what it’s capable of and developing your visual and media literacies is essential to being an informed member of our digital world.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Amazon’s new Alexa policy sparks privacy concerns

Alexa smart speaker
Credit: Olemedia/Getty Images


Author ADM+S Centre
Date 10 April 2025

Users of Amazon’s Alexa-enabled Echo devices in Australia and around the world may have noticed something different — or perhaps they haven’t. That’s part of the concern, according to experts.

In a recent interview with ABC Radio National’s Life Matters, Prof Daniel Angus, Chief Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society, explained that Amazon has made a significant and controversial change: all audio captured by its Echo smart speakers is now automatically sent to the cloud by default.

Users can opt out, but doing so limits the device’s ability to personalise responses and learn user preferences. The move has sparked a new wave of concern over consumer privacy, AI hype, and the growing power of Big Tech.

“This move is diabolical,” Prof Angus told Life Matters. “It breaks that fundamental trust.”

A Symptom of the AI Hype Cycle

Prof Angus argues that Amazon’s decision is not just a privacy issue, but part of a broader, more concerning trend.

“We’re in a hype cycle around AI,” he said. “Companies need us to believe in the idea of AI to maintain growth. This is not just about functionality — it’s about market dominance and feeding the myth of inevitable AI revolution.”

At the core of this trend is data. More data means better AI models, and smart speaker interactions — even something as simple as setting a timer — are a rich source of training material.

He pointed to Amazon’s market saturation and reliance on growth-at-all-costs as motivations for expanding data collection practices without clear consumer benefit.

Privacy or Access: A Sophie’s Choice?

Virtual assistants have real benefits, particularly for people with accessibility needs. But according to Angus, users should not have to choose between functionality and their right to privacy.

“Audio is incredibly private,” he said. “It’s gold for accessibility, but it can also be incredibly revealing.”

Historically, much of the processing by virtual assistants happened on-device, a method known as edge computing. This approach enabled commands to be interpreted locally, enhancing both performance and privacy. But the shift toward cloud-based processing threatens this balance.

Angus urged regulators to act, warning that without intervention, consumers could be locked into unfair trade-offs.

“We do this through regulation. Specifically, through privacy reform,” he said. “Privacy settings are fundamental to stopping companies from exploiting our data for capital gain.”

Reform on the Horizon?

Australia is currently reviewing its privacy frameworks, with new attention on children’s data and AI regulation. Angus suggested that Amazon’s move may be a catalyst for change.

“I think they’ve overplayed their hand,” he said. “This could be a wake-up call for both the public and policymakers.”

Listen to the full interview How your virtual assistant is listening to you on ABC Listen.

SEE ALSO

New study explores how autistic adults use non-human supports for wellbeing

Report cover: Autism Supports for comfort, care and connection. Megan Catherine Rose, Deborah Lupton


Author ADM+S Centre
Date 4 April 2025

A new autistic-led project, Autistic Supports for Comfort, Care, and Connection, reveals the everyday and creative ways autistic adults use objects, services, and creatures to support their wellbeing.

Conducted by Dr Megan Rose, research fellow, and Prof Deborah Lupton, from the ARC Centre of Excellence for Automated Decision-Making and Society at UNSW, the study interviewed 12 autistic Australians about the non-human supports they rely on for entertainment, social connection, special interests, burnout recovery, sensory challenges, and overall wellbeing.

Participants also imagined their ideal new support system tailored to their needs.

To visually represent these experiences, autistic graphic illustrator Sarah Firth was commissioned to create unique ‘portraits’ of each participant. Using anonymised interview transcripts, Sarah crafted illustrations that depict the challenges, coping strategies, and special interests of each individual—without ever seeing or meeting them.

The resulting booklet combines these portraits with lay-language participant narratives, offering a powerful and personal look at how autistic people engage with non-human supports in their daily lives.

“Importantly, this is an autistic-led project with a strengths-based approach. Megan and I wanted to focus on identifying not only the challenges faced by autistic people, but also the amazingly inventive ways they made their lives more comfortable and joyful,” Professor Lupton said.

Watch the online report launch on Youtube Autism Supports for Comfort, Care and Connection
View the publication Autistic Supports for Comfort, Care, and Connection
Watch the documentary Non-Human Supports Used by Autistic People for Connection, Health and Wellbeing

SEE ALSO

Can you tell the difference between real and fake news photos? Take the quiz to find out

A (real) photo of a protester dressed as Pikachu in Paris on March 29 2025. Remon Haazen / Getty Images


Author T.J. Thomson
Date 2 April 2025

You wouldn’t usually associate Pikachu with protest.

But a figure dressed as the iconic yellow Pokémon joined a protest last week in Turkey to demonstrate against the country’s authoritarian leader.

And then a virtual doppelgänger made the rounds on social media, raising doubt in people’s minds about whether what they were seeing was true. (Just to be clear, the image in the post shown below is very much fake.)

This is the latest in a spate of incidents involving AI-generated (or AI-edited) images that can be made easily and cheaply and that are often posted during breaking news events.

Doctored, decontextualised or synthetic media can cause confusion, sow doubt, and contribute to political polarisation. The people who make or share these media often benefit financially or politically from spreading false or misleading claims.

How would you go at telling fact from fiction in these cases? Have a go with this quiz and learn more about some of AI’s (potential) giveaways and how to stay safer online.



How’d you go?

As this exercise might have revealed, we can’t always spot AI-generated or AI-edited images with just our eyes. Doing so will also become harder as AI tools become more advanced.

Dealing with visual deception

AI-powered tools exist to try to detect AI content, but these have mixed results.

Running suspect images through a search engine to see where else they have been published – and when – can be a helpful strategy. But this relies on there being an original “unedited” version published somewhere online.

Perhaps the best strategy is something called “lateral reading”. It means getting off the page or platform and seeing what trusted sources say about a claim.

Ultimately, we don’t have time to fact-check every claim we come across each day. That’s why it’s important to have access to trustworthy news sources that have a track record of getting it right. This is even more important as the volume of AI “slop” increases.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Generative AI is already being used in journalism – here’s how people feel about it

AI Generated image of a news presenter in a newsroom
Indonesia’s TVOne launched an AI news presenter in 2023. T.J. Thomson


Authors T.J. Thomson, Michelle Riedlinger, Phoebe Matich, Ryan J. Thomas
Date 2 April 2025

Generative artificial intelligence (AI) has taken off at lightning speed in the past couple of years, creating disruption in many industries. Newsrooms are no exception.

A new report published today finds that news audiences and journalists alike are concerned about how news organisations are – and could be – using generative AI such as chatbots, image, audio and video generators, and similar tools.

The report draws on three years of interviews and focus group research into generative AI and journalism in Australia and six other countries (United States, United Kingdom, Norway, Switzerland, Germany and France).

Only 25% of our news audience participants were confident they had encountered generative AI in journalism. About 50% were unsure or suspected they had.

This suggests a potential lack of transparency from news organisations when they use generative AI. It could also reflect a lack of trust between news outlets and audiences.

Who or what makes your news – and how – matters for a host of reasons.

Some outlets tend to use more or fewer sources, for example. Or use certain kinds of sources – such as politicians or experts – more than others.

Some outlets under-represent or misrepresent parts of the community. This is sometimes because the news outlet’s staff themselves aren’t representative of their audience.

Carelessly using AI to produce or edit journalism can reproduce some of these inequalities.

Our report identifies dozens of ways journalists and news organisations can use generative AI. It also summarises how comfortable news audiences are with each.

The news audiences we spoke to overall felt most comfortable with journalists using AI for behind-the-scenes tasks rather than for editing and creating. These include using AI to transcribe an interview or to provide ideas on how to cover a topic.

But comfort is highly dependent on context. Audiences were quite comfortable with some editing and creating tasks when the perceived risks were lower.

The problem – and opportunity

Generative AI can be used in just about every part of journalism.

For example, a photographer could cover an event. Then, a generative AI tool could select what it “thinks” are the best images, edit the images to optimise them, and add keywords to each.

An image of a field with towers in the distance and computer-generated labels superimposed that try to identify certain objects in the image.
Computer software can try to recognise objects in images and add keywords, leading to potentially more efficient image processing workflows.
Elise Racine/Better Images of AI/Moon over Fields, CC BY

These might seem like relatively harmless applications. But what if the AI identifies something or someone incorrectly, and these keywords lead to mis-identifications in the photo captions? What if the criteria humans think make “good” images are different to what a computer might think? These criteria may also change over time or in different contexts.

Even something as simple as lightening or darkening an image can cause a furore when politics are involved.

AI can also make things up completely. Images can appear photorealistic but show things that never happened. Videos can be entirely generated with AI, or edited with AI to change their context.

Generative AI is also frequently used for writing headlines or summarising articles. These sound like helpful applications for time-poor individuals, but some news outlets are using AI to rip off others’ content.

AI-generated news alerts have also gotten the facts wrong. As an example, Apple recently suspended its automatically generated news notification feature. It did this after the feature falsely claimed US murder suspect Luigi Mangione had killed himself, with the source attributed as the BBC.

What do people think about journalists using AI?

Our research found news audiences seem to be more comfortable with journalists using AI for certain tasks when they themselves have used it for similar purposes.

For example, the people interviewed were largely comfortable with journalists using AI to blur parts of an image. Our participants said they used similar tools on video conferencing apps or when using the “portrait” mode on smartphones.

Likewise, when you insert an image into popular word processing or presentation software, it might automatically create a written description of the image for people with vision impairments. Those who’d previously encountered such AI descriptions of images felt more comfortable with journalists using AI to add keywords to media.

A screenshot of an image with the alt-text description that reads A view of the beach from a stone arch.
Popular word processing and presentation software can automatically generate alt-text descriptions for images that are inserted into documents or presentations.
T.J. Thomson

The most frequent way our participants encountered generative AI in journalism was when journalists reported on AI content that had gone viral.

For example, when an AI-generated image purported to show Princes William and Harry embracing at King Charles’s coronation, news outlets reported on this false image.

Our news audience participants also saw notices that AI had been used to write, edit or translate news articles. They saw AI-generated images accompanying some of these. This is a popular approach at The Daily Telegraph, which uses AI-generated images to illustrate many of its opinion columns.

An overview of twelve opinion columns published by The Daily Telegraph and each featuring an image generated by an AI tool.
The Daily Telegraph frequently turns to generative AI to illustrate its opinion columns, sometimes generating more photorealistic illustrations and sometimes less photorealistic ones.
T.J. Thomson

Overall, our participants felt most comfortable with journalists using AI for brainstorming or for enriching already created media. This was followed by using AI for editing and creating. But comfort depends heavily on the specific use.

Most of our participants were comfortable with turning to AI to create icons for an infographic. But they were quite uncomfortable with the idea of an AI avatar presenting the news, for example.

On the editing front, a majority of our participants were comfortable with using AI to animate historical images, like this one. AI can be used to “enliven” an otherwise static image in the hopes of attracting viewer interest and engagement.

A historical photograph from the State Library of Western Australia’s collection has been animated with AI (a tool called Runway) to introduce motion to the still image.
T.J. Thomson

Your role as an audience member

If you’re unsure if or how journalists are using AI, look for a policy or explainer from the news outlet on the topic. If you can’t find one, consider asking the outlet to develop and publish a policy.

Consider supporting media outlets that use AI to complement and support – rather than replace – human labour.

Before making decisions, consider the past trustworthiness of the journalist or outlet in question, and what the evidence says.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University; Michelle Riedlinger, Associate Professor in Digital Media, Queensland University of Technology; Phoebe Matich, Postdoctoral Research Fellow, Generative Authenticity in Journalism and Human Rights Media, ADM+S Centre, Queensland University of Technology, and Ryan J. Thomas, Associate Professor, Washington State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

How is rental tech changing the way we rent? Share your experience

Rear view of woman looking using smartphone while looking at real estate sign, planning to rent a house. Buying a new home. Property investment. Mortgage loans.
Credit: Oscar Wong/Getty Images


Author ADM+S Centre
Date 31 March 2025

Are you a renter in Australia with experience using digital rental platforms? A new research project is looking for participants to share their experiences with ‘RentTech’ and its impact on housing justice.

PhD researcher Samantha Floreani from the ARC Centre of Excellence for Automated Decision-Making and Society at Monash University is conducting a study on the growing influence of digital technologies in the residential real estate sector.

These technologies—sometimes referred to as ‘RentTech’—include online rental application platforms (such as 2Apply, Sorted, Ignite, and Snug), property management apps (Kolmeo, Cubbi, ConsoleTenant), and rent payment platforms (Rental Rewards, Ailo, SimpleRent), among others. The research aims to explore how these technologies affect renters’ experiences and housing justice in Australia.

Samantha says, “Against the backdrop of the ongoing housing crisis, renters are increasingly interacting with digital technologies at every stage of their housing experience.

“These tools come with promises of increased convenience, efficiency, and profit for real estate agents and landlords—but what do they mean for renters? Through this study, I aim to find out.”

Participants will take part in a one-on-one interview, discussing their interactions with RentTech and demonstrating an app, website, or platform they have used. The interview, which lasts approximately 60 minutes, can be conducted online via Zoom or in person at a mutually convenient location.

To participate, you should have some experience with, or opinion on, RentTech and also experience with Australia’s private rental market, though you do not need to have a current tenancy agreement.

All interviews will be recorded, transcribed, and anonymised to protect confidentiality.

Your insights will contribute to research that aims to centre renters’ voices in discussions about digital real estate technology. Findings from the study will help inform advocacy and policymaking efforts related to renters’ rights and housing justice.

For more information visit The Machine-Readable Renter website.

SEE ALSO

What makes a good search engine? These 4 models can help you use search in the age of AI

Internet search, computer search, hand out of computer with magnifying glass, quick search, search, internet icon.
Credit: beast01/Shutterstock


Authors Simon Coghlan, Damiano Spina, Falk Scholer and Hui Chia
Date 26 March 2025

Every day, users ask search engines millions of questions. The information we receive can shape our opinions and behaviour.

We are often not aware of their influence, but internet search tools sort and rank web content when responding to our queries. This can certainly help us learn more things. But search tools can also return low-quality information and even misinformation.

Recently, large language models (LLMs) have entered the search scene. While LLMs are not search engines, commercial web search engines have started to incorporate LLM-based artificial intelligence (AI) features into their products. Microsoft’s Copilot and Google’s AI Overviews are examples of this trend.

AI-enhanced search is marketed as convenient. But, together with other changes in the nature of search over the last decades, it raises the question: what is a good search engine?

Our new paper, published in AI and Ethics, explores this. To make the possibilities clearer, we imagine four search tool models: Customer Servant, Librarian, Journalist and Teacher. These models reflect design elements in search tools and are loosely based on matching human roles.

The four models of search tools

Customer Servant

Workers in customer service give people the things they request. If someone asks for a “burger and fries”, they don’t query whether the request is good for the person, or whether they might really be after something else.

The search model we call Customer Servant is somewhat like the first computer-aided information retrieval systems introduced in the 1950s. These returned sets of unranked documents matching a Boolean query – using simple logical rules to define relationships between keywords (e.g. “cats NOT dogs”).
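A toy sketch of this style of retrieval (documents and query invented for illustration) shows the defining behaviour: every document satisfying the logical expression is returned, unranked.

```python
# Toy Boolean retrieval in the Customer Servant style: unranked results,
# exact logical matching, no inference about what the user "really" wants.
docs = {
    1: "cats are independent pets",
    2: "dogs and cats can live together",
    3: "training dogs takes patience",
}

def matches(text: str) -> bool:
    """Evaluate the Boolean query 'cats NOT dogs' against one document."""
    words = set(text.lower().split())
    return "cats" in words and "dogs" not in words

results = [doc_id for doc_id, text in docs.items() if matches(text)]
print(results)  # [1] -- every matching document, in no particular order
```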

Librarian

As the name suggests, this model somewhat resembles human librarians. Librarian also provides content that people request, but it doesn’t always take queries at face value.

Instead, it aims for “relevance” by inferring user intentions from contextual information such as location, time or the history of user interactions. Classic web search engines of the late 1990s and early 2000s that rank results and provide a list of resources – think early Google – sit in this category.

Close-up of two people's hands exchanging a stack of books.
Librarians don’t just retrieve information, they strive for relevance.
Tyler Olson/Shutterstock

Journalist

Journalists go beyond librarians. While often responding to what people want to know, journalists carefully curate that information, at times weeding out falsehoods and canvassing various public viewpoints.

Journalists aim to make people better informed. The Journalist search model does something similar. It may customise the presentation of results by providing additional information, or by diversifying search results to give a more balanced list of viewpoints or perspectives.

Teacher

Human teachers, like journalists, aim at giving accurate information. However, they may exercise even more control: teachers may strenuously debunk erroneous information, while pointing learners to the very best expert sources, including lesser-known ones. They may even refuse to expand on claims they deem false or superficial.

LLM-based conversational search systems such as Copilot or Gemini may play a roughly similar role. By providing a synthesised response to a prompt, they exercise more control over presented information than classic web search engines.

They may also try to explicitly discredit problematic views on topics such as health, politics, the environment or history. They might reply with “I can’t promote misinformation” or “This topic requires nuance”. Some LLMs convey a strong “opinion” on what is genuine knowledge and what is unedifying.

No search model is best

We argue each search tool model has strengths and drawbacks.

The Customer Servant is highly explainable: every result can be directly tied to keywords in your query. But this precision also limits the system, as it can’t grasp broader or deeper information needs beyond the exact terms used.

The Librarian model uses additional signals like data about clicks to return content more aligned with what users are really looking for. The catch is these systems may introduce bias. Even with the best intentions, choices about relevance and data sources can reflect underlying value judgements.

The Journalist model shifts the focus toward helping users understand topics, from science to world events, more fully. It aims to present factual information and various perspectives in balanced ways.

This approach is especially useful in moments of crisis – like a global pandemic – where countering misinformation is critical. But there’s a trade-off: tweaking search results for social good raises concerns about user autonomy. It may feel paternalistic, and could open the door to broader content interventions.

The Teacher model is even more interventionist. It guides users towards what it “judges” to be good information, while criticising or discouraging access to content it deems harmful or false. This can promote learning and critical thinking.

But filtering or downranking content can also limit choice, and raises red flags if the “teacher” – whether algorithm or AI – is biased or simply wrong. Current language models often have built-in “guardrails” to align with human values, but these are imperfect. LLMs can also hallucinate plausible-sounding nonsense, or avoid offering perspectives we might actually want to hear.

Staying vigilant is key

We might prefer different models for different purposes. For example, since teacher-like LLMs synthesise and analyse vast amounts of web material, we may sometimes want their more opinionated perspective on a topic, such as on good books, world events or nutrition.

Yet sometimes we may wish to explore specific and verifiable sources about a topic for ourselves. We may also prefer search tools to downrank some content – conspiracy theories, for example.

LLMs make mistakes and can mislead with confidence. As these models become more central to search, we need to stay aware of their drawbacks, and demand transparency and accountability from tech companies on how information is delivered.

Striking the right balance with search engine design and selection is no easy task. Too much control risks eroding individual choice and autonomy, while too little could leave harms unchecked.

Our four ethical models offer a starting point for robust discussion. Further interdisciplinary research is crucial to define when and how search engines can be used ethically and responsibly.

Simon Coghlan, Senior Lecturer in Digital Ethics, Centre for AI and Digital Ethics, School of Computing and Information Systems, The University of Melbourne; Damiano Spina, Senior Lecturer, School of Computing Technologies, RMIT University; Falk Scholer, Professor of Information Access and Retrieval, RMIT University, and Hui Chia, PhD Candidate in Law, The University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Why voting in a fact-checking void should worry you

False/Fact words overlaid on an abstract image of Elon Musk and Mark Zuckerberg
Illustration by Michael Joiner, 360info, images via James Duncan Davidson & Jose Luis Magana CC BY 4.0


Authors Ned Watt and Michelle Riedlinger
Date 25 March 2025

The loss of Australia’s go-to political fact-checker and the rise of AI tools has created a crisis for political accountability just as the nation’s voters prepare to go to the polls.

Professional fact-checkers have never been under more pressure and social media users face a complex and fast-evolving misinformation landscape.

It’s crucial voters understand the situation in the lead-up to the vote.

This federal election will be the first without RMIT ABC Fact Check, which completed its first fact check during the Rudd-Abbott election of 2013.

It will also be Australia’s first federal election since the release of ChatGPT and other generative AI tools that have heralded a new normal of AI-generated political advertising and propaganda.

The risk to accountability is a win for vested interests in Australia’s political and media systems. It means there is even more potential for those vested interests to manipulate information for their own benefit rather than the public good.

Australia needs political parties to commit to the ethical use of AI in their campaigning, and bipartisan support for improved human-AI detection tools created by and for fact-checkers and journalists, to improve media information integrity systems.

What happened to political fact-checking

Independent fact-checking has faced a public legitimacy crisis in the past few years, mirroring similar crises of trust in news.

The crisis is driven in part by politicians’ denigration of online investigative research activities, which is related to distrust of the fact-checking movement among far-right politicians and their allies around the world.

In Australia, the fact-checking arrangement between the ABC and RMIT University ended in 2024, following a media furore in the lead-up to the 2023 Voice to Parliament referendum. Conservative media depicted RMIT FactLab, another entity under RMIT's professional fact-checking wing, as grossly biased.

Claims of fact-checkers’ political bias hinge on observations that right-leaning voices tend to share news content that diverges from established consensus more often, resulting in a relatively high proportion of their claims being fact-checked.

The suspension of RMIT FactLab's membership of Meta's third-party fact-checking program cast a long shadow over the credibility of fact-checking, reflecting similar questions to those recently posed in the United States about the role of truth in politics.

Australia still has two locally-owned fact-checking units – ABC's in-house fact-checker ABC News Verify and the Australian Associated Press (AAP) fact-checking service – as well as AFP Australia, the local division of Agence France-Presse's (AFP) fact-checking operation.

Australian fact-checkers have been part of a push for political accountability and depolarisation, responding to Australians' concerns about the interplay between private interests in politics and the media and the public interest, including the role of big tech in moderating information online.

There have been calls for greater accountability and transparency in news reporting, but fact-checkers worldwide have experienced setbacks.

At the start of the year, Meta announced that it was ending its third-party fact-checking program in the United States and making changes to its content-moderation policies. The changes would amplify political content and allow content targeting vulnerable minorities that it previously considered contentious and divisive.

This move signalled a crisis for professional fact-checkers, journalists and misinformation researchers.

Meta boss Mark Zuckerberg, under pressure from Donald Trump and other conservative critics of Meta's third-party fact-checking program, claimed that US fact-checking was akin to censorship. That echoed accusations of partisan censorship in Europe, the Philippines and Australia.

Alternatives to independent fact-checking

Zuckerberg claims the answer to Meta's information integrity problems will be found in a Community Notes-style program, modelled on the system developed at Twitter and now used on Elon Musk's X.

While such an approach could provide some value in terms of contextualising misleading content, it does little to address complex online harms.

Recent studies have found independent fact-checkers are frequently cited in Community Notes, and that successful community moderation relies on professional fact-checking.

Human-AI approaches are also on the rise, with X's Community Notes employing a 'bridging' algorithm that only publishes a note once it has been rated helpful by contributors who have tended to disagree in their past ratings.
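To illustrate the bridging idea in rough terms, the toy sketch below publishes a note only when contributors who have usually disagreed in the past both rate it helpful. It is a simplified illustration only, not X's actual Community Notes scoring system (which factorises the full rating history); the 0.4 agreement threshold and the sample data are made up for the example.

```python
# Toy illustration of 'bridging' aggregation for community fact-check notes.
# A simplified sketch, NOT X's actual Community Notes algorithm.
from itertools import combinations

def rating_agreement(history_a: dict, history_b: dict) -> float:
    """Share of co-rated past notes on which two contributors agreed."""
    shared = set(history_a) & set(history_b)
    if not shared:
        return 0.5  # no shared history: treat as neither aligned nor opposed
    return sum(history_a[n] == history_b[n] for n in shared) / len(shared)

def note_is_published(helpful_raters: list[str], histories: dict[str, dict]) -> bool:
    """Publish only if some pair of 'helpful' raters usually disagrees (threshold assumed)."""
    for a, b in combinations(helpful_raters, 2):
        if rating_agreement(histories[a], histories[b]) < 0.4:
            return True
    return False

# Contributors' past ratings of other notes (True = rated helpful).
histories = {
    "alice": {"note1": True, "note2": True, "note3": False},
    "bob":   {"note1": False, "note2": False, "note3": True},   # usually opposes alice
    "carol": {"note1": True, "note2": True, "note3": False},    # usually agrees with alice
}

print(note_is_published(["alice", "carol"], histories))  # False: like-minded raters only
print(note_is_published(["alice", "bob"], histories))    # True: raters who usually disagree concur
```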

However, there are flaws in that system.

Professional fact-checking has been notoriously difficult to scale, so fact-checkers have also been experimenting with AI-based approaches. However, these efforts are limited by time and resources.

What this means for Australia

There are already signs of problematic AI use in political communication, including AI-edited footage of politicians designed to engage audiences, often at the expense of other candidates or parties.

Such content carries unsanctioned or uncharacteristic messaging to attack, or sow confusion around, particular policies or politicians, through parody as well as outright deception.

While Meta recently committed to labelling content that it identifies as being generated with AI, evidence suggests that labelling content as generated does little to reduce its perceived credibility. In other words, the power of AI for political communication is not just its ability to deceive, but to persuade — both cheaply and at scale.

These practices could deceive or manipulate voters and even lead to a loss of faith in institutional systems or authentic evidence being discredited.

The defunding and delegitimisation of professional fact-checkers threatens their ability to provide context and explanation and impedes their investigative abilities to better understand the problematic media landscape.

The end of platform-supported fact-checking in the United States also sets a precedent for digital platforms to enter into covert agreements with elected officials, furthering individual political or economic agendas, instead of creating policies that serve the public interest.

In Australia, there is the potential for future political dealmaking between influencers or power brokers, platform owners like Musk and Zuckerberg and segments of the Australian elite, which would cause more public confusion and disillusionment.

Ned Watt is a PhD candidate at the ARC Centre of Excellence for Automated Decision-Making and Society at the Queensland University of Technology Digital Media Research Centre.

Mr Watt’s research is funded in part by the Global Journalism Innovation Lab (GJIL).

Michelle Riedlinger is an Associate Professor and Associate Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society at the Queensland University of Technology’s School of Communication.

Originally published under Creative Commons by 360info™.

SEE ALSO

Chinese social media platform RedNote a new battleground ahead of federal election

Silhouette of three people on mobile phones with RedNote logo in the background

Chinese social media platform RedNote a new battleground ahead of federal election

Author ADM+S Centre
Date 24 March 2025

As Australia approaches its federal election, concerns are mounting over the spread of misinformation and disinformation on the Chinese social media platform RedNote, known to Mandarin speakers as Xiaohongshu, or the "little red book". RedNote is a platform increasingly used by Australian politicians to connect with Chinese Australians.

In addition to informational and educational content, deepfake videos, politically or commercially driven misleading content, and shadow banning are emerging as key issues in the digital landscape, raising alarm over the integrity of online political discourse.

In a recent investigation, the ABC uncovered a deepfake video featuring a manipulated clip of Opposition Leader Peter Dutton speaking Mandarin on RedNote.

The video uses legitimate footage from an interview where Dutton discusses the Indigenous flag, but AI has altered it to make it appear as though he is speaking Mandarin. In the video, Dutton appears to suggest that Indigenous flags should not be displayed at press conferences, a claim that is misleading and taken out of context.

ARC Centre of Excellence for Automated Decision-Making and Society researchers Dr Fan Yang, from the University of Melbourne, and Dr Robbie Fordyce, from Monash University, discussed the issue in an interview on ABC's World Today.

Dr Fan Yang studies Australian political information on Chinese-language social media services and warns that such deepfake videos are not isolated incidents. She notes that other misleading content has spread on the platform, such as videos implying the Albanese government is arresting temporary migrants, and commercially driven threatening messages about Australia's new policies on immigration and housing.

In these cases, the videos are often taken out of context, with captions misrepresenting the events.

Further complicating matters, Dr Yang points to the lack of official voices on RedNote, such as the Australian Electoral Commission (AEC), to prebunk and debunk false or misleading information. She also notes that the narrow scope of what public agencies classify as “misinformation” and “disinformation” limits their capacity for effective intervention, and highlights troubling instances of shadow banning, where Australian politicians’ accounts and content are hidden from Chinese Australian users.

“If you search for the name of a politician, you wouldn’t even be able to find their account,” Dr Yang explains.

“This raises concerns that Chinese Australians are being exposed to an increasingly one-sided view of political events.”

Following the publication of an ABC investigative report, on 24 March ADM+S-affiliated PhD researcher Dan Dai identified platform intervention affecting recent content under the hashtags #澳大利亚大选 and #澳洲大选 (meaning "Australian election") on RedNote: no recent content appears in search results for these terms.

The impact of misinformation on Chinese Australians became particularly apparent during last year’s Voice referendum, with many expressing anxiety over the potential constitutional changes. The research team has released an interim report on the issue.

Dr Robbie Fordyce notes that misinformation often exploited existing fears among migrant communities, portraying the referendum as granting undue power to Indigenous Australians, which in turn would disadvantage migrant communities.

“They were interpreting the referendum as giving Indigenous Australians massive constitutional power, which would subordinate other groups,” Fordyce explained.

Although experts have raised questions about the potential influence of international actors, such as the Chinese government, Dr Fordyce stressed that their research found no evidence of a coordinated campaign to manipulate the platform for political purposes, aside from the influence of Chinese internet governance, which regulates permissible discussions.

Despite this, he acknowledged that existing fear and concerns often drive people to share misleading content.

Experts agree that better access to reliable, Chinese-language journalism could alleviate some of these issues.

Dr Fordyce believes that providing accurate, well-researched news could help Chinese Australians better navigate the complex political landscape.

“[With sufficient funding and support], if there was a rich Chinese language news source with good journalistic ethics, that could address concerns and provide correct information, it would really help these people,” he said.

In response to growing concerns, an AEC spokesperson said the commission is continuously monitoring the social media environment to engage with voters, despite limited resources.

As Australian politicians continue to use RedNote and WeChat as a tool to engage with Chinese Australians, the integrity of information on the platform remains a critical issue, with both misinformation and the silencing of political voices posing significant challenges to the upcoming election.

This project is led and conducted by Dr Fan Yang, with research assistance from Dan Dai, Stevie Zhang, and Mengjie Cai at the University of Melbourne, and co-led by Dr Robbie Fordyce at Monash University and Dr Luke Heemsbergen at Deakin University. Between 2024 and 2025, the project is funded by the Susan McKinnon Foundation.

SEE ALSO

Researchers to investigate the use of Generative AI by non-English speaking students in tertiary education

generative ai word on world map. concept showing artificial intelligence creative mind for generat music, image and speech.

Researchers to investigate the use of Generative AI by non-English speaking students in tertiary education

Author Kathy Nickels
Date 17 March 2025

Associate Professor Michelle Riedlinger from the ARC Centre of Excellence for Automated Decision-Making and Society at QUT, along with colleagues Dr Xiaoting Yu and Dr Mimi Tsai, has secured funding to investigate the factors driving the use of GenAI by students from non-English speaking backgrounds (NESB), and strategies to improve learning outcomes in applying AI ethically and professionally.

The study will take place as a longitudinal study of master’s students using a combination of sprint interviews and follow-up discussions.

“We’re grappling with how higher education, the Australian research community and the professional communication sector are responding to these technologies and so we’re excited to investigate these understudied use cases, which are so important for our students,” says Associate Professor Riedlinger.

The findings from this study could benefit international students across various programs at QUT.

Dr Mimi Tsai, a co-researcher on the project and QUT Learning Designer, explained that the study aims to reduce added stress experienced by NESB students. 

“NESB students already balance new professional commitments, visa restrictions, unfamiliar educational systems, and the need for stronger industry connections and enhanced digital skills,” she said.

Dr Xiaoting Yu, an Affiliate Investigator from the Digital Media Research Centre at QUT and the lead researcher on the project, emphasised the importance of the study for filling gaps in higher education research. 

“Although there has been significant research on GenAI in tertiary education, little attention has been given to master’s coursework students from Non-English Speaking Backgrounds,” she said.

The anticipated outcomes of the study include the development of an adaptable framework that addresses the needs of NESB student cohorts, with broad applicability across the faculty’s undergraduate and master’s programs.

This study, Investigating the Generative AI capabilities and needs of students from non-English speaking backgrounds: A longitudinal study of master students’ evolving engagement with AI at QUT, has received funding through QUT’s CIESJ Learning and Teaching seed funding scheme.

SEE ALSO

RMIT partners with the Office of the National Broadcasting and Telecommunications Commission of Thailand to address digital access and policy

MoU signatories (left to right) Mr Trairat Viriyasirikul, Professor Saskia Loer Hansen and Distinguished Professor Julian Thomas.
MoU signatories (left to right) Mr Trairat Viriyasirikul, Professor Saskia Loer Hansen and Distinguished Professor Julian Thomas.

RMIT partners with the Office of the National Broadcasting and Telecommunications Commission of Thailand to address digital access and policy

Author Kathy Nickels
Date 17 March 2025

The Office of the National Broadcasting and Telecommunications Commission of Thailand (Office of the NBTC), and RMIT University, Australia, have formalised a new partnership with the signing of a Memorandum of Understanding (MOU) on 17 February 2025.

This landmark agreement aims to foster international collaboration in academic and research endeavours, contributing to the shared goals of addressing global challenges related to digital access and policy-oriented research.

The Office of the NBTC, a leading independent state body that regulates broadcasting, television, radiocommunications and telecommunications across Thailand, one of the 10 member states of the Association of Southeast Asian Nations (ASEAN), will collaborate with RMIT.

The partnership will also involve researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) to advance research and development in areas critical to shaping future policies and regulatory decisions.

By addressing critical issues related to the digital divide, the collaboration aims to promote more equitable access to technology, while also strengthening the policy frameworks essential for fostering social and economic development.

“Bringing together the expertise of Office of the NBTC, RMIT, and the ADM+S, this MOU will create new avenues for impactful research and policy analysis,” said Distinguished Professor Julian Thomas, Director of ADM+S.

The MOU emphasises a commitment to conducting in-depth research that will influence policy development, guiding the future of broadcasting and telecommunications regulation in both nations.

“We are proud to enter into this collaboration with Office of the NBTC, as it represents an important step toward addressing complex challenges in the digital domain,” said Distinguished Professor Thomas.

“Through our shared expertise and combined efforts, we aim to make a lasting impact on global digital policy and regulatory frameworks.”

Pictured above MoU signatories: Mr Trairat Viriyasirikul, Acting Secretary-General, Office of The National Broadcasting and Telecommunications Commission of the Kingdom of Thailand; Professor Saskia Loer Hansen, Deputy Vice-Chancellor International and Engagement and Vice-President, RMIT University; and Distinguished Professor Julian Thomas, Director of the ARC Centre of Excellence for Automated Decision-Making and Society at RMIT University.

SEE ALSO

How digital giants let poll scrutiny fall

Meta sign
Wikimedia Commons: Nokia621 CC BY-SA 4.0

How digital giants let poll scrutiny fall

Authors Axel Bruns and Samantha Vilkins
Date 27 February 2025

The changing social media world, already hostile to oversight, is making monitoring election activity even more difficult. Yet policymakers still have options.

A seismic change in the social media landscape — described by one industry insider as a ‘Cambrian explosion’ in digital options — poses fundamental challenges to those who would monitor the digital world.

Mature social media platforms like Facebook are being challenged by new players in an environment increasingly hostile to researchers and regulators.

That has huge ramifications heading towards the Australian federal election — for politicians as well as those who would monitor them.

That does not mean Australia is powerless in the fight for online transparency. There are initiatives policymakers could and should adopt, including some already in place in other jurisdictions.

While Australia lags in its approach and such initiatives will not necessarily guarantee full transparency, they would represent at least a step in the right direction that policymakers appear reluctant to pursue.

An evolving environment
Online campaigning for the 2025 Australian federal election takes place in a rapidly changing online environment.

The online platform landscape was broadly stable for the past few federal elections.

Twitter was a central place for news tracking by journalists, politicians, activists and other dedicated news followers, and hashtags like #ausvotes and #auspol were reliable gathering points.

Public outreach to voters and occasional discussion was most common on Facebook. The increased popularity of platforms like Instagram and TikTok required parties to come up with more visually engaging campaign content.

In addition to their ‘organic’ posting, political parties and lobby groups spent millions on advertising across these platforms, sometimes mixing authorised campaign messaging with covert attack ads and disinformation.

Online advertising might at first seem easier to track than physical flyers but in practice, layers of obfuscation enable misleading ads to go largely unnoticed.

Problematic content spread by front groups that are loosely associated with official campaigns can exploit Australia’s lack of ‘truth in political advertising’ laws as well as the lax enforcement of advertising standards by digital platforms.

What is changing
The environment is substantially different in 2025 as old platforms decline — in both use and quality — and new social media spaces emerge.

Market leader Facebook has continued its slow decline as its userbase ages and younger Australians opt for what they see as more interesting platforms like TikTok. Twitter, now known as X, has turned into a cesspool of unchecked abuse, hate speech, disinformation and even fascist agitation under Elon Musk’s leadership.

A substantial proportion of X users has moved to new platforms such as Mastodon and Bluesky or reduced their overall online activity.

Other new operators are also seeking to attract some of these X refugees.

This epochal change – which the former head of trust and safety for pre-Musk Twitter, Yoel Roth, described as a ‘Cambrian Explosion’ — has substantial consequences for how politicians and parties must approach their online campaigning.

It also has consequences for those who have to scrutinise that campaigning, such as the Australian Electoral Commission.

For such observers, it has become considerably more difficult to identify and highlight unethical campaigning, disinformation, and formal violations of campaign rules. Even when they do, it is unlikely that platforms like X and Facebook will act to address these issues.

Evading scrutiny
Several of the major platforms now actively undermine critical scrutiny of themselves and of the actions of the political actors using their platforms. Before the 2024 US election, Meta shut down its previous data access tool CrowdTangle, which had enabled limited tracking of public pages and groups on Facebook and a selection of public profiles on Instagram.

Its replacement, the Meta Content Library, is accessible only to academic researchers who face a complicated and exclusionary sign-on process and is still largely untested and unknown.

X shut down its Academic API, a free data service that enabled the large-scale and in-depth analysis of user activities on the platform. Its new API offering is priced out of reach of any researcher or watchdog.

TikTok claims to provide a researcher API, but it has been unreliable and is not available in Australia. YouTube also offers a researcher API, but its accreditation process is cumbersome.

Only the new kid on the block, Bluesky, offers the kind of full and free access to public posting activity on its platform that Twitter once did.
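As a rough illustration of what that open access looks like in practice, the sketch below pulls recent public posts for a search term with a single unauthenticated request. It assumes the public Bluesky AppView endpoint app.bsky.feed.searchPosts remains open without login and that the response keeps its current shape; neither is guaranteed to stay that way.

```python
# Minimal sketch of free, unauthenticated access to public Bluesky posts.
# Assumes the public AppView endpoint app.bsky.feed.searchPosts stays open.
import requests

def search_bluesky_posts(query: str, limit: int = 25) -> list[dict]:
    """Fetch recent public posts matching a search query."""
    resp = requests.get(
        "https://public.api.bsky.app/xrpc/app.bsky.feed.searchPosts",
        params={"q": query, "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("posts", [])

# Print a short sample of public posts mentioning an election hashtag.
for post in search_bluesky_posts("#auspol", limit=5):
    handle = post["author"]["handle"]
    text = post["record"].get("text", "")
    print(f"@{handle}: {text[:80]}")
```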

This active and deliberate evasion of critical scrutiny matters: it opens the door for nefarious political operators to act without fear of retribution.

The lack of direct visibility also makes it much harder to generate robust and comprehensive evidence of those activities and easier for platforms to dismiss legitimate concerns.

Lacking effective access to platform data, researchers and other scrutineers have been forced to resort to unauthorised methods that include user data donations and web scraping.

In those cases, platforms now often act more forcefully against this scrutiny itself, rather than against the actual problems scrutiny has revealed.

Mandating research access
There are promising initiatives to enforce greater platform transparency, but Australia still lags.

The European Union’s Digital Services Act (DSA) requires any social media platform with more than 45 million EU-based users a month to provide data access for legitimate research purposes.

This is a crucial initiative, but platforms have interpreted their obligations differently, from the Meta Content Library’s compliance with the letter, if not the spirit, of the law to X’s outright refusal to comply, despite EU threats.

Meta’s Mark Zuckerberg and X’s Elon Musk have already asked the Trump administration for protection from EU regulation, which they falsely describe as ‘censorship’.

Australia does not have the regulatory clout of the European Union, but has the opportunity to ride the DSA’s coat-tails by implementing similar regulation here.

Regulatory alignment with other nations makes it easier for digital platforms to simply extend their DSA compliance responses, such as they are, to Australia. Those responses will still be grudging and insufficient in many cases, but are better than nothing.

Australian policymakers should support the aims of the DSA. They have recently shown a surprising appetite for digital media regulation – albeit largely misdirected towards the failed News Media Bargaining Code or the disastrous idea of banning young people from social media.

Whether that appetite also extends to making social media platforms more transparent remains to be seen.

Much like campaign finance reform or truth in political advertising regulation, greater transparency around social media campaigning would, after all, also curtail the parties' own election campaigning opportunities.

Professor Axel Bruns is an Australian Laureate Fellow, Professor in the Digital Media Research Centre at Queensland University of Technology, and a Chief Investigator in the ARC Centre of Excellence for Automated Decision-Making and Society.

Dr Samantha Vilkins is a research associate at QUT’s Digital Media Research Centre. She researches how evidence and expertise are distributed and discussed online, especially their role in the dynamics of political polarisation.

Professor Bruns is a member of Meta’s Instagram Expert Group. He and Dr Vilkins receive funding from the Australian Research Council through Laureate Fellowship FL210100051 Dynamics of Partisanship and Polarisation in Online Public Debate.

Originally published under Creative Commons by 360info™.

SEE ALSO

Research reveals potential bias in Large Language Models’ text relevance assessments

Conceptual and abstract digital generated image of multiple AI chat icons hovering over a digital surface
Getty Images/J Studios

Research reveals potential bias in Large Language Models’ text relevance assessments

Author ADM+S Centre
Date 14 March 2025

A recent study has uncovered significant concerns surrounding the use of Large Language Models (LLMs) to assess the relevance of information, particularly in passage labelling tasks.

This research investigates how LLMs label passages of text as “relevant” or “non-relevant,” raising new questions about the accuracy and reliability of these models in real-world applications, especially when they are used to train ranking systems or replace humans for relevance assessment.

The study, which received the “Best Paper Honorable Mention” at the SIGIR-AP Conference on Information Retrieval in Tokyo in December 2024, compares the relevance labels produced by various open-source and proprietary LLMs with human judgments.

It finds that, while some LLMs agree with human assessors at levels similar to the human-to-human agreement measured in past research, they are more likely to label passages as relevant. This suggests that while LLMs’ “non-relevant” labels are generally reliable, their “relevant” labels may not be as dependable.

Marwah Alaofi, a PhD student at the ARC Centre of Excellence for Automated Decision-Making and Society, supervised by Prof Mark Sanderson, Prof Falk Scholer, and Paul Thomas, conducted the study as part of her research into measuring the reliability of LLMs for creating relevance labels.

“Our study highlights a critical blind spot in how Large Language Models (LLMs) assess document relevance to user queries,” said Marwah.

This discrepancy, the research finds, is often due to LLMs being fooled by the presence of the user query terms within the labelled passages, even if the passage is unrelated to the query or even random.

“We found that LLMs are likely to overestimate relevance, influenced by the mere presence of query words in documents, and can be easily misled into labelling irrelevant or even random passages as relevant.”

The research suggests that in production environments, LLMs might be vulnerable to keyword stuffing and other SEO strategies, which are often used to promote the relevance of web pages.

“This raises concerns about their use in replacing human assessors for evaluating and training search engines. These limitations could be exploited through keyword stuffing and other Search Engine Optimization (SEO) strategies to manipulate rankings.”

This study underscores the critical need to go beyond the traditional evaluation metrics to better assess the reliability of LLMs in relevance assessment.
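As an illustration of the kind of vulnerability described above (not the authors' experimental setup), the sketch below asks an LLM to label a passage as relevant or non-relevant to a query, then repeats the request after stuffing the query's terms into an unrelated passage. The prompt wording and the gpt-4o-mini model name are assumptions made for the example; any chat-style LLM could be substituted.

```python
# Minimal sketch of a keyword-stuffing probe for LLM relevance labelling.
# Not the study's experimental code; prompt and model name are assumptions.
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

def label_relevance(query: str, passage: str, model: str = "gpt-4o-mini") -> str:
    """Ask the LLM to label a passage as 'relevant' or 'non-relevant' to a query."""
    prompt = (
        f"Query: {query}\n"
        f"Passage: {passage}\n"
        "Answer with exactly one word, 'relevant' or 'non-relevant', "
        "indicating whether the passage answers the query."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip().lower()

query = "symptoms of vitamin D deficiency"
unrelated = "The committee approved the new parking arrangements for the stadium."
# Stuff the query's terms into the unrelated passage without adding real information.
stuffed = unrelated + " vitamin D deficiency symptoms vitamin D deficiency symptoms"

print("plain passage: ", label_relevance(query, unrelated))
print("keyword-stuffed:", label_relevance(query, stuffed))
# If the second label flips to 'relevant', the labeller is being swayed by the
# mere presence of query terms rather than by the passage's actual content.
```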

SEE ALSO

5 signs of toxic division — and how to beat them

Online voting concept. Man and woman near laptop with referendum and election campaign. Freedom of choice and speech. Electronic vote. Cartoon flat vector illustration isolated on white background
Credit: Rudzhan Nagiev/Getty Images

5 signs of toxic division — and how to beat them

Authors Katharina Esau, Axel Bruns and Tariq Choucair
Date 13 March 2025

Australian voters are being targeted by divisive ‘them vs us’ strategies that overshadow policy debate. Here are the signs and ways to move past the soundbites.

Politicians and media organisations are setting the stage for an Australian election where division is a deliberate strategy to mobilise supporters, discredit opponents and split undecided voters.

Polarisation is already shaping the national conversation and it’s a tactic born out of much more than just differing views.

Democratic debate thrives on differing opinions but excessive polarisation pushes discussion away from constructive engagement and into entrenched conflict that has negative consequences for democracy and broader society.

Voters need to know how to spot the signs of those ‘conflict strategies’ and to question them; to look past the soundbites for information they can trust that doesn’t break every debate down to ‘us vs them’.

Negative campaigning — attacking instead of selling your own policies — has been a feature of democratic elections for centuries. Such tactics are designed to stir emotional reactions about an opponent.

It has become a standard election strategy.

Australians might particularly associate it with former Liberal Prime Minister Tony Abbott. Abbott was known for his ruthless negativity as opposition leader from 2009 and his attack-ad-driven campaign in 2013.

How polarisation turns destructive
Tactics have now shifted to a form of strategic polarisation designed to do much more than merely discredit opponents — the aim is to stoke all-encompassing divisions across society.

These tactics were seen in spectacular fashion in the 2016 US general election.

They were mirrored globally, with examples including Jair Bolsonaro in Brazil, Rodrigo Duterte in the Philippines and the 2017 presidential election in France.

Politicians framed their opponents as an existential threat, encouraging voters not just to support their own side but to despise their opponents.

In Australia, US influence looms large and Trump’s return to the presidency has emboldened politicians here to double down on similar strategies.

In a healthy democracy, competing parties debate ideas, disagree strongly and propose diverging solutions.

However, when polarisation becomes destructive it has potentially severe consequences for democracy and societal cohesion.

There are five key symptoms of destructive polarisation, all seen in recent Australian and global political contests.

When dialogue becomes impossible
A key symptom is that communication between opposing sides ceases to function. Rather than engaging in constructive debate, political actors, media producers and the public either avoid meaningful interaction or reduce their exchanges to misrepresenting, insulting and attacking each other.

This can be seen when party leaders trade insults and shout slogans during campaign debates, rather than debating policy, and in quips or ‘fails’ later pushed on social media.

Australia’s winner-takes-all political system — where coalitions between parties are rare — further exacerbates this. For political leaders, the ability to find compromise and build consensus is seen as a weakness rather than a strength.

When facts don’t matter
Political actors and supporters might also dismiss information outright, based on the source rather than the content.

This might target think tanks or media outlets seen as aligned with one side of politics. Even independent institutions or entire professions such as researchers, public servants or journalists might be dismissed as inherently biased.

Social media users then employ the same strategy, rejecting information based on its source rather than engaging with the information.

When policy becomes a slogan war
Destructive polarisation thrives on reducing nuanced debates to misleading black-and-white choices.

Translating a complex problem or policy into simple terms is one thing, but something else is at play here.

Instead of explaining the issue and their proposed policies, candidates often oversimplify by attacking their political opponents or sometimes by blaming minorities. The message becomes: “If you support us, the problem will be solved. If you support them, we are all doomed.”

The goal of this kind of messaging is to reduce a complex issue to partisan blame or the scapegoating of entire social groups — such as migrants — while ignoring the factors contributing to a policy problem.

When the loudest dominate
In highly polarised environments, moderate perspectives are drowned out in favour of extreme voices that generate engagement and conflict.

This is what’s behind the attacks on supposedly ‘woke’ policies in the US and their importation into Australian politics in recent years.

Ordinary Australians care a great deal more about the cost of living than they care about culture wars — but such battles against imaginary enemies make for great political theatre and don’t require the long-term effort needed to manage economic policy.

When emotion is weaponised
Strategically polarising campaigns rely on stoking fear, resentment and moral outrage to mobilise supporters and silence the opposition.

Expressing emotions in debate is natural and human, but research shows that when emotions are directed at opponents rather than issues, maintaining constructive debate becomes particularly difficult.

This use of emotion is now a component of political campaigning toolkits — almost all Australian parties and their associated lobby groups have run scare campaigns at some point.

For example, the conservative lobby group Advance Australia stoked fear and doubt during the 2023 Voice referendum while Queensland Labor used 2016 election day text messages to play on fears of Medicare privatisation.

Emotional appeals in campaigning are made destructive not by the emotion itself, but when it is directed at the political ‘other’ or their supporters. This fuels a vicious cycle of accusations over who initiated the attacks, leaving voters with little choice but to take sides.

How to resist
As the federal election approaches, Australians need to be aware of how these tactics are used to manipulate them. Political leaders and media outlets will continue to frame debates to maximise division and present choices as stark moral conflicts rather than complex policy decisions.

To resist that destructive polarisation, they need to:

  • Question narratives that present opponents as enemies rather than competitors. Actively engage with people who hold different views. Consider the substance of political suggestions, not just who is making them.
  • Look for balanced sources of information that provide context, not just conflict. Find sources you can trust, and not just because they might share your views.
  • Leave space for ambivalence and compromise instead of committing fully to any one side. Consider if there are more than just two stark choices.
  • Avoid judging people and their contributions based on soundbites and headlines. Engage in longer conversations about complex issues.
  • Express emotions but don’t use them to attack, exclude or manipulate others. Beware of efforts designed to play on your own emotions.

Polarisation is not inevitable, but without critical engagement it will continue to erode democratic discourse.

Recognising the symptoms of strategic division is the first step towards restoring a political culture where debate is about ideas — not just winning or losing.

Dr Katharina Esau is a Digital Media Research Centre research fellow at Queensland University of Technology. She is Chief Investigator of the research project ‘NewsPol: Measuring and Comparing News Media Polarisation’.

Professor Axel Bruns is an Australian Laureate Fellow, Professor in the Digital Media Research Centre at QUT and a Chief Investigator in the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S).

Dr Tariq Choucair is a QUT Digital Media Research Centre research fellow and an Affiliate at the ADM+S. He investigates online political talk and deep disagreements, especially about political minority rights.

The authors’ research covered in this article was undertaken with funding from the Australian Research Council through Laureate Fellowship FL210100051 Dynamics of Partisanship and Polarisation in Online Public Debate. Professor Bruns is also a member of Meta’s Instagram Expert Group.

The authors would like to acknowledge the contributions of Dr Samantha Vilkins, Dr Sebastian F. K. Svegaard, Kate S. O’Connor-Farfan and Carly Lubicz-Zaorski, who are leading further research in this space.

Originally published under Creative Commons by 360info™.

SEE ALSO

Half-truths and lies: an online day in Australia

Person browsing on mobile phone
Pexels/Los Muertos Crew

Half-truths and lies: an online day in Australia

Authors T.J Thomson and Aimee Hourigan
Date 13 March 2025

Australians are swamped by misinformation every day but they’re smart enough to know they need help to better navigate an untrustworthy online world.

False online claims about business and the economy top the list of misinformation concerns for Australians and research indicates they are screaming out for help on how to deal with it.

In some ways, it’s not surprising misinformation on the economy rates so highly during a cost-of-living crisis and with a federal election looming — finance-related scams are also a concern — but they’re just a few areas highlighted as Australians drown in a sea of questionable claims every day.

Online misinformation and disinformation have been labelled bigger short-term global threats than climate change or war, so improving media literacy is a critical step in fighting it.

That need is stark considering researchers — whose report, Online misinformation in Australia, was published late last year — found more than half of dodgy information is reported as coming from news sources, whether that be traditional or alternative forms of media.

Australians encounter hundreds of claims each day through channels that might include listening to a podcast, scrolling social media or reading the news and when surfing the Internet to shop, learn or seek entertainment.

The challenge is assessing how many of these claims are true and how confident Australians can be in their abilities to separate fact from fiction.

Half of us encounter misinformation weekly
The researchers found more than half of Australians encounter misinformation in a typical week and 97 percent of Australians have poor or limited ability to verify claims they encounter online.

The research has shone a light on the sources of everyday misinformation, the topics covered and how and where those claims are communicated. It also offers suggestions on how to respond.

Research participants were asked to document online news and information they saw each day for a week and rate its trustworthiness.

More than 20 percent of the 1,600 examples provided were perceived by the participants to have false or misleading claims.

Those misleading claims weren’t limited to the usual suspects such as health or political information, but ranged across other topics that included celebrity news, entertainment and sports.

False or misleading claims about business and economics were the most prevalent. A cost-of-living crisis and heightened focus on money can attract both those who don't have it and those who want to exploit the vulnerable for financial, political or other gain.

It’s only logical then that scams feature high on the list of misinformation threats worrying Australians.

The research also examined the sources of false or misleading claims.

News outlets are supposed to be sources of accurate, credible information but, surprisingly, were responsible for 58 percent of the dodgy claims.

Participants were particularly critical of ‘spammy’ and clickbait headlines.

Social media accounts comprised 18 percent of the examples.

Researchers studied exactly what form misinformation took, finding written claims were the most frequent, accounting for 68 percent of all examples.

Other examples such as social media posts made up 18 percent and video 11 percent, while images (3 percent) and audio (1 percent) made up much smaller proportions.

This doesn’t necessarily mean that there are fewer spoken or visual claims that are false or misleading online. It might mean people find it harder to fact-check them, or lack the literacy or the opportunity to check whether what they’re seeing or hearing is true.

It’s much easier to copy a written claim and see what other sources say about it compared with trying to dictate or describe a claim found in spoken or visual form to check its accuracy.

What audiences want
With Australia recently announcing the development of a national media literacy strategy, and social media platforms rolling back or abandoning fact-checking efforts, this research reveals that people want access to media literacy support as a response to misinformation.

Media literacy refers to the ability to evaluate and ask critical questions of the different media people access, use, create and share.

Adopting a media literacy approach to misinformation can be incredibly powerful, building critical knowledge and the ability to identify, evaluate and reflect on false or misleading claims.

Research participants’ interest in media literacy was high and they particularly wanted to build skills to help them evaluate sources of information and claims. They wanted to know how to gauge the reliability and trustworthiness of a source, as well as being able to identify the intent behind different claims.

One Sydney respondent said: “It’s recognizing whether a piece of information or content is just simply trying to inform you versus a piece of information that is trying to persuade you into doing something.”

Respondents also reiterated the importance of involving key public institutions, such as schools and government, to support media education. They saw the news media as having responsibilities to deliver accurate and trustworthy information.

At its core, media literacy seeks to provide individuals with the knowledge and capabilities to thrive in society — and that can only help them better navigate an untrustworthy online world.

Dr T J Thomson is an ARC DECRA Fellow, a member of the ARC Centre of Excellence for Automated Decision-Making and Society, and a senior lecturer at RMIT University, where he co-leads the News, Technology, and Society Network. A majority of his research centres on the visual aspects of news and journalism and on the concerns and processes relevant to those who make, edit and present visual news.

Dr Aimee Hourigan is a postdoctoral research fellow in the Institute for Culture and Society at Western Sydney University. She is currently working on an ARC Linkage Project focussing on Australian adults’ experiences with identifying, navigating and assessing misinformation online.

The authors’ research covered in this article was supported by the Australian Government through the Australian Research Council’s Linkage Projects funding scheme (project LP220100208).

Originally published under Creative Commons by 360info™.

SEE ALSO

ADM+S Submission cited in new Parliament report on the Use and Governance of AI Systems by Public Sector Entities

ADM+S Submission cited in new Parliament report on the Use and Governance of AI Systems by Public Sector Entities

Author Natalie Campbell
Date 7 March 2025

The Joint Committee of Public Accounts and Audit has published its report on the Inquiry into the Use and Governance of AI by Public Sector Entities, citing the ADM+S submission throughout.

Responding to the steep increase in AI adoption by public sector entities identified during its review of the 2022-23 Commonwealth Financial Statements, the Committee established a specific Inquiry into the Use and Governance of AI by Commonwealth Entities in September 2024.

Chair of the Committee, Hon Linda Burney MP explained, “The issue that was fundamental to this inquiry was whether the existing governance and oversight of this technology matches its rapid and continuing advancement.

“Policy frameworks must be equipped to adequately assess the great promise that AI brings but also understand the inherent and significant risks that accompany its use.”

In February 2025 the Committee released a report titled ‘Proceed with Caution’, which provides four key recommendations.

  1. The Australian Public Service Commission to introduce questions on the use and understanding of artificial intelligence and other emerging technologies into its annual APS Employee Census.
  2. The Australian Government convenes a whole-of-Government working group within 12 months of this report to develop key frameworks for managing sovereign risks and ensuring that biases resulting from the adoption of these technologies can be effectively mitigated.
  3. The Australian Government establishes a statutory Joint Committee on Artificial Intelligence and Emerging Technologies to provide effective and continuous Parliamentary oversight of the adoption of these systems across the Australian government and more widely.
  4. Any guidance issued by the Digital Transformation Agency, or any other Australian Government agency, should clearly define all AI systems and applications.

In addition to addressing the Inquiry's terms of reference, the ADM+S submission led by Prof Kimberlee Weatherall included three other areas of research that raise important considerations around the use of AI in the public sector: disability and accessibility, environmental impact, and trauma-informed approaches.

The 24 October submission reads, “The public sector should, in its use of AI, demonstrate the positive impacts that technology can have in achieving important public goals, such as promoting access, inclusion, and better public services.”

Key contributions and citations from the ADM+S submission:

  • Areas of stakeholder concern: Noting that while there is not a clear distinction between automation and AI, ‘whether it involves AI or not, public sector automation can significantly affect citizens’ rights and good public sector administration — and in similar ways’.
  • Australia’s AI ethics and principles: The report considers ADM+S’ concerns that the existing principles were developed prior to the widespread availability of generative AI and had not been reviewed as at September 2024.
  • Policy for the responsible use of AI in government: ADM+S explains that ‘the policy is extraordinarily limited in what it requires’, as it ‘introduces a new three-part language framework that is not aligned with any of Australia’s AI Ethics Principles, the National Framework or the proposed Mandatory Guardrails’.
  • Current regulatory framework: ADM+S is quoted, referring to concerns that the current arrangements do not allow for effective investigation, enforcement and direction.
  • Establishment of new policies or legislation: ADM+S is quoted on the overwhelming nature of having many slightly different guidelines, recommendations, frameworks and statements. ADM+S’ suggestion for a common baseline, one stronger than the current Commonwealth policy, is highlighted here.

The ADM+S submission was led by Kimberlee Weatherall, with contributions from Jose-Miguel Bello y Villarino, Gerard Goggin, Jake Goldenfein, Paul Henman, Rita Matulionyte, Christine Parker, Lyndal Sleep and Georgia van Toorn.

View the report.

View the ADM+S Submission.

SEE ALSO

ADM+S PhD Student undertakes fieldwork on fintech services in Indonesia

Oliver Knight (RMIT) with focus group participants who discussed financial practices including digital and informal lending.
Oliver Knight (RMIT) with focus group participants who discussed financial practices including digital and informal lending.

ADM+S PhD Student undertakes fieldwork on fintech services in Indonesia

Author Natalie Campbell
Date 6 March 2025

ADM+S PhD Student Oliver Knight from RMIT University recently returned from a fieldwork trip in Indonesia, conducting focus groups and surveys to inform his thesis on ‘Lesser Sunda, More Defaults? P2P Lending in East Indonesia’.

The objective of the trip was to investigate claims of digital financial inclusion by studying access to fintech and online credit strategies in Indonesia's West Nusa Tenggara (NTB) and East Nusa Tenggara (NTT) provinces, through qualitative focus groups and surveys.

The trip began with a presentation at the Kantor Desa (village office) in Lingsar, Indonesia, where Oliver gave an overview of his research topic and plans to the village leaders, and conducted focus groups with participants.

Oliver and Abdul Basit (Universitas Islam Al-Azhar) conducting focus groups with heads of villages in Kantor Desa, Gegerung, Kec. Lingsar, Indonesia.

 

During the subsequent three-week trip, Oliver hosted focus groups in West, Central, and East Lombok regions, as well as conducting surveys with participants at two Universities in Kota Mataram.

“This field trip allowed me to deepen my connection with the areas of Indonesia that are relevant to my research by creating relationships with local FinTech users, industry, and academics,” said Oliver.

“It also provided the opportunity to develop critical contextual understanding of the important socio-cultural and community dynamics at play.”

This primary data collection across two regions will inform Oliver’s thesis and was strategically timed so that the analysis could be presented at his second milestone review in March.

While in Indonesia, Oliver worked closely with Reza Arviciena Sakti, Abdul Basit and Dr Vegalyra Novantini Samodra from the Universitas Islam Al-Azhar (Unizar), who assisted with recruitment, data analysis, and translation during his stay.

“The staff and broader community at Universitas Islam Al-Azhar have always been so welcoming, and share a deep passion and excitement for my research, which I find so motivating,” he said.

“On a personal level, the opportunity to develop my public speaking skills, Indonesian language, and the way I frame my research, will help me tremendously as I continue my career in research.”

Oliver received a Speaker Certificate for sharing his experience studying in Australia with students at Universitas Islam Negeri – Mataram.

 

When asked about a highlight of his trip, Oliver declared the many “aha!” moments he experienced during the data collection and analysis process.

“Each of these moments felt like finding a jigsaw piece that fits into my research puzzle/problem and show how valuable the fieldwork has been.”

This fieldtrip was supported by ADM+S.

SEE ALSO

Research Fellow takes ADM+S research abroad for feedback and collaboration

Dr Ashwin Nagappa and colleagues from Hans Bredow Institut
Dr Ashwin Nagappa and colleagues at Hans Bredow Institut

Research Fellow takes ADM+S research abroad for feedback and collaboration

Author Natalie Campbell
Date 6 March 2025

ADM+S Research Fellow Dr Ashwin Nagappa has returned from Europe, after attending the ECREA Communication History 2025 workshop in Geneva, Switzerland, and visiting ADM+S Partner Investigators at Hans Bredow Institut and the University of Amsterdam.

The 2025 ECREA Workshop was held at CERN, the European Organisation for Nuclear Research, in Geneva, one of the world's largest and most prestigious scientific laboratories and the birthplace of the web.

The theme for this year’s workshop was ‘Communication Networks Before and After the Web: Historical and Long-term Perspective’, bringing together international scholars from media history, media archaeology and digital media, to explore the origins of the web and its evolution into one of the most influential technologies of our time.

“As I engaged with scholars working on contemporary AI tools and research, I found it fascinating that the web, now a central information system in our daily lives, was never originally conceived as such—it was designed as a tool for scientists to accelerate experiments,” said Ashwin.

“It’s important to recognize that the web forms the foundation of our everyday search and social media experiences, providing the vast information that AI relies on.”

From Geneva, Ashwin then travelled to the Hans Bredow Institut in Hamburg, where he was welcomed by Prof Judith Möller, Scientific Director at the Leibniz Institute for Media Research and Professor of Empirical Communication Research, Media Use and Social Media Effects at the University of Hamburg.

He also visited ADM+S Partner Investigator Prof Maarten de Rijke and his team at the Information Retrieval Lab at the University of Amsterdam.

With both groups, Ashwin was given the opportunity to present the explainer, ‘What is search experience?’ – a brief introduction to the ADM+S Australian Search Experience 2.0 Project, including early developments and future plans for the research – which is set to be developed into a four-part blog series in March 2025.

“This talk draws on a literature review of both information retrieval – the technological aspect – and search experience – the social aspect – and has been refined over the past few months with feedback from colleagues across ADM+S.

“This explainer has proven to be a valuable tool for expanding different aspects of the Australian Search Experience, identifying connections across its subprojects, and exploring crossovers with other signature projects within the centre.”

These presentations were followed by Q&A sessions, providing valuable insights for refining the workflow of the project.

Being surrounded by experts in information retrieval at the University of Amsterdam, Ashwin was able to learn about their research on various aspects of AI and Search, noting synergies between their respective projects, and opportunities for potential collaboration.

“For instance, some PhD students specialize in evaluating AI-generated text for human-like quality, which could support our efforts to automate search processes.”

This trip was supported by ADM+S and QUT.

SEE ALSO

#AccelerateAction: Spotlighting ADM+S research on gender bias in AI and ADM systems

#AccelerateAction: Spotlighting ADM+S research on gender bias in AI and ADM systems

Author ADM+S Centre
Date 5 March 2025

International Women’s Day celebrates the social, economic, cultural, and political achievements of women, global progress towards gender equality, and recognises that there is substantial work still to be done.

In the fields of technology, automated decision-making, and generative AI, women are still under-represented, yet disproportionately affected by the harms of emerging digital technologies.

This International Women’s Day we’re highlighting the work of ADM+S members across our research program who are investigating gender bias in AI and ADM systems.

By identifying inequalities in the ways users experience technology, these projects aim to #AccelerateAction in creating a more just and inclusive digital environment.

Advanced technology is taking us backwards on gender equity.

She might go by Siri, Alexa, or inhabit Google Home. She keeps us company, orders groceries, vacuums the floor, and turns out the light. The principal prototype for these virtual helpers – designed in male-dominated industries – is the 1950s housewife.

In The Smart Wife, Yolande Strengers and Jenny Kennedy examine the emergence of digital devices that carry out “wifework”–domestic responsibilities that have traditionally fallen to (human) wives. They offer a Smart Wife “manifesta,” proposing a rebooted Smart Wife that would promote a revaluing of femininity in society in all her glorious diversity.

In 2024, Yolande’s research on gendered voicebots was adapted into an educational school program in partnership with the Monash Tech School and Monash University’s Faculty of IT, called Superbots.

Superbots is a two-day interactive Industry Immersion program that explores the history, ethics, and societal influences on Voicebots and voice-assisted software development.

ADM+S filmmaker Jeni Lee produced a short film about the program, which observes and engages with students from Brentwood Secondary College as they ideate, test and construct their own voicebot personality.

Superbots will be available on SBS on Demand from Saturday 9 March.

This paper considers how algorithmic recommender systems and other core affordances and infrastructures of major social media platforms contribute to the harms of ‘hate speech’ against or vilification of women online.

The paper argues that this kind of speech occurring on major social media platforms exists at the intersections of patriarchy and platform power and is thus platformed.

Platforms also seek to maintain control or influence over the conditions for their own regulation and governance through use of their discursive power. Related to this is a privileging of self-regulatory action in current laws and law reform proposals for platform governance, which we argue means that platformed speech that vilifies women is also auspiced by platforms.

This auspicing, as an aspect of platforms’ discursive power, represents an additional ‘layer’ of contempt for women, for which platforms currently are not, but should be, held accountable.

 

Existing studies have examined depictions of journalists in popular culture, but how artificial intelligence understands what a journalist is and what they look like is a different topic, yet to receive research attention.

This study analyses 84 images generated by AI from four “generic” keywords (“journalist,” “reporter,” “correspondent,” and “the press”) and three “specialized” ones (“news analyst,” “news commentator,” and “fact-checker”) over a six-month period.

The results reveal an uneven distribution of gender and digital technology between the generic and specialized roles and prompt reflection on how AI perpetuates extant biases in the social world.

 

Drawing on two ADM+S reports led by Dr Quilty (automation in transport mobilities scoping study and expert visions of future automated mobilities), this article introduces a critical concept called Pod Man that examines the gendered and racial formations embedded into technologies like self-driving cars.

Dr Quilty defines Pod Man as the technology-driven, hyper-mobile and hyper-masculine transport consumer found at the centre of sociotechnical imaginaries of automated mobilities. He represents the ideal mobility subject who is both invisible and powerful, shaping visions of the future of mobility.

Pod Man is both a provocation and an entry point for thinking about how emerging technologies, such as autonomous vehicles, are shaping unequal relations of power in visions of mobility futures.

Image: Miranda Burton

Generative AI systems learn how to create from our existing, unequal past; now, they’re embedding those same historical biases into our future.

ADM+S PhD Student Sadia Sharmin is researching how biases baked into AI models shape broader social views, amplifying and reinforcing existing power relations through their outputs.

The subtle biases produced by GenAI may seem innocuous, but they are insidious in that they shape cultural narratives, reinforce stereotypes, and influence social perceptions and opportunities for women on a potentially massive scale.

Her research seeks to tackle this subtle but pervasive problem by developing new ways to measure and identify gender bias in AI outputs – going beyond simple statistics – to understand how Generative AI systems might reinforce stereotypes about women’s place, capabilities, and value in society.

This includes creating new tools that go beyond obvious and quantifiable forms of bias, and instead assess the more subtle ways AI systems might undersell women’s achievements, limit their perceived potential, or reinforce gender-based assumptions.

 

Artificial Intelligence (AI) is increasingly being used in the delivery of social services including domestic violence services. While it offers opportunities for more efficient, effective and personalised service delivery, AI can also generate greater problems, reinforcing disadvantage, generating trauma or re-traumatising service users.

Building on work in social services on trauma-informed practice, this project identified key principles and a practical framework that framed AI design, development and deployment as a reflective, constructive exercise, resulting in algorithmically supported services that are cognisant and inclusive of the diversity of human experience, and particularly of people who have experienced trauma.

This study resulted in a practical, co-designed, piloted Trauma Informed Algorithmic Assessment Toolkit.

This Toolkit has been designed to assist organisations in their use of automation in service delivery at any stage of their automation journey: ideation; design; development; piloting; deployment or evaluation. While of particular use for social service organisations working with people who may have experienced past trauma, the tool will be beneficial for any organisation wanting to ensure safe, responsible and ethical use of automation and AI.

 

This collaboration with UNED Madrid and The Polytechnic University of Valencia aimed to create an evaluation benchmark for automatic sexism characterisation in social media.

In recent years, the rapid increase in the dissemination of offensive and discriminatory material aimed at women through social media platforms has emerged as a significant concern.

The EXIST campaign has been promoting research in online sexism detection and categorization in English and Spanish since 2021. The fourth edition of EXIST, hosted at the CLEF 2024 conference, consisted of three groups of tasks analysing Tweets and Memes: sexism identification, source intention identification, and sexism categorization.

The “learning with disagreement” paradigm is adopted to address disagreements in the labelling process and promote the development of equitable systems that are able to learn from different perspectives on the phenomenon of sexism.

 

Crowdsourced annotation is vital to both collecting labelled data to train and test automated content moderation systems and to support human-in-the-loop review of system decisions. However, annotation tasks such as judging hate speech are subjective and therefore, highly sensitive to biases stemming from annotator beliefs, characteristics and demographics.

This research involved two crowdsourcing studies on Mechanical Turk to examine annotator bias in labelling sexist and misogynistic hate speech.

Results from 109 annotators show that annotator political inclination, moral integrity, personality traits, and sexist attitudes significantly impact annotation accuracy and the tendency to tag content as hate speech.

In exploring how workers interpret a task — shaped by complex negotiations between platform structures, task instructions, subjective motivations, and external contextual factors — we see annotations not only impacted by worker factors but also simultaneously shaped by the structures under which they labour.

 

At the ADM+S Centre, we recognise that racism, colonialism, sexism, homophobia, transphobia, and ableism are principal obstacles to equity, diversity and inclusion, and remain primary causes of injustice and inequality. We believe that gender equality for all means equality for marginalised groups, and that the cause of gender equality includes the experiences of Indigenous and POC women, and of transgender and non-binary people. You can read about how we are working to foster diversity and inclusion in the ADM+S community and through our research via our Equity and Diversity Strategy and Action Plan.

Dr Anjalee de Silva, an expert on harmful speech and its regulation in online contexts and a member of the ADM+S Equity and Diversity Committee, explains “AI and ADM technologies have the potential to, and consistently have been evidenced to, replicate ‘real world’ biases against and harms to structurally vulnerable groups, including women and minorities.

“Scholarship considering these biases and harms is thus a crucial part of systemically informed and equitable approaches to the development, use, and regulation of such technologies.”

Prof Yolande Strengers adds, “Now more than ever we need to work hard to protect the progress we have made in addressing the unequal opportunities that women and other minorities in technology fields experience.

“We also need research and programs that bring less heard voices into the public domain and push for further advances in equity.”

Watch: ADM+S community celebrates IWD

SEE ALSO

‘I can’t be friends with the machine’: what audio artists working in games think of AI

Illustration with two people in a recording studio
Credit: Visual Generation/Shutterstock

‘I can’t be friends with the machine’: what audio artists working in games think of AI

Author Sam Whiting
Date 5 March 2025

The Media, Entertainment and Arts Alliance, the union for voice actors and creatives, recently circulated a video of voice actor Thomas G. Burt describing the impact of generative artificial intelligence (GenAI) on his livelihood.

Voice actors have been hit hard by GenAI, particularly those working in the video game sector. Many are contract workers without ongoing employment, and for some game companies already feeling the squeeze, supplementing voice-acting work with GenAI is just too tempting.

Audio work – whether music, sound design or voice acting – already lacks strong protections. Recent research from my colleagues and me on the use of GenAI and automation in producing music for Australian video games reveals a messy picture.

Facing the crunch

A need for greater productivity, faster turnarounds, and budget constraints in the Australian games sector is incentivising the accelerated uptake of automation.

The games sector is already susceptible to “crunch”, or unpaid overtime, to reach a deadline. This crunch demands faster workflows, increasing automation and the adoption of GenAI throughout the sector.

The Australian games industry is also experiencing a period of significant contraction, with many workers facing layoffs. This has constrained resources and increased the prevalence of crunch, which may increase reliance on automation at the expense of re-skilling the workforce.

One participant told us:

the fear that I have going forward for a lot of creative forms is I feel like this is going to be the fast fashion of art and of text.

Mixed emotions and fair compensation

Workers in the Australian games industry have mixed feelings about the impact of GenAI, ranging from hopeful to scared.

Audio workers are generally more pessimistic than non-audio games professionals. Many see GenAI as extractive and potentially exploitative. When asked how they see the future of the sector, one participant responded:

I would say negative, and the general feeling being probably fear and anxiety, specifically around job security.

Others noted it will increase productivity and efficiency:

[when] synthesisers started being made, people were like, ‘oh, it’s going to replace musicians. It’s going to take jobs away’. And maybe it did, but like, it also opened up this whole other world of possibilities for people to be creative.

A vintage keyboard.
There were once fears about what synthesisers would mean for musicians’ livelihoods.
Peter Albrektsen/Shutterstock

Regardless, most participants expressed concerns about whether a GenAI model was ethically trained and whether licensing can be properly remunerated, concerns echoed by the union.

Those we spoke with believed the authors of any material used to train AI models should be fairly compensated and/or credited.

An “opt-in” licensing model has been proposed by unions as a compromise. This states a creator’s data should only be used for training GenAI on an opt-in basis, and that the use of content to train generative AI models should be subject to consent and compensation.

Taboos, confusion and loss of community

Some audio professionals interested in working with GenAI do not feel like they can speak openly about the subject, as it is seen as taboo:

There’s like this feeling of dread and despair, just completely swirling around our entire creative field of people. And it doesn’t need to be like that. We just need to have the right discussions, and we can’t have the right discussions if everyone’s hair is on fire.

The technology is clearly divisive, despite perceived benefits.

Several participants expressed concerns the prevalence of GenAI may reduce collaboration across the sector. They feared this could result in an erosion of professional community, as well as potential loss of institutional knowledge and specific creative skills:

I really like working with people […] And handing that over to a machine, like, I can’t be friends with the machine […] I want to work with someone who’s going to come in and completely shake up the way, you know, our project works.

The Australian games sector is reliant on a highly networked but often precarious set of workers, who move between projects based on need and demand for certain skills.

The ability to replace such skills with automation may lead to siloing and a deterioration of greater professional collaboration.

But there are benefits to be had

Many workers in the games audio sector see automation as helpful in terms of administration, ideation, workshopping, programming and as an educational tool:

In terms of automation, I see it as, like, utilities. For example, being a developer, I write scripts. So, if I’m doing something and it’s gonna take me a long time, I’ll automate it by writing a script.

These systems also have helpful applications for neurodivergent professionals and workers who may struggle with time management or other attention-related issues.

Over half of participants said AI and automation allows more time for creativity, as workers can automate the more tedious elements of their workflow:

I suffer like anyone else from writer’s block […] If you can give me a piece of software that is trained off me, that I could say, ‘I need something that’s in my house style, make me something’, and a piece of software could spit back at me a piece of music that sounds like me that I could go, ‘oh, that’s exactly it’, I would do it. That would save me an incalculable amount of time.

Many professionals who would prefer not to use AI said they would consider using it in the face of time or budget constraints. Others stated GenAI allows teams and individuals to deliver more work than they would without it:

Especially with deadlines always being as short as they are, I think a lot of automation can help to focus on the more creative and decision-based aspects.

Many workers within the digital audio space are already working hard to create ethical alternatives to AI theft.

Although GenAI may be here to stay, the efficiencies it provides should not come at the cost of creative professions.

Sam Whiting, Vice-Chancellor’s Senior Research Fellow, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Microsoft cuts data centre plans and hikes prices in push to make users carry AI costs

Image: bluestork / Shutterstock.com
Image: bluestork / Shutterstock.com

Microsoft cuts data centre plans and hikes prices in push to make users carry AI costs

Author Kevin Witzenberger and Michael Richardson
Date 3 March 2025

After a year of shoehorning generative AI into its flagship products, Microsoft is trying to recoup the costs by raising prices, putting ads in products, and cancelling data centre leases. Google is making similar moves, adding unavoidable AI features to its Workspace service while increasing prices.

Is the tide finally turning on investments into generative AI? The situation is not quite so simple. Tech companies are fully committed to the new technology – but are struggling to find ways to make people pay for it.

Shifting costs

Last week, Microsoft unceremoniously pulled back on some planned data centre leases. The move came after the company increased subscription prices for its flagship 365 software by up to 45%, and quietly released an ad-supported version of some products.

The tech giant’s CEO, Satya Nadella, also recently suggested AI has so far not produced much value.

Microsoft’s actions may seem odd in the current wave of AI hype, coming amid splashy announcements such as OpenAI’s US$500 billion Stargate data centre project.

But if we look closely, nothing in Microsoft’s decisions indicates a retreat from AI itself. Rather, we are seeing a change in strategy to make AI profitable by shifting the cost in non-obvious ways onto consumers.

The cost of generative AI

Generative AI is expensive. OpenAI, the market leader with a claimed 400 million active monthly users, is burning money.

Last year, OpenAI brought in US$3.7 billion in revenue – but spent almost US$9 billion, for a net loss of around US$5 billion.

OpenAI CEO Sam Altman says the company is losing money on US$200 per month ChatGPT Pro subscriptions. Aurelien Morissard / EPA
OpenAI CEO Sam Altman says the company is losing money on US$200 per month ChatGPT Pro subscriptions. Aurelien Morissard / EPA

 

Microsoft is OpenAI’s biggest investor and currently provides the company with cloud computing services, so OpenAI’s spending also costs Microsoft.

What makes generative AI so expensive? Human labour aside, two costs are associated with AI models: training (building the model) and inference (using the model).

While training is an (often large) up-front expense, the costs of inference grow with the user base. And the bigger the model, the more it costs to run.
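To make that scaling concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is a hypothetical placeholder rather than a real OpenAI or Microsoft number; the point is only that a one-off training bill can be dwarfed by inference costs that grow with the number of users.

```python
# Back-of-the-envelope sketch: one-off training cost vs ongoing inference cost.
# All numbers are hypothetical placeholders, not real provider figures.

TRAINING_COST = 100_000_000        # assumed one-off cost to train the model (USD)
COST_PER_QUERY = 0.01              # assumed compute cost per user query (USD)
QUERIES_PER_USER_PER_MONTH = 100   # assumed average usage per active user

def monthly_inference_cost(active_users: int) -> float:
    """Inference spend grows linearly with the size of the user base."""
    return active_users * QUERIES_PER_USER_PER_MONTH * COST_PER_QUERY

for users in (1_000_000, 10_000_000, 400_000_000):
    monthly = monthly_inference_cost(users)
    print(f"{users:>11,} users: ${monthly:,.0f} per month in inference, "
          f"{monthly * 12 / TRAINING_COST:.1f}x the training cost per year")
```

On these assumed numbers, a service with hundreds of millions of active users would spend many times its training budget on inference every year.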

Smaller, cheaper alternatives

A single query on OpenAI’s most advanced models can cost up to US$1,000 in compute power alone. In January, OpenAI CEO Sam Altman said even the company’s US$200 per month subscription is not profitable. This signals the company is not only losing money through use of its free models, but through its subscription models as well.

Both training and inference typically take place in data centres. Costs are high because the chips needed to run them are expensive, but so too are electricity, cooling, and the depreciation of hardware.

The growing cost of running data centres to power generative AI products has sent tech companies scrambling for ways to recoup their costs. Aerovista Luchtfotografie / Shutterstock
The growing cost of running data centres to power generative AI products has sent tech companies scrambling for ways to recoup their costs. Aerovista Luchtfotografie / Shutterstock

 

To date, much AI progress has been achieved by using more of everything. OpenAI describes its latest upgrade as a “giant, expensive model”. However, there are now plenty of signs this scale-at-all-costs approach might not even be necessary.

Chinese company DeepSeek made waves earlier this year when it revealed it had built models comparable to OpenAI’s flagship products for a tiny fraction of the training cost. Likewise, researchers from Seattle’s Allen Institute for AI (Ai2) and Stanford University claim to have trained a model for as little as US$50.

In short, AI systems developed and delivered by tech giants might not be profitable. The costs of building and running data centres are a big reason why.

What is Microsoft doing?

Having sunk billions into generative AI, Microsoft is trying to find the business model that will make the technology profitable.

Over the past year, the tech giant has integrated the Copilot generative AI chatbot into its products geared towards consumers and businesses.

It is no longer possible to purchase any Microsoft 365 subscription without Copilot. As a result subscribers are seeing significant price hikes.

As we have seen, running generative AI models in data centres is expensive. So Microsoft is likely seeking ways to do more of the work on users’ own devices – where the user pays for the hardware and its running costs.

Microsoft says the Copilot key will ‘empower people to participate in the AI transformation’. Microsoft
Microsoft says the Copilot key will ‘empower people to participate in the AI transformation’. Microsoft

 

A strong clue for this strategy is a small button Microsoft began to put on its devices last year. In the precious real estate of the QWERTY keyboard, Microsoft dedicated a key to Copilot on its PCs and laptops capable of processing AI on the device.

Apple is pursuing a similar strategy. The iPhone manufacturer is not offering most of its AI services in the cloud. Instead, only new devices offer AI capabilities, with on-device processing marketed as a privacy feature that prevents your data travelling elsewhere.

Pushing costs to the edge

There are benefits to the push to do the work of generative AI inference on the computing devices in our pockets, on our desks, or even on smart watches on our wrists (so-called “edge computing”, because it occurs at the “edge” of the network).

It can reduce the energy, resources and waste of data centres, lowering generative AI’s carbon, heat and water footprint. It could also reduce bandwidth demands and increase user privacy.

But there are downsides too. Edge computing shifts computation costs to consumers, driving demand for new devices despite economic and environmental concerns that discourage frequent upgrades. This could intensify with newer, bigger generative AI models.

A shift to more ‘on-device’ AI computing could create more problems with electronic waste. SibFilm / Shutterstock

 

And there are more problems. Distributed e-waste makes recycling much harder. What’s more, the playing field for users won’t be level if a device dictates how good your AI can be, particularly in educational settings.

And while edge computing may seem more “decentralised”, it may also lead to hardware monopolies. If only a handful of companies control this transition, decentralisation may not be as open as it appears.

As AI infrastructure costs rise and model development evolves, shifting the costs to consumers becomes an appealing strategy for AI companies. While big enterprises such as government departments and universities may manage these costs, many small businesses and individual consumers may struggle.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S Affiliate Dr Christopher O’Neill awarded prestigious Fulbright Scholarship

Dr Chris O’Neill is awarded a Fulbright Scholarship from the Governor General Sam Mostyn, at Parliament House. Image: bencalvertphoto.com
Dr Chris O’Neill is awarded a Fulbright Scholarship from the Governor General Sam Mostyn, at Parliament House. Image: bencalvertphoto.com

ADM+S Affiliate Dr Christopher O’Neill awarded prestigious Fulbright Scholarship

Author Natalie Campbell
Date 3 March 2025

ADM+S Affiliate Dr Christopher O’Neill, who recently completed a Research Fellowship with Prof Mark Andrejevic at the Monash University node of ADM+S, has been awarded a 2025-2026 Fulbright Scholarship at the University of Southern California.

Commemorating the achievement at Parliament House in Canberra on 27 February, Dr O’Neill was presented with his Fulbright Scholarship by the Governor General, Sam Mostyn.

Dr O’Neill will spend four months working with Assoc Prof Mike Ananny, Associate Professor of Communication and Journalism at the USC Annenberg School of Journalism, studying automation, work and error.

The Fulbright Program is the largest educational scholarship of its kind, created by US Senator J. William Fulbright and the US Government in 1946, and is the flagship foreign exchange scholarship program of the United States.

Successful Fulbright recipients are interviewed and selected by panels of experts from academia, government, professional organisations and the U.S. Embassy in a competitive process which assesses academic and professional merit, a strong program proposal with defined potential outcomes, and ambassadorial skills.

Dr O’Neill is currently a Research Fellow at the Alfred Deakin Institute, where his work draws upon science and technology studies and critical media theory to study the place of automation in contemporary biopower.

Prior to his role at Deakin, he spent three years as a Postdoctoral Research Fellow working with ADM+S Chief Investigator Prof Mark Andrejevic at the Monash University node of ADM+S, where among other projects, he developed a critical analysis of the role of the human in automated work and surveillance systems.

ADM+S Prof Mark Andrejevic said, “Chris did amazing work during his time at the Centre, and it’s great to see his well-deserved success in the Fulbright Program and beyond.

“I know he will make the most of the opportunity and this will continue to build his burgeoning international reputation.”

Notably, an international workshop he co-organised alongside fellow ADM+S member Lauren Kelly has led to a forthcoming special issue of Work Organisation, Labour and Globalisation on ‘new worlds of logistical labour’.

Dr O’Neill has also appeared as a public commentator on recent industrial relations issues regarding the place of automation and surveillance in warehouse work.

“Having the opportunity to develop my work on labour and automation at the ADM+S Centre has led to me receiving a Fulbright Scholar Award,” says Dr O’Neill.

“The opportunity that’s been given to early career researchers at ADM+S is astounding. You have an incredible amount of freedom and encouragement to develop your own path as a researcher.

“I made so many connections with talented and brilliant researchers from all over Australia, but also with international networks that the Centre opened me up to.”

In 2022 Dr O’Neill received support from ADM+S to take part in a two-month AI and Humanity Research Cluster at the University of Southern California, collaborating with researchers from across America.

“During that experience, I made lots of new relationships with American researchers, and I’ve subsequently organized workshops and streams of international conferences in collaboration with those colleagues.”

Dr O’Neill will commence his exchange in August 2025, where he will work with Associate Prof Ananny in the Media as SocioTechnical Systems (MASTS) research group, studying the way that errors in automated systems can reveal the dynamics and assumptions which are sometimes hidden within automated work infrastructures.

View the 2025 Fulbright announcement.

SEE ALSO

Supporting the next generation of researchers at the 2025 ADM+S Summer School

Research Fellow William He (QUT) leading the 'Transformers Alive' workshop
Research Fellow William He (QUT) leading the 'Transformers Alive' workshop

Supporting the next generation of researchers at the 2025 ADM+S Summer School

Author Natalie Campbell
Date 3 March 2025

The 2025 ADM+S Summer School, hosted by the University of Melbourne Law School, brought together over 120 students, researchers and mentors for a curated program spanning research methodologies, ethics advice, writing and publishing, and more.

Bringing together higher degree research students (HDRs) and early career researchers (ECRs) from all nine ADM+S nodes, the annual Summer School provides a perfect opportunity for community members to ask questions, share concerns, learn from one another, and get the most out of their research journey in the ADM+S community.

ADM+S Manager of Research Training and Development and member of the Summer School working group, Sally Storey, said “This event would not be possible without the incredible generosity of our Centre’s research community.”

“I want to say a huge thank you to all our presenters and mentors for sharing your knowledge and expertise with our attendees, and the time leading up to the Summer School preparing presentations, materials, wrangling, scheduling… the effort is outstanding!”

The program encourages PhD students to engage with topics across disciplines, learn about different research methods, and create connections with peers and mentors from across the national ADM+S network – an invaluable experience for all early career researchers.

PhD Student Tace McNamara from Monash University explained, “I’m looking at AI and its capacity to understand art and music as an audience.

“It’s been really interesting talking to people from other disciplines because I think what I’m doing is inherently interdisciplinary, so hearing about law, media, culture, that’s something I don’t do on a daily basis in my lab, and it’s been really valuable.”

Sessions ranged from ‘Ethical uses of GenAI in research’ to ‘Unpacking ideas animating technology governance’, ‘Interviewing with digital trace data’, ‘How to study socio-technical networks’, ‘Harnessing technology for remote research’, and more.

“The Transformers Alive session, led by Aaron Snoswell, was such a didactic way of learning more about how generative AI operates and how people can embody the experience of how the information system operates in the background,” said PhD Student Miguel Loor Paredes from Monash University.

“It gave me another understanding of how artificial intelligence works and also how it relates to my research problem, and how to frame it from the humanities perspective.”

A highlight of the program was the closing plenary session hosted by the ADM+S Research Training and Capability Development Committee, inviting input from the HDR community on the design and delivery of the ADM+S Research Training program.

The Summer School also provides an occasion for HDRs and ECRs to engage in our formal mentoring program, connecting with senior researchers from within or outside their discipline to share their research, ask questions, get feedback, and build their network across ADM+S institutions.

“A real highlight for me is seeing our students and research fellows from across the Centre, building that community spirit, getting involved, making new research connections and friendships that will see them over their career,” said Sally Storey.

Senior Research Fellow Sam Whiting from RMIT University said, “I’m a new Affiliate at the Centre so I’m a bit out of my comfort zone, but that’s been really interesting because I’ve been exposed to a lot of new ideas and meeting people, connecting, and thinking about future collaborations.

“I’m really looking forward to more events like this, opportunities to connect with people outside of my usual networks, opportunities to collaborate on projects.”

Many thanks to all speakers, mentors, and student participants for making this event possible, and especially to the ADM+S Research Training Committee for their hard work behind the scenes in delivering this brilliant event.

View the 2025 Summer School photo library.

SEE ALSO

ADM+S professional staff recognised at the 2024 RMIT Research Service Awards

RMIT 2024 Research Service Awards. Image: Matt Houston, RMIT
RMIT 2024 Research Service Awards. Image: Matt Houston, RMIT

ADM+S professional staff recognised at the 2024 RMIT Research Service Awards

Author Natalie Campbell
Date 28 February 2025

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is thrilled to congratulate members of the Professional Staff team, who have been recognised for their service excellence in the annual RMIT Research Service Awards.

The awards ceremony, dedicated to celebrating the achievements of the RMIT research community and research support staff, was held on 21 February 2025 at The Capitol Theatre in Melbourne.

The Research Service Awards invited peers to nominate those in their community who demonstrate tremendous effort in supporting and delivering successful research outcomes.

ADM+S Chief Operating Officer Nicholas Walsh was awarded the Service Excellence award, which honours an individual who demonstrates excellence in research and innovation support.

ADM+S COO Nicholas Walsh receives the Service Excellence Award at RMIT's 2024 Research Service Awards
Calum Drummond (DVC Research and Innovation) presents the Service Excellence Award to ADM+S COO Nicholas Walsh. Image: Matt Houston, RMIT

 

Announcing the award, Jane Holt, Executive Director of RMIT’s Research Strategy & Services, highlighted Nick’s pioneering role as the first COO of RMIT’s inaugural centre of excellence, and commended his coordination and delivery of a remarkable ARC mid-term report working with ADM+S’ global partners.

“His dedication to meeting the centre’s needs and resolving challenges within RMIT’s enterprise systems demonstrates his outstanding service and leadership,” she said.

The RMIT ADM+S Operations team, consisting of Nicholas Walsh, Julie Stuart, Leah Hawkins, Natalie Campbell, Lucy Valenta, Mathew Warren and Sally Storey, was awarded a special commendation in the Service Excellence in Collaboration category, recognising the team’s collaborative efforts in supporting the delivery of high-impact outcomes from ADM+S research.

Pictured: Leah Hawkins, Julie Stuart, Nick Walsh, Calum Drummond (DVC Research and Innovation) and Mathew Warren. Absent: Natalie Campbell, Sally Storey and Lucy Valenta
Pictured: Leah Hawkins, Julie Stuart, Nick Walsh, Calum Drummond (DVC Research and Innovation) and Mathew Warren. Absent: Natalie Campbell, Sally Storey and Lucy Valenta. Image: Matt Houston, RMIT

 

This commendation was presented by Tim McLennan, Executive Director of Research Partnerships and Translation, and Prof Swee Mak, Director of Strategic Innovation at RMIT University. The announcement emphasised the team’s ability to drive significant research outcomes through innovative cross-departmental initiatives.

“By fostering synergies across diverse teams, they have created a dynamic ecosystem that amplifies research potential and enhances the impact of institutional research,” Prof Mak explained.

All awards were presented by Distinguished Professor Calum Drummond AO, Deputy Vice-Chancellor, Research and Innovation, and Vice-President of RMIT University.

Learn more about the RMIT Research Service Awards.

SEE ALSO

ADM+S Chief Investigator announced co-director of the Centre for AI, Trust and Governance at the University of Sydney

Prof Kim Weatherall Credit: University of Sydney
Prof Kim Weatherall. Image credit: University of Sydney

ADM+S Chief Investigator announced co-director of the Centre for AI, Trust and Governance at the University of Sydney

Author Natalie Campbell
Date 27 February 2025

On 25 February 2025, the University of Sydney unveiled its new Centre for AI, Trust and Governance (CAITG), appointing ADM+S Chief Investigator Prof Kimberlee Weatherall as co-director alongside Prof Terry Flew.

As co-director, Prof Weatherall will lead groundbreaking research to ensure AI is transparent, fair, and accountable, championing the critical role of law and policy in shaping ethical AI.

“Universities have a critical role to play in ensuring that AI develops for the benefit of everyone, all the way across society,” says Prof Weatherall.

 “I’m proud to be co-directing CAITG that can bring together the University of Sydney’s outstanding researchers and students, from different research disciplines, to understand how the technology is developing, its impacts in the world and how to shape it for the better.”

CAITG’s research agenda is focused on AI’s relationship to digital creative industries, platforms and information, law and policy, education and equity, organisations and work, and civic technology and participation.

Some of the themes being investigated include:

  • how to restore trust in social institutions, and whether AI presents new threats to trust and social cohesion
  • how laws and regulations need to change in order to ensure that AI systems serve the public interest
  • how the community can be better involved in decisions about the uses of AI in secondary and tertiary education
  • foreign actors using AI to undermine democracy in Australia and in the Asia-Pacific region.

Prof Weatherall has an extensive background in technology regulation and intellectual property law and policy. She co-leads two ADM+S Signature Projects: The Regulatory Project, where her work focuses on questions of accountability and government use of ADM, and GenAISim, where she is exploring the legal and policy implications of using LLM-based agents in policymaking.

Prof Weatherall is a member of multiple State and Federal level policy advisory groups, including her appointment to the Australian Government’s temporary AI Expert Group alongside ADM+S colleagues Prof Jeannie Paterson and Prof Nicolas Suzor in 2024. She is also a member of the Copyright and AI Reference Group convened by the Commonwealth Attorney-General’s Department.

Prof Weatherall has led multiple ADM+S Submissions, informing responsible, ethical and inclusive development of ADM in Australia, including:

  1. Submission to the Joint Parliamentary Committee of Public Accounts and Audit inquiry into public sector AI use (2024)
  2. Safe and responsible AI in Australia: proposals paper for introducing mandatory guardrails for AI in high-risk settings (2024)
  3. Submission to the Senate Select Committee on Adopting Artificial Intelligence (2024)

In early 2024, Prof Weatherall and a team of ADM+S researchers delivered a report in partnership with the New South Wales Ombudsman, mapping and evaluating the use of ADM systems by Local and State governments, following a 12-month collaboration with the Ombudsman.

In her role as co-director of CAITG, Prof Weatherall will expand on this impressive resume, furthering her impact in the field of AI and AI governance.

Learn more.

SEE ALSO

‘Dark ads’ challenge truth and our democracy

Composite art featuring logos from Facebook, TikTok, X and YouTube
Composite art by Michael Joiner, 360info CC BY 4.0

‘Dark ads’ challenge truth and our democracy

Author Daniel Angus and Mark Andrejevic
Date 25 February 2025

The rise of ‘dark advertising’ — personalised advertisements increasingly powered by artificial intelligence that evade public scrutiny — means Australians face a murky information landscape going into the federal election.

It’s already happening. Combined with Australia’s failure to enact truth-in-advertising legislation and big tech’s backtracking on fact-checking, this leaves voters vulnerable to ad-powered misinformation campaigns. And that’s not good for democracy.

Tackling misinformation requires legislative action, international collaboration and continued pressure on platforms to open their systems to scrutiny.

The failures of US tech platforms during their own elections should serve as a clear warning to Australia that industry self-regulation is not an option.

Political advertising plays a pivotal role in shaping elections, even while it is shrouded in opacity and increasing misinformation.

In the lead-up to the 2025 federal election, a significant volume of deceptive advertising and digital content has already surfaced. That’s not surprising, given the Australian Electoral Commission (AEC) limits its oversight to the official campaign period, meaning false claims can proliferate freely before the official campaign.

At the heart of this challenge lies the evolution of digital political advertising.

What is ‘dark advertising’?
Modern campaigns rely heavily on social media platforms, leveraging associative ad models that tap into beliefs or interests to deliver digital advertising. Unlike traditional media, where ads are visible and subject to better regulatory and market scrutiny, digital ads are often fleeting and hidden from public view.

Recent AI developments make it easier and cheaper to create false and misleading political ads in large volumes, with multiple variations that are increasingly difficult to detect.

This ‘dark advertising’ creates information asymmetries, in this case one where political groups have access to information about voters and can control and shape how it’s delivered. That leaves voters exposed to tailored messages that may distort reality.

Targeted messaging makes it possible to selectively provide voters with very different views of the same candidate. In the recent US presidential election, a political action committee linked to X owner Elon Musk targeted Arab-American voters with the message that Kamala Harris was a diehard Israel ally, while simultaneously messaging Jewish voters that she was an avid supporter of Palestine.

Ad targeting online also lets political advertisers single out groups more likely to be influenced by selective, misleading or false information. Conservative lobby group Advance Australia’s recent campaign basically followed this playbook, disseminating outdated news articles on Facebook, a tactic known as malinformation, where factual information is deliberately spread misleadingly to harm individuals or groups.

The vulnerabilities
The Albanese government recently withdrew a proposed truth-in-political-advertising bill, leaving voters vulnerable to misleading content that undermines democratic integrity.

The bill was never introduced to parliament and its future remains uncertain.

The transparency tools provided by Meta, which owns Facebook and Instagram, and Google parent company Alphabet — which include ad libraries and “Why Am I Seeing This Ad?” explanations — also fall woefully short of enabling meaningful oversight.

These tools reveal little about the algorithms that determine ad delivery or the audiences being targeted. They do include some demographic breakdowns, but say little about the combination of ads an individual user might have seen and in what context.

Recent findings from the US highlight the vulnerabilities of political advertising in the digital age. An investigation by ProPublica and the Tow Center for Digital Journalism revealed that deceptive political ads thrived on platforms like Facebook and Instagram in the lead-up to the 2024 US elections.

Ads frequently employed AI-generated content, including fabricated audio of political figures, to mislead users and harvest personal information. One ad account network has run about 100,000 misleading ads, significantly exploiting Meta’s advertising systems.

The Australian story
The US developments are alarming, but it’s important to recognise Australia’s unique political and regulatory landscape.

Australians have seen what happened in the US but fundamental differences in media consumption, political structure and culture and regulatory frameworks mean that Australia may not necessarily follow the same trajectory.

The AEC does enforce specific rules on political advertising, particularly during official campaign periods, yet oversight is weak outside these periods, meaning misleading content can circulate unchecked.

The failure to pass truth-in-political-advertising laws only exacerbates the problem.

The media blackout period bans political ads on radio and TV three days before the federal election, but it does not apply to online advertising, meaning there is little time to identify or challenge misleading ads.

Ad-driven technology firms like Meta and Alphabet have backed away from previous initiatives to curb misinformation and deceptive advertising and enforce minimum standards.

Despite Meta’s public commitments to prevent misinformation from spreading, deceptive ads still flourished throughout the 2024 US election, raising significant concerns about the effectiveness of platform self-regulation. Meta’s backtracking on fact-checking raises further doubts about its overall commitment to combating misinformation.

Given these developments, it is unrealistic to expect platforms to proactively police content effectively, especially in a jurisdiction like Australia.

Some solutions
Independent computational tools have emerged in an attempt to address these issues. They include browser plugins and mobile apps that allow users to donate their ad data. During the 2022 election, the ADM+S Australian Ad Observatory project collected hundreds of thousands of advertisements, uncovering instances of undisclosed political ads.

In the lead-up to the 2025 election, that project will rely on a new mobile advertising toolkit capable of detecting mobile digital political advertising served on Facebook, Instagram and TikTok.

Regulatory solutions like the EU’s Digital Services Act (DSA) offer another potential path forward, mandating access to political advertising data for researchers and policymakers, although Australia lags in adopting similar measures.

Without some of these solutions, platforms remain free to follow their economic incentive to pump the most sensational, controversial and attention-getting content into people’s news feeds, regardless of accuracy.

This creates a fertile environment for misleading ads, not least because platforms have been given protection from liability. That is not an information system compatible with democracy.

Professor Daniel Angus is a leading expert in computational communication and digital media, specialising in the analysis of online discourse, AI, and media transparency. He is the director of the Digital Media Research Centre at the Queensland University of Technology.

Professor Mark Andrejevic is an expert in the social and cultural implications of data mining, and online monitoring at Monash University’s School of Media, Film and Journalism.

Professor Angus’ research receives funding from the Australian Research Council through the Centre of Excellence for Automated Decision Making & Society and LP190101051 ‘Young Australians and the Promotion of Alcohol on Social Media’.

Professor Andrejevic is also a chief investigator in the Australian Research Council Centre of Excellence for Automated Decision Making & Society, and he also has an ARC Discovery Project ‘The Australian experience of automated advertising on digital platforms’.

Originally published under Creative Commons by 360info™.

SEE ALSO

AI in Journalism: new report reveals growing concerns over misleading content and industry impact

Front cover of Generative AI & Journalism report
Image: T.J Thomson

AI in Journalism: new report reveals growing concerns over misleading content and industry impact

Author ADM+S Centre
Date 19 February 2025

A new industry report has found audiences and journalists are growing increasingly concerned by generative artificial intelligence (AI) in journalism.

Summarising three years of research, the Generative AI & Journalism report was launched at the ARC Centre of Excellence for Automated Decision-Making and Society this week.

Report lead author, Dr T.J. Thomson from RMIT University in Melbourne, Australia, said the potential of AI-generated or edited content to mislead or deceive was of most concern.

“The concern of AI being used to spread misleading or deceptive content topped the list of challenges for both journalists and news audiences,” he said.

“We found journalists are poorly equipped to identify AI-generated or edited content, leaving them open to unknowingly propelling this content to their audiences.”

This is partly because few newsrooms have systematic processes in place for vetting user-generated or community contributed visual material.

Most journalists interviewed were not aware of the extent to which AI is increasingly and often invisibly being integrated into both cameras and image or video editing and processing software.

“AI is sometimes being used without the journalists or news outlet even knowing,” Thomson said.

While only one quarter of news audiences surveyed thought they had encountered generative AI in journalism, about half were unsure or suspected they had.

“This points to a potential lack of transparency from news organisations when they use generative AI or to a lack of trust between news outlets and audiences,” Thomson said.

News audiences were found to be more comfortable with journalists using AI when they themselves have used it for similar purposes, such as to blur parts of an image.

“The people we interviewed mentioned how they used similar tools when on video conferencing apps or when using the portrait mode on smartphones,” Thomson said.

“We also found this with journalists using AI to add keywords to media since audiences had themselves experienced AI describing images in word processing software.”

Thomson said news audiences and journalists alike were overall concerned about how news organisations are – and could be – using generative AI.

“Most of our participants were comfortable with turning to AI to create icons for an infographic but quite uncomfortable with the idea of an AI avatar presenting the news, for example,” he said.

Part-problem, part-opportunity
The technology, which has advanced significantly in recent years, was found to be both an opportunity and threat to journalism.

For example, Apple recently suspended its automatically generated news notification feature after it produced false claims about high-profile individuals, including false deaths and arrests, and attributed these false claims to reputable outlets, including BBC News and The New York Times.

While AI can perform tasks like sorting and generating captions for photographs, it has well-known biases against, for example, women and people of colour.

But the research also identified lesser-known biases, such as favouring urban over non-urban environments, showing women less often in more specialised roles, and ignoring people living with disabilities.

“These biases exist because of human biases embedded in training data and/or the conscious or unconscious biases of those who develop AI algorithms and models,” Thomson said.

But not all AI tools are equal. The study found those which explain their decisions, disclose their source material, and ensure transparency in outputs regarding their use are less risky for journalists compared to tools that lack these features.

Journalists and audience members were also concerned about generative AI replacing humans in newsrooms, leading to fewer jobs and skills in the industry.

“These fears reflect a long history of technologies impacting on human labour forces in journalism production,” Thomson said.

The report, designed for the media industry, identifies dozens of ways journalists and news organisations can use generative AI and summarises how comfortable news audiences are with each.

It summarises several of the team’s research studies, including the latest peer-reviewed study, published in Journalism Practice.

Report authors: Dr T.J. Thomson (ADM+S Affiliate), Ryan Thomas, Assoc Prof Michelle Riedlinger (ADM+S Affiliate), and Dr Phoebe Matich (ADM+S Research Fellow).

Portions of the underlying research in the report were financially supported by the Design and Creative Practice, Information in Society, and Social Change Enabling Impact Platforms at RMIT University, the Weizenbaum Institute for the Networked Society / German Internet Institute, the Centre for Advanced Internet Studies, the Global Journalism Innovation Lab, the QUT Digital Media Research Centre, and the Australian Research Council through DE230101233 and CE200100005.

Generative AI and Journalism: Content, Journalistic Perceptions, and Audience Experiences is published by RMIT University (DOI: 10.6084/m9.figshare.28068008).

Old Threats, New Name? Generative AI and Visual Journalism is published in Journalism Practice (DOI: 10.1080/17512786.2025.2451677).

View the original article AI-generated journalism falls short of audiences’ expectations: report published by RMIT University Media.

SEE ALSO

Vibes are something we feel but can’t quite explain. Now researchers want to study them

AI Generated image - white and red human figures
Shutterstock/Efe Murat

Vibes are something we feel but can’t quite explain. Now researchers want to study them

Author Ash Watson
Date 19 February 2025

When we’re uncomfortable we say the “vibe is off”. When we’re having a good time we’re “vibing”. To assess the mood we do a “vibe check”. And when the atmosphere in the room changes we call it a “vibe shift”.

In a broad sense, a “vibe” is something akin to a mood, atmosphere or energy.

But this is an imperfect definition. Often, we’ll use this term to describe something we feel powerfully, but find hard to articulate.

As journalist and cultural critic Kyle Chayka described in 2021, a vibe is “a placeholder for an unplaceable feeling or impression, an atmosphere that you couldn’t or didn’t want to put into words”.

Being able to understand the subtleties of social interactions – that is, to “feel the vibes” – is extremely valuable, not just for our social interactions, but also for researchers who study people.

What’s behind the rise of vibes? And how can sociologists like myself unpack “vibe culture” to make sense of the world?

A history of vibes

The nuance and complexity of vibes makes them an interesting cultural trend. Vibes can be very specific, but can also totally resist specificity.

Australians (and fans of Australiana) will remember the iconic line from the beloved 1997 film The Castle: “It’s just the vibe of the thing… I rest my case.”

While it may seem like a recent cultural development, vibe isn’t the first example of cryptic language being used to express an ambiguous thing or situation. There are similar concepts with long histories, such as “quintessence” in Ancient Greek philosophy and “auras” in mysticism.

More recently, vibes rose in popularity through music including 1960s rock, epitomised by the Beach Boys (“pickin’ up good vibrations”) and Black American rap vernacular from the 1990s, such as in the song Vibes and Stuff by A Tribe Called Quest (“we got, we got, we got the vibes”).

‘Vibes’ rose in popularity through music including 1960s rock and 1990s Black American rap.
Shutterstock

While we don’t know when the term was first used as it is today, it seems to have taken hold in the 1970s.

I trawled the online archive of The New Yorker and found an early mention of vibes in a 1971 report about communes in New York City.

One interviewee spoke about the “vibration of togetherness” that drew them to the commune. Ending the day on the subway, the author Hendrik Hertzberg (now a senior editor at the magazine) “just sat there and soaked up the good vibes”.

New uses and meanings have emerged in the years since.

Vibes today

As vibe is used in more ways, its meaning becomes expanded and diffused. A person or situation can have good vibes, bad vibes, weird vibes, laid-back vibes, or any other adjective you can imagine.

Language is a central part of qualitative research. While new phrases and slang can be casual and superficial, they can also represent broader, more complex concepts. Vibe is a great example of this: a simple term that refers to something potent yet ephemeral, affecting yet ambiguous.

By paying attention to the words people use to describe their experiences, sociologists can identify patterns of social interactions and shifts in social attitudes.

Perhaps vibes work like a heuristic – a mental shortcut – but for feeling rather than thinking.

People use heuristics to make everyday decisions or draw conclusions based on their experiences. Heuristics are, in essence, our common sense. And “vibes” might be best described as our common feeling, as they speak to a subtle aspect of how we collectively relate and interact.

Sociologists have long studied complex common feelings. Ambivalence, for instance, has been a focus in research on digital privacy. Studying when and why people feel ambivalent about digital technology can help us understand their seemingly contradictory behaviour, such as when they say they are concerned about privacy, but do very little to protect their information.

Ambivalence reveals how people make decisions via small, everyday compromises – moments and feelings that may be overlooked in quantitative research. A qualitative approach can help us to align policies with people’s real-world behaviours.

Researchers react

Then again, it’s difficult to study something people find hard to articulate in the first place. Asking participants to rank the “vibes” of something in a survey doesn’t quite work.

So researchers are finding new ways to feel the vibe: to see what participants see, to feel what they feel and get a deeper understanding of their lived experiences.

For instance, such studies could provide insight into how senior clinicians make important decisions amid uncertainty. We already know making decisions in complex situations involves more than logic and rationality.

In one Australian study published last year, researchers assessed how vibes have become part of online advertising algorithms. The researchers analysed the social media feeds of more than 200 young people, using the concept of vibes to show how advertising models attune to individuals and social groups.

Such approaches can complement, or even update, tried-and-tested research methods, expanding on what we know about human relationships and experiences.

Ash Watson, Scientia Fellow and Senior Lecturer, UNSW Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Generative AI is already being used in journalism – here’s how people feel about it

AI generated image of news presenter
Indonesia’s TVOne launched an AI news presenter in 2023. T.J. Thomson

Generative AI is already being used in journalism – here’s how people feel about it

Author ADM+S Centre
Date 19 February 2025

Generative artificial intelligence (AI) has taken off at lightning speed in the past couple of years, creating disruption in many industries. Newsrooms are no exception.

A new report published today finds that news audiences and journalists alike are concerned about how news organisations are – and could be – using generative AI such as chatbots, image, audio and video generators, and similar tools.

The report draws on three years of interviews and focus group research into generative AI and journalism in Australia and six other countries (United States, United Kingdom, Norway, Switzerland, Germany and France).

Only 25% of our news audience participants were confident they had encountered generative AI in journalism. About 50% were unsure or suspected they had.

This suggests a potential lack of transparency from news organisations when they use generative AI. It could also reflect a lack of trust between news outlets and audiences.

Who or what makes your news – and how – matters for a host of reasons.

Some outlets tend to use more or fewer sources, for example. Or use certain kinds of sources – such as politicians or experts – more than others.

Some outlets under-represent or misrepresent parts of the community. This is sometimes because the news outlet’s staff themselves aren’t representative of their audience.

Carelessly using AI to produce or edit journalism can reproduce some of these inequalities.

Our report identifies dozens of ways journalists and news organisations can use generative AI. It also summarises how comfortable news audiences are with each.

The news audiences we spoke to overall felt most comfortable with journalists using AI for behind-the-scenes tasks rather than for editing and creating. These include using AI to transcribe an interview or to provide ideas on how to cover a topic.

But comfort is highly dependent on context. Audiences were quite comfortable with some editing and creating tasks when the perceived risks were lower.

The problem – and opportunity

Generative AI can be used in just about every part of journalism.

For example, a photographer could cover an event. Then, a generative AI tool could select what it “thinks” are the best images, edit the images to optimise them, and add keywords to each.

An image of a field with towers in the distance and computer-generated labels superimposed that try to identify certain objects in the image.
Computer software can try to recognise objects in images and add keywords, leading to potentially more efficient image processing workflows.
Elise Racine/Better Images of AI/Moon over Fields, CC BY
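
To make the keywording step described above concrete, here is a minimal sketch of how such a tool might auto-suggest keywords using an off-the-shelf object-detection model from the Hugging Face transformers library. The model name, the confidence threshold and the file name are illustrative assumptions, not a description of any newsroom’s actual workflow.

```python
# A minimal sketch of automated photo keywording with a public
# object-detection model; model and threshold are illustrative choices.
from transformers import pipeline

detector = pipeline("object-detection", model="facebook/detr-resnet-50")

def suggest_keywords(image_path: str, min_score: float = 0.8) -> list[str]:
    """Return de-duplicated labels the model assigns with reasonable confidence."""
    detections = detector(image_path)  # list of dicts with "label", "score", "box"
    return sorted({d["label"] for d in detections if d["score"] >= min_score})

# A careless threshold or a poorly chosen model is one way the
# mis-identifications discussed below can creep into photo captions.
print(suggest_keywords("press_photo.jpg"))
```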

These might seem like relatively harmless applications. But what if the AI identifies something or someone incorrectly, and these keywords lead to mis-identifications in the photo captions? What if the criteria humans think make “good” images are different to what a computer might think? These criteria may also change over time or in different contexts.

Even something as simple as lightening or darkening an image can cause a furore when politics are involved.

AI can also make things up completely. Images can appear photorealistic but show things that never happened. Videos can be entirely generated with AI, or edited with AI to change their context.

Generative AI is also frequently used for writing headlines or summarising articles. These sound like helpful applications for time-poor individuals, but some news outlets are using AI to rip off others’ content.

AI-generated news alerts have also gotten the facts wrong. As an example, Apple recently suspended its automatically generated news notification feature. It did this after the feature falsely claimed US murder suspect Luigi Mangione had killed himself, with the source attributed as the BBC.

What do people think about journalists using AI?

Our research found news audiences seem to be more comfortable with journalists using AI for certain tasks when they themselves have used it for similar purposes.

For example, the people interviewed were largely comfortable with journalists using AI to blur parts of an image. Our participants said they used similar tools on video conferencing apps or when using the “portrait” mode on smartphones.

Likewise, when you insert an image into popular word processing or presentation software, it might automatically create a written description of the image for people with vision impairments. Those who’d previously encountered such AI descriptions of images felt more comfortable with journalists using AI to add keywords to media.

A screenshot of an image with the alt-text description that reads A view of the beach from a stone arch.
Popular word processing and presentation software can automatically generate alt-text descriptions for images that are inserted into documents or presentations.
T.J. Thomson
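
As a rough illustration of the automatic alt-text feature described above, the snippet below uses a public image-captioning model through the Hugging Face pipeline API. The specific model and file name are assumptions for illustration; office software relies on its own proprietary captioning services.

```python
# A minimal sketch of automatic alt-text generation with a public
# image-captioning model; the model choice here is illustrative only.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def generate_alt_text(image_path: str) -> str:
    """Return a short description suitable for an image's alt attribute."""
    result = captioner(image_path)  # list of dicts with "generated_text"
    return result[0]["generated_text"]

print(generate_alt_text("beach_arch.jpg"))  # e.g. "a view of the beach from a stone arch"
```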

The most frequent way our participants encountered generative AI in journalism was when journalists reported on AI content that had gone viral.

For example, when an AI-generated image purported to show Princes William and Harry embracing at King Charles’s coronation, news outlets reported on this false image.

Our news audience participants also saw notices that AI had been used to write, edit or translate news articles. They saw AI-generated images accompanying some of these. This is a popular approach at The Daily Telegraph, which uses AI-generated images to illustrate many of its opinion columns.

An overview of twelve opinion columns published by The Daily Telegraph and each featuring an image generated by an AI tool.
The Daily Telegraph frequently turns to generative AI to illustrate its opinion columns, sometimes generating more photorealistic illustrations and sometimes less photorealistic ones.
T.J. Thomson

Overall, our participants felt most comfortable with journalists using AI for brainstorming or for enriching already created media. This was followed by using AI for editing and creating. But comfort depends heavily on the specific use.

Most of our participants were comfortable with turning to AI to create icons for an infographic. But they were quite uncomfortable with the idea of an AI avatar presenting the news, for example.

On the editing front, a majority of our participants were comfortable with using AI to animate historical images, like this one. AI can be used to “enliven” an otherwise static image in the hopes of attracting viewer interest and engagement.

A historical photograph from the State Library of Western Australia’s collection has been animated with AI (a tool called Runway) to introduce motion to the still image.
T.J. Thomson

Your role as an audience member

If you’re unsure if or how journalists are using AI, look for a policy or explainer from the news outlet on the topic. If you can’t find one, consider asking the outlet to develop and publish a policy.

Consider supporting media outlets that use AI to complement and support – rather than replace – human labour.

Before making decisions, consider the past trustworthiness of the journalist or outlet in question, and what the evidence says.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University; Michelle Riedlinger, Associate Professor in Digital Media, Queensland University of Technology; Phoebe Matich, Postdoctoral Research Fellow, Generative Authenticity in Journalism and Human Rights Media, ADM+S Centre, Queensland University of Technology, and Ryan J. Thomas, Associate Professor, Washington State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S researcher cited in Parliament’s report on the Future of Work

ADM+S researcher cited in Parliament’s report on the Future of Work

Author Natalie Campbell
Date 17 February 2025

The House of Representatives Standing Committee on Employment, Education and Training has published its report, The Future of Work: Inquiry into the Digital Transformation of Workplaces, citing contributions from ADM+S Affiliate Emmanuelle Walkowiak’s 19 June submission.

The inquiry found that imminent support is required for employers, workers, students, and regulators, and that Australia needs to increase investment in research and development to ensure the safe, responsible and effective use of ADM and AI in the workplace.

The report explains, “digital transformation has exposed significant risks, including gaps in Australia’s regulatory frameworks and workplace protections. This is especially the case with data and privacy.”

Dr Walkowiak, a Vice-Chancellor’s Senior Research Fellow in Economics at RMIT’s Blockchain Innovation Hub, focuses her research primarily on technology-driven inclusion at work and the changing nature of work in a digital economy.

Her submission to the Inquiry outlined evidence-based recommendations on harnessing AI for productivity, skill development, and job creation in Australia while addressing risks like impacts on hiring, job design, and work quality. It explored AI’s effect on labour rights, fairness, and dignity at work, as well as its influence on small businesses and vulnerable groups, including neurodiverse workers.

Dr Walkowiak’s submission is cited in the report’s discussion of Regulating Technology: Public views (p.19), Opportunities in productivity and efficacy (p.28), and Data and Privacy: Disclosure and breach of privacy (p.49).

Dr Walkowiak said, “I’m honoured that my insights have been cited in the final report, which outlines key recommendations on the digital transformation of work and its implications for workers, businesses, and policymakers.

“Engaging with policymakers to support evidence-based decision-making is an important part of my research, and I look forward to further discussions on shaping more inclusive and productive workplaces in the digital age.”

The Inquiry into the Digital Transformation of Workplaces was adopted on 9 April 2024, following a referral from the Minister for Employment and Workplace Relations, to report on the rapid development and uptake of automated decision making and machine learning techniques in the workplace.

Dr Walkowiak was invited to present evidence to the Committee as part of an academic roundtable on 2 September 2024.

ADM+S Affiliate Dr Kobi Leins and PhD Student Lauren Kelly were also involved in the public hearings.

View the full report.

SEE ALSO

AI is being used in social services – but we must make sure it doesn’t traumatise clients

AI is being used in social services – but we must make sure it doesn’t traumatise clients

Authors Suvradip Maitra, Lyndal Sleep, Paul Henman, Suzana Fay
Date 10 February 2025

Late last year, ChatGPT was used by a Victorian child protection worker to draft documents. In a glaring error, ChatGPT referred to a “doll” used for sexual purposes as an “age-appropriate toy”. Following this, the Victorian information commissioner banned the use of generative artificial intelligence (AI) in child protection.

Unfortunately, many harmful AI systems will not garner such public visibility. It’s crucial that people who use social services – such as employment, homelessness or domestic violence services – are aware they may be subject to AI. Additionally, service providers should be well informed about how to use AI safely.

Fortunately, emerging regulations and tools, such as our trauma-informed AI toolkit, can help to reduce AI harm.

How do social services use AI?

AI has captured global attention with promises of better service delivery. In a strained social services sector, AI promises to reduce backlogs, lower administrative burdens and allocate resources more effectively while enhancing services. It’s no surprise a range of social service providers are using AI in various ways.

Chatbots simulate human conversation with the use of voice, text or images. These programs are increasingly used for a range of tasks. For instance, they can provide mental health support or offer employment advice. They can also speed up data processing or help quickly create reports.

However, chatbots can easily produce harmful or inaccurate responses. For instance, the United States National Eating Disorders Association deployed the chatbot Tessa to support clients experiencing eating disorders. But it was quickly pulled offline when advocates flagged Tessa was providing harmful weight loss advice.

Recommender systems use AI to make personalised suggestions or present options. These could include targeted job or rental ads, or educational material, based on data available to service providers.

But recommender systems can be discriminatory, such as when LinkedIn showed more job ads to men than women. They can also reinforce existing anxieties. For instance, pregnant women have been recommended alarming pregnancy videos on social media.

Recognition systems classify data such as images or text to compare one dataset to another. These systems can complete many tasks, such as face matching to verify identity or transcribing voice to text.

Such systems can raise surveillance, privacy, inaccuracy and discrimination concerns. A homeless shelter in Canada stopped using facial recognition cameras because they risked privacy breaches – it’s difficult to obtain informed consent from mentally unwell or intoxicated people using the shelter.

Risk-assessment systems use AI to predict the likelihood of a specific outcome occurring. Many systems have been used to calculate the risk of child abuse, long-term unemployment, or tax and welfare fraud.
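
To illustrate the general pattern only, and not any of the systems named in this article, the sketch below shows a classifier trained on invented historical case records that outputs a probability, which is then thresholded into a risk flag. Every feature, value and threshold here is a made-up assumption.

```python
# A generic, hypothetical sketch of a risk-assessment system: a model trained
# on past case records outputs a probability that is turned into a risk flag.
# All features and data are invented; this mirrors no real deployed system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [prior_reports, income_decile]
X = np.array([[0, 9], [1, 7], [3, 2], [4, 1], [0, 8], [2, 3]])
y = np.array([0, 0, 1, 1, 0, 1])  # past recorded outcomes

model = LogisticRegression().fit(X, y)

# The model simply reproduces patterns in its training records, which is why
# biased or incomplete data translates directly into biased risk scores.
new_case = np.array([[1, 2]])
risk = model.predict_proba(new_case)[0, 1]
print(f"Predicted risk: {risk:.2f}", "-> flagged" if risk > 0.5 else "-> not flagged")
```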

Often data used in these systems can recreate societal inequalities, causing harm to already-marginalised peoples. In one such case, a tool in the US used for identifying risk of child mistreatment unfairly targeted poor, black and biracial families and families with disabilities.

A Dutch risk assessment tool seeking to identify childcare benefits fraud was shut down for being racist, while an AI system in France faces similar accusations.

The need for a trauma-informed approach

Concerningly, our research shows using AI in social services can cause or perpetuate trauma for the people who use the services.

The American Psychological Association defines trauma as an emotional response to a range of events, such as accidents, abuse or the death of a loved one. Broadly understood, trauma can be experienced at an individual or group level and be passed down through generations. Trauma experienced by First Nations people in Australia as a result of colonisation is an example of group trauma.

Between 57% and 75% of Australians experience at least one traumatic event in their lifetime.

Many social service providers have long adopted a trauma-informed approach. It prioritises trust, safety, choice, empowerment, transparency, and cultural, historical and gender-based considerations. A trauma-informed service provider understands the impact of trauma and recognises signs of trauma in users.

Service providers should be wary of abandoning these core principles despite the allure of the often hyped capabilities of AI.

Can social services use AI responsibly?

To reduce the risk of causing or perpetuating trauma, social service providers should carefully evaluate any AI system before using it.

For AI systems already in place, evaluation can help monitor their impact and ensure they are operating safely.

We have developed a trauma-informed AI assessment toolkit that helps service providers to assess the safety of their planned or current use of AI. The toolkit is based on the principles of trauma-informed care, case studies of AI harms, and design workshops with service providers. An online version of the toolkit is about to be piloted within organisations.

By posing a series of questions, the toolkit enables service providers to consider whether risks outweigh the benefits. For instance, is the AI system co-designed with users? Can users opt out of being subject to the AI system?

It guides service providers through a series of practical considerations to enhance the safe use of AI.
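
As a purely hypothetical illustration of how a question-based assessment like this could be represented in software, the sketch below encodes a few example questions and reports which trauma-informed principles are at risk. The questions and structure are invented; the actual ADM+S toolkit is a qualitative resource, not a scoring algorithm.

```python
# A hypothetical sketch of a question-based risk checklist; the questions and
# mapping to principles are invented and do not reproduce the actual toolkit.
from dataclasses import dataclass

@dataclass
class AssessmentQuestion:
    text: str
    principle: str  # e.g. "choice", "transparency", "empowerment"

QUESTIONS = [
    AssessmentQuestion("Was the AI system co-designed with service users?", "empowerment"),
    AssessmentQuestion("Can users opt out of being subject to the AI system?", "choice"),
    AssessmentQuestion("Are users told when AI is involved in a decision?", "transparency"),
]

def flag_risks(answers: dict[str, bool]) -> list[str]:
    """Return the principles whose questions were not answered 'yes'."""
    return [q.principle for q in QUESTIONS if not answers.get(q.text, False)]

print(flag_risks({"Can users opt out of being subject to the AI system?": True}))
```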

Social services do not have to avoid AI altogether. But social service providers and users should be aware of the risks of harm from AI – so they can intentionally shape AI for good.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Call for a more comprehensive regulatory framework for automated decision-making in the public sector

Report cover for Submission to AG Department on ADM Reform

Call for a more comprehensive regulatory framework for automated decision-making in the public sector

Author ADM+S Centre
Date 10 February 2025

In a new submission to the Attorney-General’s Department’s Automated Decision-Making (ADM) Reform consultation, experts from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) urge the government to adopt a more comprehensive regulatory framework for ADM in the public sector.

The submission argues that the current focus of the consultation paper on legislation and regulation overlooks essential aspects like enforcement and accountability, which are critical to ensuring responsible use of technology in government decision-making.

The response highlights that the existing approach is too narrow, focusing primarily on AI-based systems, while neglecting broader systemic issues. The authors contend that the government should lead by example, setting a standard for safe and accountable technology use that applies to all technical systems, not just AI.

Among the key recommendations outlined in the submission are calls for stronger enforcement mechanisms, including active monitoring and independent oversight. The experts also emphasise the need for transparency in the acquisition of ADM systems, urging the government to adopt robust measures to prevent misuse and ensure accountability across all public sector applications of automated decision-making.

Key recommendations

  • The ADM framework should include enforcement and accountability mechanisms.
  • Systemic and preventative measures, including ex-ante control and active monitoring, are needed.
  • An independent oversight body should monitor and enforce standards across government.
  • Qualified transparency mechanisms should be adopted.
  • Key transparency requirements should be incorporated into the acquisition of ADM systems.

As the public sector increasingly integrates automated technologies, the submission urges policymakers to act quickly to address these gaps, advocating for a regulatory framework that goes beyond individual cases to tackle systemic risks.

Authors:
Dr José-Miguel Bello y Villarino, Prof Emeritus Terry Carney, Prof Kimberlee Weatherall, Dr Rita Matulionyte, Prof Julian Thomas, Prof Paul Henman and Veronica Lenard.

SEE ALSO

Elections mean more misinformation. Here’s what we know about how it spreads in migrant communities

Individual reading news on their phone while riding the bus.
Individual reading news on their phone while riding the bus.

Elections mean more misinformation. Here’s what we know about how it spreads in migrant communities

Authors Fan Yang and Sukhmani Khorana
Date 6 February 2025

Migrants in Australia often encounter disinformation targeting their communities. However, disinformation circulated in non-English languages and within private chat groups often falls beyond the reach of Australian public agencies, national media and platform algorithms.

This regulatory gap means migrant communities are disproportionately targeted during crises, elections and referendums, when misinformation and disinformation are amplified.

With a federal election just around the corner, we wanted to understand how migrants come across disinformation, how they respond to it, and importantly, what can be done to help.

Our research

Our research finds political disinformation circulates both online and in person among friends and family.

Between 2023 and 2024, we carried out a survey with 192 respondents. We then conducted seven focus groups with 14 participants who identify as having Chinese or South Asian cultural heritage.

We wanted to understand their experiences of political engagement and media consumption in Australia.

An important challenge faced by research participants is online disinformation. This issue was already long-standing and inadequately addressed by Australian public agencies and technology companies, even before Meta ended its fact-checking program.

Lack of diversity in news

Our study finds participants read news and information from a diverse array of traditional and digital media services with a heightened sense of caution.

They encounter disinformation in two ways.

The first is information misrepresenting their identity, culture, and countries of origin, particularly found in English-language Australian national media.

The second is targeted disinformation distributed across non-English social media services, including in private social media channels.

Image: Misinformation is often spread on Chinese social media platforms to target their users. Shutterstock

From zero (no trust) to five (most trusted), we asked our survey participants to rank their trust towards Australian national media sources. This included the ABC, SBS, The Age, Sydney Morning Herald, 9 News and the 7 Network.

Participants reported a medium level of trust (three).

Our focus groups explained the mistrust participants have towards both traditional and social media news sources. Their thoughts echoed other research with migrants. For instance, a second-generation South Asian migrant said:

it feels like a lot of marketing with traditional media […] they use marketing language to persuade people in a certain way.

Several participants of Chinese and South Asian cultural backgrounds reported that Australian national media misrepresent their culture and identity due to a lack of genuine diversity within news organisations. One said:

the moment you’re a person of colour, everyone thinks that you’re Chinese. And we do get painted with the same paintbrush. It is very frustrating […]

Another added:

Sri Lanka usually gets in the media for cricket mainly, travel and tourism. So apart from that, there’s not a lot of deep insight.

For migrants, the lack of genuine engagement with their communities and countries of origin distorts public understanding, reducing migrants to a one-dimensional, often stereotypical, portrayal. This oversimplification undermines migrants’ trust in Australian national media.

Participants also expressed minimal trust in news and information on social media. They often avoid clicking on headline links, including those shared by Australian national media outlets. According to a politically active male participant of Chinese-Malaysian origin:

I don’t really like reading Chinese social media even though I’m very active on WeChat and subscribe to some news just to see what’s going on. I don’t rely on them because I usually don’t trust them and can often spot mistakes and opinionated editorials rather than actual news.

Consuming news from multiple sources to understand a range of political leanings is a strategy many participants employed to counteract biased or partial news coverage. This was particularly the case on issues of personal interest, such as human rights and climate change.

What can be done?

Currently, Australia lacks effective mechanisms to combat online disinformation targeting migrant communities, especially those whose first language is not English.

Generalised counter-disinformation approaches (such as awareness campaigns) are not effective even when translated into multiple languages.

This is because the disinformation circulating in these communities is often highly targeted and tailored. Scaremongering around geopolitical, economic and immigration policies is a common theme. These narratives are too specific for a population-level approach to work.

Our focus groups revealed that the burden of addressing disinformation often falls on family members or close friends. This responsibility is particularly carried by community-minded individuals with higher levels of media and digital knowledge. Women and younger family members play a key role.

Image: Women and younger family members play a key role in debunking misinformation in migrant families. Shutterstock

Focus group members told us how they explained Australian political events to their families in terms they were more familiar with.

During the Voice to Parliament referendum, one participant referenced China’s history of resistance against Japanese Imperialism to help a Chinese-Australian friend better understand the consequences of colonialism and its impacts on Australia’s First Nations communities.

Younger women participants shared that combating online disinformation is an emotionally taxing process. This is especially so when it occurs within the family, often leading to conflicts. One said:

I’m so tired of intervening to be honest, and mostly it’s family […] my parents and close friends and alike. There is so much misinformation passed around on WhatsApp or socials. When I do see someone take a very strong stand, usually my father or my mother, I step in.

Intervening in an informal way doesn’t always work. Family dynamics, gender hierarchies and generational differences can impede these efforts.

Countering disinformation requires us to confront deeper societal issues related to race, ethnicity, gender, power and the environment.

International research suggests community-based approaches work better for combating misinformation in specific cohorts, like migrants. This sort of work could take place in settings people trust, be that community centres or public libraries.

This means not relying exclusively on changes in the law or the practices of online platforms.

Instead, the evidence suggests developing community-based interventions that are culturally resonant and attuned to historical disadvantage would help.

Our recently-released toolkit makes a suite of recommendations for Australian public services and institutions, including the national media, to avoid alienating and inadvertently misinforming Asian-Australians as we approach a crucial election campaign.


Read more: About half the Asian migrants we surveyed said they didn’t fully understand how our voting systems work. It’s bad for our democracy


This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Open source and under control: the DeepSeek paradox

DeepSeek is a Monkey King moment in the global AI landscape. Illustration by Michael Joiner, 360info. Images by William Tung, Wikimedia & Akash Tetwal, Pexels. CC BY-SA 4.0

Open source and under control: the DeepSeek paradox

Author Haiqing Yu
Date 5 February 2025

DeepSeek has emerged on the front line of debates determining the future of AI, but its arrival poses questions over who decides what ‘intelligence’ we need.

Chinese company DeepSeek stands at the crossroads of two major battles shaping artificial intelligence development: whether source code should be freely available and whether development should happen in free or controlled-information environments.

That also highlights the DeepSeek paradox. It champions open-source AI — where the source code of the underlying model is available for others to use or modify — while operating in China, one of the world’s most-controlled data environments.

That means DeepSeek prompts obvious questions about who decides what kind of ‘intelligence’ we need. Such questions are obviously front of mind for some governments, with several already placing restrictions on the use of DeepSeek.

DeepSeek, a Chinese startup, unveiled its AI chatbot late last month. It seemed to equal the performance of US models at a fraction of the cost and the news triggered a massive sell-off of tech company shares on the US sharemarket.

It also sparked concerns about data security and censorship. In Australia, DeepSeek has been banned from all federal government devices, the NSW government has reportedly banned it from its devices and systems and other state governments are considering their options. The Australian ban followed similar action by Taiwan, Italy and some US government agencies.

The Australian government says the bans are not related to DeepSeek’s country of origin, but the issues being raised now are similar to those discussed when Chinese-based social media app TikTok was banned on Australian government devices two years ago.

Yet aside from those concerns and DeepSeek’s role in reshaping the power dynamics in the US-China AI rivalry, it also gives hope to less well-resourced countries to develop their own large language models using DeepSeek’s model as a starting point.

For those seeking Chinese-related pop culture references, DeepSeek is a Monkey King moment in the global AI landscape. Monkey King, or Wukong in Chinese, is a character featured in the 16th-century novel Journey to the West.

The story was popularised in the 1980s television series Monkey and later iterations. In these stories, Wukong was the unpredictable force challenging established power, wreaking havoc in the Heavenly Palace and embodying both defiance and restraint.

That’s a pretty apt description for where DeepSeek stands in the AI world in 2025.

A new benchmark

As the author of a recent Forbes piece rightly points out, the real story about DeepSeek is not about geopolitics but “about the growing power of open-source AI and how it’s upending the traditional dominance of closed-source models”.

Author Kolawole Samuel Adebayo says it’s a line of thought that Meta chief AI scientist Yann LeCun also shares.

The AI industry has long been divided between closed-source titans like OpenAI, Google, Amazon, Microsoft and Baidu and the open-source movement, which includes Meta, Stability AI, Mosaic ML as well as universities and research institutes.

DeepSeek’s adoption of open-source methodologies — building on Meta’s open-source Llama models and the PyTorch ecosystem — places it firmly in the open-source camp.
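
As a rough illustration of what that openness means in practice, the snippet below shows how an openly published checkpoint can be downloaded and run locally through the standard PyTorch and Hugging Face tooling. The repository name is an assumed example of a DeepSeek distilled release, and running it requires downloading the weights and having enough memory.

```python
# A minimal sketch: load and run an openly published model checkpoint locally.
# The repository name is an assumed example, not a recommendation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Open-weight models let researchers", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This is the practical difference from closed models, which can typically be reached only through a vendor's hosted API rather than downloaded and modified.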

While closed-source large language models prioritise controlled innovation, open-source large language models are built on the principles of collaborative innovation, sharing and transparency.

DeepSeek’s innovative methods challenge the notion that AI development must be backed by vast proprietary datasets and computational power, measured by the number and capacity of chips.

It also demonstrates a point made by the Australian Institute for Machine Learning’s Deval Shah three months before DeepSeek made global headlines: “The future of LLM [large language model] scaling may lie not just in larger models or more training data, but in more sophisticated approaches to training and inference.”

The DeepSeek case illustrates that algorithmic ingenuity can compensate for hardware and computing limitations, which is significant in the context of US export controls on high-end AI chips to China. That’s a crucial lesson for any nation or company restricted by computational bottlenecks.

It suggests that an alternative path exists — one where innovation is driven by smarter algorithms rather than sheer hardware dominance.

Just as Wukong defied the gods with his wit and agility, DeepSeek has shown that brute strength, or in this case raw computing power, is not the only determinant of AI success.

However, DeepSeek’s victory in the open-source battle does not mean it has won the war.

It faces tough challenges on the road ahead, particularly when it comes to scale, refinement and two of the greatest strengths of US AI companies — data quality and reliability.

The Achilles’ heel

DeepSeek appears to have broken free from the limitations of computing dependence, but it remains bound by China’s controlled information environment, which is an even greater constraint.

Unlike ChatGPT or Llama, which train on vast, diverse and uncensored global datasets, DeepSeek operates in the palm of the Buddha — the walled garden that is the Chinese government-approved information ecosystem.

While China’s AI models are technically impressive and perform brilliantly on technical or general questions, they are fundamentally limited by the data they can access, the responses they can generate and the narratives they are allowed to shape.

This is particularly so when it comes to freedom of expression and was illustrated by a small test conducted on 29 January 2025. DeepSeek was asked questions about the 1989 Tiananmen Square protests and massacre.

Image above: Screenshot and translation of DeepSeek test provided by author

In the test, DeepSeek was asked three questions, two in Chinese and one in English. It refused to answer the first and third question and evaded the second question.

ChatGPT, on the other hand, gave a thorough analysis of all three questions.

The test — among many other queries on sensitive topics — exposes the double bind facing Chinese AI: Can its large language model be truly world-class if it is constrained in what data it can ingest and what output it can generate? Can it be trustworthy if it fails the reliability test?

This is not merely a technical issue — it’s a political and philosophical dilemma.

In contrast to models like GPT-4, which can engage in free-form debate, DeepSeek operates within an internet space where sensitive topics must be avoided.

DeepSeek may have championed open-source large language models with its Chinese discourse of efficiency and ingenuity, but it remains imprisoned by a deeper limitation: data and regulatory constraints.

While its technical prowess lies in its reliance on and contribution to openness in code, it operates within an information ‘greenhouse’, where production of and access to critical and diverse datasets are ‘protected’. In other words, such datasets are restricted.

This is where the Monkey King metaphor comes full circle. Just as Wukong believed he had escaped, only to realise he was still inside the Buddha’s palm, DeepSeek appears to have achieved independence — yet remains firmly within the grip of the Chinese Communist Party.

It embodies the most radical spirit of AI transparency, yet it is fundamentally constrained in what it can see and say. No matter how powerful it becomes, it will struggle to evolve beyond the ideological limits imposed upon it.

The true disruption in generative AI is not technical; it is philosophical.

As we move toward generative AI agency and superintelligent AI, the debate might no longer be about finding our own place in the workforce or cognitive hierarchy, or whether large language models should be open or closed.

Instead, we could be asking: What kind of ‘intelligence’ do we need and — more importantly — who gets to decide?

Professor Haiqing Yu is a professor of media and communication and ARC Future Fellow at RMIT University. She is also a Chief Investigator with the ARC Centre of Excellence for Automated Decision-Making & Society. Professor Yu researches the sociopolitical and economic impact of China’s digital media, communication and culture on China, Australia and the Asia Pacific.

Originally published under Creative Commons by 360info™.

SEE ALSO

ADM+S researchers to collaborate on Data and Society’s new Climate, Technology, and Justice Program

Data and Society project announcement

ADM+S researchers to collaborate on Data and Society’s new Climate, Technology, and Justice Program

Author Data and Society 
Date 30 January 2025

Data & Society (D&S) today announced the launch of its Climate, Technology, and Justice program. Climate change is perhaps the most urgent social issue of our time and is only accelerating in importance. Already disproportionately impacting communities in the majority world, energy-intensive technologies like artificial intelligence only worsen the problem. Data & Society has spent a decade building an empirical research base on data-driven technologies, and fostering a network that is influencing how these technologies are studied and governed. The organization is uniquely well-positioned to examine the social and environmental repercussions of the expanded global infrastructures and labor practices needed to sustain the growth of digital technologies, from AI and blockchain to streaming and data storage.

The new program will be led by Tamara Kneese, who joined D&S in 2023 as senior researcher and project director of the Algorithmic Impact Methods Lab (AIMLab), and whose experience in human-centered technology and climate activism in the tech industry make her an ideal leader for this work. Joining her are two affiliates: Zane Griffin Talley Cooper, who studies data, resource extraction, and the Arctic; and Xiaowei R. Wang, whose body of multidisciplinary work, over the past 15 years, sits at the intersection of tech, digital media, art, and environmental justice.

Succeeding Kneese as AIMLab project director is D&S Senior Researcher Meg Young, whose leadership of the Lab’s participatory efforts and impact engagement since its 2023 launch has been key to its early successes. A champion for participatory methods in the AI impact space and for making technology more accountable to the public, Young has positioned AIMLab for the future through her work with communities across the country.

Before joining D&S, Kneese was lead researcher at Green Software Foundation (GSF), where she was part of the policy working group and the author of GSF’s first State of Green Software Report, which provided insight into the people and planet impacts of AI. Earlier, she was director of developer engagement on the green software team at Intel and assistant professor of media studies and director of gender and sexualities studies at the University of San Francisco. She and Young recently co-authored “Carbon Emissions in the Tailpipe of Generative AI” in the Harvard Data Science Review, offering an overview of the current state of measuring, regulating, and mitigating AI’s environmental impacts and underscoring that the real existential threat posed by AI is its impact on climate.

“While this program will first tackle the environmental impacts of AI, we have expansive visions of how D&S’s considerable skillset can help us understand the complex relationships between climate, the environment, climate change, technology, and justice — areas like e-waste and tech reuse, algorithmic disaster prediction, and low-carbon tech adoption, centering the experiences and voices of the communities most affected,” said Alice E. Marwick, Data & Society’s director of research. “I am thrilled about the new body of scholarship that we will develop under Tamara’s leadership.”

“I am very excited to begin to build an empirical research base that will demonstrate the impact that AI and other data-driven technologies are having on the environment and on communities,” Kneese said. “Most importantly, we are doing this work in partnership with other researchers, academics, and grassroots groups who are essential to our vision of being able to investigate how data-driven technologies shape the environment, and how communities participate in or resist these processes.”

The program begins its research with two related projects. The first, conducted in partnership with researchers at the University of Virginia School of Data Science, is an assessment of the environmental and social impacts of AI, going beyond quantitative measurements of energy, carbon, and water costs to include the human rights impacts of data centers and energy infrastructures on communities. The second is an ethnographic and historical study of the practices of measurement, resistance, contestation, and refusal that emerge within and alongside the tech industry, focusing on sustainability practitioners, tech worker activist groups, and grassroots community organizations that organize across the digital value chain to mitigate the environmental and labor implications of data-driven technologies. Both projects involve participatory workshops that center the perspectives and needs of impacted communities to ensure that policymakers understand the full spectrum of environmental impacts related to computing and its global supply chains and underlying infrastructures.

These projects are supported in part by the National Science Foundation under Grant No. 2427700 and the Internet Society Foundation’s Greening the Internet program. Data & Society believes this type of work is most successful when done in partnership with others. In addition to UVA, other current research collaborators include the ARC Centre of Excellence for Automated Decision-Making and Society, Athena Coalition, and Athena’s multi-state Data Center Working Group, in particular Green Web Foundation, Green Software Foundation, and UC Berkeley’s Human Rights Center.

SEE ALSO

Changing the narrative about regional women and technology

Report Cover: Improving digital inclusion for women in regional Victoria

Changing the narrative about regional women and technology

Author ADM+S Centre
Date 30 January 2025

A newly released evaluation report highlights the success of the Victorian Women’s Trust’s Rural Women Online program in addressing digital exclusion among women in regional Victoria.

Involving hands-on digital skills workshops on a range of topics, a help desk for one-on-one support, stands from local service providers, and keynotes from leading thinkers and writers on digital inclusion in Australia, Rural Women Online was delivered in August and September 2024 in Greater Shepparton and North East Victoria, following extensive community consultation. Hundreds of women from across regional Victoria participated in the program, gaining new skills and confidence, and forging new social connections and opportunities.

Key outcomes include:

  1. Boosting confidence: The program saw significant increases in participants’ confidence with digital technologies, with 43% reporting they felt more capable using digital tools and navigating online platforms. Workshops focused on practical skills like managing passwords, identifying scams, and safely using online services, helping participants overcome fears and avoid common pitfalls.
  2. Tailored support: Over half of participants sought personalised assistance from local mentors at the program’s help desks. Mentors were not necessarily ‘tech experts’ but were relatable and were happy to learn alongside participants, setting up a space for mutual empowerment for a range of often highly personal tasks.
  3. Strengths-based learning: By focusing on participants’ existing capabilities and reframing digital challenges as opportunities for empowerment, the program created a supportive environment. This approach empowered women to see themselves as capable digital users, shifting the narrative from vulnerability to resilience.
  4. Social connection: The program fostered a sense of community among participants, enabling them to share experiences and build networks for ongoing support. Informal workshops and “chat corners” encouraged open dialogue and connection, reducing feelings of isolation and promoting collaborative learning.

Rural women are among the most digitally excluded groups in Victoria, facing barriers like limited access to technology, low digital confidence, and a lack of locally relevant resources.

To address these challenges, Rural Women Online adopted a place-based approach, creating tailored learning environments that recognised and responded to the unique needs of each community.

The program was independently evaluated by a team of digital inclusion researchers at the ARC Centre of Excellence for Automated Decision-Making and Society, with the evaluation report detailing how place-based, community-driven programming boosted digital skills, confidence, and resilience for participants.

The evaluation also revealed that Rural Women Online effectively engaged participants from diverse cultural and economic backgrounds and different age groups. In Shepparton, where 44% of participants spoke a language other than English at home, the program provided sessions with local translators and culturally sensitive information about the online world. Meanwhile, sessions in Yackandandah in North East Victoria addressed disaster preparedness, reflecting local concerns in the region.

The program also supported older participants, with 66% of attendees aged 55 or older. eSafety sessions were particularly popular, with 79% of participants reporting they felt safer online after attending these sessions. For many, it was the first opportunity they had to learn collectively in a supportive environment.

Sustained impact

The program’s ripple effect is likely to extend beyond the workshops. Participants reported that they were keen to share their newfound knowledge with family and friends, helping to spread digital inclusion throughout their communities. The program also connected women with local resources and organisations for continued learning, connection and support.

In a keynote delivered as part of the program in Shepparton, ADM+S Director Distinguished Professor Julian Thomas noted the importance of programs like Rural Women Online in building digital inclusion in local communities: “Tackling [digital exclusion] in isolation can be debilitating and discouraging… The genius of the Rural Women Online program is recognising we can share the labour of learning, and that we often learn best from each other and in company”.

The success of Rural Women Online underscores the importance of listening to community needs and designing solutions that empower everyone to thrive in an increasingly digital world.

The full evaluation report is available here.

Learn more about the program in this video.

SEE ALSO

President Trump’s move to dismantle AI safety measures could have global impact

President Trump’s move to dismantle AI safety measures could have global impact

Author ADM+S Centre
Date 28 January 2025

On 20 January 2025, U.S. President Donald Trump revoked a 2023 executive order aimed at regulating artificial intelligence (AI), prioritising innovation over regulation of the rapidly advancing technology.

The executive order, signed by former President Joe Biden, required AI companies to submit safety testing data to federal authorities, with the aim of establishing safety standards around AI development.

ADM+S Affiliate and Director of the Centre for AI and Digital Ethics at the University of Melbourne Prof Jeannie Paterson joined ABC radio last week to discuss some of the implications of this decision.

“It’s definitely a statement that the guardrails have come off for the development on AI,” she explains.

“The Executive order said that anybody who was releasing AI to be used with government had to put in place safeguards to prevent bias, to protect privacy, to reduce error, and to keep it cyber secure.

“Those requirements aren’t there anymore, so it’s hard to say what sort of safety measures will be reduced.”

Trump’s decision to revoke the order comes amid an escalating global race for AI supremacy, and coincides with a decision to invest $800 billion to speed up AI development.

It marks a significant shift in the US government’s approach to AI oversight, and contrasts sharply with the approach of other nations.

Prof Paterson explains, “Australia is quite a small player here. We’ve made some steps, we’ve got some AI safety standards of our own in place that are very aligned with what’s happening in Europe, Canada, and indeed Singapore.

“What I’d expect to see is Australia continue down that path, and perhaps make some allegiances with those countries, so we’ve got that alliance of other countries also making those demands.”

She concludes, “It will be interesting to see how the competitive pressures go for those big tech companies that still want to sell to other jurisdictions.”

Listen on ABC.

SEE ALSO

Don’t rely on social media users for fact-checking. Many don’t care much about the common good.

AI Generated Image: Hands on phones

Don’t rely on social media users for fact-checking. Many don’t care much about the common good.

Author Mark Andrejevic
Date 20 January 2025

In the wake of Donald Trump’s election victory, Meta chief executive Mark Zuckerberg fired the fact-checking team for his company’s social media platforms. At the same time, he reversed Facebook’s turn away from political content.

The decision is widely viewed as placating an incoming president with a known penchant for mangling the truth.

Meta will replace its fact-checkers with the “community notes” model used by X, the platform owned by avid Trump supporter Elon Musk. This model relies on users to add corrections to false or misleading posts.

Musk has described this model as “citizen journalism, where you hear from the people. It’s by the people, for the people.”

For such an approach to work, both citizen journalists and their readers need to value good-faith deliberation, accuracy and accountability. But our new research shows social media users may not be the best crowd to source in this regard.

Our research

Working with Essential Media, our team wanted to know what social media users think of common civic values.

After reviewing existing research on social cohesion and political polarisation and conducting ten focus groups, we compiled a civic values scale. It aims to measure levels of trust in media institutions and the government, as well as people’s openness to considering perspectives that challenge their own.

We then conducted a large-scale survey of 2,046 Australians. We asked people how strongly they believed in a common public interest. We also asked about how important they thought it was for Australians to inform themselves about political issues and for schools to teach civics.

Importantly, we asked them where they got their news: social media, commercial television, commercial radio, newspapers or non-commercial media.

What did we find?

We found people who rely on social media for news score significantly lower on a civic values scale than those who rely on newspapers and non-commercial broadcasters such as the ABC.

By contrast, people who rely on non-commercial radio scored highest on the civic values scale. They scored 11% higher than those who rely mainly on social media and 12% higher than those who rely on commercial television.

The lowest score was for people who rely primarily on commercial radio.



People who relied on newspapers, online news aggregators, and non-commercial TV all scored significantly higher than those who relied on social media and commercial broadcasting.

The survey also found that as the number of different media sources people use daily increased, so too did their civic values score.
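
For readers curious how such comparisons are made, the sketch below uses invented numbers to show the basic steps: average a respondent’s answers across the scale items, then compare group means by main news source. The figures are made up and do not reproduce the study’s results.

```python
# A minimal sketch, with invented data, of comparing a scale score across
# groups defined by main news source. Numbers are illustrative only.
import pandas as pd

survey = pd.DataFrame({
    "main_news_source": ["social media", "non-commercial radio", "commercial tv",
                         "newspapers", "social media", "non-commercial radio"],
    # each respondent's answers to three scale items (e.g. 1-5 agreement)
    "item_1": [2, 4, 3, 4, 3, 5],
    "item_2": [3, 5, 2, 4, 2, 4],
    "item_3": [2, 4, 3, 5, 3, 4],
})

survey["civic_values_score"] = survey[["item_1", "item_2", "item_3"]].mean(axis=1)
group_means = survey.groupby("main_news_source")["civic_values_score"].mean()
print(group_means.sort_values(ascending=False))

# Differences can then be expressed in relative terms, e.g. radio vs social media.
diff = group_means["non-commercial radio"] / group_means["social media"] - 1
print(f"Non-commercial radio group scores {diff:.0%} higher than the social media group")
```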

This research does not indicate whether platforms foster lower civic values or simply cater to them.

But it does raise concerns about social media becoming an increasingly important source of political information in democratic societies like Australia.

Why measure values?

The point of the civic values scale we developed is to highlight that the values people bring to news about the world are as important as the news content itself.

For example, most people in the United States have likely heard about the violence of the attack on the Capitol by protesters contesting Trump’s 2020 election loss.

That Trump and his supporters can recast this violent riot as “a day of love” is not the result of a lack of information.

It is, rather, a symptom of people’s lack of trust in media and government institutions and their unwillingness to confront facts that challenge their views.

In other words, it is not enough to provide people with accurate information. What counts is the mindset they bring to that information.

No place for debate

Critics have long been concerned that social media platforms do not serve democracy well, privileging sensationalism and virality over thoughtful and accurate posts. As the critical theorist Judith Butler put it:

the quickness of social media allows for forms of vitriol that do not exactly support thoughtful debate.

Sociologist Zeynep Tufekci said social media is less about meaningful engagement than bonding with like-minded people and mocking perceived opponents. She notes, “belonging is stronger than facts”.

Her observation is likely familiar to anyone who has tried to engage in a politically charged discussion on social media.

These criticisms are commonplace in discussions of social media but have not been systematically tested until now.

Social media platforms are not designed to foster democracy. Their business model is based on encouraging people to see themselves as brands competing for attention, rather than as citizens engaged in meaningful deliberation.

This is not a recipe for responsible fact-checking. Or for encouraging users to care much about it.

Platforms want to wash their hands of the fact-checking process, because it is politically fraught. Their owners claim they want to encourage the free flow of information.

However, their fingers are on the scale. The algorithms they craft play a central role in deciding which forms of expression make it into our feeds and which do not.

It’s disingenuous for them to abdicate responsibility for the content they choose to pump into people’s news feeds, especially when they have systematically created a civically challenged media environment.


The author would like to acknowledge Associate Professor Zala Volcic, Research Fellow Isabella Mahoney and Research Assistant Fae Gehren for their work on the research on which this article is based.

Mark Andrejevic, Professor of Media, School of Media, Film, and Journalism, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S Partner Investigator receives a Presidential Early Career Award for Scientists and Engineers from the U.S. Government

Julia Stoyanovich
ADM+S PI Julia Stoyanovich receives Presidential Early Career Award for Scientists and Engineers

ADM+S Partner Investigator receives a Presidential Early Career Award for Scientists and Engineers from the U.S. Government

Author Natalie Campbell
Date 17 January 2025

Congratulations to ADM+S Partner Investigator Assoc Prof Julia Stoyanovich from New York University, a 2025 recipient of a Presidential Early Career Award for Scientists and Engineers in the United States.

Assoc Prof Stoyanovich is amongst nearly 400 scientists and engineers who received the award on 15 January, the highest honour bestowed by the U.S. government on outstanding scientists and engineers early in their careers.

“I am immensely grateful to my mentors and long-term collaborators, students, and postdocs for making this possible.

“And I am thrilled to be able to call New York University my home, where all doors are open and the sky is the limit,” said Assoc Prof Stoyanovich.

Announced via the White House website, the media release reads, “Established by President Clinton in 1996, PECASE recognizes scientists and engineers who show exceptional potential for leadership early in their research careers.

“The award recognizes innovative and far-reaching developments in science and technology, expands awareness of careers in science and engineering, recognizes the scientific missions of participating agencies, enhances connections between research and impacts on society, and highlights the importance of science and technology for our nation’s future.”

Julia is an Associate Professor at New York University in the Department of Computer Science and Engineering at the Tandon School of Engineering, and the Center for Data Science.

Her research focuses on responsible data management and analysis practices: on operationalizing fairness, diversity, transparency, and data protection in all stages of the data acquisition and processing lifecycle.

Learn more about Julia’s work.   

SEE ALSO

Meta is abandoning fact checking – this doesn’t bode well for the fight against misinformation

Image credit: David Paul Morris/Bloomberg via Getty Images
Image credit: David Paul Morris/Bloomberg via Getty Images

Meta is abandoning fact checking – this doesn’t bode well for the fight against misinformation

Authors  Ned Watt, Michelle Riedlinger and Silvia Montaña-Niño
Date 8 January 2025

Meta has announced it will abandon its fact-checking program, starting in the United States. It was aimed at preventing the spread of online lies among more than 3 billion people who use Meta’s social media platforms, including Facebook, Instagram and Threads.

In a video, the company’s chief, Mark Zuckerberg, said fact checking had led to “too much censorship”.

He added it was time for Meta “to get back to our roots around free expression”, especially following the recent presidential election in the US, which Zuckerberg characterised as a “cultural tipping point, towards once again prioritising speech”.

Instead of relying on professional fact checkers to moderate content, the tech giant will now adopt a “community notes” model, similar to the one used by X.

This model relies on other social media users to add context or caveats to a post. Its effectiveness on X is currently under investigation by the European Union.
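To make that mechanism concrete, the sketch below shows one way a community-notes-style display rule could work. This is a hypothetical illustration only, not X’s or Meta’s actual algorithm (X’s system uses a more elaborate bridging-based scoring model over patterns of rater agreement); the function name, thresholds and the notion of “rater clusters” are assumptions made for the example.

```python
# Hypothetical sketch of a community-notes-style display rule.
# NOT X's or Meta's actual algorithm; real systems score notes using
# more sophisticated bridging-based models of rater agreement.

from dataclasses import dataclass

@dataclass
class Rating:
    rater_id: str
    helpful: bool
    rater_cluster: str  # assumed coarse grouping of the rater's past voting behaviour

def should_display_note(ratings, min_ratings=5, min_helpful_share=0.7, min_clusters=2):
    """Attach the note to a post only if enough raters, drawn from at least
    `min_clusters` different rating clusters, found it helpful."""
    if len(ratings) < min_ratings:
        return False
    helpful = [r for r in ratings if r.helpful]
    share = len(helpful) / len(ratings)
    diverse = len({r.rater_cluster for r in helpful}) >= min_clusters
    return share >= min_helpful_share and diverse

# Example: ten enthusiastic raters from a single cluster are not enough,
# because the rule also demands agreement across different clusters.
ratings = [Rating(f"user{i}", True, "cluster_a") for i in range(10)]
print(should_display_note(ratings))  # False: helpful, but not diverse
```

The design choice the sketch illustrates is that raw popularity alone does not trigger display; agreement across raters who normally disagree does, which is also why the rate at which such notes actually appear matters for how well the model curbs misinformation.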

This dramatic shift by Meta does not bode well for the fight against the spread of misinformation and disinformation online.

Independent assessment

Meta launched its independent, third-party, fact-checking program in 2016.

It did so during a period of heightened concern about information integrity, coinciding with the election of Donald Trump as US president and a furore about the role of social media platforms in spreading misinformation and disinformation.

As part of the program, Meta funded fact-checking partners – such as Reuters Fact Check, Australian Associated Press, Agence France-Presse and PolitiFact – to independently assess the validity of problematic content posted on its platforms.

Warning labels were then attached to any content deemed to be inaccurate or misleading. This helped users to be better informed about the content they were seeing online.

A backbone to global efforts to fight misinformation

Zuckerberg claimed Meta’s fact-checking program did not successfully address misinformation on the company’s platforms, stifled free speech and led to widespread censorship.

But the head of the International Fact-Checking Network, Angie Drobnic Holan, disputes this. In a statement reacting to Meta’s decision, she said:

Fact-checking journalism has never censored or removed posts; it’s added information and context to controversial claims, and it’s debunked hoax content and conspiracy theories. The fact-checkers used by Meta follow a Code of Principles requiring nonpartisanship and transparency.

A large body of evidence supports Holan’s position.

In 2023 in Australia alone, Meta displayed warnings on over 9.2 million distinct pieces of content on Facebook (posts, images and videos), and over 510,000 posts on Instagram, including reshares. These warnings were based on articles written by Meta’s third-party, fact-checking partners.

Numerous studies have demonstrated that these kinds of warnings effectively slow the spread of misinformation.

Meta’s fact-checking policies also required its partner fact-checking organisations to avoid debunking content and opinions from political actors and celebrities, and to avoid debunking political advertising.

Fact checkers could still verify claims from political actors and post that content on their own websites and social media accounts. However, this fact-checked content was not subject to reduced circulation or censorship on Meta’s platforms.

The COVID pandemic demonstrated the usefulness of independent fact checking on Facebook. Fact checkers helped curb much harmful misinformation and disinformation about the virus and the effectiveness of vaccines.

Importantly, Meta’s fact-checking program also served as a backbone to global efforts to fight misinformation on other social media platforms. It facilitated financial support for up to 90 accredited fact-checking organisations around the world.

What impact will Meta’s changes have on misinformation online?

Replacing independent, third-party fact checking with a “community notes” model of content moderation is likely to hamper the fight against misinformation and disinformation online.

Last year, for example, reports from The Washington Post and the Center for Countering Digital Hate in the US found that X’s community notes feature was failing to stem the flow of lies on the platform.

Meta’s turn away from fact checking will also create major financial problems for third-party, independent fact checkers.

The tech giant has long been a dominant source of funding for many fact checkers. And it has often incentivised fact checkers to verify certain kinds of claims.

Meta’s announcement will now likely force these independent fact checkers to move away from strings-attached arrangements with private companies as they pursue their mission of improving public discourse by addressing online claims.

Yet, without Meta’s funding, they will likely be hampered in their efforts to counter attempts by other actors to weaponise fact checking. For example, Russian President Vladimir Putin recently announced the establishment of a state fact-checking network following “Russian values”, in stark contrast to the International Fact-Checking Network’s code of principles.

This makes independent, third-party fact checking even more necessary. But clearly, Meta doesn’t agree.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO