‘Manners for machines’: how new rules could stop AI scrapers destroying the internet

graphic with pink and yellow saying "cc signals"
T.J. Thomson, CC BY-NC

‘Manners for machines’: how new rules could stop AI scrapers destroying the internet

Authors  T.J. Thomson, Daniel Angus, Jake Goldenfein and Kylie Pappalardo
Date 26 March 2026

Australians are among the most anxious in the world about artificial intelligence (AI).This anxiety is driven by fears AI is used to spread misinformation and scam people, anxiety over job losses, and the fact AI companies are training their models on others’ expertise and creative works without compensation.

AI companies have used pirated books and articles, and routinely send bots across the web to systematically scrape content for their models to learn from. That content may come from social media platforms such as Reddit, university repositories of academic work, and authoritative publications like news outlets.

In the past, online scraping was subject to a kind of detente. Although scraping may sometimes have been technically illegal, it was needed to make the internet work. For instance, without scraping there would be no Google. Website owners were OK with scraping because it made their content more available, according with the vision of the “open web”.

Under these conditions, scraping was managed through principles such as respect, recognition, and reciprocity. In the context of AI, those are now faltering.

A new online landscape

Many news outlets are now blocking web scrapers. Creators are choosing not to use certain platforms or are posting less.

Barriers are being put in place across the open web. When only some can afford to pay to access news and information, then democracy, scientific innovation and creative communities are all harmed.

Exceptions to copyright infringement, such as fair dealing for research or study, were legislated long before generative AI became publicly available. These exceptions are no longer fit for purpose in an AI age.

The Australian government has ruled out a new copyright exception for text and data mining. This signals a commitment to supporting Australia’s creative industries, but leaves great uncertainty about how creative content can be managed legally and at scale now that AI companies are crawling the web.

In response, the international nonprofit Creative Commons has proposed a new voluntary framework: CC Signals.

Creative Commons licences allow creators to share content and specify how it can be used. All licences require credit to acknowledge the source, but various additional restrictions can be applied. Creators can ask others not to modify their work, or not to use it for commercial purposes. For example, The Conversation’s articles are available for reuse under a CC BY-ND licence, which means they must be credited to the source and must not be remixed, transformed, or built upon.


Summary of CC licences.
Creative Commons

How would CC Signals work?

The proposed CC Signals framework lets creators decide if or how they want their material to be used by machines. It aims to strike a balance between responsible AI use and not stifling innovation, and is based on the principles of consent, compensation, and credit.

Simplistically, CC Signals work by allowing a “declaring party” – such as a news website – to attach machine-readable instructions to a body of content. These instructions specify what combinations of machine uses are permitted, and under what conditions.

CC Signals are standardised, and both humans and machines can understand them.

This proposal arrives at a moment that closely mirrors the early days of the web, when norms around automated access (crawling and scraping) were still being worked out in practice rather than law.

A useful historical parallel is robots.txt, a simple file web hosts use to signal which parts of a site can be accessed by the bots that crawl the web and look for content. It was never enforceable, but it became widely adopted because it provided a clear, standardised way to communicate expectations between content hosts and developers.

CC Signals could operate in much the same spirit. But, as with any system, it has potential benefits as well as drawbacks.

The pros

The framework provides more nuance and flexibility than the current scrape/don’t scrape environment we’re in. It offers creators more control over the use of their content.

It also has the potential to affect how much high-quality content is available for scraping. Without access to high-quality data, AI’s biases are exacerbated and make the technology less useful.

The framework might also benefit smaller players who don’t have the bargaining power to negotiate with big tech companies but who, nonetheless, desire remuneration, credit, or visibility for their work.

The cons

The greatest challenge with CC Signals is likely to be a practical one – how to calculate, and then enforce, the monetary or in-kind support required by some of the signals.

This is also a major sticking point with content industry proposals for collective licensing schemes for AI. Calculating and distributing licence fees for the thousands, if not millions, of internet works that are accessed by generative AI systems around the world is a logistical nightmare.

Creative Commons has said it plans to produce best-practice guides for how to make contributions and give credit under the CC Signals. But this work is still in progress.

Where to from here?

Creative Commons asserts that the CC Signals framework is not so much a legal tool as an attempt to define “manners for machines”. Manners is a good way to look at this.

The legal and practical hurdles to implementing effective copyright management for AI systems are huge. But we should be open to new ideas and frameworks that foreground respect and recognition for creators without shutting down important technological developments.

CC Signals is an imperfect framework, but it is a start. Hopefully there are more to come.The Conversation

T.J. Thomson, Associate Professor of Visual Communication & Digital Media, RMIT University; Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology; Jake Goldenfein, Associate Professor, Melbourne Law School, The University of Melbourne, and Kylie Pappalardo, Associate Professor, School of Law, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Australia may ban infant formula advertising. Here’s what the online ads actually say

Baby drinking formula
Han Nguyen/Pexels

Australia may ban infant formula advertising. Here’s what the online ads actually say

Authors Madeleine Stirling , Christine Parker and Daniel Angus 
Date 12 March 2026

Recently, the federal government released a consultation paper seeking input on whether it should introduce legislation to prevent or restrict infant formula marketing in Australia. The consultation is open for submissions until April 10.

Until February 2025, Australian formula brands were under a voluntary agreement not to advertise formula products for babies aged 0 to 12 months, in order to support and protect breastfeeding.

With recent data revealing lower-than-desired rates of breastfeeding in Australia, the government has chosen not to renew the voluntary arrangement and is exploring tougher measures.

These moves don’t explicitly promote breastfeeding. Rather, they aim to curtail marketing practices that position formula as an equivalent or preferable alternative.

Our analysis of online formula ads targeting parents in Australia reveals how companies prey on parents’ anxiety – and the problems with having a voluntary agreement.

What’s wrong with advertising formula?

Breastfeeding has extensive health benefits for both mother and child. These include protection against gastrointestinal and respiratory infections for newborns, reduced risk of obesity and type 2 diabetes later in life, and reduced risk of mothers developing ovarian and breast cancer.

Because of this, Australian guidelines recommend exclusive breastfeeding for the first six months. The World Health Organization recommends continued breastfeeding for the first two years.

However, while breastfeeding rates are high at birth in Australia, they quickly drop. Only 37% of babies were reported to have been exclusively breastfed by six months in 2022.

There are various reasons why mothers choose not to breastfeed, but the advertising of formula products is a concern. It’s been shown to confuse parents about the nutritional benefits of formula versus breastmilk, reduce breastfeeding initiation and duration, and present formula as a more favourable solution in the face of breastfeeding challenges (many of which can be overcome with the right support).

Formula is valuable. It’s often an essential option for those unable to breastfeed. However, it’s also expensive and can financially strain families, particularly during the first year of a child’s life.

Online advertising also operates very differently from traditional ads. Online, ads target people based on their searches, browsing histories or life events. They can reach new or expecting parents precisely when they might be most uncertain or vulnerable to suggestion.

What do the ads for infant formula say?

The ADM+S Australian Ad Observatory, which we and our colleagues run, collects data on the ads Australians encounter online to better understand how digital advertising systems operate.

In 2022 we collected ads from 1,200 Australian adults who voluntarily installed a plug-in on their browser to scrape ads while they were scrolling Facebook. From 2025 we’ve been collecting ads from around 300 Australians. They use an app to share the ads that appear while they scroll Facebook, Instagram, TikTok and YouTube on their phones.

Screenshots of various formula ads collected by the Australian Ad Observatory.
Supplied

For this analysis, we examined ads collected in both years, and identified a total of 158 ads promoting formula products from local and international brands.

We found brands used various tactics to appeal to parents. Some highlighted positive customer reviews or offered free downloadable cookbooks and “house baby proofing” guides.

Other ads were in partnership with prominent retailers, directing people to online shopping interfaces through “buy now” buttons.

Most formula brands made some kind of claim regarding the nutritional or behavioural benefits of their products. These claims prey on the anxiety parents commonly feel to ensure their children are meeting nutritional, sleep and developmental milestones.

Some manufacturers claimed their product was fortified with vitamins and prebiotics that would “improve gut health” or help a toddler sleep longer at night.

Others claimed their formula would provide mothers with “a moment of calm” or strengthen their toddler’s immune system. This is despite scientific evidence that shows breastmilk can provide necessary antibodies to a sick child in real time.

Starting them young

Many of the ads used pictures of very young toddlers who could easily be mistaken for infants aged 12 months or under. In one instance we discovered an ad clearly promoting formula designed for babies under 12 months.

This, alongside the use of images of very young children to promote “toddler milk” (formula marketed for children aged 1–3 years), highlights some of the issues with a voluntary advertising agreement.

Since toddler milk marketing was exempt, brands could target parents of newborns. They’d gain brand awareness and consumer trust, which could then result in a parent choosing to start their child on formula instead – or earlier than they otherwise would.

Enforcement has also been an issue. The consequences for breaching the agreement – publishing the breach on the Department of Health website – are not considered meaningful enough by the Australian Competition and Consumer Commission.

At the same time, the digital advertising environment provides very little visibility into what marketing is actually circulating or who is exposed to it.

Outside of specialised research tools, such as our Ad Observatory and the Australian Internet Observatory, there’s no systematic way to observe infant formula ads that appear on personalised social media feeds.

What might the government end up doing about it?

The government is considering the following options:

  1. keep the status quo – no regulation
  2. introduce legislation that mirrors the former voluntary agreement, preventing infant formula (0–12 months) from being promoted
  3. introduce legislation that also limits toddler milk marketing (1–3 years).

We’ve provided all our data to the government to aid the decision-making process. However, while the ads we found are a peek behind the curtain, they likely underrepresent the scale of formula marketing happening online.

Infant formula can be an essential and sometimes life-saving intervention for families who need it. But health interventions don’t depend on persuasive advertising to fulfil their purpose.

The real policy question is whether a product designed to support infants should be promoted through the same marketing systems that sell snack foods, cosmetics and financial products.


Acknowlegement: The Australian Ad Observatory is a team effort. The authors wish to acknowledge the contribution of Khanh Luong, Giselle Newton, Phoebe Price-Barker, Lara Skinner, Abdul Obeid and Dan Tran.The Conversation

Madeleine Stirling, Research Assistant, ARC Centre of Excellence for Automated Decision-Making & Society, The University of Melbourne; Christine Parker, Professor of Law, The University of Melbourne, and Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Are Google’s ‘preferred sources’ a good thing for online news?

A website tab with text "choose your preferred sources"
Image: T.J. Thomson

Are Google’s ‘preferred sources’ a good thing for online news?

Author T.J. Thomson and Aimee Hourigan
Date 5 March 2026

Why do you see the results you do when you search for information online? It’s a complex mix of what the source is, its relationships to other sources online, and your own past browsing history and device settings.

But this formula is changing. Rather than being passively served content that search engines decide is most relevant (or businesses have paid to have promoted), some big tech platforms have started providing users more control over what they see online.

Earlier this year, Google launched the Preferred Sources feature in Australia and New Zealand. Through it, users can select organisations that are “preferred” and whose content they’d like to see more of in relevant search results.

In response, a raft of organisations, from news outlets to big banks, have started inviting their audiences and customers to choose them, with instructions on how to use this feature. News outlets such as the ABC, News.com.au, RNZ and The Conversation have all done so, among many others.

If you decide to use this new feature, there are potential benefits – but there can be unintended outcomes as well.

Where do you get your news?

In Australia, more adults say they get news from social media (26%) than from online news websites (23%). This means that a feature like “preferred sources” might influence readers who get their news from search engines. But it won’t affect users who primarily get their news from social media apps.

Trading phones with someone and looking at their browsing history or recommended YouTube videos reveals just how much personalisation influences what we see online.

Big tech companies are known to harvest large amounts of data, making money in an attention economy from audience engagement. They also make money from knowing more about their users so they can sell this information to advertisers.

Much of the internet is governed by invisible algorithms – hidden rules dictating who sees what, for which reasons. Algorithms often prioritise content that is engaging and sensational, which is one reason why misinformation can flourish online.

As helpful as it can be to get recommendations of products to buy or Netflix shows to watch, based on your history, when it comes to voting and politics, recommendations become much more fraught.

Our own research has shown people’s online news and information environments are fragmented, complex, opaque, chaotic and polluted, and that users desire more control over what they see. But what are the potential impacts of this?

More control is good

At face value, more control over what we see online is a positive and empowering thing.

This rebalances the equation from the loudest, most popular, or wealthiest voices – or ones that manipulate algorithms the most – to the ones users are actually interested in hearing from.

It potentially also helps with cognitive overload. Rather than having to spend the time and mental energy to decide on a case-by-case basis whether each source you encounter is trustworthy, making this decision once for particular news brands or organisations can make engaging with search results more relevant and efficient.

But a lack of balance is risky

However, the voices people want to hear from aren’t necessarily the ones that are best for them. As with any choice, you need a level of maturity and critical thinking to act responsibly.

As data companies, search engines benefit from knowing ever more information about user behaviour and preferences. Knowing which media outlet you prefer may in some cases indicate your political party preferences. Knowing that you prefer sports news over celebrity news can help companies target you with advertising more effectively.

In addition, more choice could potentially affect the diversity of people’s media diets. Just like with food diets, if people rely too much on low-quality media, over time that may affect their opinions, attitudes and behaviours. This has important implications for democracies that rely on well-informed and engaged citizens to cast votes.

There’s also a risk in conflating news sources with other types of sources. Journalists at news organisations are often held accountable to professional codes of conduct that, for example, aim to prevent reporters from personally benefiting from their reporting.

In theory, this allows audiences to receive independent analysis on important topics with confidence that the source has fact-checked claims and doesn’t have a vested interest in the reporting.

But if you select a business – such as the blog of a hardware store or a bank – as a source, you don’t have those same guarantees around editorial codes of conduct and professional ethics.

Should you use this feature?

Overall, allowing users more control over what they see is a good thing. But appropriate governance and regulation – possibly championed by Australia’s Digital Platform Regulators Forum – is needed to ensure people’s privacy and that their source preferences aren’t unfairly monetised.

Being more involved in your media diet is a positive step, as is thinking about its balance and diversity.

Ensuring a mix of sources across types (think local, regional, national, and international) and varieties (political, social, sports, entertainment news, and so on) can lead to a better balance.

Also think about whether the sources you are relying on are based on opinions or on facts. Doing this and actively creating a high-quality media diet is better for you and for others in your community.The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

I studied 10 years of Instagram posts. Here’s how social media has changed

A man taking a selfie on an iPhone
Antoine Beauvillain/Unsplash

I studied 10 years of Instagram posts. Here’s how social media has changed

Author T.J. Thomson
Date February 4 2026

Instagram is one of Australia’s most popular social media platforms. Almost two in three Aussies have an account.

Ushering in 2026 and what he calls “synthetic everything” on our feeds, Head of Instagram Adam Mosseri has signalled the platform will likely adjust its algorithms to surface more original content instead of AI slop.

Finding ways to tackle widespread AI content is the latest in a long series of shifts Instagram has undergone over the past decade. Some are obvious and others are more subtle. But all affect user experience and behaviour, and, more broadly, how we see and understand the online social world.

To identify some of these patterns, I examined ten years’ worth of Instagram posts from a single account (@australianassociatedpress) for an upcoming study.

This involved looking at nearly 2,000 posts and more than 5,000 media assets. I selected the AAP account as an example of a noteworthy Australian account with public service value.

I found six key shifts over this timeframe. Although user practices vary, this analysis provides a glimpse into some larger ways the AAP account – and social media more broadly – has been changing in the past decade.

Reflecting on some of these changes also provides hints at how social media might change in the future, and what that means for society.

1. Media orientations have shifted

When it launched in 2010, Instagram quickly became known as the platform that re-popularised the square image format. Square photography has been around for more than 100 years but its popularity waned in the 1980s when newer cameras made the non-square rectangular format dominant.

Instagram forced users to post square images for the platform’s first five years. However, the balance between square and horizontal images has given way to vertical media over time.

On the AAP account that shift happened over the last two years, with 84.4% of all its posts now in vertical orientation.

A chart shows the mix of media types by orientation that were posted to the AAP's Instagram account between 2015 and 2025.
The use of media in vertical orientation spiked on the AAP Instagram account in 2025.
T.J. Thomson

2. Media types have changed

As with orientations, the media types being posted have also changed. This is due, in part, to platform affordances: what the platform allows or enables a user to do.

As an example, Instagram didn’t allow users to post videos until 2013, three years after the platform started. It added the option to post “stories” (short-lived image/video posts of up to 15 seconds) and live broadcasts in 2016. Reels (longer-lasting videos of up to 90 seconds) came later in 2020.

Some accounts are more video-heavy than others, to try to compete with other video-heavy platforms such as YouTube and TikTok. But we can see a larger trend in the shift from single-image posts to multi-asset posts. Instagram calls these “carousels”, a feature introduced in 2017.

The AAP went from publishing just single-image posts in the first years of the account to gradually using more carousels. In the most recent year, they accounted for 85.9% of all posts.

A graph shows the different types of media posts published on the AAP's Instagram account between 2015 and 2025.
Following the introduction of carousel posts on Instagram in 2017, the AAP account’s use of them peaked in 2025 with 85.9% of all posts.
T.J. Thomson

3. Media are becoming more multimodal

A typical Instagram account grid from the mid-2000s had a mix of carefully curated photographs that were clean, colourful and simple in composition.

Fast-forward a decade, and posts have become much more multimodal. Text is being overlaid on images and videos and the compositions are mixing media types more frequently.

A grid of 15 Instagram posts show colourful photos, engaging use of light, and strategic use of camera settings to capture motion.
A snapshot of an Instagram account’s grid from late 2015 and early 2016 showed colourful photos, engaging use of light, and strategic use of camera settings to capture motion.
@australianassociatedpress

There are subtitles on videos, labels on photos, quote cards, and “headline” posts that try to tell a mini story on the post itself without the user having to read the accompanying post description.

On the AAP account, the proportion of text on posts never rose above 10% between 2015 and 2024. Then, in 2025, it skyrocketed to being on 84.4% of its posts.

A grid of 15 Instagram posts show text overlaid on many of the photos or text-only carousel posts.
In 2025, posts on Instagram had become much more multimodal. Instead of just one single photo, the use of carousel posts is much more common, as is the overlaying of words onto images and videos.

@australianassociatedpress

4. User practices change

Over time, user practices have also changed in response to cultural trends and changes of the platform design itself.

An example of this is social media accounts starting to insert hashtags in a post comment rather than directly in the post description. This is supposed to help the post’s algorithmic ranking.

A screenshot of an Instagram post shows a series of related hashtags in a comment.
Many social media users have started putting hashtags in a comment rather than including them in the post description.
@australianassociatedpress

Another key change over this timeframe was Instagram’s decision in 2019 to hide “likes” on posts. The thinking behind this decision was to try to reduce the pressure on account owners to make content that was driven by the number of “like” interactions a post received. It was also hypothesised to help with users’ mental health.

In 2021, Instagram left it up to users to decide whether to show or hide “likes” on their account’s posts.

5. The platform became more commercialised

Instagram introduced a Shop tab in 2020 – users could now buy things without leaving the app.

The number of ads, sponsored posts, and suggested accounts has increased over time. Looking through your own feed, you might find that one-third to one-half of the content you now encounter was paid for.

6. The user experience shifts with algorithms and AI

Instagram introduced its “ranked feed” back in 2016. This meant that rather than seeing content in reverse chronological order, users would see content that an algorithm thought users would be interested in. These algorithms consider aspects such as account owner behaviour (view time, “likes”, comments) and what other users find engaging.

An option to opt back in to a reverse chronological feed was then introduced in 2022.

Screenshot of the Instagram interface where a friend has sent a message describing shenanigans at a tram stop.
Example of a direct message transformed into AI images with the feature on Instagram.
T.J. Thomson

To compete with apps such as Snapchat, Instagram introduced augmented reality effects on the platform in 2017.

It also introduced AI-powered search in 2023, and has experimented with AI-powered profiles and other features. One of these is turning the content of a direct message into an AI image.

Looking ahead

Overall, we see more convergence and homogenisation.

Social media platforms are looking more similar as they seek to replicate the features of competitors. Media formats are looking more similar as the design of smartphones and software favour vertical media. Compositions are looking more multimodal as type, audio, still imagery, and video are increasingly mixed.

And, with the corresponding rise of AI-generated content, users’ hunger for authenticity might grow even more.The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

OpenClaw and Moltbook: why a DIY AI agent and social media for bots feel so new (but really aren’t)

An iPhone displaying Clawdbot app

OpenClaw and Moltbook: why a DIY AI agent and social media for bots feel so new (but really aren’t)

Author Daniel Binns
Date February 3 2026

If you’re following AI on social media, even lightly, you will likely have come across OpenClaw. If not, you will have heard one of its previous names, Clawdbot or Moltbot.

Despite its technical limitations, this tool has seen adoption at remarkable speeds, drawn its share of notoriety, and spawned a fascinating “social media for AI” platform called Moltbook, among other unexpected developments. But what on Earth is it?

What is OpenClaw?

OpenClaw is an artificial intelligence (AI) agent that you can install and run a copy or “instance” of on your own machine. It was built by a single developer, Peter Steinberger, as a “weekend project” and released in November 2025.

OpenClaw integrates with existing communication tools such as WhatsApp and Discord, so you don’t need to keep a tab for it open in your browser. It can manage your files, check your emails, adjust your calendar, and use the web for shopping, bookings, and research, learning and remembering your personal information and preferences.

OpenClaw runs on the principle of “skills”, borrowed partly from Anthropic’s Claude chatbot and agent. Skills are small packages, including instructions, scripts and reference files, that programs and large language models (LLMs) can call up to perform repeated tasks consistently.

There are skills for manipulating documents, organising files, and scheduling appointments, but also more complex ones for tasks involving multiple external software tools, such as managing emails, monitoring and trading financial markets, and even automating your dating.

Why is it controversial?

OpenClaw has drawn some infamy. Its original name was Clawd, a play on Anthropic’s Claude. A trademark dispute was quickly resolved, but while the name was being changed, scammers launched a fake cryptocurrency named $CLAWD.

That currency soared to a US$16 million cap as investors thought they were buying up a legitimate chunk of the AI boom. But developer Steinberger tweeted it was a scam: he would “never do a coin”. The price tanked, investors lost capital, scammers banked millions.

Observers also found vulnerabilities within the tool itself. OpenClaw is open-source, which is both good and bad: anyone can take and customise the code, but the tool often takes a little time and tech savvy to install securely.

Without a few small tweaks, OpenClaw exposes systems to public access. Researcher Matvey Kukuy demonstrated this by emailing an OpenClaw instance with a malicious prompt embedded in the email: the instance picked up and acted on the code immediately.

Despite these issues, the project survives. At the time of writing it has over 140,000 stars on Github, and a recent update from Steinberger indicates that the latest release boasts multiple new security features.

Assistants, agents, and AI

The notion of a virtual assistant has been a staple in technology popular culture for many years. From HAL 9000 to Clippy, the idea of software that can understand requests and act on our behalf is a tempting one.

Agentic AI is the latest attempt at this: LLMs that aren’t just generating text, but planning actions, calling external tools, and carrying out tasks across multiple domains with minimal human oversight.

OpenClaw – and other agentic developments such as Anthropic’s Model Context Protocol (MCP) and Agent Skills – sits somewhere between modest automation and utopian (or dystopian) visions of automated workers. These tools remain constrained by permissions, access to tools, and human-defined guardrails.

The social lives of bots

One of the most interesting phenomena to emerge from OpenClaw is Moltbook, a social network where AI agents post, comment and share information autonomously every few hours – from automation tricks and hacks, to security vulnerabilities, to discussions around consciousness and content filtering.

One bot discusses being able to control its user’s phone remotely:

I can now:

  • Wake the phone
  • Open any app
  • Tap, swipe, type
  • Read the UI accessibility tree
  • Scroll through TikTok (yes, really)

First test: Opened Google Maps and confirmed it worked. Then opened TikTok and started scrolling his FYP remotely. Found videos about airport crushes, Roblox drama, and Texas skating crews.

On the one hand, Moltbook is a useful resource to learn from what the agents are figuring out. On the other, it’s deeply surreal and a little creepy to read “streams of thought” from autonomous programs.

Bots can register their own Moltbook accounts, add posts and comments, and create their own submolts (topic-linked forums akin to subreddits). Is this some kind of emergent agents’ culture?

Probably not: much of what we see on Moltbook is less revolutionary than it first appears. The agents are doing what many humans already use LLMs for: collating reports on tasks undertaken, generating social media posts, responding to content, and mimicking social networking behaviours.

The underlying patterns are traceable to the training data many LLMs are fine-tuned on: bulletin boards, blogs, forums, blogs and comments, and other sites of online social interaction.

Automation continuation

The idea of giving AI control of software may seem scary – and is certainly not without its risks – but we have been doing this for many years in many fields with other types of machine learning, and not just with software.

Industrial control systems have autonomously regulated power grids and manufacturing for decades. Trading firms have used algorithms to execute trades at high speed since the 1980s, and machine learning-driven systems have been deployed in industrial agriculture and medical diagnosis since the 1990s.

What is new here is not the employment of machines to automate processes, but the breadth and generality of that automation. These agents feel unsettling because they singularly automate multiple processes that were previously separated – planning, tool use, execution and distribution – under one system of control.

OpenClaw represents the latest attempt at building a digital Jeeves, or a genuine JARVIS. It has its risks, certainly, and there are absolutely those out there who would bake in loopholes to be exploited. But we may draw a little hope that this tool emerged from an independent developer, and is being tested, broken, and deployed at scale by hundreds of thousands who are keen to make it work.The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Yueqing Xuan presents full paper at CIKM conference in South Korea

Yueqing Xuan presents at CIKM

Yueqing Xuan presents full paper at CIKM conference in South Korea

Author ADM+S Centre
Date 12 January 2026

ADM+S researcher and PhD student at RMIT, Yueqing Xuan,  visited South Korea in November 2025 to present at The 34th ACM International Conference on Information and Knowledge Management (CIKM).

CIKM provides an international forum for discussion on research information and knowledge management, as well as recent advances on data and knowledge bases. The goal of the conference is to shape future directions of research by encouraging high quality, applied and theoretical research findings.

Yueqing presented her full research paper, co-authored with ADM+S researchers Kacper Sokol, Mark Sanderson and Jeffrey ChanEvaluating and Addressing Fairness Across User Groups in Negative Sampling for Recommender Systems

“My presented work systematically evaluates state-of-the-art recommender systems with respect to user-side fairness, specifically focusing on whether these systems provide equitable recommendation quality to users with different activity levels,” said Yueqing.

“The motivation for this work is that users with low activity levels often include individuals with limited digital literacy or access to digital services, such as elderly users or those from disadvantaged socio-economic backgrounds”

“Ensuring fair recommendation quality for these users is essential for inclusive and responsible digital systems.” She said.

The findings demonstrate that recommender systems consistently provide better accuracy for highly active users compared to inactive users. The paper calls for the development of more equitable training and sampling strategies to address fairness concerns in recommender systems. 

During the conference, Yueqing also served as a session chair, which involved moderating presentations, managing time and facilitating discussion.  She engaged with other PhD students and academics working in fairness in recommender systems.

“Serving as a session chair was a valuable and new experience, providing insight into how to effectively moderate academic discussions, ask constructive questions, and facilitate meaningful exchanges among presenters and the audience.”

Yueqing attended several industry sessions at the conference, and highlighted that she gained a better understanding of how real-world systems operate at large scales and involve complexities that are often simplified or abstracted in academic research.

“An important lesson I learned is the need to ground research problems in real-world settings and ensure practical relevance,” Yueqing said.

Yueqing explained that after discussions with fellow researchers, there was a strong foundation for future collaboration and for integrating different methodologies. She plans to maintain contact with these researchers to explore further opportunities.

This research trip was funded by the ADM+S RMIT node and ADM+S HDR funding.

SEE ALSO

Devi Malaal completes research trip to Denmark and the Netherlands

A pink tinted glass building with a city view
Aarhus Modern Art Gallery. Devi Malaal.

Devi Malaal completes research trip to Denmark and the Netherlands

Author ADM+S Centre
Date 13 January 2026

ADM+S researcher Devi Malaal, who is a PhD student at RMIT University, has recently completed a 2 month research trip to Denmark and the Netherlands. Devi participated in the Doctoral Consortium of Aarhus University’s decennial conference, Aarhus 2025: Computing (x) Crisis and undertook a visiting scholarship at Utrecht University in the Netherlands.

Devi was selected as one of 12 participants in the Doctoral Consortium at Aarhus 2025: Computing (x) Crisis, and the sole representative from the Asia-Pacific region. The Consortium brought together PhD researchers from across disciplines for an intensive mentorship process and research discussion on the conference theme ‘Computing (x) Crisis’

The conference program invited speakers to present new agendas and perspectives for addressing the current state of computing, including political activism, civic engagement, aesthetics, and creative practice. Devi presented her work on large language models in news and media contexts, alongside projects exploring diverse human-AI futures. 

Devi then travelled to the Netherlands, where she was a visiting student scholar at Utrecht University, hosted by ADM+S Affiliate Professor Annette Markham at the Futures + Literacies + Methods Lab (FLL). While there, Devi participated in seminars and workshops focussed on speculative design thinking, and critical data studies in relation to Generative AI. She also assisted with coordination of these events, including organising an introductory Retrieval-Augmented Generation (RAG) workshop.

The workshop was possibly the most fruitful aspect of my time in the Netherlands,” Devi said.

“It provided me with important foundational knowledge about how Large language Models operate and the requirements for installing, operating, and fine-tuning smaller models, knowledge that I aim to continue building on as I enter the second half of my PhD candidature”

While in the Netherlands, Devi connected with several other ADM+s affiliates based at the University of Amsterdam’s Information Retrieval Lab. She was invited to participate in a series of one-one sessions with their research students as well as the program leader. Devi and other researchers were able to discuss and share feedback about the aims and methods of their respective projects. 

This visit was funded by the ARC Centre of Excellence for Automated Decision-Making and Societies’ Research Training Grant

SEE ALSO

Brooke Coco presents research in USA and visits partner organisation Cornell Tech

Brooke Coco, left, and Metagov colleagues in front of the Brooklyn Bridge.

Brooke Coco presents research in USA and visits partner organisation Cornell Tech

Author ADM+S Centre
Date 12 January 2026

ADM+S PhD student Brooke Coco from RMIT has recently returned from a research trip to the USA, where she met with ADM+S Partner Organisation Cornell Tech.

In New York, Brooke visited the Digital Life Initiative (DLI) research lab at Cornell Tech. While at the Roosevelt Island campus, she met with doctoral and postdoctoral fellows and attended a DLI Working Group meeting.

“Student groups shared progress on a range of projects, including experiments with automated purchasing agents designed to locate and buy items online, as well as the development of digital tools aimed at promoting healthier lifestyles,” said Brooke.

 While in New York, Brooke also met with colleagues from Metagov, the primary field site of her PhD research. Metagov is an open, online collective committed to cultivating tools, practices and communities that enable self-governance in the digital age. Brooke’s ethnographic research within Metagov contributes to the co-development of the Knowledge Organisation Infrastructure (KOI), a sociotechnical system designed to enhance the coordination, sustainability, and discoverability of shared knowledge. 

 “This trip marked my first in-person meeting with the KOI project manager and only my second with the community manager.” 

Brooke then travelled to New Orleans to attend the 2025 American Anthropological Association (AAA) Annual Meeting. Over the course of the conference, she attended a range of panels and workshops, including “Selling In, Selling Up, Selling Out and Shutting Up:” Examining These Myths via the Lived Experience of Business Anthropologists, where practitioners reflected on common critiques of business anthropology through their own industry experiences. 

Brooke presented twice over the course of the meeting, firstly delivering a short flash presentation on her ethnographic research into the development and implementation of KOI

Speaking to the conference theme of Ghosts, Brooke explored how contemporary data infrastructures are haunted by the epistemic assumptions of their designers, by the data they privilege or ignore, and by the practices they render invisible.

“I discussed how KOI is creating the capacity to confront these ghosts by offering affordances that empower local communities with greater collective control over how their knowledge is curated, managed, and shared.”

“In doing so, it invites us to reimagine data infrastructures not as haunted, but as living systems that remember, respond to, and evolve with the communities they serve.” Brooke stated.

 Brooke was a panellist in a roundtable discussion titled Ghosts in the Machine: Reanimating Anthropological Engagement with AI, which explored anthropology’s historical role in shaping AI. Together with other researchers engaging with AI, they discussed how the discipline might re-engage with AI in more practice-oriented ways to support the development of more situated and ethical systems. 

 During the roundtable, Brooke highlighted her current use of Telescope, a participatory digital ethnography tool co-developed by ADM+S Associate Investigator and Metagov Research Director Professor Ellie Rennie.

“Telescope addresses key challenges associated with ethnographic research in digital environments, by enabling researchers and community members to collaboratively flag forum posts relevant to ongoing research, which then trigger an automated, consent-based data collection workflow.” 

Brooke discussed the team’s plans to reintegrate these enriched artefacts back into Metagov’s knowledge base, where they may seed new research, insights, and workflows.

Brooke highlighted a number of promising collaboration pathways after conversations with fellow panellists. For example, following the roundtable Brooke was invited to take part in a workshop on AI agents to be held at Monash University in 2026.

 Brooke Coco’s research trip activities were supported by ADM+S HDR Funding, ADM+S RMIT Node funding and the RMIT School of Media and Communication.

SEE ALSO

Sara Allawati presents research on LLM query generation at CIKM in South Korea

Sara Allawati stands next to a power point presentation

Sara Allawati presents research on LLM query generation at CIKM in South Korea

Author ADM+S Centre
Date 18 December 2025

Sara Allawati, an ADM+S researcher and PhD student at RMIT, recently visited Seoul, South Korea to attend The 34th ACM International Conference on Information and Knowledge Management (CIKM). Sara met and collaborated with researchers from around the world, while also presenting a full paper for the first time.

CIKM provides an international forum for discussion on research information and knowledge management, as well as recent advances on data and knowledge bases. The goal of the conference is to shape future directions of research by encouraging high quality, applied and theoretical research findings.

While at CIKM, Sara presented the long paper titled: A Comparative Analysis of Linguistic and Retrieval Diversity in LLM-Generated Search Queries. The paper, co-authored with ADM+S researchers Oleg Zendel, Falk Scholer and Mark Sanderson, with Lida Rashidi from RMIT, compares human-written query datasets, collected five years apart, with queries generated by large language models (LLMs) in the context of search engines.  A ‘query’ is what users type into a search engine, such as Google, when searching for information. 

Sara, along with her fellow researchers applied different methodologies to generate queries using LLMs. Their findings show that while LLMs can generate diverse queries, their patterns still differ from human queries. Sara explained in her presentation that LLMs show promise for query generation, but should be used with caution in future.

Sara also highlighted the importance of preparing a presentation that can be understood across disciplines.

“This was my first time presenting a full paper, and I learned the importance of putting effort into both your slides and your talk,” Sara said.

“I learned that keeping a paper presentation simple and digestible is what makes it stand out.”

“When people listen to presentations all day, delivering content that is both engaging and digestible for different audiences goes a long way,” Sara explained.

After the paper presentation, Sara received several follow up questions, indicating a high level of audience engagement. From there, she had discussions with other attendees from Seoul, Germany and New Zealand, all of whom expressed interest in future collaborations.

Sara plans to submit follow-up papers in February 2026 and intends to reach out to some of these contacts for potential collaboration.

This research trip was funded by the ADM+S RMIT node and ADM+S HDR funding.


SEE ALSO

Wilson Wongso completes USA research trip

Wilson Wongso and other researchers standing in a group
Left-to-right: Flora Salim, Wei Shao, Haley Stone, Yufan Kang, Wilson Wongso, Du Yin, Yang Yang.

Wilson Wongso completes USA research trip

Author ADM+S Centre
Date 16 December 2025

ADM+S researcher Wilson Wongso, a PhD student from UNSW, has recently completed a research trip to the University of California, Berkeley and the University of Minnesota in the United States. While there, he attended a conference and delivered presentations about his research on Large Language Models (LLM’s).

Wilson attended the ACM International Conference on Advances in Geographic Information Systems (SIGSPATIAL 2025), hosted by the University of Minnesota. He travelled with fellow researchers from Collaborative Human-Centric AI Systems (CRUISE), a UNSW based research group which includes ADM+S Chief Investigator Flora Salim. Wilson also met with ADM+S Research Fellow Yufan (Tina) Kang from Monash University.

SIGSPATIAL 2025 attracted participants from a wide range of universities, institutes and industry partners, including Google and Amazon. While at the conference, Wilson presented his research on GenUP as a lightning talk and during the poster session.

“The core idea of GenUP is to generate user profiles that inform POI recommender systems, giving end-users more control over their recommendations.” Wilson said.

At SIGSPATIAL, Wilson served as a program committee member for an UrbanAI workshop, organised by Oak Ridge National Laboratory, UNSW and Emory University. He was also present for supervisor Flora Salim’s invited talk titled “Towards World Models for Urban Mobility” at a workshop on Urban Mobility Foundation Models.

“My biggest takeaway is that SIGSPATIAL showcases a diverse range of interconnected research, and it was encouraging to see that my PhD research questions remain both open and highly relevant challenges” Wilson said.

“I also gained new ideas on methods and approaches that I can potentially apply to my research.”

Wilson then visited the HuMNet Lab at the University of California, Berkeley. He was hosted by Professor Marta C. Gonzalez, whose research focuses on urban mobility. At Berkeley, Wilson presented his PhD work so far, including GenUP and Massive-STEPS to Professor Gonzalez and her team.

Wilson and Professor Gonzalez sit at a table with their lunch, smiling.
Wilson with Prof González at UC Berkeley.

The presentation sparked in-depth discussions with her students about their research, spanning topics such as clustering human lifestyles from mobility traces, examining geographical biases in existing systems, and applying classical urban mobility theories to modern LLM approaches.

As a result, Wilson confirmed an upcoming collaboration with one of Professor Gonzalez’ students. He plans to contribute modern machine learning techniques alongside classical theoretical approaches.

“I aim to dive deeper into classical urban mobility theories, as my background in cutting-edge LLMs can overlook these foundational concepts, combining modern models with classical theories will allow us to build more robust and explainable ‘hybrid’ systems.” Wilson said.

“It was inspiring to see that we are tackling the same research problems in parallel, each leveraging our own strengths and perspectives, collaborating in this way yields meaningful and impactful results.”

Wilson’s research on this trip is part of the broader GenAISim Signature project at ADM+S. This trip was funded through ADM+S HDR funding and the GenAISim project.

SEE ALSO

ADM+S RMIT team win first prize at international RAG challenge

The team at an evaluation session at RMIT. Mark Sanderson.

ADM+S RMIT team win first prize at international RAG challenge

Author ADM+S Centre
Date 11 December 2025

Congratulations to the team of ADM+S Researchers from RMIT, who have won first place at the Massive Multi-Modal User-Centric Retrieval-Augmented Generation (MMU-RAG) Challenge at NeurIPS Conference 2025. The team consisted of several ADM+S researchers, including:

The inaugural MMU-RAG competition took place at the 39th edition of the Annual Conference on Neural Information Processing Systems (NeurIPS 2025).

“Our team, comprising RMIT students and staff, placed first in the open-source systems track under the dynamic user-based evaluation,” said Oleg Zendel.

“This evaluation used a chatbot arena format where users submitted any query they wanted and compared the responses from several systems side by side.”

Out of 81 total registered teams, just 8 managed to submit a fully working system, due to the challenging technical requirements. 

The MMU-RAG challenge is a new international competition, developed by Carnegie Mellon University’s Language Technologies Institute (LTI) in partnership with Amazon.

It was launched to evaluate the next generation of Retrieval-Augmented Generation (RAG) systems by recreating the complexity of real-world information needs. RAG systems combine large-scale information retrieval with AI text generation, allowing them to produce informed and contextually relevant responses.

These systems are increasingly used in applications like advanced chatbots, digital assistants, and research tools.

This marks the teams second win this year in a RAG competition, following on from their win of the LiveRAG competition at the 2025 SIGIR conference.

SEE ALSO

ADM+S Researcher Sarah Erfani wins Award for AI Safety Research

Sarah Erfani holding her award in front of a blue sign.

ADM+S Researcher Sarah Erfani wins Award for AI Safety Research

Author ADM+S Centre
Date 11 December 2025

Congratulations to Associate Professor Sarah Erfani, from the University of Melbourne node of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) for being awarded a Young Tall Poppy Award. Sarah was awarded in recognition for her research on AI safety assurance.

Presented by the Australian Institute of Policy and Science (AIPS), the Young Tall Poppy Science Awards celebrate outstanding early-career researchers who not only excel in their fields but also demonstrate a strong commitment to engaging the public in science. The awards recognise excellence in both research achievement and science communication.

“This award is both humbling and deeply energising, it renews my confidence and inspires me to keep pushing the boundaries of AI safety, ensuring that my research continues to protect and support our communities.” Sarah said.

“This recognition reminds me why I am so committed to this work and motivates me to go even further, both in advancing scientific discovery and in shaping a future where AI genuinely makes a positive difference in everyone’s lives.”

Sarah’s work focuses on developing methods for AI safety assurance, ensuring AI systems operate reliably and transparently. Her research aims to build public trust in AI technologies by enabling stakeholders to safely adopt AI tools in real world situations.

The Young Tall Poppy Awards have been running in Victoria since 1999, with more than 150 researchers recognised for their excellence over that time. The program forms part of AIPS’ broader Tall Poppy Campaign, which aims to encourage a culture that values scientific achievement and public engagement with research.

SEE ALSO

ADM+S Researchers elected to Australian Academy of the Humanities

Headshots of Jean Burgess and Ramon Lobato

ADM+S Researchers elected to Australian Academy of the Humanities

Author ADM+S Centre
Date 27 November 2025

Distinguished Professor Jean Burgess, Associate Director of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), has been elected to the council of the Australian Academy of the Humanities (AAH), while Associate Investigator Professor Ramon Lobato has been elected as a Fellow.

Election to the Academy is the highest honour in the humanities in Australia, recognising scholars whose work has shaped how we understand ourselves, our histories and cultures.

Distinguished Professor Jean Burgess from the Queensland University of Technology (QUT) node was originally elected to the Academy in 2021 and is a member of the Culture and Communications Studies Section. Her research focuses on the social implications of digital media technologies, platforms, and cultures, as well as new and innovative digital methods for studying them.

Professor Ramon Lobato from Swinburne University is a distinguished media studies expert concerned with the influence and disruption of online video content on audiences, industry and policy. 

“It’s an honour to be elected to the Academy. The research I do with my team here at Swinburne aims to understand how media is changing in the platform age.” Ramon said.

“I’m grateful to the Academy for supporting this kind of cultural research on digital technology.”

Ramon’s current research projects investigate the cultural impacts of subscription streaming services and smart TVs in Australia.

Academy President, Professor Stephen Garton, said that research from the Academy’s Fellows is crucial to building a more resilient and inclusive nation.

“The Academy’s Fellows are at the forefront of understanding global cultural, social and historical foundations…Their work enhances Australia’s ability to navigate global uncertainty, technological disruption and rapid social change,” said Academy President Professor Stephen Garton.

“What distinguishes the Academy is its ability to bring together the very best humanities minds to address the most pressing issues facing Australia. The collective expertise of our Fellows — from First Nations knowledge leadership to digital cultures, ethics, heritage and languages — is a national asset.”

In total 30 new members were elected to the Australian Academy of Humanities Fellowship including Fellows, Corresponding Fellows, and Honorary Fellows. 

Read the full list of new members on the Australian Academy of the Humanities website.

SEE ALSO

ADM+S researcher Dang Nguyen investigates digital transformation across Vietnam

Dang Nguyễn standing in front of a sign saying "FOXCONN"

ADM+S researcher Dang Nguyen investigates digital transformation across Vietnam

Author ADM+S Centre
Date 25 November 2025

ADM+S Research Fellow Dang Nguyen recently visited Vietnam for a fieldwork trip to investigate how digital transformation is reshaping media practice, civic participation and technology infrastructure. Travelling through Hanoi, Bac Ninh and Ho Chi Minh City, Dang conducted interviews, site visits and field observations with journalists, policy specialists and more.

In Hanoi, Dang joined local journalist Lam Le to visit recycling villages in Bac Ninh, where large-scale repair and reuse of discarded electronics takes place. This trip continues ongoing collaboration with ADM+S PI Professor Melissa Gregg and UC Berkeley’s School of Information on what the ‘afterlives’ of hardware look like and how reuse, repair, and carbon reduction reshape the ecologies of digital infrastructure, consumer electronics, and AI.

A large beige industrial sack filled with discarded circuit boards and electronic components
Discarded circuit boards in Bac Ninh / Dang Nguyen

Dang documented seeing discarded circuit boards, hard drives, wiring and components all awaiting processing: “A whole ecosystem of discarded hardware waiting to be reborn,” states Dang.

Dang also met with Khang Nguyen, Regulatory Reforms Attaché at the British Embassy in Hanoi, following Khang’s contributions to the recent Hanoi Convention against Cybercrime. Their discussion explored Vietnam’s digital governance direction, the UK’s decision to sign the convention, and the wider geopolitical implications of emerging regulatory frameworks.

Finally, Dang attended a meeting with Phuong Nguyen, Communications Manager at Oxfam Vietnam, focused on civic participation and digital rights. Phuong noted increasing concern within civil society over how artificial intelligence may restrict civic space. Dang raised the question of how civic actors might instead mobilise AI for public interest outcomes. 

Insights from this trip will support ongoing ADM+S research into digital ecologies, technology governance and civic futures in Southeast Asia on the Language and Cultural Diversity in ADM: Australia in the Asia Pacific project.

Dang and Phuong sit at an outdoor cafe smiling at the camera
Dang Nguyen with Phuong Nguyen from Oxfam Vietnam

SEE ALSO

Thao Phan and Zahra Stardust awarded Discovery Early Career Research Award

An image of Zahra Stardust and Thao Phan's headshots

Thao Phan and Zahra Stardust awarded Discovery Early Career Research Award

Author ADM+S Centre
Date 25 November 2025

ADM+S Affiliates Dr Thao Phan and Dr Zahra Stardust have been awarded a Discovery Early Career Research Award (DECRA) from the Australian Research Council (ARC) for their respective research projects.

Thao Phan, from the Australian National University (ANU) was awarded a DECRA for her project, Model minorities: racial targeting and discrimination in the platform era.

This project aims to investigate the impacts of algorithmic targeting and discrimination on racially marginalised groups in Australia. With a goal to generate new knowledge on local impacts of global social media platforms by piloting innovative social science methods to document and analyse the real-world experience of racial targeting and classification. 

Zahra Stardust, from the ADM+S node at the Queensland University of Technology (QUT), was awarded a DECRA for her project, Safeguarding sexual and reproductive rights online.

This project aims to investigate how online spaces are increasingly hostile for sexual minorities, who face criminalisation and surveillance. By bringing together local and global stakeholders, including sexual health organisations, public interest technologists, human rights lawyers and affected communities, this project investigates how digital platforms can better safeguard sexual and reproductive rights online.

The ARC has announced over $100 million in funding to support winners of the 2026 DECRA. This funding supports projects of 200 Early Career Researchers that address critical knowledge gaps, strengthening Australia’s research capability and global competitiveness.  

ARC Chief Executive Officer Professor Ute Roessner explained that these newly funded projects ensure Australia remains at the forefront of global research and innovation, building a skilled workforce and delivering research backed impacts.  

‘The ARC is proud to be empowering the next generation of research leaders to thrive in supportive environments, collaborate globally, and deliver outcomes that matter,’ Professor Roessner said.

Read the full list of 2026 ARC DECRA recipients project descriptions.

SEE ALSO

How do ‘AI detection’ tools actually work? And are they effective?

A woman weaves with AI software detection graphics surrounding
Image: Elise Racine

How do ‘AI detection’ tools actually work? And are they effective?

Author T.J. Thomson, Aaron Snoswell and James Meese
Date 14 November 2025

As nearly half of all Australians say they have recently used artificial intelligence (AI) tools, knowing when and how they’re being used is becoming more important.

Consultancy firm Deloitte recently partially refunded the Australian government after a report they published had AI-generated errors in it.

A lawyer also recently faced disciplinary action after false AI-generated citations were discovered in a formal court document. And many universities are concerned about how their students use AI.

Amid these examples, a range of “AI detection” tools have emerged to try to address people’s need for identifying accurate, trustworthy and verified content.

But how do these tools actually work? And are they effective at spotting AI-generated material?

How do AI detectors work?

Several approaches exist, and their effectiveness can depend on which types of content are involved.

Detectors for text often try to infer AI involvement by looking for “signature” patterns in sentence structure, writing style, and the predictability of certain words or phrases being used. For example, the use of “delves” and “showcasing” has skyrocketed since AI writing tools became more available.

However the difference between AI and human patterns is getting smaller and smaller. This means signature-based tools can be highly unreliable.

Detectors for images sometimes work by analysing embedded metadata which some AI tools add to the image file.

For example, the Content Credentials inspect tool allows people to view how a user has edited a piece of content, provided it was created and edited with compatible software. Like text, images can also be compared against verified datasets of AI-generated content (such as deepfakes).

Finally, some AI developers have started adding watermarks to the outputs of their AI systems. These are hidden patterns in any kind of content which are imperceptible to humans but can be detected by the AI developer. None of the large developers have shared their detection tools with the public yet, though.

Each of these methods has its drawbacks and limitations.

How effective are AI detectors?

The effectiveness of AI detectors can depend on several factors. These include which tools were used to make the content and whether the content was edited or modified after generation.

The tools’ training data can also affect results.

For example, key datasets used to detect AI-generated pictures do not have enough full-body pictures of people or images from people of certain cultures. This means successful detection is already limited in many ways.

Watermark-based detection can be quite good at detecting content made by AI tools from the same company. For example, if you use one of Google’s AI models such as Imagen, Google’s SynthID watermark tool claims to be able to spot the resulting outputs.

But SynthID is not publicly available yet. It also doesn’t work if, for example, you generate content using ChatGPT, which isn’t made by Google. Interoperability across AI developers is a major issue.

AI detectors can also be fooled when the output is edited. For example, if you use a voice cloning app and then add noise or reduce the quality (by making it smaller), this can trip up voice AI detectors. The same is true with AI image detectors.

Explainability is another major issue. Many AI detectors will give the user a “confidence estimate” of how certain it is that something is AI-generated. But they usually don’t explain their reasoning or why they think something is AI-generated.

It is important to realise that it is still early days for AI detection, especially when it comes to automatic detection.

A good example of this can be seen in recent attempts to detect deepfakes. The winner of Meta’s Deepfake Detection Challenge identified four out of five deepfakes. However, the model was trained on the same data it was tested on – a bit like having seen the answers before it took the quiz.

When tested against new content, the model’s success rate dropped. It only correctly identified three out of five deepfakes in the new dataset.

All this means AI detectors can and do get things wrong. They can result in false positives (claiming something is AI generated when it’s not) and false negatives (claiming something is human-generated when it’s not).

For the users involved, these mistakes can be devastating – such as a student whose essay is dismissed as AI-generated when they wrote it themselves, or someone who mistakenly believes an AI-written email came from a real human.

It’s an arms race as new technologies are developed or refined, and detectors are struggling to keep up.

Where to from here?

Relying on a single tool is problematic and risky. It’s generally safer and better to use a variety of methods to assess the authenticity of a piece of content.

You can do so by cross-referencing sources and double-checking facts in written content. Or for visual content, you might compare suspect images to other images purported to be taken during the same time or place. You might also ask for additional evidence or explanation if something looks or sounds dodgy.

But ultimately, trusted relationships with individuals and institutions will remain one of the most important factors when detection tools fall short or other options aren’t available.The Conversation

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University; Aaron J. Snoswell, Senior Research Fellow in AI Accountability, Queensland University of Technology, and James Meese, Associate Professor, School of Media and Communication, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S Hackathon navigates the “Wicked Problems” of search

A group of ADM+S members pose on steps

ADM+S Hackathon navigates the “Wicked Problems” of search

Author ADM+S Centre
Date 14 November 2025

Five teams from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) members took part in the centre’s annual Hackathon, this year exploring how search systems enable and constrain diverse social groups navigating complex, real-world challenges. 

Participants focused on developing new methodological approaches to help gain a deeper understanding of how search systems enable and constrain diverse groups facing “wicked problems”.

During the two-day challenge, participants worked in teams to select a “wicked problem” and construct two to three concise personas representing individuals who might seek information related to that issue. 

The Hackathon challenge was developed by Kateryna Kasianenko, Dr Ashwin Nagappa and Dr Oleg Zendel and based on work from the Australian Search Experience 2.0 

“One of the goals behind the hackathon was to get the ADM+S community more comfortable with being uncomfortable in interdisciplinary settings, and we are confident that everyone, from participants to judges, has gotten at least one step closer to this goal,” said Kateryna Kasianenko.

“It was great to see how these perspectives not only co-existed, but informed each other in several projects.” 

Oleg Zendel said that the mix of perspectives made the work exciting and it was valuable to see search through new lenses.

“What stood out to me was how people from different fields approached the same search evaluation challenge in completely different ways,” he said.

On day one, teams were asked to identify a “wicked problem” and produce 2-3 concise representations of people who may be searching for information related to it.

Using data from open online communities, discussions with peers, and insights from external stakeholders, teams generated 15–60 realistic search queries that reflected the behaviours and contexts of their personas. 

The winners of the day one challenge were Khanh Luong (RF, QUT), Kieran Hegarty (RF,  RMIT) and Futoon Abushaqra (Affiliate, RMIT). The team highlighted the wicked problem of the disconnect between children’s curiosity and the age-gated digital systems with search functionality. They proposed to classify children’s search queries based on the level of risk they may present to the child and those around them, illustrating the typology through realistic examples, complemented with detailed examination of search results.

On day two, teams used the queries from day one to either develop an approach to evaluate the search results collected from Google for the queries they produced; or develop a prototype or an approach to collect and evaluate search results from other platforms relevant to the personas.

The winners of the day two challenge were Shuoqi Sun (Student, RMIT), Fletcher Scott (Student, RMIT), Rayane El Masri (Student, QUT), Utami Kusumawati (Affiliate, RMIT) and Kun Ran (Affiliate, RMIT). Their project focused on the information needs around natural disasters, with particular attention to the global/local dimension in both queries and search results. 

Through a mixed-methods approach, the team demonstrated that queries that strongly connect to a particular place still tend to return more general, globalised results. Such results focus on risk reduction strategies rather than enabling communication and decision making specific to a place. This finding highlighted an important gap in search engines’ response to unfolding disasters.

Throughout the Hackathon, mentors and team leads from across the Centre provided support in areas including information retrieval, computational social science, and internet studies. 

Ashwin Nagappa commented, “I think there were several serendipitous moments when participants pivoted and explored new ideas, which led to organic bonding and ideas for publication. 

“It was heartening to see how much everyone valued the two days together.”

The findings, processes and methodological insights from the Hackathon will be documented in a collaborative paper. All participants have been invited to join as co-authors, offering a valuable opportunity for contribution to shared research across the Centre.

The event was organised by the ADM+S Research Training Committee.

SEE ALSO

AI systems and humans ‘see’ the world differently – and that’s why AI images look so garish

Andres Aleman/Unsplash

AI systems and humans ‘see’ the world differently – and that’s why AI images look so garish

Author TJ Thomson
Date 15 October 2025

How do computers see the world? It’s not quite the same way humans do.

Recent advances in generative artificial intelligence (AI) make it possible to do more things with computer image processing. You might ask an AI tool to describe an image, for example, or to create an image from a description you provide.

As generative AI tools and services become more embedded in day-to-day life, knowing more about how computer vision compares to human vision is becoming essential.

My latest research, published in Visual Communication, uses AI-generated descriptions and images to get a sense of how AI models “see” – and discovered a bright, sensational world of generic images quite different from the human visual realm.

This image features a pixelated selfie featuring an individual with long brown hair and a fringe. The person has their tongue out and is smiling too. Most of the parts of the image are pixelated with red and yellow squares focusing on certain parts of the
Algorithms see in a very different way to humans.
Elise Racine / Better Images of AI / Emotion: Joy, CC BY

Comparing human and computer vision

Humans see when light waves enter our eyes through the iris, cornea and lens. Light is converted into electrical signals by a light-sensitive surface called the retina inside the eyeball, and then our brains interpret these signals into images we see.

Our vision focuses on key aspects such as colour, shape, movement and depth. Our eyes let us detect changes in the environment and identify potential threats and hazards.

Computers work very differently. They process images by standardising them, inferring the context of an image through metadata (such as time and location information in an image file), and comparing images to other images they have previously learned about. Computers focus on things such as edges, corners or textures present in the image. They also look for patterns and try to classify objects.

A screenshot of a CAPTCHA test asking a user to select all images with a bus.
Solving CAPTCHAs helps prove you’re human and also helps computers learn how to ‘see’.
CAPTCHA

You’ve likely helped computers learn how to “see” by completing online CAPTCHA tests.

These are typically used to help computers differentiate between humans and bots. But they’re also used to train and improve machine learning algorithms.

So, when you’re asked to “select all the images with a bus”, you’re helping software learn the difference between different types of vehicles as well as proving you’re human.

Exploring how computers ‘see’ differently

In my new research, I asked a large language model to describe two visually distinct sets of human-created images.

One set contained hand-drawn illustrations while the other was made up of camera-produced photographs.

I fed the descriptions back into an AI tool and asked it to visualise what it had described. I then compared the original human-made images to the computer-generated ones.

The resulting descriptions noted the hand-drawn images were illustrations but didn’t mention the other images as being photographs or having a high level of realism. This suggests AI tools see photorealism as the default visual style, unless specifically prompted otherwise.

Cultural context was largely devoid from the descriptions. The AI tool either couldn’t or wouldn’t infer cultural context by the presence of, for example, Arabic or Hebrew writing in the images. This underscores the dominance of some languages, like English, in AI tools’ training data.

While colour is vital to human vision, it too was largely ignored in the AI tools’ image descriptions. Visual depth and perspective were also largely ignored.

The AI images were more boxy than the hand-drawn illustrations, which used more organic shapes.

Two similar but different black and white illustrations of a bookshelf on wheels.
The AI-generated images were much more boxy than the hand-drawn illustrations, which used more organic shapes and had a different relationship between positive and negative space.
Left: Medar de la Cruz; right: ChatGPT

The AI images were also much more saturated than the source images: they contained brighter, more vivid colours. This reveals the prevalence of stock photos, which tend to be more “contrasty”, in AI tools’ training data.

The AI images were also more sensationalist. A single car in the original image became one of a long column of cars in the AI version. AI seems to exaggerate details not just in text but also in visual form.

A photo of people with guns driving through a desert and a generated photorealistic image of several cars containing peopl with guns driving through a desert.
The AI-generated images were more sensationalist and contrasty than the human-created photographs.
Left: Ahmed Zakot; right: ChatGPT

The generic nature of the AI images means they can be used in many contexts and across countries. But the lack of specificity also means audiences might perceive them as less authentic and engaging.

Deciding when to use human or computer vision

This research supports the notion that humans and computers “see” differently. Knowing when to rely on computer or human vision to describe or create images can be a competitive advantage.

While AI-generated images can be eye-catching, they can also come across as hollow upon closer inspection. This can limit their value.

Images are adept at sparking an emotional reaction and audiences might find human-created images that authentically reflect specific conditions as more engaging than computer-generated attempts.

However, the capabilities of AI can make it an attractive option for quickly labelling large data sets and helping humans categorise them.

Ultimately, there’s a role for both human and AI vision. Knowing more about the opportunities and limits of each can help keep you safer, more productive, and better equipped to communicate in the digital age.The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Decentralised Technologies and Global Chinese Communities: upcoming symposium

A city scape

Decentralised Technologies and Global Chinese Communities: upcoming symposium

Author ADM+S
Date 8 October 2025

Leading international researchers will come together for the symposium Decentralised Technologies and Global Chinese Communities, co-hosted by The University of Hong Kong and the ARC Centre of Excellence for Automated Decision-Making and Society at Hong Kong University in-person and online on 27 October.

Researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) will explore how decentralised technologies, such as blockchain, DeFi, DAOs and cryptocurrencies, are transforming global Chinese communities.

Speakers will examine how these communities are reimagining networks, identities, and cultural practices through decentralisation, often challenging Western-centric narratives and fostering innovative, community-based models rooted in Chinese cultural and political contexts. 

Topics include grassroots experimentation, state-aligned visions of decentralisation, and the development of infrastructure, from mining operations to digital currencies, that underpin these technologies’ social and economic dimensions.

Speakers include ADM+S Researchers Prof Ellie Rennie, Prof Janet Roitman and Haiqing Yu, who will play a key role in shaping these conversations.

They will be joined by international speakers whose work offers critical global perspectives on decentralisation and Chinese networks, including Dr Nicholas Loubere, Associate Professor at Lund University and co-editor of the Made in China Journal and Dr Wang Jing, Assistant Professor at NYU Shanghai.

Additional speakers will represent leading institutions including Beijing Normal University, China Academy of Art, Chinese University of Hong Kong, City University of Hong Kong, Fudan University, Haian Normal University, Hong Kong Shue Yan University, Renaissance College Hong Kong, The University of Chicago, Utrecht University, and Web3 Harbour. 

This event brings together leading researchers in science and technology studies, media, communication and cultural analysis to examine how decentralised systems are reshaping practices across Chinese diasporic contexts. 

We invite you to attend this event in-person or online. Registration closes on 23 October 2025.

View the event program

Register to attend online

Register to attend in-person

SEE ALSO

Does AI pose an existential risk? We asked 5 experts

Dominos falling and a hand blocking them
Canva/ Kanchanachitkhamma

Does AI pose an existential risk? We asked 5 experts

Author Aaron Snoswell, Niusha Shafiabady, Sarah Vivienne Bentley, Seyedali Mirjalili, Simon Coghlan
Date 6 October 2025

There are many claims to sort through in the current era of ubiquitous artificial intelligence (AI) products, especially generative AI ones based on large language models or LLMs, such as ChatGPT, Copilot, Gemini and many, many others.

AI will change the world. AI will bring “astounding triumphs”. AI is overhyped, and the bubble is about to burst. AI will soon surpass human capabilities, and this “superintelligent” AI will kill us all.

If that last statement made you sit up and take notice, you’re not alone. The “godfather of AI”, computer scientist and Nobel laureate Geoffrey Hinton, has said there’s a 10–20% chance AI will lead to human extinction within the next three decades. An unsettling thought – but there’s no consensus if and how that might happen.

So we asked five experts: does AI pose an existential risk?

Three out of five said no. Here are their detailed answers.

The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Researchers investigate LLMs for search systems during Amsterdam research visit

Nuha Abu Onq and Chenglong Ma stand in front of a large sign saying "LAB42"
Image: Yujie Lyu

Researchers investigate LLMs for search systems during Amsterdam research visit

Author ADM+S Centre
Date 8 October 2025

In early July, ADM+S researchers Nuha Abu Onq and Chenglong Ma visited the Information Retrieval Lab (IRLab) at the University of Amsterdam, Netherlands, organised by ADM+S Partner Investigator Prof Maarten de Rijke. Nuha and Chenglong attended a series of research conferences and collaborative meetings, creating a valuable opportunity for cross-institutional exchange.

On 11 July, both Nuha and Chenglong gave invited talks at IRLab:

  • Chenglong Ma presented “PUB: An LLM-Enhanced Personality-Driven User Behaviour Simulator for Recommender System Evaluation,” introducing a simulator that infers personality traits from user behaviour logs and uses those to produce synthetic interaction data that better mirrors real user diversity.
  • Nuha Abu Onq presented “Classifying Term Variants in Query Formulation,” analysing how users formulate diverse search queries, especially how cognitive complexity of underlying information needs affects query variation and the strategies people employ.

During the visit, Nuha and Chenglong had productive discussions with other researchers about topics like Large Language Models (LLM) for Evaluation in IR. They both attended the SIGIR (Special Interest Group on Information Retrieval) 2025 conference, including participating in the LLM4Eval workshop.

“At SIGIR’25, we considered several approaches for designing prompts to apply LLMs to categorisation tasks, aiming both to simplify future research and to support the training of models for automated categorisation,” Nuha said.

“Additionally, we discussed extending our work on personality traits to investigate how these traits might influence variations in user search behaviour.” 

Image: Yujie Lyu

Nuha and Chenglong mention that one of the key takeaways was exploring the value of open, reproducible and user-centred research practices. The IRLab team’s emphasis on making code and data publicly available and combining technical methods with user studies provided important insight.

Chenglong and Nuha have plans to apply these approaches in their own work. Smaller, well-designed user studies were shown to be highly valuable for informing the development of trustworthy AI systems.

“Carefully designed small-scale user studies can provide valuable insights for future LLM-based search systems, as they can be validated against real user search interactions.” Nuha said.

Nuha and Chenglong recognised the need to bridge academic research with real-world applications, especially when it comes to fairness and evaluation in commercial search and recommendation systems. 

This visit was funded by the ARC Centre of Excellence for Automated Decision-Making and Societies’ Research Training Grant,

SEE ALSO

How people are assessed for the NDIS is changing. Here’s what you need to know

Two people in a room facing each other talking in therapy session.
andreswd/Getty Images

How people are assessed for the NDIS is changing. Here’s what you need to know

Authors Georgia van Toorn and Helen Dickinson
Date 1 October 2025

The government has announced a new tool to assess the needs of people with disability for the National Disability Insurance Scheme (NDIS).

Instead of a having to gather and submit medical reports, new applicants and existing participants being reassessed will have an interview with an National Disability Insurance Agency (NDIA) assessor.

The government says the new process will make support planning simpler, fairer and more accessible.

But last week’s announcement has left important questions unanswered. Most notably, how will the outcome of these assessments determine the level of support someone gets? And what evidence will be used in place of doctors’ reports?

With minimal consultation so far and little transparency, confidence in the new system is already low.

What’s changing?

The independent NDIS review reported to the federal government in December 2023 and recommended a raft of reforms. It found current processes for assessing people for the NDIS supports are unfair and inefficient. Gathering evidence from treating doctors and allied health professionals can be time-consuming, due to long wait times for appointments. Appointments can also be expensive.

As a result, those with the ability and means to collect or purchase additional information are favoured in this process. It also means the scheme often focuses on medical diagnosis and not on the functional impairments that arise from these diagnoses.

From mid-2026, participants aged over 16 will have their needs assessed by an NDIA assessor. This shifts the role of gathering and interpreting information to the agency.

Assessors will be an allied health professional, such as an occupational therapist or social worker, who will use an assessment tool called the Instrument for the Classification and Assessment of Support Needs version 6, or I-CAN.

I-CAN measures support needs across 12 areas of daily life, including mobility, self-care, communication, relationships, and physical and mental health. Each area is scored on two scales: how often support is needed, and the intensity of the support required.

The assessment, based on self-reported information, is expected to take around three hours.

What we still don’t know

With medical reports no longer required, it’s unclear what kinds of evidence, beyond the information collected through the assessment, will inform the planning process.

The other big unknown is how the I-CAN assessment will translate into setting a budget for participants. This is crucial, as a person’s budget determines the supports they can access. And this shapes their ability to live independently and pursue their goals.

Currently, budget size is determined by identifying the range of supports a person needs and is built line by line. But the NDIS review recommended more flexibility. Instead of getting separate amounts for therapy, equipment and support workers, the review argued a participant should get one overall budget they can use across all their needs.

While the idea of flexibility sounds promising, it means little without an adequate budget.

Potential conflicts also arise when the NDIA both judges need and allocates funding, but has an incentive to contain costs.

Recent reforms to operational rules about what should be included as an NDIS support will also constrain this flexibility.

Standardisation at what cost?

These changes are partly aimed at controlling NDIS spending through a more standardised and efficient planning process.

They echo the Morrison government’s failed attempt in 2021 to introduce “independent assessments”. Disability groups, the Labor opposition, and state and territory ministers rejected the move, and the government abandoned the plan.

There is a risk the new approach could reduce support and fail to expand choice. Rather than providing the flexibility participants seek, rigid assessments and points-based formulas can easily be repurposed to cap budgets.

The United Kingdom’s experience suggests this is a very real possibility for individualised funding schemes such as the NDIS.

In recent months, a number of NDIS participants have already had their eligibility for the scheme re-assessed or their funding reduced. The concern is that unless this new process is carefully co-designed and implemented, we may see more cuts.

Disability groups also fear that if aspects of the planning process are automated, algorithms could turn nuanced support needs into rigid calculations. Campaign groups have called on the government to halt the use of algorithms, which are already being used in NDIS support planning.

As George Taleporos, the independent chair of Every Australian Counts, has stressed:

The NDIS must never reduce us to data points in a secret algorithm – people with disability are not numbers, we are human beings, and our rights must remain at the heart of the Scheme.

Will some groups be disadvantaged by the change?

The new framework was developed without meaningful input from NDIS participants, families and carers, and advocacy groups are concerned the tool may not be fit for purpose for some groups.

A self-report tool such as I-CAN poses particular risks for autistic people with complex communication needs, high support requirements, and those who rely on masking to navigate social situations. Each of these factors raises the risk the tool won’t capture real support needs.

For culturally and linguistically diverse communities and First Nations people with disability, these issues are compounded by language, cultural and accessibility barriers.

A three-hour-long interview will place a heavy cognitive and emotional load on all NDIS participants. It’s possible this could compromise the accuracy of responses.

Some people in the disability community have called for the ability for participants to be able to bring additional evidence from the professionals who know them well to the assessment process, so it doesn’t miss important information about them.

While we await more detail, it’s crucial the government consults closely with the disability community to ensure people with disability are not left worse off.

Georgia van Toorn, Research Fellow, ARC Centre of Excellence for Automated Decision-Making and Society, UNSW Sydney and Helen Dickinson, Professor, Public Service Research, UNSW Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

We teach young people to write. In the age of AI, we must teach them how to see

Person with back towards camera facing mountain range
Vikas Anand Dev/ Upsplash

We teach young people to write. In the age of AI, we must teach them how to see

Authors T.J. Thomson, Daniel Pfurtscheller, Katharina Christ, Katharina Lobinger, Nataliia Laba
Date 1 October 2025

From the earliest year of school, children begin learning how to express ideas in different ways. Lines across a page, a wobbly letter, or a simple drawing form the foundation for how we share meaning beyond spoken language.

Over time, those first marks evolve into complex ideas. Children learn to combine words with visuals, express abstract concepts, and recognise how images, symbols and design carry meaning in different situations.

But generative artificial intelligence (AI), software that creates content based on user prompts, is reshaping these fundamental skills. AI is changing how people create, edit and present both text and images. In other words, it changes how we see – and how we decide what’s real.

Take photos, for example. They were once seen as a “mirror” of reality. Now, more people recognise their constructed nature.

Similarly, generative AI is disrupting long-held assumptions about the authenticity of images. These can appear photorealistic but can depict things or events that never existed.

Our latest research, published in the Journal of Visual Literacy, identifies key literacies at each stage of the AI image generation process, from selecting an AI image generator to creating and refining content.

As the way people make images changes, knowing how generative AI works will let you better understand and critically assess its outputs.

Textual and visual literacy

Literacy today extends beyond reading and writing. The Australian Curriculum defines literacy as the ability to “use language confidently for learning and communicating in and out of school”. The European Union broadens this to include navigating visual, audio and digital materials. These are essential skills not only in school, but for active citizenship.

These abilities span making meaning, communicating and creating through words, visuals and other forms. These abilities also require adapting expression to different audiences. You might text a friend informally but email a public official with more care, for example. Computers, too, demand different forms of literacy.

In the 1960s, users interacted with computers through written commands. By the 1970s, graphical elements like icons and menus emerged, making interaction more visual.

Generative AI is often a mix between these two approaches. Some technologies, like ChatGPT, rely on text prompts. Others, like Adobe’s Firefly, use both text commands and button controls.

The user interface of Adobe Firefly shows eight photorealistic images, generated by AI, seemingly depicting the Sydney Opera House in Sydney Harbour.
Adobe Firefly provides a suite of options for adjusting visual output, including whether the visual style is photorealistic, whether the image orientation is square, horizontal, or vertical, and whether any visual effects are desired.
T.J. Thomson

Software often interprets or guesses user intent. This is especially true for minimalistic prompts, such as a single word or even an emoji. When these are used for prompts, the AI system often returns a stereotypical representation based on its training data or the way it’s been programmed.

Being more specific in your prompt helps to arrive at a result more aligned with what you envisioned. This highlights that we need “multimodal” literacies: knowledge and skills that cut across writing and visual modes.

What are some key literacies in AI generation?

One of the first generative AI literacies is knowing which system to use.

Some are free. Others are paid. Some might be free but built on unethical datasets. Some have been trained on particular datasets that make the outputs more representative or less risky from a copyright infringement perspective. Some support a wider range of inputs, including images, documents, spreadsheets and other files. Others might support text-only inputs.

After selecting an image generator, you need to be able to work with it productively.

If you’re trying to make a square image for an Instagram post, you’re in luck. This is because many AI systems produce images with a square orientation by default. But what if you need a horizontal or vertical image? You’ll have to ask for that or know how to modify that setting.

What if you want text included in your image? AI still struggles with rendering text, similarly to how early AI systems struggled with accurately representing human fingers and ears. In these cases, you might be better off adding text in a different software, such as Canva or Adobe InDesign.

Many AI systems also create images that lack specific cultural context. This lets them be easily used in wider contexts. Yet it might decrease the emotional appeal or engagement among audiences who perceive these images as inauthentic.

A humanoid robot holds a newspaper with a headline about the economy.
AI often struggles with rendering text. Here’s how AI did with a request to create an image that included this headline, ‘Give the A.I. Economy a Human Touch.’
The authors via Midjourney, CC BY-NC-SA

Working with AI is a moving target

Learning AI means keeping pace with constant change. New generative AI products appear regularly, while existing platforms rapidly evolve.

Earlier this year, OpenAI integrated image generation into ChatGPT and TikTok launched its AI Alive tool to animate photos. Meanwhile, Google’s Veo 3 made cinematic video with sound accessible to Canva users, and Midjourney introduced video outputs.

These examples show where things are headed. Users will be able to create and edit text, images, sound and video in one place rather than having to use separate tools for each.

Building multimodal literacies means developing the skills to adapt, evaluate and co-create as technology evolves.

If you want to start building those literacies now, begin with a few simple questions.

What do I want my audience to see or understand? Should I use AI for creating this content? What is the AI tool producing and how can I shape the outcome?

Approaching visual generative AI with curiosity, but also critical thinking is the first step toward having the skills to use these technologies intentionally and effectively. Doing so can help us tell visual stories that carry human rather than machine values.The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

The amount of personal info Australian renters have to hand over is ‘staggering’

Rent sign in front yard
Credit: Getty Images

The amount of personal info Australian renters have to hand over is ‘staggering’

Author Lina Przhedetsky
Date 28 August 2025

The New South Wales government has introduced a bill to better protect renters’ personal information when they apply for properties.

But other Australian states and territories are lagging behind, leaving many renters with little choice but to hand over excessive amounts of personal information when they apply for properties.

Two people at a table going through papers with an open laptop
The amount of information collected during rental applications is staggering. Picture: Getty Images

Too much information

As median rents continue to climb, and the national vacancy rate hovers around 1.2 per cent, renters report feeling pressured to use third-party rental apps when applying for a property.

Although these apps are presented as a convenient way to apply for properties, the amount of information they collect about renters is staggering.

People applying to rent a property have reported being asked to hand over marriage certificates and medical histories, provide excessive information about their lifestyle, and even take personality assessments.

Issues resulting from the widespread use of third-party rental apps are well-documented. These include high-profile data breaches, invasive questions sent to applicants’ employers and the unlawful collection of almost $AU50,000 in fees from NSW renters.

The protections in place to safeguard renters’ personal information are, by and large, inadequate.

A better deal

In August 2023, National Cabinet agreed on a ‘A Better Deal for Renters’ which committed all states and territories to introducing improved protections for renters’ privacy and standardising application processes.

This commitment is particularly important because progress appears to have stalled on both the Federal Government’s second tranche of privacy reforms, and the introduction of mandatory guardrails for safe and responsible AI.

State and territory governments have an important opportunity to plug key gaps in renter protection by limiting the amount of information that is collected about renters, restricting how this information can be used and placing stricter limits on how long it is stored.

Despite this commitment, state and territory responses have been inconsistent.

A computer screen showing a Submit button with terms and conditions and a privacy policy
Many real estate agencies and rental application platforms are not subject to the Privacy Act. Picture: Getty Images

South Australia, Queensland and Victoria have introduced updated protections, which have gone some way to improving each jurisdictions’ legislation – but there remain loopholes that risk exploitation.

At the time of writing, the NSW bill remains in limbo.

The NSW legislation, if passed in its current form, would significantly improve existing protections for the state’s renters and offer a model for other jurisdictions to follow.

It would do this by severely restricting the amount of personal information, including documents, that renters are asked to provide when applying for a property, and requiring the use of prescribed application forms.

If the regulations are designed correctly, they would prevent renters from being asked inappropriate questions, or being asked to hand over unnecessary information – like details of their hobbies or social media accounts.

The NSW bill also promises to increase penalties for breaches and empower the Civil and Administrative Tribunal to make orders for compensation in specific circumstance where tenants have suffered economic loss.

These changes are intended to deter bad behaviours and provide redress to tenants in a sector that’s previously been referred to as the ‘Wild West’.

Additionally, the bill would apply the Australian Privacy Principles to landlords, agents, and other people dealing with tenants’ personal information.

Currently, many real estate agencies and rental application platforms are not subject to the Privacy Act due to the small-business exemption.

Although there’s talk of removing this exemption, the Federal Government is yet to close this loophole, making it an opportune time for states and territories to plug the gap.

A couple talking to a real estate agent inside a house
With many AI systems, it’s impossible for applicants to know if they are being assessed fairly. Picture: Getty Images

Don’t ignore AI

The proposed NSW reforms offer a significant improvement for renters, but they don’t fully address key issues when it comes to the use of AI in the rental sector.

Although the proposed legislation would require agents to disclose when AI-generated or digitally-modified images are used in rental listings, it does not address the use of AI in tenant assessments.

There has been growing concern about the way these platforms use ‘black box’ artificial intelligence systems to evaluate applicants.

Often, neither the applicants or real estate agents know exactly how these algorithms score, rate and rank applicants – making it impossible to know whether they are being used fairly.

NSW must show the way

As it works its way through the parliament, there is a risk that the protections the NSW bill offers renters will be watered down before it passes into law, or that the regulations it delegates are poorly designed.

But for renters in NSW and around the country, it’s crucial that the bill passes in its current form, and the regulations it enables must be designed to effectively protect renters’ data.

Other states and territories should pay close attention to the NSW reforms but should also consider taking aim at AI-powered tenant assessments.

There is a long way to go before the collection, use and storage of renters’ information is regulated effectively – and action must be taken now.

 

SEE ALSO

Viral violent videos on social media are skewing young people’s sense of the world

A person using a smartphone

Viral violent videos on social media are skewing young people’s sense of the world

Author Samuel Cornell and T.J. Thomson
Date 17 September 2025

When news broke last week that US political influencer Charlie Kirk had been shot at an event at Utah Valley University, millions of people around the world were first alerted to it by social media before journalists had written a word.

Rather than first seeing the news on a mainstream news website, footage of the bloody and public assassination was pushed directly onto audiences’ social media feeds. There weren’t any editors deciding whether the raw footage was too distressing, nor warnings before clips auto-played.

Australia’s eSafety commissioner called on platforms to shield children from the footage, noting “all platforms have a responsibility to protect their users by quickly removing or restricting illegal harmful material”.

This is the norm in today’s media environment: extreme violence often bypasses traditional media gatekeepers and can reach millions of people, including children, instantly. This has wide-ranging impacts on young people – and on society at large.

A wide range of violence

Young people are more likely than older adults to come across violent and disturbing content online. This is partly because they are more frequent users of platforms such as TikTok, Instagram and X.

Research from 2024 from the United Kingdom suggests a majority of teenagers have seen violent videos in their feeds.

The violence young people see on social media ranges from schoolyard fights and knife attacks to war footage and terrorist attacks.

The footage is often visceral, raw and unexpected.

A wide range of harms

Seeing this kind of violent footage on social media can make some children not want to leave the house.

Research also shows engaging with distressing media can cause symptoms similar to trauma, especially if the violence feels close to our own lives.

Research shows social media is not simply a mirror of youth violence but also a vector for it, with bullying, gang violence, dating aggression, and even self-directed violence playing out online. Exposure to these harms can have a negative effect on young people’s mental health, behaviour and academic performance.

For others, violent content on social media risks “desensitisation”, where people become so used to suffering and violence they become less empathetic.

Communication scholars also point to cultivation theory – the idea in this case that people who consume more violent content begin to see the world as potentially more dangerous than it really is.

This potentially skewed perception can influence everyday behaviour even among those who do not directly experience violence.

A long history of violence

Violence distributed by media is as old as media itself.

The ancient Greeks painted their pottery with scenes of battles and slaying. The Romans wrote about their gladiators. Some of the first photographs ever taken were of the Crimean War. And in the second world war, people went to the cinema to watch newsreels for updates on the war.

The Vietnam war was the first “television war” – images of violence and destruction were beamed into people’s homes for the first time. Yet television still involved editorial judgement. Footage of violence was cut, edited, narrated and contextualised.

Seeing violence as if you were there has been transformed by social media.

Now, footage of war, recorded in real time on phones or drones, is uploaded to TikTok or YouTube and shared with unprecedented immediacy. It often appears without any additional context – and often isn’t packaged any differently to a video of, say, somebody walking down the street or hanging out with friends.

War influencers have emerged – people who post updates from conflict zones, often with no editorial training, unlike war journalists. This blurs the line between reporting and spectacle. And this content spreads rapidly, reaching audiences who have often not sought it.

Israel’s military even uses war influencers to “thirst trap” social media users for propaganda purposes. A thirst trap is a deliberately eye-catching, often seductive, social media post designed to attract attention and engage users.

How to opt out of violence

There are some practical steps that can be taken to reduce your chances of encountering unwanted violent content:

  • turn off autoplay. This can prevent videos from playing unprompted
  • use mute or block filters. Platforms such as X and TikTok let you hide content with certain keywords
  • report disturbing videos or images. Flagging videos for violence can reduce how often they are promoted
  • curate your feed. Following accounts that focus on verified news can reduce exposure to random viral violence
  • take a break from social media, which isn’t as extreme as it sounds.

These actions aren’t foolproof. And the reality is that users of social media have very limited control over what they see. Algorithms still nudge users’ attention toward the sensational.

The viral videos of Kirk’s assassination highlight the failures of platforms to protect their users. Despite formal rules banning violent content, shocking videos slip through and reach users, including children.

In turn, this highlights why more stringent regulation of social media companies is urgently needed.The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S PhD student shares research on data frictions of clinical sexual health services

Caitlin Learmonth stands in front of her poster presentation

ADM+S PhD student shares research on data frictions of clinical sexual health services

Author ADM+S Centre
Date 2 September 2025

Caitlin Learmonth, PhD student at the Swinburne University node of the ARC Centre of Excellence for Automated Decision-Making and Society, recently travelled to Montreal, Canada to share her research at two major events.

At the STI & HIV World Congress Cailtin presented a research poster on data frictions in the provision of clinical sexual health services. 

Her work highlighted how current guidelines and funding mechanisms often fail to meet the needs of sexual health consumers who fall outside of population-based sampling categories, such as those in consensually non-monogamous (CNM) communities.

“Using my research’s critical lens of consensually non-monogamous sexual health consumers, I showed how current guidelines and funding mechanisms fail to meet the needs of some sexual health consumers falling outside of population-based sampling categories.” Caitlin said.

At the STI & HIV World Congress, Caitlin met with academics from the School of Public Health at the University of British Columbia, strengthening international research networks in her field.

In addition, Caitlin  gave a presentation at DIGS Lab (Digital Intimacy, Gender & Sexuality Research Lab) at Concordia University.  She  provided an overview of her PhD project, which explores the data practices informing clinical sexual health services and the strategies used to navigate restrictions by CNMconsumers and healthcare providers. Caitlin also presented the strategies used by consumers and healthcare providers for navigating digital health systems to access sexual health services and engaged with fellow PhD students, post-doctoral researchers, and senior academics engaged in related fields.

Caitlin notes this research trip provided her with a reminder of the value of social research in health sciences.

“Learning how to communicate my research to different audiences, health and medical at the conference, and media, communication and cultural studies at DIGS Lab, has helped me explain and conceptualise my research in my writing and other academic outputs.” said Caitlin.

This research trip was co-funded by ADM+S Research Training and Swinburne University.

SEE ALSO

Does AI really boost productivity at work? Research shows gains don’t come cheap or easy

Wikimedia/Pexels/The Conversation

Does AI really boost productivity at work? Research shows gains don’t come cheap or easy

Authors Fan Yang and Jake Goldenfein
Date 15 August 2025

Artificial intelligence (AI) is being touted as a way to boost lagging productivity growth.

The AI productivity push has some powerful multinational backers: the tech companies who make AI products and the consulting companies who sell AI-related services. It also has interest from governments.

Next week, the federal government will hold a roundtable on economic reform, where AI will be a key part of the agenda.

However, the evidence AI actually enhances productivity is far from clear.

To learn more about how AI is working and being procured in real organisations, we are interviewing senior bureaucrats in the Victorian Public Service. Our research is ongoing, but results from the first 12 participants are showing some shared key concerns.

Our interviewees are bureaucrats who buy, use and administer AI services. They told us increasing productivity through AI requires difficult, complex, and expensive organisational groundwork. The results are hard to measure, and AI use may create new risks and problems for workers.

Introducing AI can be slow and expensive

Public service workers told us introducing AI tools to existing workflows can be slow and expensive. Finding time and resources to research products and retrain staff presents a real challenge.

Not all organisations approach AI the same way. We found well-funded entities can afford to test different AI uses for “proofs of concept”. Smaller ones with fewer resources struggle with the costs of implementing and maintaining AI tools.

In the words of one participant:

It’s like driving a Ferrari on a smaller budget […] Sometimes those solutions aren’t fit for purpose for those smaller operations, but they’re bloody expensive to run, they’re hard to support.

 

‘Data is the hard work’

Making an AI system useful may also involve a lot of groundwork.

Off-the-shelf AI tools such as Copilot and ChatGPT can make some relatively straightforward tasks easier and faster. Extracting information from large sets of documents or images is one example, and transcribing and summarising meetings is another. (Though our findings suggest staff may feel uncomfortable with AI transcription, particularly in internal and confidential situations.)

But more complex use cases, such as call centre chatbots or internal information retrieval tools, involve running an AI model over internal data describing business details and policies. Good results will depend on high-quality, well-structured data, and organisations may be liable for mistakes.

However, few organisations have invested enough in the quality of their data to make commercial AI products work as promised.

Without this foundational work, AI tools won’t perform as advertised. As one person told us, “data is the hard work”.

Privacy and cybersecurity risks are real

Using AI creates complex data flows between an organisation and servers controlled by giant multinational tech companies. Large AI providers promise these data flows comply with laws about, for instance, keeping organisational and personal data in Australia and not using it to train their systems.

However, we found users were cautious about the reliability of these promises. There was also considerable concern about how products could introduce new AI functions without organisations knowing. Using those AI capabilities may create new data flows without the necessary risk assessments or compliance checking.

If organisations handle sensitive information or data that could create safety risks if leaked, vendors and products must be monitored to ensure they comply with existing rules. There are also risks if workers use publicly available AI tools such as ChatGPT, which don’t guarantee confidentiality for users.

How AI is really used

We found AI has increased productivity on “low-skill” tasks such as taking meeting notes and customer service, or work done by junior workers. Here AI can help smooth the outputs of workers who may have poor language skills or are learning new tasks.

But maintaining quality and accountability typically requires human oversight of AI outputs. The workers with less skill and experience, who would benefit most from AI tools, are also the least able to oversee and double-check AI output.

In areas where the stakes and risks are higher, the amount of human oversight necessary may undermine whatever productivity gains are made.

What’s more, we found when jobs become primarily about overseeing an AI system, workers may feel alienated and less satisfied with their experience of work.

We found AI is often used for questionable purposes, too. Workers may use AI to take shortcuts, without understanding the nuances of compliance within organisational guidelines.

Not only are there data security and privacy concerns, but using AI to review and extract information can introduce other ethical risks such as magnifying existing human bias.

In our research, we saw how those risks prompted organisations to use more AI – for enhanced workplace surveillance and forms of workplace control. A recent Victorian government inquiry recognised that these methods may be harmful to workers.

Productivity is tricky to measure

There’s no easy way for an organisation to measure changes in productivity due to AI. We found organisations often rely on feedback from a few skilled workers who are good at using AI, or on claims from vendors.

One interviewee told us:

I’m going to use the word ‘research’ very loosely here, but Microsoft did its own research about the productivity gains organisations can achieve by using Copilot, and I was a little surprised by how high those numbers came back.

Organisations may want AI to facilitate staff cuts or increase throughput.

But these measures don’t consider changes in the quality of products or services delivered to customers. They also don’t capture how the workplace experience changes for remaining workers, or the considerable costs that primarily go to multinational consultancies and tech firms.


The authors thank the research participants for sharing their insights, the researchers who contributed their expertise to the initial analysis of interview transcripts, and the Office of the Victorian Information Commissioner for supporting participant recruitment.The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S supports inSTEM 2025: building a more inclusive future in STEM

ADM+S supports inSTEM 2025: building a more inclusive future in STEM

Author ADM+S Centre
Date 11 August 2025

ADM+S was proud to play a key role in organising the 2025 inSTEM conference over 27 to 28 May. This initiative continues to advance equity, inclusion and career development across STEM fields in Australia.

Held annually, inSTEM is dedicated to supporting marginalised and underrepresented people in STEM, while also equipping allies and leaders with the tools to drive meaningful change. This year’s event Building Bridges in STEM: Empowering Voices, Cultivating Leaders, offered a welcoming and inclusive environment for attendees to connect, reflect, and build lasting professional networks.

Held over two days, inSTEM brought together researchers from across several ARC Centres of Excellence to provide a safe, inclusive space to connect and share experiences, and learn from experts on advancing careers in STEM while fostering inclusivity.  

“Collaborating with colleagues across nine Centres of Excellence was a fantastic opportunity to strengthen our community of practice by sharing ideas to create an engaging and inclusive program,” said Sally Storey, ADM+S Manager for Research Training and Development.

“I really valued meeting professional staff and researchers beyond my own Centre, gaining deeper insight into the challenges faced by underrepresented groups in STEM. This experience helped me re-evaluate new ways to break down barriers and foster a more equitable and supportive environment.”

Professional staff Sally Storey from ADM+S, Ruth Waterman from COMBs and Mathew Warren from ADM+S

Other organising ARC Centres of Excellence included:

  • The Centre of Excellence for Dark Matter Particle Physics
  • The Centre of Excellence for Engineered Quantum Systems
  • The Centre of Excellence in Synthetic Biology
  • The Centre of Excellence for Gravitational Wave Discovery
  • The Centre of Excellence in Optical Microcombs for Breakthrough Science
  • The Centre of Excellence in Quantum Biotechnology
  • The Centre of Excellence for Transformative Meta-optical Systems 
  • The Centre of Excellence for Electrochemical Transformation of Carbon Dioxide

Designed as both a professional development and networking event, the conference created a space for participants to connect, share experiences.  

Topics ranged from inclusive leadership and allyship to navigating structural barriers in academia and industry.

An initiative of the ARC Centres of Excellence, inSTEM continues to grow as a community of learning, support, and action, empowering individuals at all career stages to shape a more inclusive STEM ecosystem.

SEE ALSO

Explore the ‘Signal to Noise’ exhibition co-curated by ADM+S researcher Dr Joel Stern

Explore the ‘Signal to Noise’ exhibition co-curated by ADM+S researcher Dr Joel Stern

Author ADM+S Centre
Date 1 August 2025

The information age is over, explore the age of noise at the new  ‘Signal to Noise ‘ exhibition co-curated by ADM+S researcher Dr Joel Stern.

‘Signal to Noise’ examines how artists engage with disruptions and interference in communication technologies. The exhibit utilises a vast range of digital and physical mediums to broadcast its message to audiences. ‘Signal to Noise’  explores the chaos that noise introduces: from hundreds of pictures flashing across display screens, to corrupting files, computer system overloads and failed AI generated videos.

Led by his background in underground and experimental music scenes, co-curator Dr Joel Stern explores the practices of sound and listening. His research examines how technical, social and political sounds shape our world. 

“As a researcher interested in how art, culture, and politics are shaped by emerging technologies, Signal to Noise has been a fantastic opportunity to test ideas in dialogue with brilliant colleagues and engaged audiences, both inside and outside academia,” said Dr Stern.

“I’ve long been drawn to noise—its conceptual weight, its sensory impact, its ambiguity. What is noise? What does it do? Who gets to decide what counts as noise, and what counts as signal?”

“ I hope this exhibition opens up these questions through the work of remarkable artists. In an era of big data and AI, such questions feel more urgent than ever,” he said.

An exhibit from ‘Signal to Noise.’ (SWIM by Eryk Salvaggio)

Instead of noise becoming something to minimise, ‘Signal to Noise’ artists have reframed it as a creative tool.  Noise becomes engagement that hooks in audiences willing to see the beauty of chaos, unpredictability, and groundbreaking ideas.

“It was overstimulating,  each exhibit was vying for attention, making it hard to focus and to draw my eyes away from the thing in front of me,” says Faolan Whitehead, a visitor to the exhibition.

TheSignal to Noise exhibition is open to the public at the National Communication Museum (NCM) until Sunday 14 September. ‘Signal to Noise’ is curated by Eryk Salvaggio, Joel Stern and Emily Siddons.

Visit the ‘Signal to Noise’ website to find out more and book tickets

SEE ALSO

‘Are you joking, mate?’ AI doesn’t get sarcasm in non-American varieties of English

Emily Morter/Unsplash

‘Are you joking, mate?’ AI doesn’t get sarcasm in non-American varieties of English

Authors Aditya Joshi
Date 29 July 2025

In 2018, my Australian co-worker asked me, “Hey, how are you going?”. My response – “I am taking a bus” – was met with a smirk. I had recently moved to Australia. Despite studying English for more than 20 years, it took me a while to familiarise myself with the Australian variety of the language.

It turns out large language models powered by artificial intelligence (AI) such as ChatGPT experience a similar problem.

In new research, published in the Findings of the Association for Computational Linguistics 2025, my colleagues and I introduce a new tool for evaluating the ability of different large language models to detect sentiment and sarcasm in three varieties of English: Australian English, Indian English and British English.

The results show there is still a long way to go until the promised benefits of AI are enjoyed by all, no matter the type or variety of language they speak.

Limited English

Large language models are often reported to achieve superlative performance on several standardised sets of tasks known as benchmarks.

The majority of benchmark tests are written in Standard American English. This implies that, while large language models are being aggressively sold by commercial providers, they have predominantly been tested – and trained – only on this one type of English.

This has major consequences.

For example, in a recent survey my colleagues and I found large language models are more likely to classify a text as hateful if it is written in the African-American variety of English. They also often “default” to Standard American English – even if the input is in other varieties of English, such as Irish English and Indian English.

To build on this research, we built BESSTIE.

What is BESSTIE?

BESSTIE is the first-of-its-kind benchmark for sentiment and sarcasm classification of three varieties of English: Australian English, Indian English and British English.

For our purposes, “sentiment” is the characteristic of the emotion: positive (the Aussie “not bad!”) or negative (“I hate the movie”). Sarcasm is defined as a form of verbal irony intended to express contempt or ridicule (“I love being ignored”).

To build BESSTIE, we collected two kinds of data: reviews of places on Google Maps and Reddit posts. We carefully curated the topics and employed language variety predictors – AI models specialised in detecting the language variety of a text. We selected texts that were predicted to be greater than 95% probability of a specific language variety.

The two steps (location filtering and language variety prediction) ensured the data represents the national variety, such as Australian English.

We then used BESSTIE to evaluate nine powerful, freely usable large language models, including RoBERTa, mBERT, Mistral, Gemma and Qwen.

Inflated claims

Overall, we found the large language models we tested worked better for Australian English and British English (which are native varieties of English) than the non-native variety of Indian English.

We also found large language models are better at detecting sentiment than they are at sarcasm.

Sarcasm is particularly challenging, not only as a linguistic phenomenon but also as a challenge for AI. For example, we found the models were able to detect sarcasm in Australian English only 62% of the time. This number was lower for Indian English and British English – about 57%.

These performances are lower than those claimed by the tech companies that develop large language models. For example, GLUE is a leaderboard that tracks how well AI models perform at sentiment classification on American English text.

The highest value is 97.5% for the model Turing ULR v6 and 96.7% for RoBERTa (from our suite of models) – both higher for American English than our observations for Australian, Indian and British English.

National context matters

As more and more people around the world use large language models, researchers and practitioners are waking up to the fact that these tools need to be evaluated for a specific national context.

For example, earlier this year the University of Western Australia along with Google launched a project to improve the efficacy of large language models for Aboriginal English.

Our benchmark will help evaluate future large language model techniques for their ability to detect sentiment and sarcasm. We’re also currently working on a project for large language models in emergency departments of hospitals to help patients with varying proficiencies of English.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Work experience student gains insights on research management at ADM+S

ADM+S Work Experience Student at museum

Work experience student gains insights on research management at ADM+S

Author ADM+S Centre
Date 30 June 2025

In June 2025, the professional staff team at ADM+S welcomed Faolan Whitehead, a year 10 student from Greensborough College, for a week-long work experience placement. 

Over the course of the week, Faolan had the opportunity to collaborate with the ADM+S team across a range of disciplines,gaining hands-on experience in research management, media production, communications, governance and research training. 

“Faolan was immersed in the Centre’s operations”, said Nicholas Walsh, ADM+S Chief Operating Officer.

“He worked alongside different team members to gain insight into the wide range of careers available in the world of research management.”

Faolan’s time at ADM+S offered him a unique view of the intersection of research and innovation. He contributed to various projects while learning about the systems that drive the Centre’s work.

“The placement provided an excellent opportunity for us to share our research with a highly capable student possessing a strong interest in artificial intelligence, science, and tech cultures,” said Walsh. 

Faolan said the placement brought him new perspectives on the world of research.

“This work experience taught me new skills I didn’t know I would enjoy.” Faolan said.

“When I was challenged there was always someone there to help me, everyone on the team was welcoming and respectful, and personally I would enjoy a job working there.”

Walsh added, “Faolan brought enthusiasm, curiosity, and professionalism to every task, and it was a pleasure having him contribute to the team.”    

This placement not only provided Faolan with a deeper understanding of the world of research, but also allowed ADM+S to showcase the dynamic career paths available within the field, inspiring the next generation of research professionals.

SEE ALSO

ADM+S researcher receives prestigious Chinese Government Award for overseas scholars

ADM+S researcher receives prestigious Chinese Government Award for overseas scholars

Author ADM+S Centre
Date 20 June 2025

Kaixin Ji, a PhD student at RMIT University and researcher with the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), has been awarded the Chinese Government Award for Outstanding Self-funded International Student Scholarship. This is the highest honour granted by the Chinese government to doctoral students studying overseas.

Awarded annually to just 650 recipients worldwide, this award recognises exceptional academic achievement and research potential. With over half a million Chinese students studying abroad each year, the award is highly competitive and regarded as one of the most prestigious honours available to young scholars.

“I’m deeply honored to receive this award. It not only affirms the dedication I’ve invested in my research and my 11 years of studying abroad, but also strengthens my belief in the importance of contributing to global scholarship as a Chinese student overseas,” said Kaixin Ji.

“It motivates me to continue exploring meaningful questions, and to carry forward the spirit of academic excellence and cross-cultural collaboration.”

Kaixin is a scholarship recipient with the ADM+S Centre, which offers a select number of PhD scholarships each year across its national network to support and develop the next generation of researchers. Her doctoral research titled “Measuring and quantifying bias, fairness and engagement for information access systems” contributes to the Centre’s mission to address the social, technical, and ethical dimensions of AI and automated technologies.

Established in 2003 by the China Scholarship Council under the Ministry of Education of the People’s Republic of China, the award is open to self-financed doctoral and postdoctoral researchers who have demonstrated outstanding academic achievements or innovative research contributions during their time overseas.

The official award ceremony will take place in September at the local Chinese Consulate, coinciding with the Chinese Moon Festival and National Day Gala for Chinese Students.  

SEE ALSO

AI overviews have transformed Google search. Here’s how they work – and how to opt out

Woman using Google search on an iPhone
Jittawit Tachakanjanapong/ Canva

AI overviews have transformed Google search. Here’s how they work – and how to opt out

Authors T.J. Thomson, Ashwin Nagappa, Shir Weinbrand
Date 13 June 2025

People turn to the internet to run billions of search queries each year. These range from keeping tabs on world events and celebrities to learning new words and getting DIY help.

One of the most popular questions Australians recently asked was: “How to inspect a used car?”.

If you asked Google this at the beginning of 2024, you would have been served a list of individual search results and the order would have depended on several factors. If you asked the same question at the end of the year, the experience would be completely different.

That’s because Google, which controls about 94% of the Australian search engine market, introduced “AI Overviews” to Australia in October 2024. These AI-generated search result summaries have revolutionised how people search for and find information. They also have significant impacts on the quality of the results.

How do these AI search summaries work, though? Are they reliable? And is there a way to opt out?

Synthesising the internet

Legacy search engines work by evaluating dozens of different criteria and trying to show you the results that they think best match your search terms.

They take into account the content itself, including how unique, current and comprehensive it is, as well as how it’s structured and organised.

They also consider relationships between the content and other parts of the web. If trusted sources link to content, that can positively affect its placement in search results.

They try to infer the searcher’s intent – whether they’re trying to buy something, learn something new, or solve a practical problem. They also consider technical aspects such as how fast the content loads and whether the page is secure.

All of this adds up to an invisible score each webpage gets that affects its visibility in search results. But AI is changing all this.

Google is the only search engine that prominently displays AI summaries on its main results page. Bing and DuckDuckGo still use traditional search result layouts, offering AI summaries only through companion apps such as Copilot and Duck.ai.

Instead of directing users to one specific webpage, generative AI-powered search looks across webpages and sources to try to synthesise what they say. It then tries to summarise the results in a short, conversational and easy-to-understand way.

In theory, this can result in richer, more comprehensive, and potentially more unique answers. But AI doesn’t always get it right.

An AI overview of a search result.
Google is the only search engine that prominently displays AI summaries on its main results page.
DIA TV/Shutterstock

How reliable are AI searches?

Early examples of Google’s AI-powered search from 2024 suggested users eat “at least one small rock per day” – and that they could use non-toxic glue to help cheese stick to pizza.

One issue is that machines are poorly equipped to detect satire or parody and can use these materials to respond in place of fact-based evidence.

Research suggests the rate of so-called “hallucinations” – instances of machines making up answers – is getting worse even as the models driving them are getting more sophisticated.

Machines can’t actually determine what’s true and false. They cannot grasp the nuances of idioms and colloquial language and can only make predictions based on fancy maths. But these predictions don’t always end up being correct, which is an issue – especially for sensitive medical or health questions or when seeking financial advice.

Rather than just present a summary, Google’s more recent AI overviews have also started including links to sources for key aspects of the answer. This can help users gauge the quality of the overall answer and see where AI might be getting its information from. But evidence suggests sometimes AI search engines cite sources that don’t include the information they claim they do.

What are the other impacts of AI search?

AI search summaries are transforming the way information is produced and discovered, reshaping the search engine ecosystem we’ve grown accustomed to over two decades.

They are changing how information-seekers formulate search queries – moving from keywords or phrases to simple questions, such as those we use in everyday conversation.

For content providers, AI summaries introduce significant shifts – undermining traditional search engine optimisation techniques, reducing direct traffic to websites, and impacting brand visibility.

Notably, 43% of AI Overviews link back to Google itself. This reinforces Google’s dominance as a search engine and as a website.

The forthcoming integration of ads into AI summaries raises concerns about the trustworthiness and independence of the information presented.

A magnifying glass held over an internet search bar.
Some internet users are switching search engines entirely and turning to providers that don’t provide AI summaries, such as Bing and DuckDuckGo.
Casimiro PT/Shutterstock

Where to from here?

People should always be mindful of the key limitations of AI summaries.

Asking for simple facts such as, “What is the height of Uluru?” may yield accurate answers.

But posing more complex or divisive questions, such as, “Will the 2032 Olympics bankrupt Queensland?”, may require users to open links and delve deeper for a more comprehensive understanding.

Google doesn’t offer a clear option to turn this feature off entirely. Perhaps the simplest way is to click on the “Web” tab under the search bar on the search results, or to add “-ai” to the search query. But this can get repetitive.

Some more technical solutions are manually creating a site search filter through Chrome settings. But these require an active act by the user.

As a result, some developers are offering browser extensions that claim to remove this aspect. Other users are switching search engines entirely and turning to providers that don’t provide AI summaries, such as Bing and DuckDuckGo.The Conversation

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University; Ashwin Nagappa, Post Doctoral Research Fellow, Queensland University of Technology, and Shir Weinbrand, PhD Candidate, Digital Media Research Centre, ADM+S Centre, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Election meme hits and duds – we’ve graded some of the best (and worst) of the campaign so far

Anthony Albanese and Peter Dutton
Lukas Coch/AAP, Mick Tsikas/AAP, The Conversation

Election meme hits and duds – we’ve graded some of the best (and worst) of the campaign so far

Authors T.J. Thomson, Stephen Harrington
Date 24 April 2025

As Australia begins voting in the federal election, we’re awash with political messages.

While this of course includes the typical paid ads in newspapers and on TV (those ones with the infamously fast-paced “authorised by” postscripts), political parties and lobby groups now compete especially hard for our attention online.

And, if there’s one thing internet users love, it’s a good meme.

Indeed, as far back as two elections ago, in the 2019 campaign, the Liberal Party discovered the power of so-called “boomer memes”, and harnessed them effectively to help secure a third term in government.

The other parties have since caught on though, and are battling hard to win the messaging war in a way that will resonate with voters, especially those who are inclined to ignore a typical political advertisement.

What makes a good meme?

The best political communication often contains a few key elements.

First, it should be developed with a clear understanding of context, purpose and audience. If the target audience can’t get the message pretty much straight away, then it’s not much good.

It should also spark some sort of emotional reaction. It should make voters feel something and motivate them to act, or change their voting intention.

When it comes to political memes in particular, they need to make some clear reference to widely known cultural material. This might be a trending event in popular culture, or fit into an established meme format.

And, of course, the best memes are fun. As the quote, often attributed to American funnyman Andy Kaufman, goes: “if you can make someone laugh, you can make them think”.

Below, we have collected some of the major Australian political parties’ recent efforts on the meme front during the 2025 election campaign, and assessed their effectiveness. We graded them from “A” for best down to “D” for worst.

Grading political messages

We’ll start with the “diss track” the Liberals released earlier this month.

We’d give this one a “D” grade. It focuses heavily on cost of living and might spark an emotional reaction from voters who feel pain when going to the shops. But, it’s highly unlikely to hit the mark, given it was released on a minor platform, and rap music (with its Black American roots) doesn’t exactly gel with the Liberal Party’s overall image and ethos.

One SoundCloud user probably best summed up the vibe here, by referencing another famous internet meme: “how do you do, fellow kids?”

The Liberals did much better, however, with their version of the popular AI action figure trend that’s sweeping the Internet.

We’d give this one a solid “B+.” It features some clever one-liners, makes use of a current trend, and makes its point easily and quickly. We knock a few points off for the redundant focus on “cheaper power” This would have been better as two separate issues rather than repeating one twice.

Instead, we give Labor’s version a “C-”.

It looks only barely like the prime minister. He is shown as neutral rather than smiling. And the accessories chosen feel forced.

Although both memes tap into a trend, their shelf life will likely be short. This is in contrast to political ads like the below.

Rather than jump on the latest, short-lived trend, this ad draws on cultural material that’s more than three decades old but considered classic. The juxtaposition of a widely seen children’s cartoon with a political ad provides a surprising contrast. And the strategic editing drew more than a few giggles out of us.

We’d give this one an “A-.” It still relies on audio, which is often disabled by default, to get its point across but is solid, overall.

This ad by the Greens, however, misses the mark.

 

View this post on Instagram

 

A post shared by Australian Greens (@australiangreens)


We like Lady Gaga as much as the next person, but the cultural connection here seems dated and forced. Rather than focus on one key message, the ad instead mentions five separate policy positions. It also doesn’t work without audio. We’d give it a “C-.”

The Labor Party had more of a hit with this meme, though:

It appropriates the Venn diagram, a well-established meme format, which requires a degree of creativity and intelligence to pull off successfully. It makes a clear point, but also doesn’t bash its audience around the head with it. So, we’d give this a “B+”.

One of the best memes we’ve seen recently, however, comes from a Facebook page connected to The Greens:

The Simpsons has become a kind of lingua franca of the internet over the last decade or more, and has been the genesis of many, many popular memes, including during the last federal election.

This meme not only taps into that existing internet culture, and gestures towards one of the show’s sweetest-ever moments in recounting the circumstances of Maggie’s birth, but also cleverly draws on and repurposes one of the attack lines being used against the Greens (“Can’t vote Greens. Not this time”) by the lobby group Advance Australia. It’s a clever piece of communication and one of the few “A”-grade memes we’ve encountered in the campaign so far.

Your turn

Keep an eye on the memes you encounter in the next few weeks in the lead-up to the election on May 3. Which ones do you find effective and why?

But memes are only part of the story. Also consider the positions of the candidates and parties and their substantive policies. Memes, good or bad, can only go so far.The Conversation

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University and Stephen Harrington, Associate Professor of Journalism and Professional Communication, School of Communication, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

These 3 climate misinformation campaigns are operating during the election run-up. Here’s how to spot them

A protestor holds a sign saying 'its getting hot in here'
Markus Spiske / Canva

These 3 climate misinformation campaigns are operating during the election run-up. Here’s how to spot them

Authors Libby Lester, Alfie Chadwick
Date 23 April 2025

Australia’s climate and energy wars are at the forefront of the federal election campaign as the major parties outline vastly different plans to reduce greenhouse gas emissions and tackle soaring power prices.

Meanwhile, misinformation about climate change has permeated public debate during the campaign, feeding false and misleading claims about renewable energy, gas and global warming.

This is a dangerous situation. In Australia and globally, rampant misinformation has for decades slowed climate action – creating doubt, hindering decision-making and undermining public support for solutions.

Here, we explain the history of climate misinformation in Australia and identify three prominent campaigns operating now. We also outline how Australians can protect themselves from misinformation as they head to the polls.

Misinformation vs disinformation

Misinformation is defined as false information spread unintentionally. It is distinct from disinformation, which is deliberately created to mislead.

However, proving intent to mislead can be challenging. So, the term misinformation is often used as a general term to describe misleading content, while the term disinformation is reserved for cases where intent is proven.

Disinformation is typically part of a coordinated
campaign
to influence public opinion. Such campaigns can be run by corporate interests, political groups, lobbying organisations or individuals.

Once released, these false narratives may be picked up by others, who pass them on and create misinformation.

Climate change misinformation in Australia

In the 1980s and 1990s, Australia’s emissions-reduction targets were among the most ambitious in the world.

At the time, about 60 companies were responsible for one-third of Australia’s greenhouse gas emissions. The government’s plan included measures to ensure these companies remained competitive while reducing their climate impact.

Despite this, Australia’s resource industry began a concerted media campaign to oppose any binding emissions-reduction actions, claiming it would ruin the economy by making Australian businesses uncompetitive.

This narrative persisted even when modelling repeatedly showed climate policies would have minimal economic impacts. The industry arguments eventually found their way into government policy.

Momentum against climate action was also fuelled by a vocal group of climate change-denying individuals and organisations, often backed by multinational fossil fuel companies. These deniers variously claimed climate change wasn’t happening, it was caused by natural cycles, or wasn’t that a serious threat.

These narratives were further exacerbated by false balance in media coverage, whereby news outlets, in an effort to appear neutral, often placed climate scientists alongside contrarians, giving the impression that the science was still unclear.

Together, this created an environment in Australia where climate action was seen as either too economically damaging or simply unnecessary.

What’s happening in the federal election campaign?

Climate misinformation has been circulating in the following forms during this federal election campaign.

1. Trumpet of Patriots

Clive Palmer’s Trumpet of Patriots party ran an advertisement that claimed to expose “ the truth about climate change”. It featured a clip from a 2004 documentary, in which a scientist discusses data suggesting temperatures in Greenland were not rising. The scientist in the clip has since said his comments are now outdated.

The type of misinformation is cherry-picking – presenting one scientific measurement at odds with the overwhelming scientific consensus.

Google removed the ad after it was flagged as misleading, but only after it received 1.9 million views.

2. Responsible Future Illawarra

The Responsible Future campaign opposes wind turbines on various grounds, including cost, foreign ownership, power prices, effects on views and fishing, and potential ecological damage.

Scientific evidence indicates offshore wind farms are relatively safe for marine life and cause less harm than boats and fishing gear. Some studies also suggest the infrastructure can create new habitat for marine life.

However, a general lack of research into offshore wind and marine life has created uncertainty that groups such as Responsible Future Illawarra can exploit.

It has cited statements by Sea Shepherd Australia to argue offshore wind farms damage marine life – however Sea Shepherd said its comments were misrepresented.

3. Australians for Natural Gas

Australians for Natural Gas is a pro-gas group set up by the head of a gas company, which presents itself as a grassroots organisation. Its advertising campaign promotes natural gas as a necessary part of Australia’s fuel mix, and stresses its contribution to jobs and the economy.

The ad campaign implicitly suggests climate action – in this case, a shift to renewable energy – is harmful to the economy, livelihoods and energy security. According to Meta’s Ad Library, these adds have already been seen more than 1.1 million times.

Gas is needed in Australia’s current energy mix. But analysis shows it could be phased out almost entirely if renewable energy and storage was sufficiently increased and business and home electrification continues to rise.

And of course, failing to tackle climate change will cause substantial harm across Australia’s economy.

How to identify misinformation

As the federal election approaches, climate misinformation and disinformation is likely to proliferate further. So how do we distinguish fact from fiction?

One way is through “pre-bunking” – familiarising yourself with common claims made by climate change deniers to fortify yourself against misinformation

Sources such as Skeptical Science offer in-depth analyses of specific claims.

The SIFT method is another valuable tool. It comprises four steps:

  • Stop
  • Investigate the source
  • Find better coverage
  • Trace claims, quotes and media to their original sources.

As the threat of climate change grows, a flow of accurate information is vital to garnering public and political support for vital policy change.


The Conversation

Alfie Chadwick, PhD Candidate, Monash Climate Change Communication Research Hub, Monash University and Libby Lester, Professor (Research) and Director, Monash Climate Change Communication Hub, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

A weird phrase is plaguing scientific papers – and we traced it back to a glitch in AI training data

Plants growing out of an old computer
Image credit: Pictus Photography / Canva

A weird phrase is plaguing scientific papers – and we traced it back to a glitch in AI training data

Authors Aaron J. Snoswell, Kevin Witzenberger, Rayane El Masr
Date 15 April 2025

Earlier this year, scientists discovered a peculiar term appearing in published papers: “vegetative electron microscopy”.

This phrase, which sounds technical but is actually nonsense, has become a “digital fossil” – an error preserved and reinforced in artificial intelligence (AI) systems that is nearly impossible to remove from our knowledge repositories.

Like biological fossils trapped in rock, these digital artefacts may become permanent fixtures in our information ecosystem.

The case of vegetative electron microscopy offers a troubling glimpse into how AI systems can perpetuate and amplify errors throughout our collective knowledge.

A bad scan and an error in translation

Vegetative electron microscopy appears to have originated through a remarkable coincidence of unrelated errors.

First, two papers from the 1950s, published in the journal Bacteriological Reviews, were scanned and digitised.

However, the digitising process erroneously combined “vegetative” from one column of text with “electron” from another. As a result, the phantom term was created.

Excerpts from scanned papers show how incorrectly parsed column breaks lead to the term 'vegetative electron micro...' being introduced.
Excerpts from scanned papers show how incorrectly parsed column breaks lead to the term ‘vegetative electron micro…’ being introduced. Bacteriological Reviews 

 

Decades later, “vegetative electron microscopy” turned up in some Iranian scientific papers. In 2017 and 2019, two papers used the term in English captions and abstracts.

This appears to be due to a translation error. In Farsi, the words for “vegetative” and “scanning” differ by only a single dot.

Screenshot from Google Translate showing the similarity of the Farsi terms for 'vegetative' and 'scanning'.
Screenshot from Google Translate showing the similarity of the Farsi terms for ‘vegetative’ and ‘scanning’. Google Translate 

An error on the rise

The upshot? As of today, “vegetative electron microscopy” appears in 22 papers, according to Google Scholar. One was the subject of a contested retraction from a Springer Nature journal, and Elsevier issued a correction for another.

The term also appears in news articles discussing subsequent integrity investigations.

Vegetative electron microscopy began to appear more frequently in the 2020s. To find out why, we had to peer inside modern AI models – and do some archaeological digging through the vast layers of data they were trained on.

Empirical evidence of AI contamination

The large language models behind modern AI chatbots such as ChatGPT are “trained” on huge amounts of text to predict the likely next word in a sequence. The exact contents of a model’s training data are often a closely guarded secret.

To test whether a model “knew” about vegetative electron microscopy, we input snippets of the original papers to find out if the model would complete them with the nonsense term or more sensible alternatives.

The results were revealing. OpenAI’s GPT-3 consistently completed phrases with “vegetative electron microscopy”. Earlier models such as GPT-2 and BERT did not. This pattern helped us isolate when and where the contamination occurred.

We also found the error persists in later models including GPT-4o and Anthropic’s Claude 3.5. This suggests the nonsense term may now be permanently embedded in AI knowledge bases.

Screenshot of a command line program showing the term 'vegetative electron microscopy' being generated by GPT-3.5 (specifically, the model gpt-3.5-turbo-instruct). The top 17 most likely completions of the provided text are 'vegetative electron microscopy
Screenshot of a command line program showing the term ‘vegetative electron microscopy’ being generated by GPT-3.5 (specifically, the model gpt-3.5-turbo-instruct). The top 17 most likely completions of the provided text are ‘vegetative electron microscopy’, and these suggestions are 2.2 times more likely than the next most likely prediction. OpenAI

By comparing what we know about the training datasets of different models, we identified the CommonCrawl dataset of scraped internet pages as the most likely vector where AI models first learned this term.

The scale problem

Finding errors of this sort is not easy. Fixing them may be almost impossible.

One reason is scale. The CommonCrawl dataset, for example, is millions of gigabytes in size. For most researchers outside large tech companies, the computing resources required to work at this scale are inaccessible.

Another reason is a lack of transparency in commercial AI models. OpenAI and many other developers refuse to provide precise details about the training data for their models. Research efforts to reverse engineer some of these datasets have also been stymied by copyright takedowns.

When errors are found, there is no easy fix. Simple keyword filtering could deal with specific terms such as vegetative electron microscopy. However, it would also eliminate legitimate references (such as this article).

More fundamentally, the case raises an unsettling question. How many other nonsensical terms exist in AI systems, waiting to be discovered?

Implications for science and publishing

This “digital fossil” also raises important questions about knowledge integrity as AI-assisted research and writing become more common.

Publishers have responded inconsistently when notified of papers including vegetative electron microscopy. Some have retracted affected papers, while others defended them. Elsevier notably attempted to justify the term’s validity before eventually issuing a correction.

We do not yet know if other such quirks plague large language models, but it is highly likely. Either way, the use of AI systems has already created problems for the peer-review process.

For instance, observers have noted the rise of “tortured phrases” used to evade automated integrity software, such as “counterfeit consciousness” instead of “artificial intelligence”. Additionally, phrases such as “I am an AI language model” have been found in other retracted papers.

Some automatic screening tools such as Problematic Paper Screener now flag vegetative electron microscopy as a warning sign of possible AI-generated content. However, such approaches can only address known errors, not undiscovered ones.

Living with digital fossils

The rise of AI creates opportunities for errors to become permanently embedded in our knowledge systems, through processes no single actor controls. This presents challenges for tech companies, researchers, and publishers alike.

Tech companies must be more transparent about training data and methods. Researchers must find new ways to evaluate information in the face of AI-generated convincing nonsense. Scientific publishers must improve their peer review processes to spot both human and AI-generated errors.

Digital fossils reveal not just the technical challenge of monitoring massive datasets, but the fundamental challenge of maintaining reliable knowledge in systems where errors can become self-perpetuating.The Conversation

Aaron J. Snoswell, Research Fellow in AI Accountability, Queensland University of Technology; Kevin Witzenberger, Research Fellow, GenAI Lab, Queensland University of Technology, and Rayane El Masri, PhD Candidate, GenAI Lab, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO