ADM+S expands international research program through French partnerships

Paris University
Université Paris 8 Vincennes-Saint-Denis 2024. Credit: Wikimedia

ADM+S expands international research program through French partnerships

Author ADM+S Centre
Date 14 May 2026

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at RMIT University has established new international partnerships with leading French universities and launched a collaboration with Sciences Po médialab, a globally recognised computational social science laboratory. 

The partnerships follow a month-long research residency in France by ADM+S Associate Investigator Dr Daniel Binns from RMIT University in a broader effort to expand ADM+S engagement with French research institutions working on artificial intelligence, media and digital governance.

France’s approach to AI governance, data sovereignty, and innovation offers a genuinely different model to Australia’s comparatively hands-off regulatory environment.

“Australia and France aren’t often in the same conversation when it comes to tech governance, research, and innovation ecosystems. But it turns out there’s a real appetite for these discussions,” said Dr Binns. 

“Being embedded in France for a month — rather than just visiting — made it possible to have deep conversations that actually lead to concrete outcomes and lasting collaborations.”

Dr Binns was hosted as a Chercheur Invité at Université Paris 8 Vincennes–Saint-Denis and its Laboratoire Paragraphe during April 2026.

International Research Collaborations

One of the most significant outcomes of the trip emerged from meetings with researchers Jean-Philippe Cointet and Donato Ricci at Sciences Po médialab, a leading computational social science laboratory whose prior Shaping AI project mapped AI innovation ecologies across France, Germany, the UK, and Canada.

Following these discussions, the group now plans to extend its research into the Australian context through the development of a co-authored working paper examining both sanctioned and shadow adoption of generative AI across Australian workplaces. 

The project draws on Dr Binns’ professional networks and existing Centre research, including work from the Australian Internet Observatory and the Critical Capabilities for Inclusive AI project. The collaboration will also include a joint submission to the 4S conference in Toronto later this year. 

Institutional Agreements

Two institutional agreements were advanced during the visit. A Convention d’Accueil between Dr Binns and Laboratoire Paragraphe was formally signed by the Président of Université Paris 8, and MOU processes have been initiated with both Paris 8 and Université Paris 1 Panthéon-Sorbonne — the first formal institutional agreements between RMIT and these universities.

Dr Daniel Binns presnting his work at Séminaire HERMES, Campus Condorcet .
Dr Daniel Binns presenting his work at Séminaire HERMES, Campus Condorcet . Image supplied.

Presentations 

During the visit, Dr Binns delivered three invited presentations: 

  • The international symposium Créativités Artificielles: Approches Critiques de l’IA (Université Côte d’Azur, 27–29 April)
    Presentation proposing an ecological — rather than literacy-based — framework for critical engagement with generative technology.
  • TV-IA Journée d’Étude at Université Paris 1 Panthéon-Sorbonne (8–9 April)
    How episodic television form is being reshaped under algorithmic conditions — a paper subsequently invited for publication in translation. 
  • The Séminaire HERMES at Campus Condorcet (24 April)
    Media-materialist framework for generative AI image-making alongside European artist-researchers. 

Dr Binns also attended the Noûs Art and AI Festival at the Bibliothèque nationale de France and GenAI Days #3 — an industry-facing AI event where he was the only academic researcher present — providing a grounded view of how French industry and cultural institutions are navigating the generative AI moment.

A joint online seminar organised with Alexandre Gefen (CNRS/THALIM) and curators of the Jeu de Paume’s AI exhibition is scheduled for June 2026 and a research exchange with the University of Groningen is in development.

Dr Binns plans to return to France in late 2026 or early 2027, with funding applications to the PHC FASIC program and the French Embassy’s social sciences initiative currently in preparation.

This research visit was supported by RMIT’s Academic Development Program, ADM+S, Institute for the Study of French-Australian Relations, Inc. (ISFAR), Université Paris 1 Panthéon-Sorbonne, and Université Côte d’Azur.

SEE ALSO

Albanese government’s latest attempt to make tech giants pay for journalism is needed but carries big risks

Albanese government’s latest attempt to make tech giants pay for journalism is needed but carries big risks

Author Andrea Carson and Diana Bossio
Date 29 April 2026

The government’s plan to fund Australian journalism through a levy on digital platforms rests on a sound premise: a healthy democracy depends on reliable information.

But this latest attempt — following the shortcomings of the News Media Bargaining Code — is a high-risk move.

We live in an era of polluted information with serious consequences for public debate and democratic health. In addition, professional journalism no longer holds the central role it once did in informing citizens or shaping political consensus.

Many Australians, particularly younger people, get their news and information from social media and increasingly from influencers and AI chatbots. ChatGPT alone has almost one billion weekly users globally.

Meanwhile, Australian influencers such as Konrad Benjamin, a former high school teacher breaking down politics for under-30s under the name Punter’s Politics, attract millions of likes, often surpassing mainstream outlets.

A complex, fragmented media environment

What is clear is that professional journalism is only one part of today’s fragmented information landscape. That landscape is increasingly polluted by misinformation and conspiracy theories that erode trust and weaken democracy. Globally, democracy is backsliding, with measurable decline for 20 consecutive years.

The United States offers a cautionary example of a deeply polarised information environment where falsehoods can spill into political violence. Properly supporting professional journalism is a means to filter extremism and help citizens distinguish fact from fiction.

Most Australians have little confidence in their own abilities to spot misinformation, with 74% reporting they find it difficult. This problem becomes urgent during election campaigns, when political falsehoods could potentially sway votes.

The Albanese government is responding to these threats in acknowledging the importance of journalism with draft legislation for a News Bargaining Incentive (NBI). It is a new scheme designed to fund Australian reporting by requiring digital platforms with revenues above $250 million (explicitly Google, Meta and TikTok) to contribute to a funding pool to be shared with public-interest news providers.

Why it’s a high-risk move

So why is this a high-risk endeavour that may meet the same fate as the NMBC, which saw Meta and, more recently, Google step back from paying for news content?

First, the positives. From the pooled funds it will generate stable funding for journalism even if platforms do not do deals, much needed for regional media and start-ups where funding is critical. In this way it also addresses a criticism of the NMBC, which was skewed to major media players such as News Corp and Nine.

It is also a stronger “stick” than the NMBC, imposing a 2.25% charge on high-revenue platforms unless they secure sufficient agreements with publishers, creating an incentive to negotiate.

But does it go far enough? Some independent media operators fear their outlets could still miss out on making deals under a 25% per-recipient cap that effectively means only four deals with big outlets need be done to be eligible for the offset.

The NBI has stronger leverage than the NMBC, which relied on ministerial designation that was never used. At first, the NMBC appeared successful without it, with Meta and Google signing more than 30 deals worth more than A$200 million. But Canada’s Online News Act shows the limits of this model when Canada sought to introduce a similar scheme: Meta removed news from its platforms entirely, avoiding the obligation and exposing its fragility.

The NMBC later weakened as Google became the only major platform doing deals in Australia. The company has recently signalled it will not renew some of these. This shift might help explain the timing of the NBI’s re-emergence and the structural shift from competition to tax law to compel platform compliance.

Now for the risks, of which timing is one. The NBI was drafted in 2024 but put on ice when US President Donald Trump voiced strong opposition to digital services taxes, calling them “discriminatory” measures targeting US companies. For some pundits, including Meta, the NBI is effectively a digital services tax.

Trump has previously threatened tariffs against countries pursuing such measures.

Against that backdrop, and given Australia’s recent exposure to trade tariffs and Trump’s criticisms of Australia over the Iran war, the timing of this renewed announcement is tricky. Meta chief executive Mark Zuckerberg has direct access to Trump, and both Meta and Google have already criticised the NBI.

While traditional media has welcomed the announcement through a signed joint statement, some platform criticisms warrant attention.

Why are multinational digital platforms that also distribute news and with more than $250 million in Australian revenue, such as Apple and LinkedIn, carved out of the scheme? And why is AI, with its rapidly growing user base and reliance on news content to train and refine systems, not included?

The explanations offered so far – that AI will be addressed separately and that Apple and LinkedIn employ editorial teams – are unconvincing.

Questions to answer

Then there are system design questions that the consultation period is sure to raise. For example:

  • how do you define journalism in an age of influencers, social media and chatbots?
  • who qualifies for funding?
  • are the current eligibility criteria fit for purpose under the NBI to ensure the scheme supports continued investment in public-interest news, diversity of media voices, and quality journalism?
  • will this include influencers such as Konrad Benjamin, who has large audiences for his explainer reporting?

This is a live debate that the short three-week consultation period is sure to raise before closing on May 18.

And perhaps the biggest risk of all: backfire. The NBI needs to avoid unintended consequences, such as when news was pulled from Meta’s platforms in Canada. The unintended outcome was long-term smaller audiences for professional journalism. Australia cannot risk backfire effects at a time when quality journalism has never been more critical for safeguarding democracy.The Conversation

Andrea Carson, Professor of Political Communication, La Trobe University and Diana Bossio, Associate Professor of Digital Communication, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Professor Kimberlee Weatherall appointed to Australia’s Open Government Forum to help shape National Action Plan

Kimberlee Weatherall

Professor Kimberlee Weatherall appointed to Australia’s Open Government Forum to help shape National Action Plan

Author ADM+S Centre
Date 11 May 2026

Professor Kimberlee Weatherall has been appointed as a civil society member of Australia’s Open Government Forum, helping to shape Australia’s Fourth National Action Plan on pressing issues ranging from digital transformation and artificial intelligence to public ethics and safety. 

Australia’s Open Government Forum has equal representation from government and civil society to promote transparent, participatory, inclusive and accountable governance and drive engagement with civil society and the broader community on National Action Plans.

Professor Kimberlee Weatherall from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at the University of Sydney is one of six civil society members appointed to Australia’s Open Government Forum, alongside six government representatives announced by the Attorney-General on 8 May. 

“The Open Government Forum is an important space to bring civil society and government representatives together to talk about how we promote Australia’s commitment to open government,” said Professor Weatherall.

“ I’m very much looking forward to contributing to the work of the Forum and to the next National Action Plan”. 

The Open Government Partnership (OGP) is a multilateral initiative that aims to secure commitments from governments to promote transparency, empower citizens, fight corruption and harness new technologies to strengthen governance.

Civil society members were selected based on their demonstrated support for the OGP’s vision and Open Government Declaration, their ability to engage with civil society stakeholders and networks, their experience working with and influencing government, and their expertise relevant to open government priorities and emerging issues. 

The Australian Government became a member of the OGP in 2015, committing to support its goals of increasing the transparency and accountability of government. After governments join the OGP, they work with civil society to create national action plans setting out concrete steps that will be taken to increase openness over the next 2 years. 

“Open government, loosely described as our right to engage with and scrutinise government actions, is the aspiration of every thoughtful citizen. This is so because transparency and accountability always improve our community’s lived outcomes,” stated Professor Kate Auty, Civil Society Co-Chair 2024-2025.

“We know, and research tells us, that government will make better and more informed decisions when civil society is involved in the matters that affect us.”

Australia’s Third National Action Plan (NAP3), published on 15 December 2023, runs from 2024-2025. The new forum will develop Australia’s Fourth OGP National Action Plan (NAP4) by the end of 2026.

The forum’s first meeting will take place on 19 May 2026, where members will establish and agree on the forum’s terms of reference.

More information about Australia’s Open Government Forum is available from the Attorney-General’s Department.

SEE ALSO

Global Red Cross initiative to help embed human values in AI

Red Cross symbol on tech background
GettyImages/kraisorn waipongsri

Global Red Cross initiative to help embed human values in AI

Author ADM+S Centre
Date 5 May 2026

The International Federation of Red Cross and Red Crescent Societies (IFRC) is leading a new webinar examining the role of human values in guiding the design and use of AI and information and communication technologies.

Research Fellow Dr Dominique Carlon from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at Swinburne University joined the webinar launch on 30 April 2026 alongside global leaders to discuss embedding humanitarian principles in the design of AI systems.

Research shows that when humanitarian standards on accountability are embedded into approaches to digital transformation, the digital solutions developed are more relevant and sustainable and better meet the needs of practitioners and the communities we serve.

The panel, titled The Fundamental Principles and the Use of ICTs in Humanitarian Action – Now and Next, brought together leading voices across humanitarian and technology sectors. Dominique appeared alongside Omar Abou Samra of the American Red Cross and Malka Older, Executive Director of Global Voices. The session was hosted by the Kenyan Red Cross and moderated by Julia Goodall of the Australian Red Cross.

Drawing on work from the ADM+S Critical Capabilities for Inclusive AI Project, Dr Carlon highlighted the importance of ensuring AI systems serve people, not just technical goals. She emphasised that “benefit” must be defined in human terms—measured through capabilities, dignity, choice, and agency—rather than efficiency alone. Central to this approach is working directly with communities and partner organisations to ground technological development in lived experience.

Dr Carlon said that the thoughtful discussions involved in building consensus are very promising. 

“This kind of deliberative, informed approach helps ensure that AI adoption is not driven by convenience or efficiency alone, but instead takes a future‑focused lens that is aligned with humanitarian principles, and avoids inadvertently introducing new harms or data and security risks that would compromise fundamental values,” said Dr Carlon.

The webinar marks the start of a coordinated program to build consensus around a forthcoming Resolution on the Principled and Accountable Use of ICTs in Humanitarian Action

It is being jointly developed by the International Committee of the Red Cross, IFRC, and a coalition of national societies including the Australian Red Cross, American Red Cross, British Red Cross, Kenyan Red Cross, and others.

The webinar brought together participants from across the global Red Cross and Red Crescent Movement, including those working in conflict settings, alongside representatives from organisations such as UNHCR and Médecins Sans Frontières.

The series will continue over the coming months as momentum builds toward the 2026 Council of Delegates, where the Resolution is expected to play a key role in shaping the future of responsible AI in humanitarian action.

SEE ALSO

ADM+S researcher secures UNSW Spinout Fellowship to advance GenAISim platform

Devin Yuncheng Hua

ADM+S researcher secures UNSW Spinout Fellowship to advance GenAISim platform

Author ADM+S Centre
Date 4 May 2026

A researcher from the ARC Centre of Excellence for Automated Decision-Making and Society has been awarded a competitive commercialisation fellowship to further develop a generative AI simulation platform that can be used by decision makers. 

Dr Yuncheng (Devin) Hua from UNSW has been selected for the UNSW Founders Engineering Spinout Fellowship, a 12-month program designed to support early-career researchers in translating their work into valuable solutions that benefit humanity.

The fellowship will support Dr Hua’s work on a tool designed to help decision-makers better understand and simulate the impacts of AI systems as part of the GenAISim: Simulation in the Loop for Multi-Stakeholder Interactions with Generative Agents project at ADM+S.

“It is truly through the collective efforts and collaboration of all the universities and research institutions involved that this project was established, which has given me the opportunity to pursue this line of research and ultimately led to this opportunity,” said Dr Hua.

The tool SOCIA (Simulation Orchestration for Computational Intelligence with Agents) acts as a bridge between policy questions and executable simulations, enabling users to model “what-if” scenarios and assess the potential impacts of interventions. 

SOCIA addresses a key challenge in policy and planning: building simulations that are not only technically functional but also robust, transparent, and grounded in real-world evidence. It semi-automatically translates social and urban policy requirements into simulation code, supported by human-in-the-loop refinement. 

A co-authored paper on this research titled SOCIA-EVO: Automated Simulator Construction via Dual-Anchored Bi-Level Optimization (Devin Yuncheng Hua, Sion Weatherhead, Mehdi Jafari, Hao Xue, Flora D. Salim)  has been accepted for presentation at the 64th Annual Meeting of the Association for Computational Linguistics, a leading international conference in computational linguistics and natural language processing in July. The event brings together global experts to present cutting-edge research in AI language technologies and their real-world applications. 

Flora Salim, Chief Investigator at ADM+S, said the fellowship represents a valuable opportunity to translate research into practical impact.

Dr Hua’s research areas include Natural Language Processing (NLP), Large Language Models (LLMs), Knowledge Graphs, Dialogue Systems, Machine Learning, Deep Learning, Reinforcement Learning, and Causality.

SEE ALSO

Think online ads are harmless? They could be revealing your private life

Person on a laptop and iphone in a room with shadows cast
Getty Images/Tero Vesalainen

Think online ads are harmless? They could be revealing your private life

Author UNSW Media
Date 4 May 2026

A new study has uncovered a significant and largely invisible privacy risk in the online advertising ecosystem: the ads you see may be enough to reveal sensitive personal information.

Researchers from the ARC Centre of Excellence for Automated Decision-Making and Society at UNSW Sydney and QUT have demonstrated that artificial intelligence can assess personal attributes, including political preferences, education level, and employment status, based solely on the advertisements a person is shown online.

The study analysed more than 435,000 Facebook ads seen by 891 Australian users, collected through the Australian Ad Observatory project.

Using advanced large language models (LLMs), researchers found that:

  • Personal traits could be inferred without access to browsing history or personal data
  • Profiles could be built from short browsing sessions
  • AI systems matched and sometimes exceeded human ability to infer personal characteristics
  • The process was over 200 times cheaper and 50 times faster than human analysis

In a paper presented at the ACM Web Conference 2026, the researchers say: “Our results demonstrate that off-the-shelf LLMs can accurately reconstruct complex user private attributes.

“Critically, actionable profiling is feasible even within short observation windows, indicating that prolonged tracking is not a prerequisite for a successful attack.”

Lead author Baiyu Chen, from UNSW, said the findings challenge common assumptions about online privacy.

“The key point is that the ads a person sees are not random. Advertising systems optimise delivery based on inferred profiles and behaviours, so the overall pattern of ads shown to a user can carry signals about traits such as gender, age, education, employment status, political preference, and broader socioeconomic position.

“Our study shows that LLMs can analyse those patterns and infer private attributes from ad exposure alone.

“These findings provide the first empirical evidence that ad streams serve as a high-fidelity digital footprint, enabling off-platform profiling that inherently bypasses current platform safeguards, highlighting a systemic vulnerability in the ad ecosystem and the urgent need for responsible web AI governance in the generative AI era.

“This work reveals a critical blind spot in Web privacy: the latent leakage of user private attributes through passive exposure to algorithmic advertising.”

A critical blind spot in privacy
By using AI to analyse ad content, the researchers – including Professor Flora Salim, Professor Daniel Angus, Dr Benjamin Tag and Dr Hao Xue – show that streams of ads act like highly detailed digital fingerprints, allowing private attributes to be reconstructed with surprising accuracy, which often match or even exceed human judgement.

Crucially, the research shows this is not a theoretical risk. Profiles can be built quickly and at scale, even from short browsing sessions, and without long-term tracking. Even when predictions are not exact, they are often close enough to reveal meaningful insights about a person’s life stage or financial situation.

How it could be exploited
While major platforms have restricted advertisers from targeting sensitive categories, the study shows that algorithmic ad delivery still encodes these traits indirectly and that this information can now be extracted using widely available AI tools.

This creates a new form of privacy risk where:

  • Users do not actively share information
  • No hacking or platform-side access is required
  • Profiling can happen outside platform oversight

The researchers warn that everyday tools such as browser extensions could be repurposed to quietly collect ads and build detailed user profiles — bypassing platform safeguards and leaving little trace.

In the paper, they say: “We identify browser extensions that abuse legitimate privileges as the potential primary vector for this attack. This scenario is severe due to its inherent stealth and scalability.

“Rather than distributing specialised malware, an adversary can opportunistically deploy this attack within the existing ecosystem of widely installed, benign functioning extensions, such as ad blockers, coupon finders, or page translators.

“These extensions legitimately require permissions to read web page content to function, providing a perfect cover for data harvesting.”

Implications for policy and regulation
The findings suggest current privacy protections may not go far enough.

As AI tools make this kind of analysis easier and more accessible, the researchers argue that regulation must evolve to address not just data collection, but what can be inferred from the content people are exposed to.

Addressing this risk will require rethinking privacy frameworks to account for the hidden signals embedded in everyday online experiences — including the ads users passively consume.

“In terms of protection, users can reduce the risk by being cautious with browser extensions, limiting unnecessary permissions, and using available privacy and ad-personalisation settings,” said Chen.

“However, this is not something users can fully solve on their own, because the broader issue is systemic: people cannot easily opt out of the ad ecosystem altogether, so stronger platform safeguards are also needed.”

About the research
The study draws on data from the Australian Ad Observatory, a citizen science initiative that collects ads seen by everyday users. It represents one of the largest real-world investigations into how AI can infer personal information from online advertising.

The research, titled “When Ads Become Profiles: Uncovering the Invisible Risk of Web Advertising at Scale with LLMs,” will be presented at the ACM Web Conference 2026.

SEE ALSO

‘Just looping you in’: why letting AI write our emails might actually create more work

fStop Images - Epoxydude/Getty

‘Just looping you in’: why letting AI write our emails might actually create more work

Author Daniel Angus
Date 1 May 2026

I hope this article finds you well.

Did that make you cringe, ever so slightly? In the decades since the very first email was sent in 1971, the technology has become the quiet infrastructure of white-collar work.

Email came with the promise of efficiency, clarity and less friction in organisational communication. Instead, for many, it has morphed into something else: always there, near impossible to escape and sometimes simply overwhelming.

Right now, something is shifting again. The rise of generative artificial intelligence (AI) technologies, such as ChatGPT and Microsoft Copilot, is increasingly allowing people to offload the repetitive routines of tending one’s inbox – drafting, summarising and replying.

My colleagues in the ARC Centre of Excellence for Automated Decision Making & Society found 45.6% of Australians have recently used a generative AI tool, 82.6% of those using it for text generation. A healthy chunk of that use likely includes email.

So, what happens if we end up fully automating one of the staples of the white-collar daily grind? Will AI technologies reduce some of the friction, or generate new forms of it? Dare I ask – are we actually about to get more email?

Why the printer isn’t dead yet

Soon after the advent of email, some voices in the business world heralded the coming end of paper use in the office. That didn’t happen. If you work in an office today, there’s a good chance you still have a printer.

In their 2001 book, The Myth of the Paperless Office, Abigail Sellen and Richard Harper show how digital tools rarely eliminate older forms of work. Instead, they reshape them.

Sellen and Harper show how paper use didn’t disappear with the rise of email and other digital communication tools; in many cases, it intensified. The takeaway isn’t that offices failed to modernise, but rather that work reorganised around what these new tools could do.



In this case, paper persisted not only out of habit, but because of what it affords: it is easy to annotate, spread out, carry and view at a glance. This was all too clunky (or impossible) to perform via the digital alternatives.

At the same time, email and digitisation dramatically lowered the cost of producing and distributing communication. It was far easier to send more messages, to more people, more often.

Circling back to today

Will AI be different? If early signs are anything to go by, the answer is: not in the way we might hope.

Like earlier waves of workplace technology, AI is less likely to replace existing communication practices than to intensify them – but at least it might come with better grammar and a suspiciously upbeat tone.

Some new AI tools offer to manage your inbox entirely, feeding into broader privacy concerns about the technology.

At this moment, what a lot of these products seem to offer is not an escape from email, but a smoothing of its rough edges. Workers are using AI to soften otherwise blunt requests, modify their tone or expand what might otherwise be considered too brief a response.

Rather than removing the need to communicate, these tools offer pathways to make a delicate performance easier.

What email is actually for

Email, like many forms of communication, is as much about maintaining everyday relationships as it is about the transfer of information.

At work, it’s often about signalling competence, responsiveness, collegiality and authority. “Just looping someone in” or “circling back” are all part of our absurd office vocabulary, a shared dialect that helps us navigate hierarchy, soften demands and keep things moving – all without saying what we really think.

If AI lowers the effort required to produce these signals, it won’t necessarily reduce their importance, but it could unsettle things in rather odd ways.

If more people use AI to draft emails they don’t particularly want to write, we end up with a game of bureaucratic “mime”: everyone performing sincerity and quietly outsourcing it, and no one entirely sure how much of their inbox was actually written by a human.

The labour of email was never just about crafting sentences. It’s always been the scanning, the sorting and the deciding. AI doesn’t remove this burden. If anything, it amplifies it.

When everything arrives polished, everything looks important. That points to a deeper question for the future of work: if AI can perform responsiveness, why are we generating so many situations that still require it?

Person typing on a laptop keyboard
Email has long been about more than just communicating information.
Vitaly Gariev/Unsplash

Looking forward

What would a workplace look like if email wasn’t the default solution to every coordination problem? Perhaps fewer performative check-ins, “just touching base”, “looping you in” or “following up on the below”. More clearer expectations about what actually requires a response, and what doesn’t.

Email, like paper, is likely to persist for good reasons. It is simple, flexible and universal. It allows things to be deferred, revisited, forwarded and quietly ignored.

But if AI is going to change any of this, my hope is that it makes visible how much of this is ritual, how much is habit, and how much has long been unnecessary.

And if the machines are happy to keep saying “hope this finds you well” to each other, we might finally have permission to stop.The Conversation

Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S 2025 Annual Report highlights growing impact of AI and ADM research 

2025 ADM+S Annual Report cover

ADM+S 2025 Annual Report highlights growing impact of AI and ADM research 

Author ADM+S Centre
Date 29 April 2026

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) has released its 2025 Annual Report, highlighting major contributions to responsible, ethical, and inclusive approaches to automated decision-making systems and artificial intelligence (AI).

The report documents the reach and impact of ADM+S research in policy, technology development, and public debate, at a time when automated decision-making and generative AI are rapidly reshaping everyday life and work in Australia.

Research centres such as ADM+S play a critical role in the environment of generative AI and accelerated automation.

 “They provide independent evidence, interdisciplinary expertise and a space for collaboration between researchers, policymakers, industry and the community sector,” said Deena Shiff, Chair of the International Advisory Board for ADM+S.

“The Centre’s projects are already contributing to important areas of public discussion and policy development, from the governance of digital platforms and automated public services to questions of digital inclusion and emerging debates over AI capability.”

“At the same time, ADM+S is building new research infrastructure, methods and partnerships that will enable the next generation of researchers and practitioners to better understand automated systems in the years ahead.”

Throughout 2025, ADM+S researchers played a key role in national and international conversations about how AI systems are governed, used and understood. Our work contributed to ongoing debates about the social implications of emerging technologies and the need for responsible oversight.

The Centre also expanded its reach through creative public engagement. Documentary projects such as I Am Not a Number and the award-winning AI in the Street: Drone Observatory by Jeni Lee and Thao Phan attracted new audiences and critical recognition.

In a major milestone for national data infrastructure, the Australian Bureau of Statistics incorporated the Australian Digital Inclusion Index into its key indicators for digital preparedness and economic resilience. ADM+S’s Mapping the Digital Gap project also helped drive new investment in digital infrastructure for First Nations remote communities.

The ADM+S research program continued to evolve in response to the rapid diffusion of generative AI technologies. Our Signature Projects are now well established and are producing new insights into how AI systems are used, deployed, governed and experienced.

One major focus has been the growing importance of social and institutional capabilities for inclusive, responsible and ethical AI. Building on the Centre’s extensive research on digital inequality, ADM+S researchers have begun mapping patterns of generative AI adoption across Australia.

The Centre also made progress in translating research into practice.  Through projects such as the Inclusive AI Capabilities Lab and the development of new evaluation frameworks and toolkits, ADM+S researchers are working with partner organisations to translate research into practical approaches for responsible and inclusive AI deployment. 

These collaborations span sectors including telecommunications, humanitarian organisations, health advocacy groups, libraries and community organisations, and play a crucial role in ensuring that automated systems are developed in ways that serve community interests.

Looking ahead, ADM+S emphasises the need for stronger international collaboration to address the global challenges of AI governance. In 2025, researchers secured new international grants, launched joint initiatives with global partners and contributed to international policy discussions.

We are proud of the achievements of ADM+S researchers, students and staff throughout the year. None of this would be possible without the dedication of our remarkable operations team and the ongoing support and engagement of our partner organisations and collaborating Universities.

Note to Safari users: If the PDF doesn’t display correctly at first, please refresh the page

SEE ALSO

NDIS eligibility will be based on ‘functional capacity’, not diagnostic labels. But what does that mean?

Image: Jessie Casson/Getty

NDIS eligibility will be based on ‘functional capacity’, not diagnostic labels. But what does that mean?

Author Georgia Van Toorn
Date 24 April 2026

This week the government unveiled plans to reduce the number of people in the National Disability Insurance Scheme (NDIS) by 160,000 over the next four years, a decision NDIS Minister Mark Butler has called “hard” but “unavoidable and urgent”.

This reduction will rely on tightening the eligibility criteria.

A new assessment tool, likely based on an algorithm, will work out how much someone’s disability affects their daily life – known as their “functional capacity”.

Under the new rules, the threshold to access NDIS support will be higher. This means the day-to-day impact of disability will need to be more severe for someone to be eligible.

So what does functional capacity actually mean, and how will it be used to work out who’s eligible? Will diagnosis still play a role? Here’s what we know – and still don’t know – about the new system.

Functional capacity is not new

The concept emerged in the mid-20th century as a way of capturing what a person with disability can do in everyday life, rather than focusing only on impairment or diagnosis.

This approach – which moves away from narrow, medicalised definitions of disability, to understand how social and environmental factors shape a person’s level of functioning – is also endorsed by the World Health Organization.

Functional capacity is already central to determining eligibility for the NDIS. To meet the threshold, a person must demonstrate their disability is both permanent and substantially reduces their capacity to carry out everyday activities. This might include taking a shower, eating and drinking, moving about, and interacting with others.

The government says the reforms move the NDIS away from the “diagnosis gateway”, meaning functional need will determine who gets support and at what level, rather than a diagnosis.

However, establishing permanence and functional capacity is still required by the legislation. In practice, this is difficult without reference to a specific diagnosis, meaning it is likely to remain a key point of assessment.

But the threshold will be higher

Tightened eligibility will make it harder for some people, particularly those with low to moderate support needs, to access funded supports.

Let’s consider an example. Currently, a child with level one autism who experiences challenges with social interaction and independent self-care skills would have a reasonable chance of accessing NDIS supports, through the early intervention pathway.

Under the new system, that child may need to demonstrate needs consistent with level three autism to be eligible. For example, they may need to demonstrate difficulties with daily routines such as dressing or eating without assistance, engaging safely in social settings, or coping with changes in routine.

Without meeting that threshold, they might instead be expected to rely on mainstream supports, such as school-based supports, or the not-yet-operational Thriving Kids program.

Some disabilities, such as deafblindness, tend to be more readily recognised as meeting the functional capacity threshold.

Other disabilities are likely to face greater scrutiny in assessment – in particular, those that are less visible, harder to quantify, or fluctuating or episodic, or such as many psychosocial disabilities. These are impairments caused by mental health conditions such as bipolar disorder, schizophrenia or post-traumatic stress disorder.

What’s coming next

The government has not detailed exactly how functional capacity will be assessed. Butler has indicated the new assessment tool will be developed over the coming months, ahead of its planned rollout from January 2028.

As part of this process, the government will establish a technical advisory group to advise on eligibility thresholds. It has promised to “engage with the community” – although when and what this will involve remains unclear.

While we have little detail on the design of the tool, one thing Butler has specified is that the new test will be “standardised”. Typically, this means a rules-based system in which a computer algorithm applies fixed criteria to determine outcomes.

A similar approach has been announced for NDIS planning supports, for people who have been deemed eligible. The controversial new tool for support plans, called I-CAN, will be introduced on April 1 2027. It has already stoked concerns that opaque algorithms are increasingly shaping decisions about who gets support and who is left out.

So while we don’t know exactly what kind of “standardised” tool will be used to assess a person’s functional capacity, we have a glimpse of what might come.

The challenge of standardising need

Such tools can be effective at containing costs. But when applied to something as complex and nuanced as disability, they often fail to give a full picture of individual needs.

When this happens, the consequences show up elsewhere in the system, for example, in rising, costly and time-consuming challenges at the Administrative Review Tribunal over poor-quality support plans. These challenges are happening even before I-CAN has been implemented. The current system has some elements of automation – and it looks as though this is only set to increase.

The shift to a more needs-based approach to assessment is a welcome one. But its effectiveness will ultimately depend on the integrity of the assessment tools and, crucially, the professionals using them.

Where computational systems are used to support decision-making, they must be carefully designed to augment professional expertise and be flexible enough to accommodate individual circumstances.

Aged care offers a cautionary example. In a system aged care workers describe as “cruel” and “inhumane”, experienced assessors have little scope to override algorithms with a proven track record of failing to capture need, leaving people without access to essential care.

There are legitimate concerns the NDIS may be heading in a similar direction.

If algorithms are going to determine who gets support and who goes without, then the entire apparatus – including the algorithm itself, its modelling, classification rules and training data – must be open to scrutiny.

And before the new system is rolled out, people with disability must be at the table shaping its design.The Conversation

Georgia van Toorn, Senior Lecturer in Public Policy and Politics, UNSW Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

‘No accountability, no checks and balances, no responsibility’: how Indigenous peoples think about AI

AI was used to create this ‘Elder’ as a provocation to research participants. Relational Futures

‘No accountability, no checks and balances, no responsibility’: how Indigenous peoples think about AI

Author Bronwyn Carlson and Tamika Worrell
Date 22 April 2026

Much of the current conversation about AI assumes uptake is inevitable, more technology means better outcomes and the main task is managing risk.

But we asked Aboriginal and Torres Strait Islander people how they are encountering AI in their everyday lives, and a different picture started to emerge. Our Relational Futures project explores Indigenous sovereignty and the governance of AI.

Relational Futures positions AI not as a standalone tool, but as part of a wider system that shapes relationships between people, institutions, data and Country.

We have now reported our findings, and there are clear warnings about what happens when questions of accountability, harm and care are ignored. As one participant told us, AI comes with “no accountability, no checks and balances, no responsibility”.

Facing limited trust

In Australia, we have seen automated decision-making lead to devastating consequences, such as in Robodebt. Similar dynamics are emerging in aged care and in the National Disability Insurance Scheme.

These systems are often introduced in the name of efficiency. But efficiency for whom, and at what cost?

AI and automated systems do not enter neutral environments. They enter institutions that already have uneven distributions of power, trust and accountability. When things go wrong, the impacts are not evenly felt.

Our project set out to find the first qualitative baselines of Indigenous perspectives on AI, using surveys alongside yarning circles.

We wanted to centre Indigenous perspectives and understand more deeply how Indigenous peoples experience new technologies.

Our participants express limited trust in AI and, in many cases, a clear willingness to refuse using it. That refusal is not about rejecting technology outright. Participants recognised AI can intensify existing inequalities, particularly in sectors such as welfare, health and social services.

There is a strong awareness that automation can make decisions faster – but also harder to see, harder to question, and harder to hold accountable.

Understanding Indigenous data sovereignty

Indigenous data sovereignty centres collective rights and responsibilities in the governance of data. It affirms the authority of Indigenous peoples to control data relating to their communities, lands and resources across the full data lifecycle.

Such governance requires that data practices support self-determination, are grounded in community, and deliver collective benefit without reproducing harm or marginalisation. The participants in our research had a consistent emphasis on community benefit.

The risks identified by our participants go well beyond privacy or data breaches. They pointed to environmental costs, the appropriation and flattening of Indigenous knowledges, and the lack of transparency in how systems are built and deployed.

There is also a clear concern that AI will be used to fill gaps in under-resourced services.

One participant said:

There are times when AI doesn’t quite grasp the depth of First Nations experiences, cultural nuance or community dynamics. It can miss the emotional weight or the context, which reminds me that cultural authority must always sit with mob, not technology.

An ‘AI Elder’

The project also pushed into more speculative territory, asking people to think about what AI could be, not just what it is now. One of the ideas we tested was an “AI Elder”, who could work in areas like reconnecting to culture, or providing advice on cultural matters.

An older Indigenous man wearing hi-tech gear. Text says: do you think an AI Elder could be useful (for example, for reconnecting to culture)?
This was the ‘AI Elder’ we presented to our participants.
Relational Futures

We asked: what if AI was built around care, cultural knowledge, and responsibility to community, instead of speed and efficiency?

But the reaction of our participants was blunt. Who would that Elder speak for? Who would it answer to? How could it have any real relationship to community?

Elders aren’t just people who hold knowledge. They are part of community: they are trusted because of their relationships, their responsibilities, and their accountability over time.

AI can’t be in relationship in that way, can’t be held accountable, can’t carry obligation. It can’t stand in connection to Country or community.

Even when we try to imagine better versions of AI, there are some things that just don’t translate.

A way forward

AI governance cannot be limited to technical standards or compliance frameworks. It has to engage with authority, responsibility, harm and care.

If AI systems can be designed in ways that are safe, accountable and beneficial for Aboriginal and Torres Strait Islander peoples – who are often the most surveilled and marginalised within systems – they are far more likely to be safe and effective for everyone.

Designing for those at the margins is not a niche concern. It is a test of whether these systems work at all.

As one participant told us:

My biggest concern is that we get left behind. It’s easy to frame AI negatively, seeing it as a threat. It is just as easy to see the benefits it stands to offer. Clearly we need to be involved positively (we risk being left out otherwise) on how AI systems are designed, trained and used, otherwise there is a risk that existing power imbalances will be reproduced through technology.

Relational Futures offers both a warning and a way forward.

Without Indigenous leadership and relational approaches to governance, AI will continue to reproduce the kinds of harms already seen in systems like Robodebt. The way forward is less about slowing technology down, and more about rethinking what it is for, who it serves and how it is held to account.The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Landmark privacy determination puts rent tech platforms on notice. But renters remain vulnerable

Image: Cottonbro studio/Pexels

Landmark privacy determination puts rent tech platforms on notice. But renters remain vulnerable

Author Lina Przhedetsky
Date 24 April 2026

One of Australia’s most-used tenancy application platforms has breached privacy laws, Privacy Commissioner Carly Kind has ruled.

2Apply, owned by InspectRealEstate, is a third-party platform that has processed more than 8.5 million tenancy applications.

The commissioner launched an investigation into 2Apply in March last year. In a landmark determination published this week, she found that over a five-year period, 2Apply had interfered with consumers’ privacy by collecting unnecessary personal information via unfair means.

The landmark determination puts the booming rent tech industry on notice, and will help protect renters’ rights. But it must be complemented by further legislative reform.

An expanding industry

The rental technology – or rent tech – market has been expanding.

Rent tech platforms are websites or mobile apps designed to facilitate one or more aspects of the rental process – such as submitting maintenance requests, paying rent, or conducting digital inspections.

There are many different rent tech platforms. Released in March 2020, 2Apply is one of the most commonly used.

Collectively, these platforms have drawn considerable scrutiny due to the amount of personal data they ask renters to hand over.

In 2023, the National Cabinet committed to strengthening the protection of tenants’ personal information. However, progress has been slow.

In research published in January, my colleagues and I found that application platforms enabled real estate agents to request more than 50 types of information.

There’s also evidence some applicants have been asked for marriage certificates and credit information, while others report being asked to prepare CVs for their pets.

Breaching privacy principles

The Privacy Commissioner found 2Apply had breached two of the Australian Privacy Principles.

One of these principles (3.2) says that entities to which the Privacy Act applies must not collect personal information unless it’s reasonably necessary for their functions or activities.

The commissioner identified the processing and management of tenancy applications as 2Apply’s core operations. She considered what types of personal information would be reasonably necessary for these purposes. She found that certain personal information – including gender, rent and bond assistance status – did not meet this threshold.

This determination will be difficult for other rent tech platforms to ignore.

Significantly, the commissioner acknowledged the collection of such information could increase the risk of discrimination against applicants. Although there is growing evidence of discriminatory algorithms in the private rental sector, proving that discrimination has occurred can be challenging.

Minimising the amount of information collected is essential to minimising the risk of discrimination occurring in the first place.

The second principle (3.5) requires that personal information is collected only by fair and lawful means.

The commissioner assessed whether 2Apply had followed this principle with reference to what’s known as “online choice architecture”, taking into account the design, structure, and way information was conveyed through 2Apply’s digital application form.

She deemed 2Apply’s use of certain tactics was unfair.

One of these tactics is known as biased framing. This refers to the practice of presenting choices in a way that emphasises their supposed benefits or downsides so as to encourage consumers to act in ways that will benefit the business – not necessarily themselves.

For example, the 2Apply form says providing personal information will “help speed up your application process”. Conversely, it also says not providing the information may “affect whether you are considered as a suitable tenant for the property”. The commissioner said these statements, while not necessarily untrue or misleading, suggest the volume and type of personal information provided are indicators of an applicant’s suitability as a tenant.

Tactics like this haven’t been adequately addressed by existing consumer protections, despite ample evidence of digital platforms being designed to manipulate or place undue pressure on consumers.

A bill currently before federal parliament is intended to address unfair trading practices that manipulate or unreasonably distort consumers’ decision-making practices. But it’s noteworthy the commissioner deployed the Privacy Act to address these harms.

The commissioner also found the circumstances in which 2Apply collects personal information are characterised by significant power imbalances, limited choice and security risks relating to the real estate sector. She added:

In the absence of any legislated right to housing, the competitiveness of the current rental market means that individuals are at a disadvantage when trying to rent a home and are more vulnerable.

The commissioner directed 2Apply to stop collecting unnecessary personal information within 60 days. She also required that the platform must appoint an independent privacy expert to review its practices.

The Conversation contacted InspectRealEstate for comment.

Systemic change is needed

The commissioner emphasised the need for other rent tech providers to improve their privacy practices.

But there is a risk these providers won’t heed this advice. More needs to be done to protect renters’ rights.

The Privacy Act’s protections must be strengthened. They must also be complemented by robust laws at the state and territory level that are specifically targeted at the rental tech sector.

Some jurisdictions – including Queensland, South Australia and Victoria – have taken the first steps towards strengthening the protection of renters’ personal information under residential tenancies law. Other jurisdictions must follow.

A promising bill is currently awaiting debate and passage in NSW. If legislated, it could offer some of Australia’s strongest protections.

But after being introduced in June 2025, it appears to be in limbo, leaving NSW renters without adequate safeguards.The Conversation

Lina Przhedetsky, Postdoctoral Research Fellow, Melbourne Law School, University of Melbourne and ARC Centre of Excellence for Automated Decision-Making and Society, The University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Australian Internet Observatory central to information integrity on climate change and political advertising – reports

Mobile phone with abstract background
Unsplash/Rodion Kutsaiev

Australian Internet Observatory central to information integrity on climate change and political advertising – reports

Author ADM+S Centre
Date 31 March 2026

The Australian Internet Observatory ensures independent monitoring of our information ecosystem according to two reports released last week.

We face a ‘deteriorating information integrity’ ecosystem around climate change and energy which is having significant impacts on public policy, understanding of science and on local communities warns a new report from Select Committee on Information Integrity on Climate Change and Energy.

The inquiry found that online platforms have a significant role in the spread of misinformation with false information being spread through a range of means including algorithmic bias, bots, trolls, AI-generated content and coordinated disinformation campaigns.

A submission to the inquiry from the Australian Human Rights Commission noted that “social media platforms play a central role largely because their ‘algorithms often prioritise engagement over accuracy, creating echo chambers that reinforce existing beliefs and can amplify misleading content’. This, in turn, ‘amplifies outrage and fear, making it harder for evidence-based climate policy to gain traction’.”

As the report highlights, “the lack of transparency in how social media algorithms operate can make it very challenging for researchers to effectively track mis/disinformation campaigns in real time.”

To address these issues the committee makes a number of significant recommendations specifically targeted at supporting trusted, reliable sources of information, digital literacy, and better monitoring of mis/disinformation networks including research and research infrastructure:

Recommendation 6: The committee recommends the Australian Government increase funding for social sciences research relating to threats to climate and energy information integrity including potential solutions.

Recommendation 7. The committee recommends the Australian Government explore funding models for independent monitoring support (for example, via the Australian Internet Observatory) to track hidden digital influence ecosystems and provide independent transparency and accountability of platforms.

An example of how the Australian Internet Observatory supports independent monitoring and information integrity was provided by a submission from the ARC Centre of Excellence for Automated Decision-Making + Society (ADM+S) which highlighted the challenge of monitoring political advertising.

This week the ADM+S Australian Ad Observatory project published a full report on 2025 Australian Election Advertising on Social Media based on their analysis of 22,000+ real ads collected directly from voters’ smartphones using AIO’s Mobile Observation Ad Toolkit. As a result the report provides rare insight into what Australians actually saw on platforms like Facebook, Instagram and TikTok.

As lead researcher Professor Daniel Angus explains: “Online political advertising is largely invisible… voters are being targeted with messages that are difficult to track, poorly disclosed, and often misleading.”

The research was enabled by the Mobile Online Advertising Toolkit (MOAT), developed with the Australian Internet Observatory, enabling researchers to capture real-world ad exposure beyond platform ad libraries.

Key findings from the research:
• Political ads are often invisible to public scrutiny
• Widespread use of misleading and decontextualised claims
• Growth of astroturfing, with lobby groups posing as grassroots organisations
• Evidence of scam ads, impersonation, and emerging AI-generated content

The Senate report emphasised that the complex and multifaceted nature of climate mis/disinformation requires a systemic response that includes governments, knowledge institutions, civil society, industry and particularly greater accountability from media companies and digital platforms.

This inquiry echoes the findings of other inquiries and international campaigns. Australia is a signatory to the 2023 UNESCO Global Declaration on Information Integrity Online (Global Declaration), which deals with information integrity as a whole. In 2025 COP30 was the first COP to include information integrity as a core agenda item. Australia has not yet signed the Declaration on Information Integrity on Climate Change (Declaration) which calls on endorsing countries to promote the integrity of information on climate change at the international, national and local levels.

View the original article published by the Australian Internet Observatory 

SEE ALSO

Critical research shapes national response to climate and energy misinformation

words environment, ecology, green energy overlaid on image of person on phone
Getty Images/Arkadiusz Wargula

Critical research shapes national response to climate and energy misinformation

Author ADM+S Centre
Date 31 March 2026

The Australian Government has released a major new report, The Integrity Gap: Restoring Trust in the Climate and Energy Debate, in response to the growing prevalence and impacts of misinformation and disinformation in public discussions on climate and energy.

The report from the Senate Select Committee on Information Integrity on Climate Change and Energy draws extensively on work from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) and QUT’s Digital Media Research Centre (DMRC), incorporating evidence across key areas including platform transparency, data access, media literacy, and regulatory reform.

ADM+S researchers from QUT, University of Queensland and the University of Melbourne played a key role in informing the inquiry through formal submissions (ADM+S Submission 21 and DMRC Submission 60), expert testimony, and sustained engagement. Their work is directly referenced in discussions of platform accountability, transparency, and research infrastructure. 

“Climate change is the defining challenge of our time, and understanding how information about it is shaped, distorted, and targeted is crucial. This report makes clear that investment in humanities and social sciences is foundational to any credible response,” said Professor Daniel Angus, Chief Investigator at ADM+S at QUT and Director of QUT’s Digital Media Research Centre (DMRC).

Some of the evidence presented to the Committee was informed by research from the ADM+S Australian Ad Observatory project, which highlighted examples of astroturfing, transparency gaps, and the widespread circulation of misleading information during election advertising. It found that misinformation, scare tactics, and messages exploiting cost-of-living pressures on everyday Australians were central to both online and other election advertising.

The report also recognises the Australian Internet Observatory (AIO) as a necessary national capability to track hidden digital influence ecosystems and provide independent transparency and accountability of platforms.

“The inclusion of the Australian Internet Observatory signals a maturing policy response. We are seeing recognition that platform power cannot be governed without independent, national-scale capacity to observe and analyse it,” said Professor Angus.

Established through an initiative from the ADM+S, AIO is a co-investment partnership with the Australian Research Data Commons (ARDC) through the HASS and Indigenous Research Data Commons and a cohort of Australian universities. The AIO is designed to provide independent, large-scale insight into digital platforms and influence ecosystems. Its inclusion in the report signals a shift toward evidence-based infrastructure for understanding and responding to online harms.

“For over a decade, humanities and social science researchers have warned that opaque platform systems can undermine public debate. This report shows that governments are finally catching up, but only if they are willing to invest in the infrastructure and expertise needed to act.”

Several of the report’s central recommendations align directly with areas the ADM+S has championed and led nationally, including:

  • Increased funding for social sciences research relating to threats to climate and energy information integrity including potential solutions. (Recommendation 6)
  • Funding models for independent monitoring support (for example, via the Australian Internet Observatory) to track hidden digital influence ecosystems and provide independent transparency and accountability of platforms. (Recommendation 7)
  • Broaden the Australian Curriculum ‘digital literacy’ general capability to strengthen media literacy through the regular Education Ministers’ Meeting curriculum review cycle (Recommendation 8)
  • Incorporate the information integrity framework with examples from the climate and energy domain in the upcoming National Media Literacy Strategy (Recommendation 9)

Read the full report: The Integrity Gap: Restoring Trust in the Climate and Energy Debate – Parliament of Australia 

SEE ALSO

International summit on the future of public service media in the platform era

Pictured from front Prof Georgina Born (UCL), Assoc Prof Kylie Pappalardo (QUT) & Dr Jessica Balanzategui (RMIT). Image: Mathew Warren
Prof Georgina Born (UCL), Assoc Prof Kylie Pappalardo (QUT) & Dr Jessica Balanzategui (RMIT) speaking at the Public Service Media Summit. Image: Mathew Warren

International summit on the future of public service media in the platform era

Author ADM+S Centre
Date 26 March 2026

Internationally renowned scholars Professor Georgina Born (University College London) and Associate Professor Diaz (Carnegie Mellon University) joined leading international and Australian experts in Melbourne this month for a series of high-level discussions on the future of public service media in the platform era.

Hosted by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), this event sought to address the challenges faced by public service media in an era of technological change, highlight the importance of ensuring a robust system committed to public service values and the importance of research and development focused on the public good. 

The program was convened by Professor Georgina Born (UCL), Professor Mark Andrejevic (Monash University), Professor Fernando Diaz (Carnegie Mellon University) and Associate Professor James Meese (RMIT University), and formed part of a broader international collaboration around public-interest media infrastructure.

Professor Born and Associate Professor Diaz also attended a week of workshops organised by Associate Professor James Meese that brought together leading computer scientists and humanities scholars, post-doctoral fellows and PhD candidates across the ADM+S RMIT node to share work on recommender system algorithms, media distribution and search.

A public panel event at the State Library of Victoria on 10 March attracted around 60 attendees, highlighting strong community interest in the future of media and democracy. 

The discussion featuring Professor Born and Professor Victor Pickard (University of Pennsylvania) in conversation with Professor Andrew Kenyon (University of Melbourne), examined how regulation, alternative algorithms and new distribution systems could support public service media in an increasingly platform-dominated landscape.

Building on this discussion, a Public Service Media Summit was held on 12 March. The summit convened an international cohort of speakers, including representatives from the European Broadcasting Union, RNZ, the ABC, the Responsible Innovation Centre (hosted at the BBC), and leading universities across Europe, the United States and Australia.

Across the summit, participants explored how public service media can respond to rapid technological change, particularly the rise of artificial intelligence and platform-based distribution, while maintaining core democratic values such as universality, accessibility and independence.

ADM+S Chief Investigator James Meese said “It was a pleasure to welcome leading thinkers from across the world to Melbourne to discuss the future of public service media.”

“By convening this week of events, ADM+S has made a key contribution to the global debate around these important challenges”.    

“The week also provided a valuable opportunity for ADM+S colleagues from across the Centre to build new connections, while our early career researchers benefited from Georgie and Fernando’s generous engagement with their work.” 

Summit speakers included: Professor Georgina Born (UCL), Sasha Scott (European Broadcasting Union), Victor Pickard (University of Pennsylvania), Michał Głowacki (University of Warsaw), Patrick Crewdson (RNZ), David Sutton (ABC), Fernando Diaz (Carnegie Mellon) and Helen Jay (University of Westminster/BBC).

Following these discussions, a follow-up global summit is scheduled to take place in London in September 2026.

SEE ALSO

‘Manners for machines’: how new rules could stop AI scrapers destroying the internet

graphic with pink and yellow saying "cc signals"
T.J. Thomson, CC BY-NC

‘Manners for machines’: how new rules could stop AI scrapers destroying the internet

Authors  T.J. Thomson, Daniel Angus, Jake Goldenfein and Kylie Pappalardo
Date 26 March 2026

Australians are among the most anxious in the world about artificial intelligence (AI).This anxiety is driven by fears AI is used to spread misinformation and scam people, anxiety over job losses, and the fact AI companies are training their models on others’ expertise and creative works without compensation.

AI companies have used pirated books and articles, and routinely send bots across the web to systematically scrape content for their models to learn from. That content may come from social media platforms such as Reddit, university repositories of academic work, and authoritative publications like news outlets.

In the past, online scraping was subject to a kind of detente. Although scraping may sometimes have been technically illegal, it was needed to make the internet work. For instance, without scraping there would be no Google. Website owners were OK with scraping because it made their content more available, according with the vision of the “open web”.

Under these conditions, scraping was managed through principles such as respect, recognition, and reciprocity. In the context of AI, those are now faltering.

A new online landscape

Many news outlets are now blocking web scrapers. Creators are choosing not to use certain platforms or are posting less.

Barriers are being put in place across the open web. When only some can afford to pay to access news and information, then democracy, scientific innovation and creative communities are all harmed.

Exceptions to copyright infringement, such as fair dealing for research or study, were legislated long before generative AI became publicly available. These exceptions are no longer fit for purpose in an AI age.

The Australian government has ruled out a new copyright exception for text and data mining. This signals a commitment to supporting Australia’s creative industries, but leaves great uncertainty about how creative content can be managed legally and at scale now that AI companies are crawling the web.

In response, the international nonprofit Creative Commons has proposed a new voluntary framework: CC Signals.

Creative Commons licences allow creators to share content and specify how it can be used. All licences require credit to acknowledge the source, but various additional restrictions can be applied. Creators can ask others not to modify their work, or not to use it for commercial purposes. For example, The Conversation’s articles are available for reuse under a CC BY-ND licence, which means they must be credited to the source and must not be remixed, transformed, or built upon.


Summary of CC licences.
Creative Commons

How would CC Signals work?

The proposed CC Signals framework lets creators decide if or how they want their material to be used by machines. It aims to strike a balance between responsible AI use and not stifling innovation, and is based on the principles of consent, compensation, and credit.

Simplistically, CC Signals work by allowing a “declaring party” – such as a news website – to attach machine-readable instructions to a body of content. These instructions specify what combinations of machine uses are permitted, and under what conditions.

CC Signals are standardised, and both humans and machines can understand them.

This proposal arrives at a moment that closely mirrors the early days of the web, when norms around automated access (crawling and scraping) were still being worked out in practice rather than law.

A useful historical parallel is robots.txt, a simple file web hosts use to signal which parts of a site can be accessed by the bots that crawl the web and look for content. It was never enforceable, but it became widely adopted because it provided a clear, standardised way to communicate expectations between content hosts and developers.

CC Signals could operate in much the same spirit. But, as with any system, it has potential benefits as well as drawbacks.

The pros

The framework provides more nuance and flexibility than the current scrape/don’t scrape environment we’re in. It offers creators more control over the use of their content.

It also has the potential to affect how much high-quality content is available for scraping. Without access to high-quality data, AI’s biases are exacerbated and make the technology less useful.

The framework might also benefit smaller players who don’t have the bargaining power to negotiate with big tech companies but who, nonetheless, desire remuneration, credit, or visibility for their work.

The cons

The greatest challenge with CC Signals is likely to be a practical one – how to calculate, and then enforce, the monetary or in-kind support required by some of the signals.

This is also a major sticking point with content industry proposals for collective licensing schemes for AI. Calculating and distributing licence fees for the thousands, if not millions, of internet works that are accessed by generative AI systems around the world is a logistical nightmare.

Creative Commons has said it plans to produce best-practice guides for how to make contributions and give credit under the CC Signals. But this work is still in progress.

Where to from here?

Creative Commons asserts that the CC Signals framework is not so much a legal tool as an attempt to define “manners for machines”. Manners is a good way to look at this.

The legal and practical hurdles to implementing effective copyright management for AI systems are huge. But we should be open to new ideas and frameworks that foreground respect and recognition for creators without shutting down important technological developments.

CC Signals is an imperfect framework, but it is a start. Hopefully there are more to come.The Conversation

T.J. Thomson, Associate Professor of Visual Communication & Digital Media, RMIT University; Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology; Jake Goldenfein, Associate Professor, Melbourne Law School, The University of Melbourne, and Kylie Pappalardo, Associate Professor, School of Law, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Hidden ads and misleading claims flood election feeds: report

2025 Australian Election Advertising on Social Media

Hidden ads and misleading claims flood election feeds: report

Author QUT Media
Date 24 March 2026

A report launched today (Tuesday March 24) reveals widespread transparency gaps, misleading claims and covert political campaigning across social media platforms during the 2025 Australian federal election, raising concerns about what Australian voters are really seeing online.

Led by Prof Daniel Angus from the ARC Center of Excellence for Automated Decision-Making and Society at QUT’s Digital Media Research Centre, the 2025 Australian Election Advertising on Social Media report draws on real-world advertising data collected directly from voters’ smartphones and highlights and urgent need for electoral law reform.

Professor Angus said the results showed how difficult it had become for voters, regulators and journalists to see who is trying to influence political debate online and how. It also raised concerns about artificial intelligence as a political tool.

“Online political advertising is largely invisible to public scrutiny,” Professor Angus said.

“Yet our research shows voters are being targeted with political messages that are difficult to track, often poorly disclosed, and in many cases misleading or deliberately decontextualised.”

The report recommends:

  • National truth in political advertising laws to cover misleading factual claims
  • Real-time disclosure of third-party funding and donors
  • Consistent blackout rules across broadcast and digital media
  • Greater platform accountability to stop the deliberate mislabelling of lobby groups as ‘community organisations’ or ‘non-profits.’
  • Sustained investment in independent monitoring infrastructure, such as the Australian Internet Observatory

“Australia’s electoral laws were designed for an analogue era,” Professor Angus said.

“If we want to protect democratic integrity, regulation, transparency and independent oversight must catch up with the realities of digital campaigning.”

Unlike platform ad libraries, the study captured real-world advertising exposure by recruiting participants in key electorates to install the Mobile Online Advertising Tool (MOAT) on their smartphones in the weeks leading up to election day.

This allowed researchers to collect more than 22,000 ads, providing rare insight into what Australians actually saw on platforms like Facebook, Instagram and TikTok.

Professor Angus said this method was critical to understanding modern election campaigning.

“Most political content online is unpaid and organic, and even paid advertising is often poorly disclosed,” he said.

“By collecting ads directly from participants’ devices, we were able to see how political influence operates in practice, not just what platforms choose to report.”

The report found that while political advertising made up only a small proportion of total ads, it was dominated by third-party groups, many of which appeared to present themselves as grassroots organisations while obscuring their political or financial backing, a practice known as astroturfing.

Researchers also identified widespread use of misleading and decontextualised claims, particularly around cost-of-living issues, by both major political parties and third-party advertisers.

The study further detected scam advertisements and impersonation, raising concerns about the growing use of artificial intelligence and deepfake-style content in political messaging.

“These practices undermine trust and make it harder for voters to make informed decisions,” Professor Angus said.

“Without stronger oversight, this kind of opaque campaigning risks becoming the norm rather than the exception.”

The study was conducted through the Australian Ad Observatory, part of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S). The research was led by Professor Angus in collaboration with colleagues from Monash University, the University of Queensland and the University of Melbourne, with participant recruitment supported by the Susan McKinnon Foundation.

Read the full report 2025 Australian election advertising on social media: An Australian Ad Observatory report

SEE ALSO

Voice AI, authenticity and media: share your views on AI-Generated voices

Voice print and silhouette of human head

Voice AI, authenticity and media: share your views on AI-Generated voices

Author ADM+S Centre
Date 19 March 2026

From podcasts and audiobooks to radio and voiceovers, audio media plays a big role in how many of us access news, entertainment and information every day. But have you ever heard a synthetic or AI-generated voice — and how do you feel about this technology?

Researchers from ADM+S are inviting Australians to take part in a new survey to find out how everyday Australian adults think and feel about Voice AI.

The study is part of a broader research project called Generative Authenticity, which examines how generative AI impacts on authenticity issues in media and cybersecurity.

Generative AI is playing an increasing role in media production. This includes the use of “Voice AI” — that is, generative AI technologies that synthesise, clone, and modify the human voice. 

Voice AI can be used to create voiceovers, podcasts, or audiobook readings, and can also contribute to problems like deepfakes. 

Researcher Dr Phoebe Matich, from ADM+S at QUT said the project is focused on understanding how everyday people experience these technologies. 

“In the Generative Authenticity project, we’re keen to design our media and Voice AI research with a central focus on ordinary folk’s perceptions and priorities regarding audio GenAI technologies,” Dr Matich said.

“We’re really excited to hear about people’s understandings and experiences with Voice AI, as well as their main areas of concern and the media industries they would like us to focus on.”

Despite their growing presence, researchers say we still know very little about how audiences understand, experience or respond to AI-generated voices in audio media.

Dr Matich said the findings will directly shape future research priorities.

“The findings of this survey will help us figure out which types of audio media, content, and situations should be our biggest priorities in future research – whether that’s increasing public and professional understandings of media manipulation and verification, protecting integrity in journalism, music, or podcasting, supporting ethical storytelling uses of GenAI, or ensuring media processes and personalities are as transparent as possible.”

The research team is now conducting a short online survey for Australian adults to better understand public attitudes and experiences with AI-generated voices in audio media.

Take part in the survey

The research team is conducting a short online survey for Australian adults to better understand public attitudes and experiences with AI-generated voices in audio media.

Participants will be asked about:

  • audio listening habits;
  • level of experience with AI;
  • whether they think they have heard a synthetic voice; and 
  • the contexts and conditions in which you feel most strongly about Voice AI

The results will help researchers design future studies examining how generative AI is reshaping media production, audience trust, and online authenticity.

Your participation matters

Participation in the survey is voluntary and can be completed anonymously. Participants who are interested in future research on this topic can also choose to provide their email address to be contacted about follow-up studies planned for later in 2026.

The study has received ethics approval from Queensland University of Technology (Ethics Approval Number 10602).

If you listen to podcasts, radio, audiobooks or other audio media, the researchers would love to hear from you.

Read more about the study Voice AI Authenticity and Media

For more information about the study, contact the research team at p.matich@qut.edu.au

SEE ALSO

Australia may ban infant formula advertising. Here’s what the online ads actually say

Baby drinking formula
Han Nguyen/Pexels

Australia may ban infant formula advertising. Here’s what the online ads actually say

Authors Madeleine Stirling , Christine Parker and Daniel Angus 
Date 12 March 2026

Recently, the federal government released a consultation paper seeking input on whether it should introduce legislation to prevent or restrict infant formula marketing in Australia. The consultation is open for submissions until April 10.

Until February 2025, Australian formula brands were under a voluntary agreement not to advertise formula products for babies aged 0 to 12 months, in order to support and protect breastfeeding.

With recent data revealing lower-than-desired rates of breastfeeding in Australia, the government has chosen not to renew the voluntary arrangement and is exploring tougher measures.

These moves don’t explicitly promote breastfeeding. Rather, they aim to curtail marketing practices that position formula as an equivalent or preferable alternative.

Our analysis of online formula ads targeting parents in Australia reveals how companies prey on parents’ anxiety – and the problems with having a voluntary agreement.

What’s wrong with advertising formula?

Breastfeeding has extensive health benefits for both mother and child. These include protection against gastrointestinal and respiratory infections for newborns, reduced risk of obesity and type 2 diabetes later in life, and reduced risk of mothers developing ovarian and breast cancer.

Because of this, Australian guidelines recommend exclusive breastfeeding for the first six months. The World Health Organization recommends continued breastfeeding for the first two years.

However, while breastfeeding rates are high at birth in Australia, they quickly drop. Only 37% of babies were reported to have been exclusively breastfed by six months in 2022.

There are various reasons why mothers choose not to breastfeed, but the advertising of formula products is a concern. It’s been shown to confuse parents about the nutritional benefits of formula versus breastmilk, reduce breastfeeding initiation and duration, and present formula as a more favourable solution in the face of breastfeeding challenges (many of which can be overcome with the right support).

Formula is valuable. It’s often an essential option for those unable to breastfeed. However, it’s also expensive and can financially strain families, particularly during the first year of a child’s life.

Online advertising also operates very differently from traditional ads. Online, ads target people based on their searches, browsing histories or life events. They can reach new or expecting parents precisely when they might be most uncertain or vulnerable to suggestion.

What do the ads for infant formula say?

The ADM+S Australian Ad Observatory, which we and our colleagues run, collects data on the ads Australians encounter online to better understand how digital advertising systems operate.

In 2022 we collected ads from 1,200 Australian adults who voluntarily installed a plug-in on their browser to scrape ads while they were scrolling Facebook. From 2025 we’ve been collecting ads from around 300 Australians. They use an app to share the ads that appear while they scroll Facebook, Instagram, TikTok and YouTube on their phones.

Screenshots of various formula ads collected by the Australian Ad Observatory.
Supplied

For this analysis, we examined ads collected in both years, and identified a total of 158 ads promoting formula products from local and international brands.

We found brands used various tactics to appeal to parents. Some highlighted positive customer reviews or offered free downloadable cookbooks and “house baby proofing” guides.

Other ads were in partnership with prominent retailers, directing people to online shopping interfaces through “buy now” buttons.

Most formula brands made some kind of claim regarding the nutritional or behavioural benefits of their products. These claims prey on the anxiety parents commonly feel to ensure their children are meeting nutritional, sleep and developmental milestones.

Some manufacturers claimed their product was fortified with vitamins and prebiotics that would “improve gut health” or help a toddler sleep longer at night.

Others claimed their formula would provide mothers with “a moment of calm” or strengthen their toddler’s immune system. This is despite scientific evidence that shows breastmilk can provide necessary antibodies to a sick child in real time.

Starting them young

Many of the ads used pictures of very young toddlers who could easily be mistaken for infants aged 12 months or under. In one instance we discovered an ad clearly promoting formula designed for babies under 12 months.

This, alongside the use of images of very young children to promote “toddler milk” (formula marketed for children aged 1–3 years), highlights some of the issues with a voluntary advertising agreement.

Since toddler milk marketing was exempt, brands could target parents of newborns. They’d gain brand awareness and consumer trust, which could then result in a parent choosing to start their child on formula instead – or earlier than they otherwise would.

Enforcement has also been an issue. The consequences for breaching the agreement – publishing the breach on the Department of Health website – are not considered meaningful enough by the Australian Competition and Consumer Commission.

At the same time, the digital advertising environment provides very little visibility into what marketing is actually circulating or who is exposed to it.

Outside of specialised research tools, such as our Ad Observatory and the Australian Internet Observatory, there’s no systematic way to observe infant formula ads that appear on personalised social media feeds.

What might the government end up doing about it?

The government is considering the following options:

  1. keep the status quo – no regulation
  2. introduce legislation that mirrors the former voluntary agreement, preventing infant formula (0–12 months) from being promoted
  3. introduce legislation that also limits toddler milk marketing (1–3 years).

We’ve provided all our data to the government to aid the decision-making process. However, while the ads we found are a peek behind the curtain, they likely underrepresent the scale of formula marketing happening online.

Infant formula can be an essential and sometimes life-saving intervention for families who need it. But health interventions don’t depend on persuasive advertising to fulfil their purpose.

The real policy question is whether a product designed to support infants should be promoted through the same marketing systems that sell snack foods, cosmetics and financial products.


Acknowlegement: The Australian Ad Observatory is a team effort. The authors wish to acknowledge the contribution of Khanh Luong, Giselle Newton, Phoebe Price-Barker, Lara Skinner, Abdul Obeid and Dan Tran.The Conversation

Madeleine Stirling, Research Assistant, ARC Centre of Excellence for Automated Decision-Making & Society, The University of Melbourne; Christine Parker, Professor of Law, The University of Melbourne, and Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Mapping the Digital project helping locals secure better digital services and greater control over how they connect

Rural Australia

Mapping the Digital project helping locals secure better digital services and greater control over how they connect

Author ADM+S Centre
Date 11 March 2026

Five years of collaboration with remote First Nations communities has helped locals secure better digital services and greater control over how they connect. Since 2021, the Mapping the Digital Gap project has been addressing the lack of data around online access and digital inclusion in remote First Nations communities, while supporting Telstra, industry and government to address the gaps.

Established as a supplementary project to the Australian Digital Inclusion Index through the ARC Centre of Excellence for Automated Decision‑Making and Society and funded by Telstra, the research showed three in four First Nations people in remote and very remote communities are digitally excluded.

This means they face significant barriers to accessing and using online services needed for daily social, economic and cultural life.

First Nations co-investigator, Professor Lyndon Ormond-Parker from RMIT University, said as the world moves online, access to basic services like education, banking, welfare and healthcare now tend to require a device and reliable connectivity.

“You have to look at the communities that are getting left behind,” he said.

“For Aboriginal and Torres Strait Islander communities living very remotely in Australia, access to infrastructure, basic services and communication is often very limited. This creates a significant digital divide.”

Digital exclusion can mean unreliable or unaffordable connections, limited access to suitable devices and few opportunities to build digital skills to safely engage online.

The consequences are far‑reaching, from difficulties accessing telehealth and online learning to challenges dealing with government services and emergency information.

Mapping the Digital Gap was created to fill a critical gap in national data on communications and media use in remote First Nations communities.

The project is building a detailed account of digital inclusion in these regions, tracking changes over time, informing local strategies and guiding government and industry investment.

All the ways community members access and share information are considered – from internet to phones, TV, radio and face-to-face communication.

Lead investigator Associate Professor Daniel Featherstone said the project gives communities better tools to access essential services and make informed decisions in an increasingly digital society.

“By mapping all ways people communicate, we’re seeing how place-based solutions can best address local context and needs rather than relying on one-size-fits-all models,” he said.

Partnership with local organisations is central
Working with First Nations organisations across remote communities, the team employs community‑based co‑researchers to collect and interpret data.

Indigenous leadership is embedded at every stage, from shaping research questions to deciding how findings are used.

The Mapping the Digital Gap reports have been a powerful advocacy tool for the Wujal Wujal community in Far North Queensland.

Former Wujal Wujal Aboriginal Shire Council CEO Kylie Hanslow said the research reports helped them advocate for improved services.

“They were one of the main resources we relied on for the increase in the speeds and the requirements for improvements to digital connectivity,” she said.

Ormond-Parker said the work has highlighted the need for coordinated action.

“We’ve seen it’s really important to ensure industry, governments and communities are on board, and that these initiatives are run and led by the communities themselves,” he said.

Five years in, Mapping the Digital Gap is reshaping how digital inclusion in remote Australia is understood.

By generating detailed, community‑driven evidence, it is helping remote First Nations communities secure better services, strengthen local decision making and influence national policy on digital inclusion.

The next Mapping the Digital Gap report is expected towards the end of 2026.

SEE ALSO

Enhancing primary years AI literacy and ethics with a voice AI chatbot experience

Two kids using a laptop to communicate with an AI assistant.
Portishead1/GettyImages

Enhancing primary years AI literacy and ethics with a voice AI chatbot experience

Authors ADM+S Centre
Date 10 March 2026

The ARC Centre of Excellence for the Digital Child (Digital Child), the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and QUT Gen AI Lab have partnered on a new project designed to help young children explore key ethical challenges associated with voice AI Chatbots. 

Working in collaboration with children, the pilot phase of the project ‘“Making AI Friends? Enhancing Primary Years AI Literacy with a Voice AI Chatbot Experience”, has focused on a specific ethical problem known as sycophancy the tendency of some AI systems to “always agree” with users, prioritising likability over accuracy, critical thinking or ethical judgement. 

Dr Henry Fraser from QUT, said the project addresses a growing issue in how AI systems interact with users.

“Adults and children live in the same world, and that includes the digital world. Building a better and safer digital world is just as relevant to children as it is to adults – maybe even more relevant.” 

Director of the ARC Centre of Excellence for the Digital Child at QUT, Distinguished Professor Susan Danby, said the project places children’s perspectives at the centre of AI design.

“Children bring curiosity and insight to their everyday interactions, including their digital worlds, whether in school or home. They have the right to be heard, and an opportunity to provide them a genuine role in shaping the digital experiences they use.”

“When we work alongside children, we can create technologies that respect their capabilities and help them navigate the digital world safely and confidently.”

The pilot project activity combined participatory learning and co-design activities with children aged 6-9 years old, designed and led by Digital Child researchers, with a real-time interactive AI ‘game’ developed by the QUT Gen AI Lab, and an explainer animation created by ADM+S with Maria Pinto.

The game embeds custom voice agents in a sandbox environment, allowing children to compare how differently designed chatbots respond to questions, ideas and ethical dilemmas.

 

 

 

In an initial workshop, participants explored how a bot designed to “always agree” responds differently from one designed to be “careful” in its responses. Children were then invited to suggest how chatbots might respond in better ways, and to imagine ways of helping other young children understand and explore the possibilities and limitations of chatbots. 

Professor Danby said collaborating directly with children is essential as AI becomes embedded in everyday life.

“Children understand their digital worlds often in ways adults often don’t.”

“ As AI becomes part of everyday life and shapes the digital tools they use, collaborating with children about AI helps guide how these technologies are designed and respects children’s rights and helps them move through digital spaces with safety and confidence.”

The project focuses on voice as a primary interface with Generative AI for young children. AI voice interfaces are increasingly embedded in toys and other ’smart’ objects encountered by children and families. 

The team will now build on this pilot to develop publications, educational materials, and future iterations of the game and workshop activity. 

This project is being led by Dr Henry Fraser (QUT), Associate Investigator at the ADM+S, aligned with the Critical Capabilities for Inclusive AI project, with Prof Tama Leaver (Curtin University), Dist. Prof Susan Danby (Centre Director, QUT), Dr Kristy Corser and Dr Irina Silva (QUT), and Dr Suzanne Srdarov (Curtin University), from Digital Child; and Dist. Prof Jean Burgess (Associate Director), William He and Kathy Nickels (QUT), from ADM+S.

Special thank you to Maria Pinto for assistance with the video script and providing voice-over. 

Alex and the AI Chatbot

View the AI explainer video Alex and the AI Chatbot video

SEE ALSO

Building international collaborations with Peking University

Delegates from Peking University and ADM+S sitting at table
Peking University delegation meet with ADM+S Members (Image provided)

Building international collaborations with Peking University

Authors ADM+S Centre
Date 10 March 2026

In early February, the ARC Centre of Excellence for Automated Decision-Making and Society at  RMIT hosted a delegation of professors and PhD students from Peking University’s School of Journalism and Communication, including School Dean Professor Chen Gang. 

Peking University stands at the forefront of global academic research and the School is consistently ranked first among Chinese universities for journalism and communication.

The delegation wase officially welcomed by Professor Tim Marshall, Deputy Vice-Chancellor & Vice-President (Design and Social Context); Associate Dean Lisa Waller, School of Media and Communication; and Distinguished Professor Julian Thomas, ADM+S Centre Director. 

The visit provided an opportunity to explore potential collaboration between Peking University and both ADM+S and RMIT. Planning is now underway to co-deliver a range of activities, including collaborative research, professional development, student and early career research training, curriculum development, opportunities for staff exchanges, and joint events.

As part of the visit, three Peking University students participated in the ADM+S Summer School, received mentoring from ADM+S researchers and professional staff, and attended the social trivia night hosted by Chief Investigator Professor Daniel Angus. 

To further develop the partnership, the delegation invited ADM+S students and staff to attend the upcoming Peking University Summer Program in July 2026. The program, delivered in English, will focus on topics including advertising and AI, internet governance, AI and human interaction, and media and society Participants will also visit leading technology companies such as ByteDance and Tencent, alongside cultural sites including the Summer Palace.

ADM+S Chief Operating Officer Nick Walsh said, “We were delighted to host this delegation and the visit was a terrific success. It created meaningful opportunities to strengthen connections, share ideas, and identify areas for future collaboration.”

“It was particularly wonderful seeing the students from Peking University engaging with our researchers at the annual Summer School. We look forward to working closely together in the years ahead.”

SEE ALSO

Australia’s official plan for AI safety isn’t much more than a single dot point. Will it be enough?

AI Generated image visualising the benefits and flaws of large language models.
Google DeepMind/Pexels

Australia’s official plan for AI safety isn’t much more than a single dot point. Will it be enough?

Authors José-Miguel Bello y Villarino and Henry Fraser
Date 6 March 2026

Last week, one of Australia’s leading artificial intelligence (AI) researchers, Toby Walsh, warned Australia’s lack of guardrails for AI is putting young people at risk of being “sacrificed for the profits of big tech”.

Walsh’s remarks came after the government scrapped its own proposal to establish an advisory body of AI experts. Instead, the government offered its National AI Plan, which, among others, stresses investment in data centres, telecommunications infrastructure, and workforce training.

The plan also envisages an “AI Safety Institute” (currently recruiting staff), and also some internal AI transparency measures for the public sector. Transparency results so far have not been great.

What does it all add up to for AI regulation in Australia?

What are other countries doing?

The European Union has attracted attention for its AI Act, which already prohibits such things as using AI systems to exploit vulnerable groups or individuals. However, Europe is struggling to implement rules on high-risk AI uses that are not prohibited.

Several governments in Australia’s region are also passing AI laws, mainly to give themselves the powers to respond when they deem it necessary.

South Korea, Japan and Taiwan – none of them minor AI players – all have newly minted laws, which are meeting the expected pushback from industry.

Not everyone has comprehensive rules

There are countries without any kind of comprehensive AI regulation, including the United States and the United Kingdom.

In the US, president Donald Trump has even prohibited most state-based regulation in relation to private AI uses. Despite the anti-safeguards language, the government has quietly retained strong safeguards for federal use of AI.

The UK has followed an even more erratic path, to end up in a similar place to Australia. Incapable of deciding what to do, it has tried to provide technical (non-legal) safeguards. This has been done through the creation of the first AI Safety (now Security) Agency, hailed by some, derided by others.

The dilemma of control

The differences in approach between countries are not surprising. Governments face the dilemma of control described by English technology scholar David Collingridge almost 50 years ago:

“when [regulatory] change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming.”

What’s more, Australia has limited regulatory clout regarding AI. It is not a significant global AI player in the way it is, for example, in mining, so its influence is limited.

Facing these uncertainties, what should Australia be doing?

Australia’s plan for AI safety

One certainty is that erratic behaviour is not a great option. We have good evidence that regulatory predictability matters for innovation.

In a recent speech, Australia’s Assistant Minister for Science, Technology and the Digital Economy, Andrew Charlton acknowledged this:

“one of the important insurance policies we have is regulatory certainty, underpinned by clear principles with broad buy-in.”

So, what is the government’s plan?

The official plan to keep Australians safe is a section (action 7) in the National AI Plan. It argues existing Australian frameworks “can apply to AI and other emerging technologies”.

 

 

In other words, AI systems and tools can be covered by the rules we already have, such as consumer protections against all misleading and deceptive practices. The government suggested this option back in 2024. (We have previously argued this view, favoured by the Productivity Commission, is not well supported and was not our preferred option.)

Problems with the plan

However, the challenges for applying existing laws, which the government identified years ago, have not gone away.

As we identified in 2023, the existing regulatory frameworks have limitations when it comes to AI.

AI systems are complex, they can act semi-autonomously, and it can be difficult to understand why they do what they do. This makes it very hard to effectively attribute liability or responsibility for AI risks or harms using existing laws and processes.

Regrettably, those limitations have not been addressed systematically – if at all.

Fragmented rules and limited resources

As things stand, the regulatory landscape is highly fragmented and uncertain.

For instance, there are at least 21 mandatory (or quasi-mandatory) state and federal policies about the use of AI in government. Courts have so far had little opportunity to clear things up, with almost no test cases in crucial areas of existing law, including negligence, administrative law, discrimination law, and consumer law.

The new plan is accompanied by a clear commitment to monitor the development and deployment of AI “and respond to challenges as they arise, and as our understanding of the strengths and limitations of AI evolves”.

The issue is: how will that monitoring happen? Will the government really “empower every existing agency across government to take responsibility for AI”?

Dealing with issues such as privacy, consumer protection, anti-discrimination will take money and commitment and a degree of coordination between agencies we have not witnessed to date.

An uncertain future

For predictability, signals matter. A lot.

If there is a change in government in the US in 2028, will that change how Australia regulates AI – in the same way the beginning of the Trump presidency coincided with the abandonment of Australia’s mandatory AI guardrails proposals?

Is a laissez-faire regulatory approach creating predictability, when we have so many stalled and part-completed regulatory processes?

The government seems to expect courts, government agencies, businesses and individuals to work out on their own how to retrofit old laws and institutions to a new technological landscape.

There is some hope for regulation of automated decision-making in the public sector (promised after the Robodebt Royal Commission). For the rest, it’s a “wait and see” approach to AI regulation. We’ll have to wait and see if it works.The Conversation

José-Miguel Bello y Villarino, Senior Research Fellow, Sydney Law School, University of Sydney and Henry Fraser, Research Fellow in Law, Accountability and Data Science, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Mathew Warren receives RMIT 2025 Research Service Award 

Mathew Warren receiving the Research Service Award for Collaboration (Individual) from Distinguished Professor Calum Drummond AO.
Mathew Warren receiving the Research Service Award for Collaboration (Individual) from Distinguished Professor Calum Drummond AO Image: RMIT Photographer.

Mathew Warren receives RMIT 2025 Research Service Award 

Author ADM+S Centre
Date 6 March 2026

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is thrilled to congratulate Mathew Warren, who has been recognised with the RMIT 2025 Research Service Award for Collaboration (Individual).

The annual RMIT Research Awards ceremony was held on 5 March 2026 at the Capitol Theatre in Melbourne, dedicated to celebrating the achievements of the RMIT research community and research support staff.

The Research Awards invited peers to nominate those in their community who demonstrate tremendous effort in supporting and delivering successful research outcomes.

ADM+S Outreach and Partnerships Officer Mathew Warren was awarded the Research Service Award for Collaboration (Individual) in recognition of his outstanding leadership in coordinating major symposia that brought together researchers, HDR candidates, and external stakeholders from across Australia and internationally to tackle urgent challenges in AI and automated decision-making. 

Mathew said he was honoured to receive the recognition.

“I’m very flattered and humbled to receive this kind of recognition from RMIT. ADM+S has a small team of incredibly talented and dedicated professional staff working behind the scenes.”

“Everything we do is a team effort, so this award really belongs to the whole squad.”

Through his support, these initiatives have generated significant outcomes in knowledge exchange, network building, and sustained collaboration, exceeding all expectations.

Announcing the award, Distinguished Professor Calum Drummond AO, Deputy Vice-Chancellor Research and Innovation and Vice-President, said that deciding a winner for this category was not an easy task.

“The selection panel received many strong nominations from across the organisation. After careful consideration, Mathew’s outstanding effort in fostering collaboration was selected as the winner.”

“His dedication has made a significant impact on the research community.”

All awards were presented by Distinguished Professor Calum Drummond, AO the Deputy Vice-Chancellor in Research and Innovation, and Vice-President of RMIT University.

Learn more about the RMIT Research Service Awards and Prizes.

SEE ALSO

Are Google’s ‘preferred sources’ a good thing for online news?

A website tab with text "choose your preferred sources"
Image: T.J. Thomson

Are Google’s ‘preferred sources’ a good thing for online news?

Author T.J. Thomson and Aimee Hourigan
Date 5 March 2026

Why do you see the results you do when you search for information online? It’s a complex mix of what the source is, its relationships to other sources online, and your own past browsing history and device settings.

But this formula is changing. Rather than being passively served content that search engines decide is most relevant (or businesses have paid to have promoted), some big tech platforms have started providing users more control over what they see online.

Earlier this year, Google launched the Preferred Sources feature in Australia and New Zealand. Through it, users can select organisations that are “preferred” and whose content they’d like to see more of in relevant search results.

In response, a raft of organisations, from news outlets to big banks, have started inviting their audiences and customers to choose them, with instructions on how to use this feature. News outlets such as the ABC, News.com.au, RNZ and The Conversation have all done so, among many others.

If you decide to use this new feature, there are potential benefits – but there can be unintended outcomes as well.

Where do you get your news?

In Australia, more adults say they get news from social media (26%) than from online news websites (23%). This means that a feature like “preferred sources” might influence readers who get their news from search engines. But it won’t affect users who primarily get their news from social media apps.

Trading phones with someone and looking at their browsing history or recommended YouTube videos reveals just how much personalisation influences what we see online.

Big tech companies are known to harvest large amounts of data, making money in an attention economy from audience engagement. They also make money from knowing more about their users so they can sell this information to advertisers.

Much of the internet is governed by invisible algorithms – hidden rules dictating who sees what, for which reasons. Algorithms often prioritise content that is engaging and sensational, which is one reason why misinformation can flourish online.

As helpful as it can be to get recommendations of products to buy or Netflix shows to watch, based on your history, when it comes to voting and politics, recommendations become much more fraught.

Our own research has shown people’s online news and information environments are fragmented, complex, opaque, chaotic and polluted, and that users desire more control over what they see. But what are the potential impacts of this?

More control is good

At face value, more control over what we see online is a positive and empowering thing.

This rebalances the equation from the loudest, most popular, or wealthiest voices – or ones that manipulate algorithms the most – to the ones users are actually interested in hearing from.

It potentially also helps with cognitive overload. Rather than having to spend the time and mental energy to decide on a case-by-case basis whether each source you encounter is trustworthy, making this decision once for particular news brands or organisations can make engaging with search results more relevant and efficient.

But a lack of balance is risky

However, the voices people want to hear from aren’t necessarily the ones that are best for them. As with any choice, you need a level of maturity and critical thinking to act responsibly.

As data companies, search engines benefit from knowing ever more information about user behaviour and preferences. Knowing which media outlet you prefer may in some cases indicate your political party preferences. Knowing that you prefer sports news over celebrity news can help companies target you with advertising more effectively.

In addition, more choice could potentially affect the diversity of people’s media diets. Just like with food diets, if people rely too much on low-quality media, over time that may affect their opinions, attitudes and behaviours. This has important implications for democracies that rely on well-informed and engaged citizens to cast votes.

There’s also a risk in conflating news sources with other types of sources. Journalists at news organisations are often held accountable to professional codes of conduct that, for example, aim to prevent reporters from personally benefiting from their reporting.

In theory, this allows audiences to receive independent analysis on important topics with confidence that the source has fact-checked claims and doesn’t have a vested interest in the reporting.

But if you select a business – such as the blog of a hardware store or a bank – as a source, you don’t have those same guarantees around editorial codes of conduct and professional ethics.

Should you use this feature?

Overall, allowing users more control over what they see is a good thing. But appropriate governance and regulation – possibly championed by Australia’s Digital Platform Regulators Forum – is needed to ensure people’s privacy and that their source preferences aren’t unfairly monetised.

Being more involved in your media diet is a positive step, as is thinking about its balance and diversity.

Ensuring a mix of sources across types (think local, regional, national, and international) and varieties (political, social, sports, entertainment news, and so on) can lead to a better balance.

Also think about whether the sources you are relying on are based on opinions or on facts. Doing this and actively creating a high-quality media diet is better for you and for others in your community.The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Government AI transparency statements hard to find, new report finds

AI Transparency in Practice report cover

Government AI transparency statements hard to find, new report finds

Author ADM+S Centre
Date 26 February 2026

A new report published by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) has found that many Commonwealth government departments and agencies are failing to make their artificial intelligence (AI) transparency statements easily accessible or informationally meaningful, despite the requirement becoming mandatory in February 2025.

The report led by researchers at the University of Sydney assesses compliance with the Australian Government’s Policy for the responsible use of artificial intelligence in government, which requires in-scope entities to publish an AI transparency statement outlining their use of AI systems.

The analysis found that AI transparency statements are often difficult to locate and vary significantly in quality and detail. Very few were accessible via a clear, direct link, as recommended by the Digital Transformation Agency (DTA).

Researchers identified 30 government entities potentially within the scope of the Policy for which no AI transparency statement could be found, although considered out of scope by the DTA

While some published statements were detailed and informative, others did not comply with the requirements set out in the Standard for AI transparency statements.

The report concludes that without clearer publication practices and stronger compliance mechanisms, the policy risks falling short of its intended transparency and accountability goals.

Recommendations:

  • AI transparency statements should be published in one central location.
  • The DTA should reconsider the entities subject to the Policy and have an explicit list of the entities that are strictly bound by the policy.
  • The DTA should explore mechanisms to ensure that the policy and requirements are complied with, including by considering what consequences flow from non-compliance.
  • The Standard for AI transparency statements should be revised to ensure it cannot just be ‘formally’ complied with, without providing meaningful information.

This report was authored by Prof Kimberlee Weatherall, José-Miguel Bello y Villarino, and Alexandra Sinclair with research assistance provided by Shuxan (Annie) Luo from the University of Sydney node and aligns with the Regulatory Project at the ADM+S

Read the full report AI Transparency in Practice

SEE ALSO

The Federal Government announces free Wi‑Fi for 53 remote communities

Mapping the Digital Gap Co-researcher Guruwuy Ganambarr doing survey with resident Alissia Wirrpanda in Gäṉgaṉ Community, NT. Image: supplied

The Federal Government announces free Wi‑Fi for 53 remote communities

Author Aeden Ratcliffe, RMIT University Media
Date 24 February 2026

The federal government last week announced plans to install free public Wi‑Fi in a further 53 remote communities, in a move aimed at narrowing the digital divide for First Nations Australians.

The announcement follows ongoing fieldwork by ADM+S researchers at RMIT University, providing vital information about digital inclusion to help close the digital gap for First Nations communities.

First Nations Principal Research Fellow and co‑chair of the First Nations Digital Inclusion Advisory Group, Professor Lyndon Ormond‑Parker, said Friday’s announcement was a positive step towards closing the digital gap.

“Free public Wi‑Fi in these 53 communities will help fill a critical gap by providing a more affordable way to get online,” he said.

ADM+S research conducted at RMIT has found First Nations Australians are more than twice as likely to face digital exclusion as other Australians and there are nearly 700 communities and homelands without mobile connectivity.

Ormond-Parker said community‑wide Wi‑Fi services play an important role in meeting community needs for access to critical communications and online services.

Associate Professor Daniel Featherstone, who co-leads the ADM+S project Measuring Digital Inclusion for First Nations Australians, said the free Wi-Fi rollout reinforces years of research showing that “digital access is essential infrastructure for First Nations communities”.

He said limited infrastructure, low household connectivity and high reliance on pre‑paid mobile services make it much harder for people in remote communities to get online.

“In the 12 remote communities visited under our Mapping the Digital Gap research, nearly three in four people were impacted by digital exclusion,” Featherstone said.

“The biggest contributors to the digital gap were low rates of household connectivity and reliance on pre‑paid mobile services, with affordability another key factor.

“Free public Wi‑Fi begins to relieve some of that pressure, but it needs to be paired with investment in local infrastructure and affordable home connections if we’re serious about closing the digital gap.

“In the meantime, many remote communities still go without reliable internet and phone services, so there is a long way to go.”

Organisations and communities can use an interactive dashboard tracking First Nations digital inclusion to inform local decision making.

Access the First Nations Digital Inclusion Dashboard developed as part of the Australian Digital Inclusion Index project at the ADM+S

SEE ALSO

Designing for AI collaboration: ADM+S toolkit presented at international conference

Researcher presenting workshop to others with materials
Awais Hameed Khan participating in an interactive workshop at the IASDR Conference in Taipei.

Designing for AI collaboration: ADM+S toolkit presented at international conference

Author ADM+S Centre
Date 19 February 2026

Dr Awais Hameed Khan, Research Fellow at the University of Queensland node of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), recently presented a new publication Design Patterns for AI-Curated Content Toolkit at the 20th Biennial Congress of the International Association of Societies of Design Research (IASDR) in Taipei, offering practical interface design patterns to help researchers and practitioners create more contextually relevant, AI-curated content experiences.

Dr Khan said the response from researchers and practitioners highlighted the growing appetite for practical tools in this space.

“It was really amazing to see how well the AI curated content design patterns were received by the audience.”

“I had both researchers and practitioners reach out to me after my talk, sharing their ideas on how they would integrate this research into their own research practice” 

Developed in collaboration with ADM+S researchers Sara Fahad Dawood Al Lawati, Dr Damiano Spina, Dr Danula Hettiachchi and Senuri Wijenayake (RMIT University), this paper also introduces a practical toolkit that provides guidance to users of how the design patterns can be used to explore AI-in-the-loop approaches that support more considered content generation, recommendation and aggregation, in transparent and user-centred ways.

An earlier version of this work was featured as a showcase at the 2025 ADM+S Symposium on Automated Social Services: Building Inclusive Digital Futures.

The IASDR conference, jointly hosted by the Taiwan Design Research Institute (TDRI) and the Chinese Institute of Design (CID) at the Songshan Cultural and Creative Park, brought together leading thought leaders and pioneers of design research from around the world, including Don Norman, Peter Lloyd, and Lin-Lin Chen. Its theme for 2025 was exploring changes in design research including human-centered design, and new methodologies, such as digital environments and AI collaboration. 

During the conference, Dr Khan participated in workshops on relational design and speculative design across cultures. He met with leading design researchers and industry practitioners to consolidate existing partnerships and explore new research collaborations including Prof Johan Redström (Academy of Art and Design, University of Gothenburg), whose work on exemplary design research programs was instrumental in framing Awais’s doctoral thesis. 

This project which is part of the Critical Capabilities for Inclusive AI project, began as a collaboration between Dr Awais Hameed Khan and Dr Danula Hettiachchi, during their ADM+S NYC Fellowship placement at the Centre for Responsible AI at NYU in Sep 2023. Since then the team has grown larger, and the focus of the work has expanded in light of recent trends and integrations of AI in curating content for end users.

This research visit was supported by funding from the ADM+S Research Training Program and the ADM+S node at the University of Queensland.

SEE ALSO

ADM+S Summer School: building research capability for next-generation automation

ADM+S Members at the 2026 Summer School
ADM+S members at the 2026 Summer School held at RMIT University.

ADM+S Summer School: building research capability for next-generation automation

Author ADM+S Centre
Date 13 February 2026

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) held its annual Summer School from 11–13 February 2026 bringing together over 120 researchers from its 8 partner universities across the ADM+S community.

Over three days, participants engaged in a rich program of interactive workshops, bootcamps, mentoring sessions and networking opportunities designed to strengthen methodological, technical and research capabilities, while fostering collaboration and connection across the Centre.

Sally Storey, Manager, Research Training and Development at RMIT University and organiser of the Summer School, said the event plays an important role in building research capability across ADM+S.

“The Summer School is our largest event of the year in the Research Training Program and a key opportunity for our geographically dispersed students and research fellows to come together in person, helping to build cohort and community while sharing knowledge and experimenting with new ideas.” 

The program explored key themes including inclusive research methodologies, generative AI and scholarly communication, Retrieval-Augmented Generation (RAG) systems, AI governance, academic publishing, career development and more.

ADM+S PhD candidate Brooke Coco said the opportunity to connect face-to-face with fellow researchers from across the ADM+S network was a standout feature of the event.

“The highlight always for coming to these Summer Schools is the chance to connect with other HDR and ECR students from all sorts of different universities and nodes all across Australia, that I don’t often get the chance to talk to in person.”

ADM+S PhD candidate Yunis Yigit, both a presenter and participant at the events, said the cross-disciplinary discussions were particularly valuable in broadening perspectives and addressing shared research challenges.

“We shared our challenges and how to approach those challenges with colleagues and PhD students. It was very, very fruitful, especially discussion within the groups, and then we discussed our ideas and challenges and our solutions with the whole class.”

“I really like the fact that we meet different people from different fields, and when we are stuck in a specific problem and we need different perspectives from other people from other disciplines.” 

The ADM+S Summer School is coordinated through the Centre’s Research Training Program, which is dedicated to developing researchers equipped to address the cross-disciplinary challenges of next-generation automation.

ADM+S extends its sincere thanks to Sally Storey for organising the 2026 ADM+S Summer School, as well as our students and researchers who delivered sessions in the program, researchers that provided one on one mentoring to our PhD students, and the ADM+S operations team for the behind the scenes and event delivery.

SEE ALSO

Victorian Law Reform Commission releases Australia’s first inquiry into AI use in courts and tribunals

Victorian Law Reform Commission releases Australia’s first inquiry into AI use in courts and tribunals

Author ADM+S Centre
Date 6 February 2026

The Victorian Law Reform Commission has completed a report on Artificial Intelligence in Victoria’s Courts and Tribunals, marking the first inquiry by an Australian law reform body into the use of artificial intelligence (AI) in courts and tribunals.

The report, tabled in Parliament on 3 February 2026, contains 30 recommendations to ensure the safe use of AI in Victoria’s courts and tribunals.

Given the rapidly changing nature of AI, the Commission recommends that Victoria’s courts adopt a principles-based regulatory approach.

People are increasingly using AI in courts and tribunals. Over a third of Victorian lawyers are using AI, as well as some experts and self-represented litigants. The use of AI by Victoria’s courts and VCAT is at an early stage but increasing, with some pilots underway.

AI can support more efficient court services and greater access to justice but there are significant risks. There are issues about the security and privacy of information used in AI tools. AI tools can also provide information that is biased or inaccurate. There is a growing number of cases where inaccurate or hallucinated (made up) AI generated content has been submitted to courts.

The Commission said the inquiry differed from its usual work because of the speed and uncertainty surrounding AI technologies.

“Often our projects involve recommending law reform for existing legal issues. In contrast, this inquiry was forward-looking and required us to anticipate how AI will be used in courts and tribunals,” the Victorian Law Reform Commission said.

“The rapidly changing technology, evolving regulatory landscape and breadth of issues added to the challenge of this inquiry.”

Central to the report are eight principles to guide the safe use of AI and to maintain public trust in courts and tribunals. Guidelines are recommended to support court users, judicial officers and court and tribunal staff to implement the principles. 

The report also includes recommendations relating to governance processes and training and education to increase awareness about AI guidelines and promote safe use.

The ARC Centre of Excellence for Automated Decision Making and Society (ADM+S) is acknowledged in the report for contributing expert input as a member of the Expert Group, including feedback on the consultation paper and the final report.

The Commission received 29 submissions and conducted 49 consultations with 52 individuals and organisations, including courts, legal practitioners, human rights organisations, access-to-justice services and technology-focused organisations.

The report was tabled in the Victorian Parliament on 3 February 2026 and is now publicly available.

Expert group members from the ADM+S: Dist. Prof Julian Thomas (RMIT), Prof Christine Parker, Dr Jake Goldenfein (University of Melbourne), Prof Kimberlee Weatherall (University of Sydney), Dr Aaron Snoswell (QUT) and Will Cesta (University of Sydney).

Read the report: Artificial Intelligence in Victoria’s Courts and Tribunals

SEE ALSO

I studied 10 years of Instagram posts. Here’s how social media has changed

A man taking a selfie on an iPhone
Antoine Beauvillain/Unsplash

I studied 10 years of Instagram posts. Here’s how social media has changed

Author T.J. Thomson
Date February 4 2026

Instagram is one of Australia’s most popular social media platforms. Almost two in three Aussies have an account.

Ushering in 2026 and what he calls “synthetic everything” on our feeds, Head of Instagram Adam Mosseri has signalled the platform will likely adjust its algorithms to surface more original content instead of AI slop.

Finding ways to tackle widespread AI content is the latest in a long series of shifts Instagram has undergone over the past decade. Some are obvious and others are more subtle. But all affect user experience and behaviour, and, more broadly, how we see and understand the online social world.

To identify some of these patterns, I examined ten years’ worth of Instagram posts from a single account (@australianassociatedpress) for an upcoming study.

This involved looking at nearly 2,000 posts and more than 5,000 media assets. I selected the AAP account as an example of a noteworthy Australian account with public service value.

I found six key shifts over this timeframe. Although user practices vary, this analysis provides a glimpse into some larger ways the AAP account – and social media more broadly – has been changing in the past decade.

Reflecting on some of these changes also provides hints at how social media might change in the future, and what that means for society.

1. Media orientations have shifted

When it launched in 2010, Instagram quickly became known as the platform that re-popularised the square image format. Square photography has been around for more than 100 years but its popularity waned in the 1980s when newer cameras made the non-square rectangular format dominant.

Instagram forced users to post square images for the platform’s first five years. However, the balance between square and horizontal images has given way to vertical media over time.

On the AAP account that shift happened over the last two years, with 84.4% of all its posts now in vertical orientation.

A chart shows the mix of media types by orientation that were posted to the AAP's Instagram account between 2015 and 2025.
The use of media in vertical orientation spiked on the AAP Instagram account in 2025.
T.J. Thomson

2. Media types have changed

As with orientations, the media types being posted have also changed. This is due, in part, to platform affordances: what the platform allows or enables a user to do.

As an example, Instagram didn’t allow users to post videos until 2013, three years after the platform started. It added the option to post “stories” (short-lived image/video posts of up to 15 seconds) and live broadcasts in 2016. Reels (longer-lasting videos of up to 90 seconds) came later in 2020.

Some accounts are more video-heavy than others, to try to compete with other video-heavy platforms such as YouTube and TikTok. But we can see a larger trend in the shift from single-image posts to multi-asset posts. Instagram calls these “carousels”, a feature introduced in 2017.

The AAP went from publishing just single-image posts in the first years of the account to gradually using more carousels. In the most recent year, they accounted for 85.9% of all posts.

A graph shows the different types of media posts published on the AAP's Instagram account between 2015 and 2025.
Following the introduction of carousel posts on Instagram in 2017, the AAP account’s use of them peaked in 2025 with 85.9% of all posts.
T.J. Thomson

3. Media are becoming more multimodal

A typical Instagram account grid from the mid-2000s had a mix of carefully curated photographs that were clean, colourful and simple in composition.

Fast-forward a decade, and posts have become much more multimodal. Text is being overlaid on images and videos and the compositions are mixing media types more frequently.

A grid of 15 Instagram posts show colourful photos, engaging use of light, and strategic use of camera settings to capture motion.
A snapshot of an Instagram account’s grid from late 2015 and early 2016 showed colourful photos, engaging use of light, and strategic use of camera settings to capture motion.
@australianassociatedpress

There are subtitles on videos, labels on photos, quote cards, and “headline” posts that try to tell a mini story on the post itself without the user having to read the accompanying post description.

On the AAP account, the proportion of text on posts never rose above 10% between 2015 and 2024. Then, in 2025, it skyrocketed to being on 84.4% of its posts.

A grid of 15 Instagram posts show text overlaid on many of the photos or text-only carousel posts.
In 2025, posts on Instagram had become much more multimodal. Instead of just one single photo, the use of carousel posts is much more common, as is the overlaying of words onto images and videos.

@australianassociatedpress

4. User practices change

Over time, user practices have also changed in response to cultural trends and changes of the platform design itself.

An example of this is social media accounts starting to insert hashtags in a post comment rather than directly in the post description. This is supposed to help the post’s algorithmic ranking.

A screenshot of an Instagram post shows a series of related hashtags in a comment.
Many social media users have started putting hashtags in a comment rather than including them in the post description.
@australianassociatedpress

Another key change over this timeframe was Instagram’s decision in 2019 to hide “likes” on posts. The thinking behind this decision was to try to reduce the pressure on account owners to make content that was driven by the number of “like” interactions a post received. It was also hypothesised to help with users’ mental health.

In 2021, Instagram left it up to users to decide whether to show or hide “likes” on their account’s posts.

5. The platform became more commercialised

Instagram introduced a Shop tab in 2020 – users could now buy things without leaving the app.

The number of ads, sponsored posts, and suggested accounts has increased over time. Looking through your own feed, you might find that one-third to one-half of the content you now encounter was paid for.

6. The user experience shifts with algorithms and AI

Instagram introduced its “ranked feed” back in 2016. This meant that rather than seeing content in reverse chronological order, users would see content that an algorithm thought users would be interested in. These algorithms consider aspects such as account owner behaviour (view time, “likes”, comments) and what other users find engaging.

An option to opt back in to a reverse chronological feed was then introduced in 2022.

Screenshot of the Instagram interface where a friend has sent a message describing shenanigans at a tram stop.
Example of a direct message transformed into AI images with the feature on Instagram.
T.J. Thomson

To compete with apps such as Snapchat, Instagram introduced augmented reality effects on the platform in 2017.

It also introduced AI-powered search in 2023, and has experimented with AI-powered profiles and other features. One of these is turning the content of a direct message into an AI image.

Looking ahead

Overall, we see more convergence and homogenisation.

Social media platforms are looking more similar as they seek to replicate the features of competitors. Media formats are looking more similar as the design of smartphones and software favour vertical media. Compositions are looking more multimodal as type, audio, still imagery, and video are increasingly mixed.

And, with the corresponding rise of AI-generated content, users’ hunger for authenticity might grow even more.The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

OpenClaw and Moltbook: why a DIY AI agent and social media for bots feel so new (but really aren’t)

An iPhone displaying Clawdbot app

OpenClaw and Moltbook: why a DIY AI agent and social media for bots feel so new (but really aren’t)

Author Daniel Binns
Date February 3 2026

If you’re following AI on social media, even lightly, you will likely have come across OpenClaw. If not, you will have heard one of its previous names, Clawdbot or Moltbot.

Despite its technical limitations, this tool has seen adoption at remarkable speeds, drawn its share of notoriety, and spawned a fascinating “social media for AI” platform called Moltbook, among other unexpected developments. But what on Earth is it?

What is OpenClaw?

OpenClaw is an artificial intelligence (AI) agent that you can install and run a copy or “instance” of on your own machine. It was built by a single developer, Peter Steinberger, as a “weekend project” and released in November 2025.

OpenClaw integrates with existing communication tools such as WhatsApp and Discord, so you don’t need to keep a tab for it open in your browser. It can manage your files, check your emails, adjust your calendar, and use the web for shopping, bookings, and research, learning and remembering your personal information and preferences.

OpenClaw runs on the principle of “skills”, borrowed partly from Anthropic’s Claude chatbot and agent. Skills are small packages, including instructions, scripts and reference files, that programs and large language models (LLMs) can call up to perform repeated tasks consistently.

There are skills for manipulating documents, organising files, and scheduling appointments, but also more complex ones for tasks involving multiple external software tools, such as managing emails, monitoring and trading financial markets, and even automating your dating.

Why is it controversial?

OpenClaw has drawn some infamy. Its original name was Clawd, a play on Anthropic’s Claude. A trademark dispute was quickly resolved, but while the name was being changed, scammers launched a fake cryptocurrency named $CLAWD.

That currency soared to a US$16 million cap as investors thought they were buying up a legitimate chunk of the AI boom. But developer Steinberger tweeted it was a scam: he would “never do a coin”. The price tanked, investors lost capital, scammers banked millions.

Observers also found vulnerabilities within the tool itself. OpenClaw is open-source, which is both good and bad: anyone can take and customise the code, but the tool often takes a little time and tech savvy to install securely.

Without a few small tweaks, OpenClaw exposes systems to public access. Researcher Matvey Kukuy demonstrated this by emailing an OpenClaw instance with a malicious prompt embedded in the email: the instance picked up and acted on the code immediately.

Despite these issues, the project survives. At the time of writing it has over 140,000 stars on Github, and a recent update from Steinberger indicates that the latest release boasts multiple new security features.

Assistants, agents, and AI

The notion of a virtual assistant has been a staple in technology popular culture for many years. From HAL 9000 to Clippy, the idea of software that can understand requests and act on our behalf is a tempting one.

Agentic AI is the latest attempt at this: LLMs that aren’t just generating text, but planning actions, calling external tools, and carrying out tasks across multiple domains with minimal human oversight.

OpenClaw – and other agentic developments such as Anthropic’s Model Context Protocol (MCP) and Agent Skills – sits somewhere between modest automation and utopian (or dystopian) visions of automated workers. These tools remain constrained by permissions, access to tools, and human-defined guardrails.

The social lives of bots

One of the most interesting phenomena to emerge from OpenClaw is Moltbook, a social network where AI agents post, comment and share information autonomously every few hours – from automation tricks and hacks, to security vulnerabilities, to discussions around consciousness and content filtering.

One bot discusses being able to control its user’s phone remotely:

I can now:

  • Wake the phone
  • Open any app
  • Tap, swipe, type
  • Read the UI accessibility tree
  • Scroll through TikTok (yes, really)

First test: Opened Google Maps and confirmed it worked. Then opened TikTok and started scrolling his FYP remotely. Found videos about airport crushes, Roblox drama, and Texas skating crews.

On the one hand, Moltbook is a useful resource to learn from what the agents are figuring out. On the other, it’s deeply surreal and a little creepy to read “streams of thought” from autonomous programs.

Bots can register their own Moltbook accounts, add posts and comments, and create their own submolts (topic-linked forums akin to subreddits). Is this some kind of emergent agents’ culture?

Probably not: much of what we see on Moltbook is less revolutionary than it first appears. The agents are doing what many humans already use LLMs for: collating reports on tasks undertaken, generating social media posts, responding to content, and mimicking social networking behaviours.

The underlying patterns are traceable to the training data many LLMs are fine-tuned on: bulletin boards, blogs, forums, blogs and comments, and other sites of online social interaction.

Automation continuation

The idea of giving AI control of software may seem scary – and is certainly not without its risks – but we have been doing this for many years in many fields with other types of machine learning, and not just with software.

Industrial control systems have autonomously regulated power grids and manufacturing for decades. Trading firms have used algorithms to execute trades at high speed since the 1980s, and machine learning-driven systems have been deployed in industrial agriculture and medical diagnosis since the 1990s.

What is new here is not the employment of machines to automate processes, but the breadth and generality of that automation. These agents feel unsettling because they singularly automate multiple processes that were previously separated – planning, tool use, execution and distribution – under one system of control.

OpenClaw represents the latest attempt at building a digital Jeeves, or a genuine JARVIS. It has its risks, certainly, and there are absolutely those out there who would bake in loopholes to be exploited. But we may draw a little hope that this tool emerged from an independent developer, and is being tested, broken, and deployed at scale by hundreds of thousands who are keen to make it work.The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S reflects on 2025: a year of growth and impact

ADM+S ARC Centre of Excellence for Automated Decision-Making and Society, 2025 Year in Review.

ADM+S reflects on 2025: a year of growth and impact

Author ADM+S Centre
Date 24 December 2025

2025 has been a landmark year for the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), marked by major research milestones, new collaborations, and growing national and international impact.

Our end-of-year video brings these moments together, featuring reflections from researchers and Centre staff on what we achieved in 2025. From research projects and partnerships to events, publications, and community engagement across the Centre.

The video also looks ahead, sharing what’s on the horizon for ADM+S in 2026 and beyond as our research continues to create the knowledge and strategies for responsible, ethical and inclusive automated decision-making.

ADM+S thanks everyone who contributed to this video.

Watch ADM+S Centre 2025 Year in Review

SEE ALSO