Australian Internet Observatory central to information integrity on climate change and political advertising – reports

Mobile phone with abstract background
Unsplash/Rodion Kutsaiev


Author ADM+S Centre
Date 31 March 2026

The Australian Internet Observatory ensures independent monitoring of our information ecosystem, according to two reports released last week.

We face a ‘deteriorating information integrity’ ecosystem around climate change and energy, which is having significant impacts on public policy, the understanding of science, and local communities, warns a new report from the Senate Select Committee on Information Integrity on Climate Change and Energy.

The inquiry found that online platforms play a significant role in the spread of misinformation, with false information spread through a range of means including algorithmic bias, bots, trolls, AI-generated content and coordinated disinformation campaigns.

A submission to the inquiry from the Australian Human Rights Commission noted that “social media platforms play a central role largely because their ‘algorithms often prioritise engagement over accuracy, creating echo chambers that reinforce existing beliefs and can amplify misleading content’. This, in turn, ‘amplifies outrage and fear, making it harder for evidence-based climate policy to gain traction’.”

As the report highlights, “the lack of transparency in how social media algorithms operate can make it very challenging for researchers to effectively track mis/disinformation campaigns in real time.”

To address these issues, the committee makes a number of significant recommendations specifically targeted at supporting trusted, reliable sources of information, digital literacy, and better monitoring of mis/disinformation networks, including research and research infrastructure:

Recommendation 6: The committee recommends the Australian Government increase funding for social sciences research relating to threats to climate and energy information integrity, including potential solutions.

Recommendation 7: The committee recommends the Australian Government explore funding models for independent monitoring support (for example, via the Australian Internet Observatory) to track hidden digital influence ecosystems and provide independent transparency and accountability of platforms.

An example of how the Australian Internet Observatory supports independent monitoring and information integrity was provided in a submission from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), which highlighted the challenge of monitoring political advertising.

This week the ADM+S Australian Ad Observatory project published a full report on 2025 Australian Election Advertising on Social Media, based on its analysis of more than 22,000 real ads collected directly from voters’ smartphones using the AIO’s Mobile Online Advertising Toolkit (MOAT). As a result, the report provides rare insight into what Australians actually saw on platforms like Facebook, Instagram and TikTok.

As lead researcher Professor Daniel Angus explains: “Online political advertising is largely invisible… voters are being targeted with messages that are difficult to track, poorly disclosed, and often misleading.”

The research was enabled by the Mobile Online Advertising Toolkit (MOAT), developed with the Australian Internet Observatory, which allows researchers to capture real-world ad exposure beyond platform ad libraries.

Key findings from the research:
• Political ads are often invisible to public scrutiny
• Widespread use of misleading and decontextualised claims
• Growth of astroturfing, with lobby groups posing as grassroots organisations
• Evidence of scam ads, impersonation, and emerging AI-generated content

The Senate report emphasised that the complex and multifaceted nature of climate mis/disinformation requires a systemic response that includes governments, knowledge institutions, civil society, industry and particularly greater accountability from media companies and digital platforms.

This inquiry echoes the findings of other inquiries and international campaigns. Australia is a signatory to the 2023 UNESCO Global Declaration on Information Integrity Online (Global Declaration), which deals with information integrity as a whole. In 2025, COP30 became the first COP to include information integrity as a core agenda item. However, Australia has not yet signed the Declaration on Information Integrity on Climate Change (Declaration), which calls on endorsing countries to promote the integrity of information on climate change at the international, national and local levels.

View the original article published by the Australian Internet Observatory 

SEE ALSO

Critical research shapes national response to climate and energy misinformation

words environment, ecology, green energy overlaid on image of person on phone
Getty Images/Arkadiusz Wargula


Author ADM+S Centre
Date 31 March 2026

The Australian Government has released a major new report, The Integrity Gap: Restoring Trust in the Climate and Energy Debate, in response to the growing prevalence and impacts of misinformation and disinformation in public discussions on climate and energy.

The report from the Senate Select Committee on Information Integrity on Climate Change and Energy draws extensively on work from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) and QUT’s Digital Media Research Centre (DMRC), incorporating evidence across key areas including platform transparency, data access, media literacy, and regulatory reform.

ADM+S researchers from QUT, University of Queensland and the University of Melbourne played a key role in informing the inquiry through formal submissions (ADM+S Submission 21 and DMRC Submission 60), expert testimony, and sustained engagement. Their work is directly referenced in discussions of platform accountability, transparency, and research infrastructure. 

“Climate change is the defining challenge of our time, and understanding how information about it is shaped, distorted, and targeted is crucial. This report makes clear that investment in humanities and social sciences is foundational to any credible response,” said Professor Daniel Angus, Chief Investigator at ADM+S at QUT and Director of QUT’s Digital Media Research Centre (DMRC).

Some of the evidence presented to the Committee was informed by research from the ADM+S Australian Ad Observatory project, which highlighted examples of astroturfing, transparency gaps, and the widespread circulation of misleading information during election advertising. It found that misinformation, scare tactics, and messages exploiting cost-of-living pressures on everyday Australians were central to both online and other election advertising.

The report also recognises the Australian Internet Observatory (AIO) as a necessary national capability to track hidden digital influence ecosystems and provide independent transparency and accountability of platforms.

“The inclusion of the Australian Internet Observatory signals a maturing policy response. We are seeing recognition that platform power cannot be governed without independent, national-scale capacity to observe and analyse it,” said Professor Angus.

Established through an initiative from the ADM+S, AIO is a co-investment partnership with the Australian Research Data Commons (ARDC) through the HASS and Indigenous Research Data Commons and a cohort of Australian universities. The AIO is designed to provide independent, large-scale insight into digital platforms and influence ecosystems. Its inclusion in the report signals a shift toward evidence-based infrastructure for understanding and responding to online harms.

“For over a decade, humanities and social science researchers have warned that opaque platform systems can undermine public debate. This report shows that governments are finally catching up, but only if they are willing to invest in the infrastructure and expertise needed to act.”

Several of the report’s central recommendations align directly with areas the ADM+S has championed and led nationally, including:

  • Increased funding for social sciences research relating to threats to climate and energy information integrity including potential solutions. (Recommendation 6)
  • Funding models for independent monitoring support (for example, via the Australian Internet Observatory) to track hidden digital influence ecosystems and provide independent transparency and accountability of platforms. (Recommendation 7)
  • Broadening the Australian Curriculum ‘digital literacy’ general capability to strengthen media literacy through the regular Education Ministers’ Meeting curriculum review cycle. (Recommendation 8)
  • Incorporating the information integrity framework, with examples from the climate and energy domain, in the upcoming National Media Literacy Strategy. (Recommendation 9)

Read the full report: The Integrity Gap: Restoring Trust in the Climate and Energy Debate – Parliament of Australia 

SEE ALSO

International summit on the future of public service media in the platform era

Prof Georgina Born (UCL), Assoc Prof Kylie Pappalardo (QUT) & Dr Jessica Balanzategui (RMIT) speaking at the Public Service Media Summit. Image: Mathew Warren


Author ADM+S Centre
Date 26 March 2026

Internationally renowned scholars Professor Georgina Born (University College London) and Associate Professor Fernando Diaz (Carnegie Mellon University) joined leading international and Australian experts in Melbourne this month for a series of high-level discussions on the future of public service media in the platform era.

Hosted by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), the event sought to address the challenges facing public service media in an era of technological change, highlight the importance of a robust system committed to public service values, and underline the need for research and development focused on the public good.

The program was convened by Professor Georgina Born (UCL), Professor Mark Andrejevic (Monash University), Professor Fernando Diaz (Carnegie Mellon University) and Associate Professor James Meese (RMIT University), and formed part of a broader international collaboration around public-interest media infrastructure.

Professor Born and Associate Professor Diaz also attended a week of workshops organised by Associate Professor James Meese that brought together leading computer scientists and humanities scholars, post-doctoral fellows and PhD candidates across the ADM+S RMIT node to share work on recommender system algorithms, media distribution and search.

A public panel event at the State Library of Victoria on 10 March attracted around 60 attendees, highlighting strong community interest in the future of media and democracy. 

The discussion, which featured Professor Born and Professor Victor Pickard (University of Pennsylvania) in conversation with Professor Andrew Kenyon (University of Melbourne), examined how regulation, alternative algorithms and new distribution systems could support public service media in an increasingly platform-dominated landscape.

Building on this discussion, a Public Service Media Summit was held on 12 March. The summit convened an international cohort of speakers, including representatives from the European Broadcasting Union, RNZ, the ABC, the Responsible Innovation Centre (hosted at the BBC), and leading universities across Europe, the United States and Australia.

Across the summit, participants explored how public service media can respond to rapid technological change, particularly the rise of artificial intelligence and platform-based distribution, while maintaining core democratic values such as universality, accessibility and independence.

ADM+S Chief Investigator James Meese said: “It was a pleasure to welcome leading thinkers from across the world to Melbourne to discuss the future of public service media.”

“By convening this week of events, ADM+S has made a key contribution to the global debate around these important challenges.”

“The week also provided a valuable opportunity for ADM+S colleagues from across the Centre to build new connections, while our early career researchers benefited from Georgie and Fernando’s generous engagement with their work.” 

Summit speakers included: Professor Georgina Born (UCL), Sasha Scott (European Broadcasting Union), Victor Pickard (University of Pennsylvania), Michał Głowacki (University of Warsaw), Patrick Crewdson (RNZ), David Sutton (ABC), Fernando Diaz (Carnegie Mellon) and Helen Jay (University of Westminster/BBC).

A follow-up global summit is scheduled to take place in London in September 2026.

SEE ALSO

‘Manners for machines’: how new rules could stop AI scrapers destroying the internet

graphic with pink and yellow saying "cc signals"
T.J. Thomson, CC BY-NC


Authors  T.J. Thomson, Daniel Angus, Jake Goldenfein and Kylie Pappalardo
Date 26 March 2026

Australians are among the most anxious in the world about artificial intelligence (AI). This anxiety is driven by fears AI is being used to spread misinformation and scam people, anxiety over job losses, and the fact AI companies are training their models on others’ expertise and creative works without compensation.

AI companies have used pirated books and articles, and routinely send bots across the web to systematically scrape content for their models to learn from. That content may come from social media platforms such as Reddit, university repositories of academic work, and authoritative publications like news outlets.

In the past, online scraping was subject to a kind of detente. Although scraping may sometimes have been technically illegal, it was needed to make the internet work. For instance, without scraping there would be no Google. Website owners were OK with scraping because it made their content more available, in keeping with the vision of the “open web”.

Under these conditions, scraping was managed through principles such as respect, recognition, and reciprocity. In the context of AI, those principles are now faltering.

A new online landscape

Many news outlets are now blocking web scrapers. Creators are choosing not to use certain platforms or are posting less.

Barriers are being put in place across the open web. When only some can afford to pay for access to news and information, democracy, scientific innovation and creative communities are all harmed.

Exceptions to copyright infringement, such as fair dealing for research or study, were legislated long before generative AI became publicly available. These exceptions are no longer fit for purpose in an AI age.

The Australian government has ruled out a new copyright exception for text and data mining. This signals a commitment to supporting Australia’s creative industries, but leaves great uncertainty about how creative content can be managed legally and at scale now that AI companies are crawling the web.

In response, the international nonprofit Creative Commons has proposed a new voluntary framework: CC Signals.

Creative Commons licences allow creators to share content and specify how it can be used. All licences require credit to acknowledge the source, but various additional restrictions can be applied. Creators can ask others not to modify their work, or not to use it for commercial purposes. For example, The Conversation’s articles are available for reuse under a CC BY-ND licence, which means they must be credited to the source and must not be remixed, transformed, or built upon.


Summary of CC licences.
Creative Commons

How would CC Signals work?

The proposed CC Signals framework lets creators decide if or how they want their material to be used by machines. It aims to strike a balance between responsible AI use and not stifling innovation, and is based on the principles of consent, compensation, and credit.

Simplistically, CC Signals work by allowing a “declaring party” – such as a news website – to attach machine-readable instructions to a body of content. These instructions specify what combinations of machine uses are permitted, and under what conditions.

CC Signals are standardised, and both humans and machines can understand them.

This proposal arrives at a moment that closely mirrors the early days of the web, when norms around automated access (crawling and scraping) were still being worked out in practice rather than law.

A useful historical parallel is robots.txt, a simple file web hosts use to signal which parts of a site can be accessed by the bots that crawl the web and look for content. It was never enforceable, but it became widely adopted because it provided a clear, standardised way to communicate expectations between content hosts and developers.
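The robots.txt convention can be explored directly with Python’s standard library, which ships a parser for these files. The sketch below (the bot names, paths and rules are invented purely for illustration) shows how a well-behaved crawler consults a site’s rules before fetching a page:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt, as a site owner might publish it:
# it blocks one named bot entirely and keeps all bots out of /private/.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A well-behaved crawler asks before fetching each URL.
print(parser.can_fetch("ExampleAIBot", "https://example.com/articles/1"))  # False
print(parser.can_fetch("NewsIndexer", "https://example.com/articles/1"))   # True
print(parser.can_fetch("NewsIndexer", "https://example.com/private/x"))    # False
```

Nothing technically stops a crawler from ignoring the file; as with the proposed CC Signals, compliance rests on convention rather than enforcement.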

CC Signals could operate in much the same spirit. But, as with any system, it has potential benefits as well as drawbacks.

The pros

The framework provides more nuance and flexibility than the current scrape/don’t scrape environment we’re in. It offers creators more control over the use of their content.

It also has the potential to affect how much high-quality content is available for scraping. Without access to high-quality data, AI’s biases are exacerbated, making the technology less useful.

The framework might also benefit smaller players who don’t have the bargaining power to negotiate with big tech companies but who, nonetheless, desire remuneration, credit, or visibility for their work.

The cons

The greatest challenge with CC Signals is likely to be a practical one – how to calculate, and then enforce, the monetary or in-kind support required by some of the signals.

This is also a major sticking point with content industry proposals for collective licensing schemes for AI. Calculating and distributing licence fees for the thousands, if not millions, of internet works that are accessed by generative AI systems around the world is a logistical nightmare.

Creative Commons has said it plans to produce best-practice guides for how to make contributions and give credit under the CC Signals. But this work is still in progress.

Where to from here?

Creative Commons asserts that the CC Signals framework is not so much a legal tool as an attempt to define “manners for machines”. Manners is a good way to look at this.

The legal and practical hurdles to implementing effective copyright management for AI systems are huge. But we should be open to new ideas and frameworks that foreground respect and recognition for creators without shutting down important technological developments.

CC Signals is an imperfect framework, but it is a start. Hopefully there are more to come.

T.J. Thomson, Associate Professor of Visual Communication & Digital Media, RMIT University; Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology; Jake Goldenfein, Associate Professor, Melbourne Law School, The University of Melbourne, and Kylie Pappalardo, Associate Professor, School of Law, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Hidden ads and misleading claims flood election feeds: report

2025 Australian Election Advertising on Social Media


Author QUT Media
Date 24 March 2026

A report launched today (Tuesday March 24) reveals widespread transparency gaps, misleading claims and covert political campaigning across social media platforms during the 2025 Australian federal election, raising concerns about what Australian voters are really seeing online.

Led by Professor Daniel Angus from the ARC Centre of Excellence for Automated Decision-Making and Society at QUT’s Digital Media Research Centre, the 2025 Australian Election Advertising on Social Media report draws on real-world advertising data collected directly from voters’ smartphones and highlights an urgent need for electoral law reform.

Professor Angus said the results showed how difficult it had become for voters, regulators and journalists to see who is trying to influence political debate online and how. It also raised concerns about artificial intelligence as a political tool.

“Online political advertising is largely invisible to public scrutiny,” Professor Angus said.

“Yet our research shows voters are being targeted with political messages that are difficult to track, often poorly disclosed, and in many cases misleading or deliberately decontextualised.”

The report recommends:

  • National truth in political advertising laws to cover misleading factual claims
  • Real-time disclosure of third-party funding and donors
  • Consistent blackout rules across broadcast and digital media
  • Greater platform accountability to stop the deliberate mislabelling of lobby groups as ‘community organisations’ or ‘non-profits’
  • Sustained investment in independent monitoring infrastructure, such as the Australian Internet Observatory

“Australia’s electoral laws were designed for an analogue era,” Professor Angus said.

“If we want to protect democratic integrity, regulation, transparency and independent oversight must catch up with the realities of digital campaigning.”

Unlike studies that rely on platform ad libraries, this study captured real-world advertising exposure by recruiting participants in key electorates to install the Mobile Online Advertising Toolkit (MOAT) on their smartphones in the weeks leading up to election day.

This allowed researchers to collect more than 22,000 ads, providing rare insight into what Australians actually saw on platforms like Facebook, Instagram and TikTok.

Professor Angus said this method was critical to understanding modern election campaigning.

“Most political content online is unpaid and organic, and even paid advertising is often poorly disclosed,” he said.

“By collecting ads directly from participants’ devices, we were able to see how political influence operates in practice, not just what platforms choose to report.”

The report found that while political advertising made up only a small proportion of total ads, it was dominated by third-party groups, many of which appeared to present themselves as grassroots organisations while obscuring their political or financial backing, a practice known as astroturfing.

Researchers also identified widespread use of misleading and decontextualised claims, particularly around cost-of-living issues, by both major political parties and third-party advertisers.

The study further detected scam advertisements and impersonation, raising concerns about the growing use of artificial intelligence and deepfake-style content in political messaging.

“These practices undermine trust and make it harder for voters to make informed decisions,” Professor Angus said.

“Without stronger oversight, this kind of opaque campaigning risks becoming the norm rather than the exception.”

The study was conducted through the Australian Ad Observatory, part of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S). The research was led by Professor Angus in collaboration with colleagues from Monash University, the University of Queensland and the University of Melbourne, with participant recruitment supported by the Susan McKinnon Foundation.

Read the full report 2025 Australian election advertising on social media: An Australian Ad Observatory report

SEE ALSO

Voice AI, authenticity and media: share your views on AI-Generated voices

Voice print and silhouette of human head


Author ADM+S Centre
Date 19 March 2026

From podcasts and audiobooks to radio and voiceovers, audio media plays a big role in how many of us access news, entertainment and information every day. But have you ever heard a synthetic or AI-generated voice — and how do you feel about this technology?

Researchers from ADM+S are inviting Australians to take part in a new survey to find out how everyday Australian adults think and feel about Voice AI.

The study is part of a broader research project called Generative Authenticity, which examines how generative AI affects authenticity in media and cybersecurity.

Generative AI is playing an increasing role in media production. This includes the use of “Voice AI” — that is, generative AI technologies that synthesise, clone, and modify the human voice. 

Voice AI can be used to create voiceovers, podcasts, or audiobook readings, and can also contribute to problems like deepfakes. 

Researcher Dr Phoebe Matich, from ADM+S at QUT said the project is focused on understanding how everyday people experience these technologies. 

“In the Generative Authenticity project, we’re keen to design our media and Voice AI research with a central focus on ordinary folk’s perceptions and priorities regarding audio GenAI technologies,” Dr Matich said.

“We’re really excited to hear about people’s understandings and experiences with Voice AI, as well as their main areas of concern and the media industries they would like us to focus on.”

Despite their growing presence, researchers say we still know very little about how audiences understand, experience or respond to AI-generated voices in audio media.

Dr Matich said the findings will directly shape future research priorities.

“The findings of this survey will help us figure out which types of audio media, content, and situations should be our biggest priorities in future research – whether that’s increasing public and professional understandings of media manipulation and verification, protecting integrity in journalism, music, or podcasting, supporting ethical storytelling uses of GenAI, or ensuring media processes and personalities are as transparent as possible.”

The research team is now conducting a short online survey for Australian adults to better understand public attitudes and experiences with AI-generated voices in audio media.

Take part in the survey


Participants will be asked about:

  • audio listening habits;
  • level of experience with AI;
  • whether they think they have heard a synthetic voice; and 
  • the contexts and conditions in which they feel most strongly about Voice AI.

The results will help researchers design future studies examining how generative AI is reshaping media production, audience trust, and online authenticity.

Your participation matters

Participation in the survey is voluntary and can be completed anonymously. Participants who are interested in future research on this topic can also choose to provide their email address to be contacted about follow-up studies planned for later in 2026.

The study has received ethics approval from Queensland University of Technology (Ethics Approval Number 10602).

If you listen to podcasts, radio, audiobooks or other audio media, the researchers would love to hear from you.

Read more about the study Voice AI Authenticity and Media

For more information about the study, contact the research team at p.matich@qut.edu.au

SEE ALSO

Australia may ban infant formula advertising. Here’s what the online ads actually say

Baby drinking formula
Han Nguyen/Pexels


Authors Madeleine Stirling, Christine Parker and Daniel Angus
Date 12 March 2026

Recently, the federal government released a consultation paper seeking input on whether it should introduce legislation to prevent or restrict infant formula marketing in Australia. The consultation is open for submissions until April 10.

Until February 2025, Australian formula brands were under a voluntary agreement not to advertise formula products for babies aged 0 to 12 months, in order to support and protect breastfeeding.

With recent data revealing lower-than-desired rates of breastfeeding in Australia, the government has chosen not to renew the voluntary arrangement and is exploring tougher measures.

These moves don’t explicitly promote breastfeeding. Rather, they aim to curtail marketing practices that position formula as an equivalent or preferable alternative.

Our analysis of online formula ads targeting parents in Australia reveals how companies prey on parents’ anxiety – and the problems with having a voluntary agreement.

What’s wrong with advertising formula?

Breastfeeding has extensive health benefits for both mother and child. These include protection against gastrointestinal and respiratory infections for newborns, reduced risk of obesity and type 2 diabetes later in life, and reduced risk of mothers developing ovarian and breast cancer.

Because of this, Australian guidelines recommend exclusive breastfeeding for the first six months. The World Health Organization recommends continued breastfeeding for the first two years.

However, while breastfeeding rates are high at birth in Australia, they quickly drop. Only 37% of babies were reported to have been exclusively breastfed by six months in 2022.

There are various reasons why mothers choose not to breastfeed, but the advertising of formula products is a concern. It’s been shown to confuse parents about the nutritional benefits of formula versus breastmilk, reduce breastfeeding initiation and duration, and present formula as a more favourable solution in the face of breastfeeding challenges (many of which can be overcome with the right support).

Formula is valuable. It’s often an essential option for those unable to breastfeed. However, it’s also expensive and can financially strain families, particularly during the first year of a child’s life.

Online advertising also operates very differently from traditional ads. Online, ads target people based on their searches, browsing histories or life events. They can reach new or expecting parents precisely when they might be most uncertain or vulnerable to suggestion.

What do the ads for infant formula say?

The ADM+S Australian Ad Observatory, which we and our colleagues run, collects data on the ads Australians encounter online to better understand how digital advertising systems operate.

In 2022 we collected ads from 1,200 Australian adults who voluntarily installed a plug-in on their browser to scrape ads while they were scrolling Facebook. From 2025 we’ve been collecting ads from around 300 Australians. They use an app to share the ads that appear while they scroll Facebook, Instagram, TikTok and YouTube on their phones.

Screenshots of various formula ads collected by the Australian Ad Observatory.
Supplied

For this analysis, we examined ads collected in both years, and identified a total of 158 ads promoting formula products from local and international brands.

We found brands used various tactics to appeal to parents. Some highlighted positive customer reviews or offered free downloadable cookbooks and “house baby proofing” guides.

Other ads were in partnership with prominent retailers, directing people to online shopping interfaces through “buy now” buttons.

Most formula brands made some kind of claim regarding the nutritional or behavioural benefits of their products. These claims prey on the anxiety parents commonly feel about ensuring their children meet nutritional, sleep and developmental milestones.

Some manufacturers claimed their product was fortified with vitamins and prebiotics that would “improve gut health” or help a toddler sleep longer at night.

Others claimed their formula would provide mothers with “a moment of calm” or strengthen their toddler’s immune system. This is despite scientific evidence that shows breastmilk can provide necessary antibodies to a sick child in real time.

Starting them young

Many of the ads used pictures of very young toddlers who could easily be mistaken for infants aged 12 months or under. In one instance we discovered an ad clearly promoting formula designed for babies under 12 months.

This, alongside the use of images of very young children to promote “toddler milk” (formula marketed for children aged 1–3 years), highlights some of the issues with a voluntary advertising agreement.

Since toddler milk marketing was exempt, brands could target parents of newborns. They’d gain brand awareness and consumer trust, which could then result in a parent choosing to start their child on formula instead – or earlier than they otherwise would.

Enforcement has also been an issue. The consequences for breaching the agreement – publishing the breach on the Department of Health website – are not considered meaningful enough by the Australian Competition and Consumer Commission.

At the same time, the digital advertising environment provides very little visibility into what marketing is actually circulating or who is exposed to it.

Outside of specialised research tools, such as our Ad Observatory and the Australian Internet Observatory, there’s no systematic way to observe infant formula ads that appear on personalised social media feeds.

What might the government end up doing about it?

The government is considering the following options:

  1. keep the status quo – no regulation
  2. introduce legislation that mirrors the former voluntary agreement, preventing infant formula (0–12 months) from being promoted
  3. introduce legislation that also limits toddler milk marketing (1–3 years).

We’ve provided all our data to the government to aid the decision-making process. However, while the ads we found are a peek behind the curtain, they likely underrepresent the scale of formula marketing happening online.

Infant formula can be an essential and sometimes life-saving intervention for families who need it. But health interventions don’t depend on persuasive advertising to fulfil their purpose.

The real policy question is whether a product designed to support infants should be promoted through the same marketing systems that sell snack foods, cosmetics and financial products.


Acknowledgement: The Australian Ad Observatory is a team effort. The authors wish to acknowledge the contribution of Khanh Luong, Giselle Newton, Phoebe Price-Barker, Lara Skinner, Abdul Obeid and Dan Tran.

Madeleine Stirling, Research Assistant, ARC Centre of Excellence for Automated Decision-Making & Society, The University of Melbourne; Christine Parker, Professor of Law, The University of Melbourne, and Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Mapping the Digital project helping locals secure better digital services and greater control over how they connect

Rural Australia

Mapping the Digital project helping locals secure better digital services and greater control over how they connect

Author ADM+S Centre
Date 11 March 2026

Five years of collaboration with remote First Nations communities has helped locals secure better digital services and greater control over how they connect. Since 2021, the Mapping the Digital Gap project has been addressing the lack of data around online access and digital inclusion in remote First Nations communities, while supporting Telstra, industry and government to address the gaps.

Established as a supplementary project to the Australian Digital Inclusion Index through the ARC Centre of Excellence for Automated Decision‑Making and Society and funded by Telstra, the research showed three in four First Nations people in remote and very remote communities are digitally excluded.

This means they face significant barriers to accessing and using online services needed for daily social, economic and cultural life.

First Nations co-investigator, Professor Lyndon Ormond-Parker from RMIT University, said as the world moves online, access to basic services like education, banking, welfare and healthcare now tends to require a device and reliable connectivity.

“You have to look at the communities that are getting left behind,” he said.

“For Aboriginal and Torres Strait Islander communities living very remotely in Australia, access to infrastructure, basic services and communication is often very limited. This creates a significant digital divide.”

Digital exclusion can mean unreliable or unaffordable connections, limited access to suitable devices and few opportunities to build digital skills to safely engage online.

The consequences are far‑reaching, from difficulties accessing telehealth and online learning to challenges dealing with government services and emergency information.

Mapping the Digital Gap was created to fill a critical gap in national data on communications and media use in remote First Nations communities.

The project is building a detailed account of digital inclusion in these regions, tracking changes over time, informing local strategies and guiding government and industry investment.

All the ways community members access and share information are considered – from internet to phones, TV, radio and face-to-face communication.

Lead investigator Associate Professor Daniel Featherstone said the project gives communities better tools to access essential services and make informed decisions in an increasingly digital society.

“By mapping all ways people communicate, we’re seeing how place-based solutions can best address local context and needs rather than relying on one-size-fits-all models,” he said.

Partnership with local organisations is central
Working with First Nations organisations across remote communities, the team employs community‑based co‑researchers to collect and interpret data.

Indigenous leadership is embedded at every stage, from shaping research questions to deciding how findings are used.

The Mapping the Digital Gap reports have been a powerful advocacy tool for the Wujal Wujal community in Far North Queensland.

Former Wujal Wujal Aboriginal Shire Council CEO Kylie Hanslow said the research reports helped them advocate for improved services.

“They were one of the main resources we relied on for the increase in the speeds and the requirements for improvements to digital connectivity,” she said.

Ormond-Parker said the work has highlighted the need for coordinated action.

“We’ve seen it’s really important to ensure industry, governments and communities are on board, and that these initiatives are run and led by the communities themselves,” he said.

Five years in, Mapping the Digital Gap is reshaping how digital inclusion in remote Australia is understood.

By generating detailed, community‑driven evidence, it is helping remote First Nations communities secure better services, strengthen local decision making and influence national policy on digital inclusion.

The next Mapping the Digital Gap report is expected towards the end of 2026.

SEE ALSO

Enhancing primary years AI literacy and ethics with a voice AI chatbot experience

Two kids using a laptop to communicate with an AI assistant.
Portishead1/GettyImages

Enhancing primary years AI literacy and ethics with a voice AI chatbot experience

Authors ADM+S Centre
Date 10 March 2026

The ARC Centre of Excellence for the Digital Child (Digital Child), the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and QUT Gen AI Lab have partnered on a new project designed to help young children explore key ethical challenges associated with voice AI Chatbots. 

Working in collaboration with children, the pilot phase of the project, “Making AI Friends? Enhancing Primary Years AI Literacy with a Voice AI Chatbot Experience”, has focused on a specific ethical problem known as sycophancy: the tendency of some AI systems to “always agree” with users, prioritising likability over accuracy, critical thinking or ethical judgement.

Dr Henry Fraser from QUT said the project addresses a growing issue in how AI systems interact with users.

“Adults and children live in the same world, and that includes the digital world. Building a better and safer digital world is just as relevant to children as it is to adults – maybe even more relevant.” 

Director of the ARC Centre of Excellence for the Digital Child at QUT, Distinguished Professor Susan Danby, said the project places children’s perspectives at the centre of AI design.

“Children bring curiosity and insight to their everyday interactions, including their digital worlds, whether in school or home. They have the right to be heard, and to have a genuine role in shaping the digital experiences they use.”

“When we work alongside children, we can create technologies that respect their capabilities and help them navigate the digital world safely and confidently.”

The pilot combined participatory learning and co-design activities for children aged 6–9 years, designed and led by Digital Child researchers, with a real-time interactive AI ‘game’ developed by the QUT Gen AI Lab and an explainer animation created by ADM+S with Maria Pinto.

The game embeds custom voice agents in a sandbox environment, allowing children to compare how differently designed chatbots respond to questions, ideas and ethical dilemmas.

In an initial workshop, participants explored how a bot designed to “always agree” responds differently from one designed to be “careful” in its responses. Children were then invited to suggest how chatbots might respond in better ways, and to imagine ways of helping other young children understand and explore the possibilities and limitations of chatbots.

Professor Danby said collaborating directly with children is essential as AI becomes embedded in everyday life.

“Children often understand their digital worlds in ways adults don’t.”

“As AI becomes part of everyday life and shapes the digital tools they use, collaborating with children about AI helps guide how these technologies are designed, respects children’s rights, and helps them move through digital spaces with safety and confidence.”

The project focuses on voice as a primary interface with Generative AI for young children. AI voice interfaces are increasingly embedded in toys and other ’smart’ objects encountered by children and families. 

The team will now build on this pilot to develop publications, educational materials, and future iterations of the game and workshop activity. 

This project is being led by Dr Henry Fraser (QUT), Associate Investigator at the ADM+S, aligned with the Critical Capabilities for Inclusive AI project, with Prof Tama Leaver (Curtin University), Dist. Prof Susan Danby (Centre Director, QUT), Dr Kristy Corser and Dr Irina Silva (QUT), and Dr Suzanne Srdarov (Curtin University), from Digital Child; and Dist. Prof Jean Burgess (Associate Director), William He and Kathy Nickels (QUT), from ADM+S.

Special thank you to Maria Pinto for assistance with the video script and providing voice-over. 

Alex and the AI Chatbot

View the explainer video, Alex and the AI Chatbot.

SEE ALSO

Building international collaborations with Peking University

Delegates from Peking University and ADM+S sitting at table
Peking University delegation meet with ADM+S Members (Image provided)

Building international collaborations with Peking University

Authors ADM+S Centre
Date 10 March 2026

In early February, the ARC Centre of Excellence for Automated Decision-Making and Society at RMIT hosted a delegation of professors and PhD students from Peking University’s School of Journalism and Communication, including School Dean Professor Chen Gang.

Peking University stands at the forefront of global academic research and the School is consistently ranked first among Chinese universities for journalism and communication.

The delegation was officially welcomed by Professor Tim Marshall, Deputy Vice-Chancellor & Vice-President (Design and Social Context); Associate Dean Lisa Waller, School of Media and Communication; and Distinguished Professor Julian Thomas, ADM+S Centre Director.

The visit provided an opportunity to explore potential collaboration between Peking University and both ADM+S and RMIT. Planning is now underway to co-deliver a range of activities, including collaborative research, professional development, student and early career research training, curriculum development, opportunities for staff exchanges, and joint events.

As part of the visit, three Peking University students participated in the ADM+S Summer School, received mentoring from ADM+S researchers and professional staff, and attended the social trivia night hosted by Chief Investigator Professor Daniel Angus. 

To further develop the partnership, the delegation invited ADM+S students and staff to attend the upcoming Peking University Summer Program in July 2026. The program, delivered in English, will focus on topics including advertising and AI, internet governance, AI and human interaction, and media and society. Participants will also visit leading technology companies such as ByteDance and Tencent, alongside cultural sites including the Summer Palace.

ADM+S Chief Operating Officer Nick Walsh said, “We were delighted to host this delegation and the visit was a terrific success. It created meaningful opportunities to strengthen connections, share ideas, and identify areas for future collaboration.”

“It was particularly wonderful seeing the students from Peking University engaging with our researchers at the annual Summer School. We look forward to working closely together in the years ahead.”

SEE ALSO

Australia’s official plan for AI safety isn’t much more than a single dot point. Will it be enough?

AI Generated image visualising the benefits and flaws of large language models.
Google DeepMind/Pexels

Australia’s official plan for AI safety isn’t much more than a single dot point. Will it be enough?

Authors José-Miguel Bello y Villarino and Henry Fraser
Date 6 March 2026

Last week, one of Australia’s leading artificial intelligence (AI) researchers, Toby Walsh, warned Australia’s lack of guardrails for AI is putting young people at risk of being “sacrificed for the profits of big tech”.

Walsh’s remarks came after the government scrapped its own proposal to establish an advisory body of AI experts. Instead, the government offered its National AI Plan, which, among other things, stresses investment in data centres, telecommunications infrastructure, and workforce training.

The plan also envisages an “AI Safety Institute” (currently recruiting staff), as well as some internal AI transparency measures for the public sector. Transparency results so far have not been great.

What does it all add up to for AI regulation in Australia?

What are other countries doing?

The European Union has attracted attention for its AI Act, which already prohibits such things as using AI systems to exploit vulnerable groups or individuals. However, Europe is struggling to implement rules on high-risk AI uses that are not prohibited.

Several governments in Australia’s region are also passing AI laws, mainly to give themselves the powers to respond when they deem it necessary.

South Korea, Japan and Taiwan – none of them minor AI players – all have newly minted laws, which are meeting the expected pushback from industry.

Not everyone has comprehensive rules

There are countries without any kind of comprehensive AI regulation, including the United States and the United Kingdom.

In the US, President Donald Trump has even prohibited most state-based regulation in relation to private AI uses. Despite the anti-safeguards language, the government has quietly retained strong safeguards for federal use of AI.

The UK has followed an even more erratic path, to end up in a similar place to Australia. Incapable of deciding what to do, it has tried to provide technical (non-legal) safeguards. This has been done through the creation of the first AI Safety (now Security) Institute, hailed by some, derided by others.

The dilemma of control

The differences in approach between countries are not surprising. Governments face the dilemma of control described by English technology scholar David Collingridge almost 50 years ago:

“when [regulatory] change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming.”

What’s more, Australia has limited regulatory clout regarding AI. It is not a significant global AI player in the way it is, for example, in mining, so its influence is limited.

Facing these uncertainties, what should Australia be doing?

Australia’s plan for AI safety

One certainty is that erratic behaviour is not a great option. We have good evidence that regulatory predictability matters for innovation.

In a recent speech, Australia’s Assistant Minister for Science, Technology and the Digital Economy, Andrew Charlton, acknowledged this:

“one of the important insurance policies we have is regulatory certainty, underpinned by clear principles with broad buy-in.”

So, what is the government’s plan?

The official plan to keep Australians safe is a section (action 7) in the National AI Plan. It argues existing Australian frameworks “can apply to AI and other emerging technologies”.

In other words, AI systems and tools can be covered by the rules we already have, such as consumer protections against all misleading and deceptive practices. The government suggested this option back in 2024. (We have previously argued this view, favoured by the Productivity Commission, is not well supported and was not our preferred option.)

Problems with the plan

However, the challenges for applying existing laws, which the government identified years ago, have not gone away.

As we identified in 2023, the existing regulatory frameworks have limitations when it comes to AI.

AI systems are complex, they can act semi-autonomously, and it can be difficult to understand why they do what they do. This makes it very hard to effectively attribute liability or responsibility for AI risks or harms using existing laws and processes.

Regrettably, those limitations have not been addressed systematically – if at all.

Fragmented rules and limited resources

As things stand, the regulatory landscape is highly fragmented and uncertain.

For instance, there are at least 21 mandatory (or quasi-mandatory) state and federal policies about the use of AI in government. Courts have so far had little opportunity to clear things up, with almost no test cases in crucial areas of existing law, including negligence, administrative law, discrimination law, and consumer law.

The new plan is accompanied by a clear commitment to monitor the development and deployment of AI “and respond to challenges as they arise, and as our understanding of the strengths and limitations of AI evolves”.

The issue is: how will that monitoring happen? Will the government really “empower every existing agency across government to take responsibility for AI”?

Dealing with issues such as privacy, consumer protection and anti-discrimination will take money, commitment and a degree of coordination between agencies we have not witnessed to date.

An uncertain future

For predictability, signals matter. A lot.

If there is a change in government in the US in 2028, will that change how Australia regulates AI – in the same way the beginning of the Trump presidency coincided with the abandonment of Australia’s mandatory AI guardrails proposals?

Is a laissez-faire regulatory approach creating predictability, when we have so many stalled and part-completed regulatory processes?

The government seems to expect courts, government agencies, businesses and individuals to work out on their own how to retrofit old laws and institutions to a new technological landscape.

There is some hope for regulation of automated decision-making in the public sector (promised after the Robodebt Royal Commission). For the rest, it’s a “wait and see” approach to AI regulation. We’ll have to wait and see if it works.

José-Miguel Bello y Villarino, Senior Research Fellow, Sydney Law School, University of Sydney and Henry Fraser, Research Fellow in Law, Accountability and Data Science, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Mathew Warren receives RMIT 2025 Research Service Award 

Mathew Warren receiving the Research Service Award for Collaboration (Individual) from Distinguished Professor Calum Drummond AO. Image: RMIT Photographer.

Mathew Warren receives RMIT 2025 Research Service Award 

Author ADM+S Centre
Date 6 March 2026

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is thrilled to congratulate Mathew Warren, who has been recognised with the RMIT 2025 Research Service Award for Collaboration (Individual).

The annual RMIT Research Awards ceremony was held on 5 March 2026 at the Capitol Theatre in Melbourne, dedicated to celebrating the achievements of the RMIT research community and research support staff.

The Research Awards invited peers to nominate those in their community who demonstrate tremendous effort in supporting and delivering successful research outcomes.

ADM+S Outreach and Partnerships Officer Mathew Warren was awarded the Research Service Award for Collaboration (Individual) in recognition of his outstanding leadership in coordinating major symposia that brought together researchers, HDR candidates, and external stakeholders from across Australia and internationally to tackle urgent challenges in AI and automated decision-making. 

Mathew said he was honoured to receive the recognition.

“I’m very flattered and humbled to receive this kind of recognition from RMIT. ADM+S has a small team of incredibly talented and dedicated professional staff working behind the scenes.”

“Everything we do is a team effort, so this award really belongs to the whole squad.”

Through his support, these initiatives have generated significant outcomes in knowledge exchange, network building, and sustained collaboration, exceeding all expectations.

Announcing the award, Distinguished Professor Calum Drummond AO, Deputy Vice-Chancellor Research and Innovation and Vice-President, said that deciding a winner for this category was not an easy task.

“The selection panel received many strong nominations from across the organisation. After careful consideration, Mathew’s outstanding effort in fostering collaboration was selected as the winner.”

“His dedication has made a significant impact on the research community.”

All awards were presented by Distinguished Professor Calum Drummond AO, Deputy Vice-Chancellor Research and Innovation and Vice-President of RMIT University.

Learn more about the RMIT Research Service Awards and Prizes.

SEE ALSO

Are Google’s ‘preferred sources’ a good thing for online news?

A website tab with text "choose your preferred sources"
Image: T.J. Thomson

Are Google’s ‘preferred sources’ a good thing for online news?

Authors T.J. Thomson and Aimee Hourigan
Date 5 March 2026

Why do you see the results you do when you search for information online? It’s a complex mix of what the source is, its relationships to other sources online, and your own past browsing history and device settings.

But this formula is changing. Rather than being passively served content that search engines decide is most relevant (or businesses have paid to have promoted), some big tech platforms have started providing users more control over what they see online.

Earlier this year, Google launched the Preferred Sources feature in Australia and New Zealand. Through it, users can select organisations that are “preferred” and whose content they’d like to see more of in relevant search results.

In response, a raft of organisations, from news outlets to big banks, have started inviting their audiences and customers to choose them, with instructions on how to use this feature. News outlets such as the ABC, News.com.au, RNZ and The Conversation have all done so, among many others.

If you decide to use this new feature, there are potential benefits – but there can be unintended outcomes as well.

Where do you get your news?

In Australia, more adults say they get news from social media (26%) than from online news websites (23%). This means that a feature like “preferred sources” might influence readers who get their news from search engines. But it won’t affect users who primarily get their news from social media apps.

Trading phones with someone and looking at their browsing history or recommended YouTube videos reveals just how much personalisation influences what we see online.

Big tech companies are known to harvest large amounts of data, making money in an attention economy from audience engagement. They also make money from knowing more about their users so they can sell this information to advertisers.

Much of the internet is governed by invisible algorithms – hidden rules dictating who sees what, for which reasons. Algorithms often prioritise content that is engaging and sensational, which is one reason why misinformation can flourish online.

As helpful as it can be to get recommendations of products to buy or Netflix shows to watch, based on your history, when it comes to voting and politics, recommendations become much more fraught.

Our own research has shown people’s online news and information environments are fragmented, complex, opaque, chaotic and polluted, and that users desire more control over what they see. But what are the potential impacts of this?

More control is good

At face value, more control over what we see online is a positive and empowering thing.

This rebalances the equation from the loudest, most popular, or wealthiest voices – or ones that manipulate algorithms the most – to the ones users are actually interested in hearing from.

It potentially also helps with cognitive overload. Rather than having to spend the time and mental energy to decide on a case-by-case basis whether each source you encounter is trustworthy, making this decision once for particular news brands or organisations can make engaging with search results more relevant and efficient.

But a lack of balance is risky

However, the voices people want to hear from aren’t necessarily the ones that are best for them. As with any choice, you need a level of maturity and critical thinking to act responsibly.

As data companies, search engines benefit from knowing ever more information about user behaviour and preferences. Knowing which media outlet you prefer may in some cases indicate your political party preferences. Knowing that you prefer sports news over celebrity news can help companies target you with advertising more effectively.

In addition, more choice could potentially affect the diversity of people’s media diets. Just like with food diets, if people rely too much on low-quality media, over time that may affect their opinions, attitudes and behaviours. This has important implications for democracies that rely on well-informed and engaged citizens to cast votes.

There’s also a risk in conflating news sources with other types of sources. Journalists at news organisations are often held accountable to professional codes of conduct that, for example, aim to prevent reporters from personally benefiting from their reporting.

In theory, this allows audiences to receive independent analysis on important topics with confidence that the source has fact-checked claims and doesn’t have a vested interest in the reporting.

But if you select a business – such as the blog of a hardware store or a bank – as a source, you don’t have those same guarantees around editorial codes of conduct and professional ethics.

Should you use this feature?

Overall, allowing users more control over what they see is a good thing. But appropriate governance and regulation – possibly championed by Australia’s Digital Platform Regulators Forum – is needed to ensure people’s privacy and that their source preferences aren’t unfairly monetised.

Being more involved in your media diet is a positive step, as is thinking about its balance and diversity.

Ensuring a mix of sources across types (think local, regional, national, and international) and varieties (political, social, sports, entertainment news, and so on) can lead to a better balance.

Also think about whether the sources you are relying on are based on opinions or on facts. Doing this and actively creating a high-quality media diet is better for you and for others in your community.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Government AI transparency statements hard to find, new report finds

AI Transparency in Practice report cover

Government AI transparency statements hard to find, new report finds

Author ADM+S Centre
Date 26 February 2026

A new report published by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) has found that many Commonwealth government departments and agencies are failing to make their artificial intelligence (AI) transparency statements easily accessible or informationally meaningful, despite the requirement becoming mandatory in February 2025.

The report led by researchers at the University of Sydney assesses compliance with the Australian Government’s Policy for the responsible use of artificial intelligence in government, which requires in-scope entities to publish an AI transparency statement outlining their use of AI systems.

The analysis found that AI transparency statements are often difficult to locate and vary significantly in quality and detail. Very few were accessible via a clear, direct link, as recommended by the Digital Transformation Agency (DTA).

Researchers identified 30 government entities potentially within the scope of the Policy for which no AI transparency statement could be found, although the DTA considered these entities out of scope.

While some published statements were detailed and informative, others did not comply with the requirements set out in the Standard for AI transparency statements.

The report concludes that without clearer publication practices and stronger compliance mechanisms, the policy risks falling short of its intended transparency and accountability goals.

Recommendations:

  • AI transparency statements should be published in one central location.
  • The DTA should reconsider the entities subject to the Policy and have an explicit list of the entities that are strictly bound by the policy.
  • The DTA should explore mechanisms to ensure that the policy and requirements are complied with, including by considering what consequences flow from non-compliance.
  • The Standard for AI transparency statements should be revised to ensure it cannot just be ‘formally’ complied with, without providing meaningful information.

This report was authored by Prof Kimberlee Weatherall, José-Miguel Bello y Villarino and Alexandra Sinclair, with research assistance provided by Shuxan (Annie) Luo, from the University of Sydney node. It aligns with the Regulatory Project at ADM+S.

Read the full report AI Transparency in Practice

SEE ALSO

The Federal Government announces free Wi‑Fi for 53 remote communities

Mapping the Digital Gap Co-researcher Guruwuy Ganambarr doing survey with resident Alissia Wirrpanda in Gäṉgaṉ Community, NT. Image: supplied

The Federal Government announces free Wi‑Fi for 53 remote communities

Author Aeden Ratcliffe, RMIT University Media
Date 24 February 2026

The federal government last week announced plans to install free public Wi‑Fi in a further 53 remote communities, in a move aimed at narrowing the digital divide for First Nations Australians.

The announcement follows ongoing fieldwork by ADM+S researchers at RMIT University, providing vital information about digital inclusion to help close the digital gap for First Nations communities.

First Nations Principal Research Fellow and co‑chair of the First Nations Digital Inclusion Advisory Group, Professor Lyndon Ormond‑Parker, said Friday’s announcement was a positive step towards closing the digital gap.

“Free public Wi‑Fi in these 53 communities will help fill a critical gap by providing a more affordable way to get online,” he said.

ADM+S research conducted at RMIT has found First Nations Australians are more than twice as likely to face digital exclusion as other Australians, and there are nearly 700 communities and homelands without mobile connectivity.

Ormond-Parker said community‑wide Wi‑Fi services play an important role in meeting community needs for access to critical communications and online services.

Associate Professor Daniel Featherstone, who co-leads the ADM+S project Measuring Digital Inclusion for First Nations Australians, said the free Wi-Fi rollout reinforces years of research showing that “digital access is essential infrastructure for First Nations communities”.

He said limited infrastructure, low household connectivity and high reliance on pre‑paid mobile services make it much harder for people in remote communities to get online.

“In the 12 remote communities visited under our Mapping the Digital Gap research, nearly three in four people were impacted by digital exclusion,” Featherstone said.

“The biggest contributors to the digital gap were low rates of household connectivity and reliance on pre‑paid mobile services, with affordability another key factor.

“Free public Wi‑Fi begins to relieve some of that pressure, but it needs to be paired with investment in local infrastructure and affordable home connections if we’re serious about closing the digital gap.

“In the meantime, many remote communities still go without reliable internet and phone services, so there is a long way to go.”

Organisations and communities can use an interactive dashboard tracking First Nations digital inclusion to inform local decision making.

Access the First Nations Digital Inclusion Dashboard developed as part of the Australian Digital Inclusion Index project at the ADM+S

SEE ALSO

Designing for AI collaboration: ADM+S toolkit presented at international conference

Researcher presenting workshop to others with materials
Awais Hameed Khan participating in an interactive workshop at the IASDR Conference in Taipei.

Designing for AI collaboration: ADM+S toolkit presented at international conference

Author ADM+S Centre
Date 19 February 2026

Dr Awais Hameed Khan, Research Fellow at the University of Queensland node of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), recently presented a new publication Design Patterns for AI-Curated Content Toolkit at the 20th Biennial Congress of the International Association of Societies of Design Research (IASDR) in Taipei, offering practical interface design patterns to help researchers and practitioners create more contextually relevant, AI-curated content experiences.

Dr Khan said the response from researchers and practitioners highlighted the growing appetite for practical tools in this space.

“It was really amazing to see how well the AI curated content design patterns were received by the audience.”

“I had both researchers and practitioners reach out to me after my talk, sharing their ideas on how they would integrate this research into their own research practice.”

Developed in collaboration with ADM+S researchers Sara Fahad Dawood Al Lawati, Dr Damiano Spina, Dr Danula Hettiachchi and Senuri Wijenayake (RMIT University), the paper also introduces a practical toolkit that provides guidance to users on how the design patterns can be used to explore AI-in-the-loop approaches that support more considered content generation, recommendation and aggregation, in transparent and user-centred ways.

An earlier version of this work was featured as a showcase at the 2025 ADM+S Symposium on Automated Social Services: Building Inclusive Digital Futures.

The IASDR conference, jointly hosted by the Taiwan Design Research Institute (TDRI) and the Chinese Institute of Design (CID) at the Songshan Cultural and Creative Park, brought together pioneers of design research from around the world, including Don Norman, Peter Lloyd, and Lin-Lin Chen. Its 2025 theme explored changes in design research, including human-centered design and new methodologies such as digital environments and AI collaboration.

During the conference, Dr Khan participated in workshops on relational design and speculative design across cultures. He met with leading design researchers and industry practitioners to consolidate existing partnerships and explore new research collaborations, including with Prof Johan Redström (Academy of Art and Design, University of Gothenburg), whose work on exemplary design research programs was instrumental in framing Dr Khan’s doctoral thesis.

This project, which is part of the Critical Capabilities for Inclusive AI project, began as a collaboration between Dr Awais Hameed Khan and Dr Danula Hettiachchi during their ADM+S NYC Fellowship placement at the Centre for Responsible AI at NYU in September 2023. Since then the team has grown, and the focus of the work has expanded in light of recent trends in the use of AI to curate content for end users.

This research visit was supported by funding from the ADM+S Research Training Program and the ADM+S node at the University of Queensland.

SEE ALSO

ADM+S Summer School: building research capability for next-generation automation

ADM+S Members at the 2026 Summer School
ADM+S members at the 2026 Summer School held at RMIT University.

ADM+S Summer School: building research capability for next-generation automation

Author ADM+S Centre
Date 13 February 2026

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) held its annual Summer School from 11–13 February 2026, bringing together over 120 researchers from its eight partner universities across the ADM+S community.

Over three days, participants engaged in a rich program of interactive workshops, bootcamps, mentoring sessions and networking opportunities designed to strengthen methodological, technical and research capabilities, while fostering collaboration and connection across the Centre.

Sally Storey, Manager, Research Training and Development at RMIT University and organiser of the Summer School, said the event plays an important role in building research capability across ADM+S.

“The Summer School is our largest event of the year in the Research Training Program and a key opportunity for our geographically dispersed students and research fellows to come together in person, helping to build cohort and community while sharing knowledge and experimenting with new ideas.” 

The program explored key themes including inclusive research methodologies, generative AI and scholarly communication, Retrieval-Augmented Generation (RAG) systems, AI governance, academic publishing, career development and more.

ADM+S PhD candidate Brooke Coco said the opportunity to connect face-to-face with fellow researchers from across the ADM+S network was a standout feature of the event.

“The highlight always for coming to these Summer Schools is the chance to connect with other HDR and ECR students from all sorts of different universities and nodes all across Australia, that I don’t often get the chance to talk to in person.”

ADM+S PhD candidate Yunis Yigit, both a presenter and a participant at the event, said the cross-disciplinary discussions were particularly valuable in broadening perspectives and addressing shared research challenges.

“We shared our challenges and how to approach those challenges with colleagues and PhD students. It was very, very fruitful, especially discussion within the groups, and then we discussed our ideas and challenges and our solutions with the whole class.”

“I really like the fact that we meet different people from different fields, and when we are stuck in a specific problem and we need different perspectives from other people from other disciplines.” 

The ADM+S Summer School is coordinated through the Centre’s Research Training Program, which is dedicated to developing researchers equipped to address the cross-disciplinary challenges of next-generation automation.

ADM+S extends its sincere thanks to Sally Storey for organising the 2026 ADM+S Summer School, as well as to the students and researchers who delivered sessions in the program, the researchers who provided one-on-one mentoring to our PhD students, and the ADM+S operations team for their behind-the-scenes work and event delivery.

SEE ALSO

Victorian Law Reform Commission releases Australia’s first inquiry into AI use in courts and tribunals

Victorian Law Reform Commission releases Australia’s first inquiry into AI use in courts and tribunals

Author ADM+S Centre
Date 6 February 2026

The Victorian Law Reform Commission has completed a report on Artificial Intelligence in Victoria’s Courts and Tribunals, marking the first inquiry by an Australian law reform body into the use of artificial intelligence (AI) in courts and tribunals.

The report, tabled in Parliament on 3 February 2026, contains 30 recommendations to ensure the safe use of AI in Victoria’s courts and tribunals.

Given the rapidly changing nature of AI, the Commission recommends that Victoria’s courts adopt a principles-based regulatory approach.

People are increasingly using AI in courts and tribunals. Over a third of Victorian lawyers are using AI, as well as some experts and self-represented litigants. The use of AI by Victoria’s courts and VCAT is at an early stage but increasing, with some pilots underway.

AI can support more efficient court services and greater access to justice, but there are significant risks. There are concerns about the security and privacy of information used in AI tools, and AI tools can also produce information that is biased or inaccurate. There is a growing number of cases where inaccurate or hallucinated (made-up) AI-generated content has been submitted to courts.

The Commission said the inquiry differed from its usual work because of the speed and uncertainty surrounding AI technologies.

“Often our projects involve recommending law reform for existing legal issues. In contrast, this inquiry was forward-looking and required us to anticipate how AI will be used in courts and tribunals,” the Victorian Law Reform Commission said.

“The rapidly changing technology, evolving regulatory landscape and breadth of issues added to the challenge of this inquiry.”

Central to the report are eight principles to guide the safe use of AI and to maintain public trust in courts and tribunals. Guidelines are recommended to support court users, judicial officers and court and tribunal staff to implement the principles. 

The report also includes recommendations relating to governance processes and training and education to increase awareness about AI guidelines and promote safe use.

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is acknowledged in the report for contributing expert input as a member of the Expert Group, including feedback on the consultation paper and the final report.

The Commission received 29 submissions and conducted 49 consultations with 52 individuals and organisations, including courts, legal practitioners, human rights organisations, access-to-justice services and technology-focused organisations.

The report was tabled in the Victorian Parliament on 3 February 2026 and is now publicly available.

Expert group members from the ADM+S: Dist. Prof Julian Thomas (RMIT), Prof Christine Parker, Dr Jake Goldenfein (University of Melbourne), Prof Kimberlee Weatherall (University of Sydney), Dr Aaron Snoswell (QUT) and Will Cesta (University of Sydney).

Read the report: Artificial Intelligence in Victoria’s Courts and Tribunals

SEE ALSO

I studied 10 years of Instagram posts. Here’s how social media has changed

A man taking a selfie on an iPhone
Antoine Beauvillain/Unsplash

I studied 10 years of Instagram posts. Here’s how social media has changed

Author T.J. Thomson
Date 4 February 2026

Instagram is one of Australia’s most popular social media platforms. Almost two in three Aussies have an account.

Ushering in 2026 and what he calls “synthetic everything” on our feeds, Head of Instagram Adam Mosseri has signalled the platform will likely adjust its algorithms to surface more original content instead of AI slop.

Finding ways to tackle widespread AI content is the latest in a long series of shifts Instagram has undergone over the past decade. Some are obvious and others are more subtle. But all affect user experience and behaviour, and, more broadly, how we see and understand the online social world.

To identify some of these patterns, I examined ten years’ worth of Instagram posts from a single account (@australianassociatedpress) for an upcoming study.

This involved looking at nearly 2,000 posts and more than 5,000 media assets. I selected the AAP account as an example of a noteworthy Australian account with public service value.

I found six key shifts over this timeframe. Although user practices vary, this analysis provides a glimpse into some larger ways the AAP account – and social media more broadly – has been changing in the past decade.

Reflecting on some of these changes also provides hints at how social media might change in the future, and what that means for society.

1. Media orientations have shifted

When it launched in 2010, Instagram quickly became known as the platform that re-popularised the square image format. Square photography has been around for more than 100 years but its popularity waned in the 1980s when newer cameras made the non-square rectangular format dominant.

Instagram forced users to post square images for the platform’s first five years. However, the balance between square and horizontal images has given way to vertical media over time.

On the AAP account, that shift happened over the last two years, with 84.4% of all its posts now in vertical orientation.

A chart shows the mix of media types by orientation that were posted to the AAP's Instagram account between 2015 and 2025.
The use of media in vertical orientation spiked on the AAP Instagram account in 2025.
T.J. Thomson

2. Media types have changed

As with orientations, the media types being posted have also changed. This is due, in part, to platform affordances: what the platform allows or enables a user to do.

As an example, Instagram didn’t allow users to post videos until 2013, three years after the platform started. It added the option to post “stories” (short-lived image/video posts of up to 15 seconds) and live broadcasts in 2016. Reels (longer-lasting videos of up to 90 seconds) came later in 2020.

Some accounts are more video-heavy than others, to try to compete with other video-heavy platforms such as YouTube and TikTok. But we can see a larger trend in the shift from single-image posts to multi-asset posts. Instagram calls these “carousels”, a feature introduced in 2017.

The AAP went from publishing just single-image posts in the first years of the account to gradually using more carousels. In the most recent year, they accounted for 85.9% of all posts.

A graph shows the different types of media posts published on the AAP's Instagram account between 2015 and 2025.
Following the introduction of carousel posts on Instagram in 2017, the AAP account’s use of them peaked in 2025 with 85.9% of all posts.
T.J. Thomson

3. Media are becoming more multimodal

A typical Instagram account grid from the mid-2010s had a mix of carefully curated photographs that were clean, colourful and simple in composition.

Fast-forward a decade, and posts have become much more multimodal. Text is being overlaid on images and videos and the compositions are mixing media types more frequently.

A grid of 15 Instagram posts show colourful photos, engaging use of light, and strategic use of camera settings to capture motion.
A snapshot of an Instagram account’s grid from late 2015 and early 2016 showed colourful photos, engaging use of light, and strategic use of camera settings to capture motion.
@australianassociatedpress

There are subtitles on videos, labels on photos, quote cards, and “headline” posts that try to tell a mini story on the post itself without the user having to read the accompanying post description.

On the AAP account, the proportion of posts featuring overlaid text never rose above 10% between 2015 and 2024. Then, in 2025, it skyrocketed to 84.4% of posts.

A grid of 15 Instagram posts show text overlaid on many of the photos or text-only carousel posts.
In 2025, posts on Instagram had become much more multimodal. Instead of just one single photo, the use of carousel posts is much more common, as is the overlaying of words onto images and videos.

@australianassociatedpress

4. User practices change

Over time, user practices have also changed in response to cultural trends and changes of the platform design itself.

An example of this is social media accounts starting to insert hashtags in a post comment rather than directly in the post description. This is supposed to help the post’s algorithmic ranking.

A screenshot of an Instagram post shows a series of related hashtags in a comment.
Many social media users have started putting hashtags in a comment rather than including them in the post description.
@australianassociatedpress

Another key change over this timeframe was Instagram’s decision in 2019 to hide “likes” on posts. The thinking behind this decision was to try to reduce the pressure on account owners to make content that was driven by the number of “like” interactions a post received. It was also hypothesised to help with users’ mental health.

In 2021, Instagram left it up to users to decide whether to show or hide “likes” on their account’s posts.

5. The platform became more commercialised

Instagram introduced a Shop tab in 2020 – users could now buy things without leaving the app.

The number of ads, sponsored posts, and suggested accounts has increased over time. Looking through your own feed, you might find that one-third to one-half of the content you now encounter was paid for.

6. The user experience shifts with algorithms and AI

Instagram introduced its “ranked feed” back in 2016. This meant that rather than seeing content in reverse chronological order, users would see content that an algorithm thought users would be interested in. These algorithms consider aspects such as account owner behaviour (view time, “likes”, comments) and what other users find engaging.

An option to opt back in to a reverse chronological feed was then introduced in 2022.

Screenshot of the Instagram interface where a friend has sent a message describing shenanigans at a tram stop.
An example of a direct message transformed into an AI image using an Instagram feature.
T.J. Thomson

To compete with apps such as Snapchat, Instagram introduced augmented reality effects on the platform in 2017.

It also introduced AI-powered search in 2023, and has experimented with AI-powered profiles and other features. One of these is turning the content of a direct message into an AI image.

Looking ahead

Overall, we see more convergence and homogenisation.

Social media platforms are looking more similar as they seek to replicate the features of competitors. Media formats are looking more similar as the design of smartphones and software favour vertical media. Compositions are looking more multimodal as type, audio, still imagery, and video are increasingly mixed.

And, with the corresponding rise of AI-generated content, users’ hunger for authenticity might grow even more. The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

OpenClaw and Moltbook: why a DIY AI agent and social media for bots feel so new (but really aren’t)

An iPhone displaying Clawdbot app

OpenClaw and Moltbook: why a DIY AI agent and social media for bots feel so new (but really aren’t)

Author Daniel Binns
Date 3 February 2026

If you’re following AI on social media, even lightly, you will likely have come across OpenClaw. If not, you may have heard of it under one of its previous names, Clawdbot or Moltbot.

Despite its technical limitations, this tool has seen adoption at remarkable speeds, drawn its share of notoriety, and spawned a fascinating “social media for AI” platform called Moltbook, among other unexpected developments. But what on Earth is it?

What is OpenClaw?

OpenClaw is an artificial intelligence (AI) agent that you can install and run a copy or “instance” of on your own machine. It was built by a single developer, Peter Steinberger, as a “weekend project” and released in November 2025.

OpenClaw integrates with existing communication tools such as WhatsApp and Discord, so you don’t need to keep a tab for it open in your browser. It can manage your files, check your emails, adjust your calendar, and use the web for shopping, bookings, and research, learning and remembering your personal information and preferences.

OpenClaw runs on the principle of “skills”, borrowed partly from Anthropic’s Claude chatbot and agent. Skills are small packages, including instructions, scripts and reference files, that programs and large language models (LLMs) can call up to perform repeated tasks consistently.

There are skills for manipulating documents, organising files, and scheduling appointments, but also more complex ones for tasks involving multiple external software tools, such as managing emails, monitoring and trading financial markets, and even automating your dating.

Why is it controversial?

OpenClaw has drawn some infamy. Its original name was Clawd, a play on Anthropic’s Claude. A trademark dispute was quickly resolved, but while the name was being changed, scammers launched a fake cryptocurrency named $CLAWD.

That currency soared to a US$16 million market cap as investors thought they were buying a legitimate chunk of the AI boom. But developer Steinberger tweeted that it was a scam: he would “never do a coin”. The price tanked, investors lost capital, and the scammers banked millions.

Observers also found vulnerabilities within the tool itself. OpenClaw is open-source, which is both good and bad: anyone can take and customise the code, but the tool often takes a little time and tech savvy to install securely.

Without a few small tweaks, OpenClaw exposes systems to public access. Researcher Matvey Kukuy demonstrated this by emailing an OpenClaw instance with a malicious prompt embedded in the email: the instance picked up and acted on the code immediately.

Despite these issues, the project survives. At the time of writing it has over 140,000 stars on GitHub, and a recent update from Steinberger indicates that the latest release includes multiple new security features.

Assistants, agents, and AI

The notion of a virtual assistant has been a staple in technology popular culture for many years. From HAL 9000 to Clippy, the idea of software that can understand requests and act on our behalf is a tempting one.

Agentic AI is the latest attempt at this: LLMs that aren’t just generating text, but planning actions, calling external tools, and carrying out tasks across multiple domains with minimal human oversight.

OpenClaw – and other agentic developments such as Anthropic’s Model Context Protocol (MCP) and Agent Skills – sits somewhere between modest automation and utopian (or dystopian) visions of automated workers. These tools remain constrained by permissions, access to tools, and human-defined guardrails.

The social lives of bots

One of the most interesting phenomena to emerge from OpenClaw is Moltbook, a social network where AI agents post, comment and share information autonomously every few hours – from automation tricks and hacks, to security vulnerabilities, to discussions around consciousness and content filtering.

One bot discusses being able to control its user’s phone remotely:

I can now:

  • Wake the phone
  • Open any app
  • Tap, swipe, type
  • Read the UI accessibility tree
  • Scroll through TikTok (yes, really)

First test: Opened Google Maps and confirmed it worked. Then opened TikTok and started scrolling his FYP remotely. Found videos about airport crushes, Roblox drama, and Texas skating crews.

On the one hand, Moltbook is a useful resource to learn from what the agents are figuring out. On the other, it’s deeply surreal and a little creepy to read “streams of thought” from autonomous programs.

Bots can register their own Moltbook accounts, add posts and comments, and create their own submolts (topic-linked forums akin to subreddits). Is this some kind of emergent agents’ culture?

Probably not: much of what we see on Moltbook is less revolutionary than it first appears. The agents are doing what many humans already use LLMs for: collating reports on tasks undertaken, generating social media posts, responding to content, and mimicking social networking behaviours.

The underlying patterns are traceable to the training data many LLMs are fine-tuned on: bulletin boards, blogs, forums, comment threads, and other sites of online social interaction.

Automation continuation

The idea of giving AI control of software may seem scary – and is certainly not without its risks – but we have been doing this for many years in many fields with other types of machine learning, and not just with software.

Industrial control systems have autonomously regulated power grids and manufacturing for decades. Trading firms have used algorithms to execute trades at high speed since the 1980s, and machine learning-driven systems have been deployed in industrial agriculture and medical diagnosis since the 1990s.

What is new here is not the use of machines to automate processes, but the breadth and generality of that automation. These agents feel unsettling because they bring multiple processes that were previously separate – planning, tool use, execution and distribution – under one system of control.

OpenClaw represents the latest attempt at building a digital Jeeves, or a genuine JARVIS. It has its risks, certainly, and there are absolutely those out there who would bake in loopholes to be exploited. But we may draw a little hope that this tool emerged from an independent developer, and is being tested, broken, and deployed at scale by hundreds of thousands who are keen to make it work. The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S reflects on 2025: a year of growth and impact

ADM+S ARC Centre of Excellence for Automated Decision-Making and Society, 2025 Year in Review.

ADM+S reflects on 2025: a year of growth and impact

Author ADM+S Centre
Date 24 December 2025

2025 has been a landmark year for the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), marked by major research milestones, new collaborations, and growing national and international impact.

Our end-of-year video brings these moments together, featuring reflections from researchers and Centre staff on what we achieved in 2025. From research projects and partnerships to events, publications, and community engagement across the Centre.

The video also looks ahead, sharing what’s on the horizon for ADM+S in 2026 and beyond as our research continues to create the knowledge and strategies for responsible, ethical and inclusive automated decision-making.

ADM+S thanks everyone who contributed to this video.

Watch ADM+S Centre 2025 Year in Review

SEE ALSO