ADM+S publications named in APO’s 2024 Top Ten

Author Kathy Nickels
Date 23 December 2024

ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) publications have been named in the APO’s Top Content for 2024.

The Analysis and Policy Observatory (APO) is an open access platform that makes public policy research and resources accessible and useable for evidence-informed decision-making. Each year, the APO names the top ten most visited and viewed resources across 15 broad subject areas for the period December 2023 to November 2024.

This year, publications from the ADM+S Centre have been named in the APO’s Top Ten across five subject areas: Communications, Education, Government, Science and Technology.

In 2024, the ADM+S Centre contributed 30 publications to the Automated Decision-Making and Society collection on the APO.

The Analysis & Policy Observatory (APO) is one of Australia’s leading open access research repositories. The ADM+S shares APO’s goal of supporting evidence-based policy and public debate on the critical challenges facing Australia, and we’re delighted to be working with APO to make ADM+S research more findable, more useable, and more accessible.

Publications named in APO’s Top Ten 2024

Communications
AI and automated decision-making in news and media
Dang Nguyen, James Meese, Jean Burgess, Julian Thomas

Education
A little book of creative methods for social inquiry and research communication
Deborah Lupton

Government
Automated decision-making in New South Wales: mapping and analysis of the use of ADM systems by State and Local governments
Kimberlee Weatherall, Paul Henman, Jose-Miguel Bello y Villarino, Rita Matulionyte, Lyndal Sleep, Melanie Trezise

Science
Generative AI technologies applied to ecosystems and the environment: a scoping review
Ella Butler, Deborah Lupton

Technology
GenAI Concepts
Fan Yang, Jake Goldenfein, Kathy Nickels

Building a trauma-informed algorithmic assessment toolkit
Suvradip Maitra, Lyndal Sleep, Suzanna Fay, Paul Henman

United against algorithms: a primer on disability-led struggles against algorithmic injustice
Georgia van Toorn

Safe and responsible AI in Australia: proposals paper for introducing mandatory guardrails for AI in high-risk settings
Kimberlee Weatherall, Henry Fraser, Aaron Snoswell

Edward Small presenting at Bristol

Edward Small advances AI model for predicting long-term health risks during Alan Turing Internship

Author ADM+S Centre
Date 20 December 2024

Edward Small, a higher degree research student at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at RMIT University, has recently wrapped up his internship in Manchester, UK, which began in May 2024.

The internship, part of a collaborative program run through the prestigious Alan Turing Institute, saw Edward working alongside industry leaders at Accenture to deploy deep learning models to predict the risk of developing long-term conditions such as diabetes, chronic obstructive pulmonary disease, heart attacks, and depression in different populations. 

“It is models such as these that allow our health systems to move from reactive care to proactive care; preventing a disease before it occurs instead of just treating it after the fact,” said Mr Small.  

The deep learning models developed during the internship have been shown to be significantly more accurate than current models, while also being more flexible to use and simpler to train and deploy.

The models are currently being trialled in London and Yorkshire, where they are being integrated into local healthcare systems to assess their impact.

“We worked closely with medical professionals and data scientists in the National Health Service to validate these models, and we are hoping that not only will they lead to a healthier population but also reduce health inequalities across the board.”

These AI-powered models have the potential to significantly improve the identification and management of long-term health conditions. If successful, there are plans to expand their use across the UK’s integrated care boards, with the goal of embedding them into the National Health Service federated data platform by next year.

Edward plans to finalise a paper in 2025 covering the project, its methodology, its findings, and the potential implications of the deep learning models for healthcare.

This internship was supported by the ADM+S Higher Degree Research Training Program.

(L-R) Angel Felipe Magnossao de Paula, Xinye Wanyan and Amanda Lawrence at the 2024 ADM+S Hackathon

2024 ADM+S Hackathon explores public service media recommender systems with the ABC

Author Natalie Campbell
Date 11 December 2024

The 2024 ADM+S Hackathon, ‘Recommender Systems for Public Service Media’, was held on 5-6 December in partnership with the ABC.  

Providing an alternative to commercial forms of content curation and recommender systems is one of the challenges faced by public service media outlets in the digital era. 

Hackathon participants were invited to design systems that could rank or assess content according to public service values such as social responsibility, cultural diversity, and the public interest.   

“We live in a world where algorithms and automated recommendation and curation systems play a hugely important role in shaping the information that we see,” explains Prof Mark Andrejevic, ADM+S Chief Investigator at Monash University and co-organiser of the Hackathon.  

Unlike commercial media, which are driven by market incentives, public media aim to serve the public good. The rise of automated systems has created challenges in ensuring that public service values are reflected in the algorithms shaping content curation and distribution.

“We’ve had four teams working hard to figure out how you might approach the challenge of developing algorithms that instead of prioritizing commercial values, prioritize what we would think of as public service values.  

“The big challenge is to think about, what are those values? How do you operationalize them? How can you put them into an automated system that would recognize them? And how would you implement that in the newsroom of a public service broadcaster to support the editorial decisions that are being made?”

Leading into the two-day Hackathon, ADM+S researcher James Meese was joined by industry partners Angela Ross (Research Lead), Laura Gartry (Innovation Lead) and Stuart Watt (Head of News Strategy and Innovation) from the ABC for a panel discussion on Wednesday 4 December titled ‘What is a public service news algorithm, and why might we need one?’

This conversation, moderated by Prof Mark Andrejevic, is now available on the ADM+S podcast. 

Judges L-R: Angela Ross, Laura Gartry, Saarim Saghir and Stuart Watt

Following the Wednesday night panel, Angela Ross, Laura Gartry and Stuart Watt, as well as Saarim Saghir, Strategy Manager at Google USA, engaged with the teams’ ideas over the course of the Hackathon, ultimately deciding which team would secure $5,000 in research funds for further development of their idea.

“It’s been amazing to be part of this Hackathon,” said ABC Research Lead Angela Ross. 

“We’ve been blown away by the ideas and the innovative thinking, and it’s actually made us rethink some of the ways that we’re looking at the problems.” 

2024 Hackathon team proposals:

L-R: Angela Blakston, Ned Watt, Jiyoon Lee, Yueqing Xuan and Damiano Spina (team leader)

‘BYO Values’
The Value Aware Ranking (VAR) Co-Pilot is an editorial toolkit which provides a scoring system to rate stories based on pre-defined and customisable public service media values.  

While the ABC produces news stories with its public service values in mind, its existing recommender systems are not optimised for these values. VAR could be integrated into a back-end editorial interface and assist editorial teams to better serve their public interest function in a practical and transparent way.  

VAR would help identify value gaps in certain topics that an editorial team could address with story commissioning and editing.  

For consumers, VAR could help to strengthen public trust and accountability by providing transparency, explicitly linking editorial decisions to public service media values and helping consumers understand the rationale behind story prioritisation.  
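The kind of weighted, value-based scoring described above can be loosely sketched in code. The value names, weights, and story data below are invented for illustration only and are not the team’s actual design:

```python
# Hypothetical sketch of a VAR-style scoring pass: each story carries
# per-value ratings, which are blended into one weighted score for ranking.
# All names and weights here are assumptions, not the team's implementation.

PSM_VALUES = {"social_responsibility": 0.4, "cultural_diversity": 0.3, "public_interest": 0.3}

def score_story(value_ratings, weights=PSM_VALUES):
    """Combine per-value ratings (0-1) into a single weighted score."""
    return sum(weights[v] * value_ratings.get(v, 0.0) for v in weights)

def rank_stories(stories):
    """Sort stories by their public-service-value score, highest first."""
    return sorted(stories, key=lambda s: score_story(s["ratings"]), reverse=True)

stories = [
    {"id": "a", "ratings": {"social_responsibility": 0.9, "cultural_diversity": 0.2, "public_interest": 0.8}},
    {"id": "b", "ratings": {"social_responsibility": 0.3, "cultural_diversity": 0.9, "public_interest": 0.4}},
]
print([s["id"] for s in rank_stories(stories)])  # → ['a', 'b']
```

Making the weights configurable per editorial team is what would let the “customisable” values in the proposal surface in practice.
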

L-R: Baiyu (Breeze) Chen, Wilson Wongso, Alexa Scarlata (team leader) and Mohammad Faisal

‘ABC News Wrap’
ABC News Wrap is an innovative news recommendation system designed to provide users with a curated list of the top 10 headlines they need to read daily, such as during their commute. 

This functionality is powered by an agentic LLM-augmented recommender system, which integrates agents with distinct priorities such as:

  • ABC’s values and charter guidelines
  • Users’ interests and preferences
  • Community-driven trends (what other users in their proximity or demographic are reading)

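As a rough illustration only, with simple weighted scoring functions standing in for the LLM-backed agents the team describes, the blending step might look like:

```python
# Illustrative-only sketch: several "agents" with distinct priorities each
# score a story, and the scores are blended into one ranking. The fields,
# weights, and agent logic are invented placeholders for this example.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    weight: float
    score: Callable  # story -> float in [0, 1]

def top_headlines(stories, agents, k=10):
    """Blend each agent's score into one ranking and keep the top k."""
    def blended(story):
        return sum(a.weight * a.score(story) for a in agents)
    return sorted(stories, key=blended, reverse=True)[:k]

agents = [
    Agent("charter", 0.5, lambda s: s["charter_fit"]),
    Agent("interests", 0.3, lambda s: s["user_interest"]),
    Agent("trends", 0.2, lambda s: s["local_trend"]),
]
stories = [
    {"id": 1, "charter_fit": 0.9, "user_interest": 0.4, "local_trend": 0.2},
    {"id": 2, "charter_fit": 0.3, "user_interest": 0.9, "local_trend": 0.9},
]
print([s["id"] for s in top_headlines(stories, agents, k=2)])  # → [1, 2]
```
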
L-R: Xinye Wanyan, Angel Felipe Magnossao de Paula, Amanda Lawrence (team leader) and Sara Fahad Dawood Al Lawati. Absent: Ben Shaw

‘IchiBan’
Public Service Media around the world are developing recommendation systems but often struggle to find effective ways to monitor and evaluate them. The User Behaviour Simulation for Recommender Systems proposal uses agent-based simulation methods to generate interaction data for monitoring and evaluation as an iterative process.  

This solution methodology involves the following: 

  • LLM-based or manual generation of user profiles.
  • LLM-based generation of user simulations from those profiles.
  • The simulated users interact with the recommender system and provide feedback, supporting evaluation and further development of the recommender in an iterative way.

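A minimal sketch of this simulate-and-evaluate loop, with invented placeholder profiles and a trivial click model standing in for the LLM-generated components, might look like:

```python
# Minimal sketch of the simulate-interact-evaluate loop described above.
# Profiles and the click model are invented stand-ins; the proposal would
# generate both with an LLM rather than the simple rules used here.

import random

def make_profiles(n, topics=("politics", "sport", "arts")):
    """Stand-in for LLM-based profile generation."""
    rng = random.Random(0)  # seeded for reproducible simulations
    return [{"likes": rng.choice(topics)} for _ in range(n)]

def simulate_session(profile, recommendations):
    """Simulated user clicks only the items matching their interest."""
    return [r for r in recommendations if r["topic"] == profile["likes"]]

def evaluate(profiles, recommender):
    """Run every simulated user through the recommender; return mean click rate."""
    rates = []
    for p in profiles:
        recs = recommender(p)
        rates.append(len(simulate_session(p, recs)) / max(len(recs), 1))
    return sum(rates) / len(rates)

# A deliberately naive recommender that ignores the profile entirely,
# to show how the loop surfaces its weaknesses.
recommender = lambda profile: [{"topic": "politics"}, {"topic": "sport"}]
print(evaluate(make_profiles(10), recommender))
```

Each pass of the loop yields interaction data without real users, which is what makes iterative monitoring and evaluation feasible before deployment.
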
L-R: Madhurima Khirbat, Sachin Cherumanal, Jiaman He and Jenn Wilson (team leader). Absent: Fletcher Scott

‘Meow’
The Meow team project focused on the re-ranking section of the recommendation system pipeline, where they advocated for ABC subscribers, journalists, and editors to collaboratively operationalise public service news values. This begins with a user study, asking subscribers to allocate a public service news value of their choice to each story they engaged with on the ABC news website.  

The aim of this study would be to understand how ABC subscribers’ ideas of public service news values may be similar or different to those of journalists.

“I really enjoyed the Hackathon because I could collaborate with people from different backgrounds to work together to solve a problem which might help the ABC,” said participant Yueqing Xuan.  

“We had technical people, we also had people from design and journalism, so we could work together to make sure everything was understandable.”  

The ADM+S Hackathon was organised as part of the Centre’s Research Training program for 2024.  

View event photos.  

Listen to ‘What is a public service news algorithm, and why might we need one?’ 

Data Donation Stall: Unveiling the digital self

Discover the Digital You at the 2024 Woodford Folk Festival

Author Kathy Nickels
Date 13 December 2024

Festival-goers at the 2024 Woodford Folk Festival are invited to step into the hidden world behind their data at the Data Donation Booth: Unveiling the Digital Self – an interactive experience that offers a unique opportunity to explore and better understand their “digital selves.”

This initiative has been developed by QUT’s Digital Media Research Centre (DMRC) in collaboration with the Australian Internet Observatory (AIO). Together, they are shining a light on how personal data shapes the digital world we live in.

At the Data Donation Booth, visitors will have the chance to interact with expert “algorithm whisperers” who will provide personalised insights into their digital footprints.

From the ads people see on social media to the playlists they enjoy, researchers will help visitors uncover the data trail they leave behind every day. This immersive experience will also feature the Tree of Data, a visual representation of how individual digital choices collectively shape our online culture.

In addition to the interactive booth, QUT DMRC will host three thought-provoking panels addressing pressing topics in digital media and society:

  • The 5W’s of Online Safety: Who’s Responsible for Protecting Us Online?
  • Living Cinema: Is the ‘Death’ of Moviegoing Really the End?
  • The Kids Are Alright: Debunking Fears About Children’s Digital Lives.

These panels offer a chance to hear critical discussions about online safety, the future of cinema, and how children navigate digital spaces.

Visit the Data Donation Booth and attend the DMRC panels at the Woodford Folk Festival from 27 December to 1 January in Stanmore, Queensland, to explore the intersection of digital media, society, and the pressing issues of our time.

The Data Donation Booth is part of the Australian Internet Observatory, an initiative from the ARC Centre of Excellence for Automated Decision-Making + Society (ADM+S).

Special recognition is given to the developers, designers, and creative contributors whose work has brought the Data Donation Stall to life, including ADM+S members Prof Daniel Angus, Dr Khanh Luong, Prof Patrik Wikstrom, Dr Abdul Obeid and William He, alongside Tia Bayer (QUT’s Digital Media Research Centre), Adam Smit, Ryan Bennett and Iksha Limbu (QUT eResearch), and Thom Saunders (QUT VISER). We also acknowledge the volunteers from ADM+S and DMRC who will be staffing the Booth over the six days.

For more details about these sessions and timings, visit the Woodford Folk Festival website.

Tennant Creek co-researcher Floyd King conducting a survey with elder John Duggie

ADM+S funded $6 million to measure First Nations digital inclusion

Author ADM+S Centre and RMIT University
Date 12 December 2024

The funding announcement is part of the Australian Government’s plan to invest $68 million to narrow the digital gap by supporting more First Nations communities to access the internet.

The funding will support a new three-year project led by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), to collect data on the digital inclusion experiences of First Nations people in urban, regional and remote areas across Australia.

Digital inclusion refers to access to communications services and devices, affordability and digital ability. The digital gap is the difference in levels of digital inclusion between First Nations people and national averages across Australia.

With Target 17 of the National Agreement on Closing the Gap aimed at addressing the uneven levels of digital inclusion amongst First Nations Australians by 2026, ADM+S Director and RMIT Distinguished Professor Julian Thomas said accurate measurement of digital inclusion is needed to track progress.

“There is currently a lack of data to measure the scale and changing nature of digital inclusion amongst First Nations peoples over time,” he said.

“This is largely because First Nations digital inclusion is difficult to capture accurately in national survey approaches.

“Measuring digital inclusion both within and across First Nations communities requires close engagement with the communities themselves, their organisations and leaders.

“It also requires different approaches tailored for urban, regional, rural and remote communities.”

First Nations lead researcher, Associate Professor Heron Loban, said it’s important the research is First Nations led and delivered, while following principles of Indigenous data governance.

“We will be ensuring the data is available to First Nations communities and organisations to use in their own analysis, planning and service delivery,” she said.

The funding will support a three-year project capturing the digital inclusion experiences of First Nations people. The project builds on ADM+S’s existing projects supported by industry partner Telstra: Australian Digital Inclusion Index, which measures digital inclusion across Australia, and Mapping the Digital Gap, which focuses on the digital inclusion experiences of First Nations Australians in remote locations.

The national data collection project was announced by Communications Minister Michelle Rowland at the launch of the First Nations Digital Inclusion Advisory Group’s Roadmap: 2026 and Beyond on 10 December.

The roadmap outlines long-term and community-led strategies for addressing the digital divide and supporting First Nations technological innovation.

RMIT Senior Research Fellow, Dr Daniel Featherstone, said improving digital inclusion and access to services was critically important to ensure informed decision-making and agency among Aboriginal and Torres Strait Islander people.

“Everyone should have the opportunity to benefit from digital technologies,” he said.

“We use these technologies to access essential services for health, welfare, finance and education, participate in social and cultural activities, follow news and media, as well as connect with family, friends, and the wider world.”

A second phase of Mapping the Digital Gap, to be conducted in 10 First Nations communities from 2025–28, will contribute remote data to measure Target 17.

View the original article by Shu Shu Zheng, RMIT University Media.

Aaron Snoswell (left) and Gonzalo Arenas (right).

Exploring AI Governance across Australia and Chile

Author Kathy Nickels
Date 12 December 2024

ADM+S Associate Investigator and Senior Research Fellow at the GenAI Lab at QUT, Dr Aaron Snoswell met with the Head of International Affairs at the Ministry of Science, Technology, Knowledge and Innovation of Chile to discuss the evolving regulatory landscapes of artificial intelligence in Australia and Chile.

The discussion focused on fostering global collaboration in AI governance, sharing insights into best practices, and examining opportunities to address the challenges posed by rapid technological advancements.

Mr Gonzalo Arenas, Head of International Affairs at the Ministry of Science, Technology, Knowledge and Innovation of Chile has a career spanning roles in international relations within the Science, Technology, and Innovation (STI) sector, and has facilitated agreements with world-renowned institutions such as MIT, Harvard, Columbia, and China’s Ministry of Science and Technology.

“AI is a technology that doesn’t respect borders. As such, AI governance inherently needs to involve international cooperation,” said Dr Snoswell.

Discussions explored how Australia and Chile can jointly contribute to shaping global AI norms and standards, leveraging their respective strengths. For Chile, this includes integrating AI initiatives into its broader focus on sustainable development, such as clean technologies and renewable energy. For ADM+S, it means contributing its research expertise in the societal impacts of AI to inform robust regulatory frameworks both here in Australia and within the Asia-Pacific region more broadly.

Chile has been an active participant in international AI safety efforts, with representation at the 2023 UK AI Safety Summit at Bletchley Park and at the 2024 Seoul AI Safety Summit, and as part of the organizing committee for the upcoming Paris AI Action Summit in February 2025. Chile published its first National AI Policy in 2021, and since then there have been significant advances such as the creation of the National Centre for Artificial Intelligence (CENIA) and the creation of the first AI doctorate in Chile and Latin America.

Sachin Cherumanal presenting at ACM ICMI'24

ADM+S PhD Student wins third place at ACM ICMI’24 for paper developed during research visit

Author Natalie Campbell
Date 11 December 2024

ADM+S PhD Student Sachin Pathiyan Cherumanal from RMIT University has returned from the 2024 Association for Computing Machinery (ACM) International Conference on Multimodal Interaction (ICMI), winning third place for a paper developed during a 2023 research visit.

In March 2023, Assoc Prof Ujwal Gadiraju, Director of the Delft AI Design at Scale Lab, visited the ADM+S Centre and presented his work on ‘The How, What, and Why of Effecting Human-AI Decision-Making’.

It was during this visit that ADM+S PhD Student Sachin Pathiyan Cherumanal identified synergies between Ujwal’s research, and his own PhD topic: Fairness-Aware Question Answering for Intelligent Assistants.

“Our mutual interest in interactive information access, misinformation management, cognitive bias and the ADM+S project ‘Quantifying and Measuring Bias and Engagement’, sparked discussions that later led to us collaborating on this project,” explained Sachin.

This connection inspired a three-month research visit to TU Delft from September to December 2023 (funded by Dr Spina’s ARC DECRA Fellowship), working with Ujwal and colleagues from the Web Information Systems group of the Faculty of Electrical Engineering, Mathematics and Computer Science.

During this visit, Sachin, Ujwal, and Damiano Spina (RMIT) developed the paper, ‘Everything We Hear: Towards Tackling Misinformation in Podcasts’, which was published and presented at the recent ACM ICMI’24 in San Jose, Costa Rica.

In the paper, the authors argue that the rise of podcasts as a popular medium for disseminating information necessitates a proactive strategy to combat the spread of misinformation in this format.

The work envisions the application of auditory alerts as an effective tool to tackle misinformation in podcasts and proposes the integration of alerts to notify listeners of potential misinformation within the podcasts they are listening to.

The authors identify several opportunities and challenges in this path and aim to provoke novel conversations around instruments, methods, and measures to tackle misinformation in podcasts.

The paper was presented in the Blue Sky track, which calls for open-ended, possibly “outrageous” or “wacky” ideas that present new problems, new application domains, or new methodologies that are likely to stimulate significant new research, and was awarded third prize.

“The presentation sparked insightful and diverse perspectives from the audience,” he said.

“The discussions following the presentation involving experts from diverse research areas also inspired us to think of possibilities of multimodal approaches to address misinformation in podcasts.”

While at the conference, Sachin also presented ‘Towards Investigating Biases in Spoken Conversational Search’ on behalf of ADM+S co-authors Prof Falk Scholer, Dr Damiano Spina, and Dr Johanne Trippas.

The paper addresses the challenging linear nature of voice-based systems and proposes four avenues for further examination into designing fair and effective voice-based systems.

This research visit was supported by ARC Centre of Excellence for Automated Decision-Making and Society and the DECRA awarded to Dr Spina, DE200100064.

ADM+S hosts USC Election Cybersecurity Initiative to consider Fair Elections in the Age of AI

Author ADM+S Centre
Date 4 December 2024

On 4 December 2024 the ARC Centre of Excellence for Automated Decision-Making and Society met with colleagues from the University of Southern California’s Election Cybersecurity Initiative for a day-long workshop on fair elections in the age of AI.

RMIT University Vice Chancellor Alec Cameron welcomed attendees, explaining “This is an essential event for sharing experiences across countries.”

The program brought together journalists, policymakers, researchers and industry experts to examine the capabilities of current AI systems, the dynamics of digital media platforms, and the institutional, technical and regulatory strategies that can protect elections now and in the future.

“This is one of the most interesting conferences we’ve had anywhere in the world,” said Adam Clayton Powell, Executive Director of the USC Election Cybersecurity Initiative.

“What we find is that all of the free and fair democracies around the world share some of the same challenges and some of the same adversaries.

“Our job here is to connect the best practices around the world and to put people in touch with each other because the bad actors are always evolving and always changing, looking for vulnerabilities.”

Across the day, sessions covered digital infrastructures and political economies, verifying and fact checking, election auditing, media obligation, online credibility, and much more.

ABC journalist and ADM+S industry partner Casey Briggs delivered a presentation titled ‘What just happened? The US Election in Review’, after covering the election extensively in his role at the ABC.

Academic experts contributed insights from their respective fields, spanning fact-checking, AI, digital cultures, dis- and misinformation, platform regulation, and more, with case studies and evidence to ground critical discussions.

Closing the event, ADM+S Director Prof Julian Thomas said, “It’s been inspiring to see the work the Election Cybersecurity Initiative are doing, especially the engagement in public education, policy, and with practitioners in the field.

“It’s been an incredible day for us,” he said.

A recording of this event will be made available on the ADM+S YouTube channel shortly.

More information about this event.

View event image library.

Internet use grows in remote First Nations communities, but cost still a barrier

Author RMIT University
Date 3 December 2024

A new report shows internet access in Australia’s remote and very remote communities improved in the past two years as 4G, Wi-Fi and satellite infrastructure has been bolstered across regional Australia.

The Mapping the Digital Gap report led by RMIT University found a 12% increase in internet access and an 18% increase in regular internet usage.

Despite improvements in access, however, the survey found cost remains a barrier to greater digital inclusion.

Affordability is limiting uptake

More than two-thirds of First Nations people surveyed in Australia’s remote and very remote communities are struggling to afford internet, while over half the communities and homelands still don’t have mobile access.

Close to 70% of survey respondents reported they had made sacrifices or cut back on essential costs to afford internet, up from 40% in 2022.

The study also found 99% of mobile phone users rely on prepaid credit recharges, with low and unreliable incomes limiting uptake of better value monthly plans.

Of Australia’s 1,505 remote and very remote communities and homelands, 796 don’t have access to mobile services.

RMIT University researchers have been mapping digital inclusion in remote First Nations communities over three years.

Lead investigator Dr Daniel Featherstone said the gap is showing signs of narrowing but some striking inequalities remain.

“As access to mobile technology slowly improves, we’re finding affordability is still a critical barrier to digital inclusion,” he said.

Featherstone and his RMIT research team visited 12 remote Indigenous communities, working with local First Nations organisations.

The purpose is to track progress towards Aboriginal and Torres Strait Islander people having equal levels of digital inclusion by 2026, a target of the National Agreement on Closing the Gap.

Digital inclusion initiatives are improving access

There have been significant developments aimed at improving digital inclusion between 2022 and 2024, particularly in terms of Wi-Fi and mobile infrastructure to enable access.

Featherstone said there’s been a 6% increase in regular internet users in remote First Nations communities since 2022.

“More than 60% of people we surveyed are now using the internet several times a day or more,” he said.

“It’s an improvement, but there’s still 14% of non-internet users and many sites still struggle with patchy, slow and unreliable services.”

Work to improve this is ongoing, with expanded delivery of free Wi-Fi hotspots and mesh networks playing an important role in improving access.

The share of people accessing the internet via Wi-Fi in public spaces rose to 31%, up from 15% in 2022.

Telstra, the study’s industry partner, has been installing 4G mobile towers in remote communities, providing essential connectivity.

Internet company Starlink introduced new satellites covering northern Australia in 2022, which has seen rapid uptake among community agencies and staff.

But as one community leader commented, digital inclusion is not just about access to technology for the sake of it.

“It’s about empowering our communities to participate fully in the digital world. This project has been instrumental in highlighting the unique challenges we face and the progress we are making towards closing the digital gap,” they said.

Researchers noted more of Australia’s First Nations people are engaging online with music, videos and games, up 17% since 2022, as well as high social media use to connect with family and friends.

“This highlights that First Nations people are engaging in online activities that are relevant to their lives and communities, despite significant barriers around access and affordability,” Featherstone said.

There was also increased use of online banking, government services and online shopping, however some communities had reductions in internet usage.

This could be because there has been a 19% drop in households that own computers, meaning there’s less opportunity, particularly for children, to learn how to use computers at home.

A lack of identification documents, two-factor authentication difficulties caused by regular changes in mobile number, and low use of email also impacted online engagement.

So too did a lack of digital support, poor accessibility, limited English literacy, and concerns about scams and cyber-safety issues.

Still more to do on closing the digital gap

Featherstone said there remains an urgent need for programs and support to address digital ability.

“The planned Digital Mentors program included in 2024 Budget measures will be a positive development,” he said.

“But with so many communities, more expansive programs for digital skills, support and online safety awareness are needed.

“Without this, the gap in digital ability is likely to further widen, particularly in a period of rapid digital transformation to online service delivery and withdrawal of face-to-face services.”

The Mapping the Digital Gap project is funded by Telstra and the ARC Centre of Excellence for Automated Decision-Making and Society.

The project’s second phase starts next year, with funding for a larger-scale survey measuring access to mobile technology for First Nations people across Australia, not just in remote communities.

Mapping the Digital Gap: 2024 Outcomes Report was prepared for ARC Centre of Excellence for Automated Decision-Making and Society. (DOI: 10.60836/xspj-w062).

Co-authors: Daniel Featherstone, Lyndon Ormond-Parker, Julian Thomas, Sharon Parkinson, Kieran Hegarty, Leah Hawkins, Jenny Kennedy, Lucy Valenta and Lauren Ganley.

View full report: http://doi.org/10.60836/xspj-w062

View original article published by RMIT University.

SEE ALSO

ADM+S professional staff honoured with Awards for Excellence

Award recipients Leah Hawkins, Nick Walsh and Natalie Campbell (left to right)

ADM+S professional staff honoured with Awards for Excellence

Author ADM+S Centre
Date 5 December 2024

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is proud to announce that several of its professional staff have been recognised with awards celebrating their exceptional contributions to their organisations and the broader research community.

2024 Research Service Award for Service Excellence, RMIT University
This award recognises and celebrates excellence in research and innovation support.

Nick Walsh received this award in recognition of his pioneering role as the first Chief Operating Officer of RMIT’s inaugural ARC Centre of Excellence, where he has successfully navigated complex systems and coordinated a remarkable mid-term report with global partners.

Professor Calum Drummond, RMIT’s Deputy Vice-Chancellor for Research & Innovation and Vice-President, said ‘Nick’s dedication to meeting the centre’s needs and resolving challenges within RMIT’s enterprise systems demonstrates his outstanding service and leadership. It is the contribution of staff members such as Nick that helps make RMIT a world leader in research excellence.’

2024 Dean’s Awards for General Excellence, School of Media and Communication, RMIT University
This award recognises outstanding innovation, technical expertise, and leadership in research translation and communication, elevating the profile of the School of Media and Communication (M&C) and the ADM+S Centre.

Natalie Campbell and Leah Hawkins received this award for using their technical expertise, initiative and creativity in a practice of research translation to promote awareness of the excellent work being done by our academics at ADM+S.

As the Research Communications Officers for the ADM+S Centre, Natalie and Leah demonstrated outstanding innovation and leadership via their development and production of an impressive suite of high-quality interviews, short films, and podcasts to promote the research of ADM+S academics and our industry partners, which were recognised and praised by the ARC in the ADM+S Centre’s highly successful Mid Term Review.

Rebecca Ralph, Kathy Nickels, Tia Bayer and Taal Hampson (left to right)
Rebecca Ralph, Kathy Nickels, Tia Bayer and Taal Hampson (left to right). Image by Jean Burgess.

2024 Service Excellence Award, Queensland University of Technology
The Faculty Awards Program recognises exceptional and innovative performance of staff who demonstrated sustained and outstanding achievement in activities that support the faculty’s strategic priorities, align with its culture principles, and enhance the student experience.

Kathy Nickels (ADM+S Manager, Communications and Engagement) and Rebecca Ralph (ADM+S QUT Node Administrator) received this award for their exceptional ability to manage critical administrative functions, and foster a supportive and connected community.

Through their support and commitment to the success and wellbeing of others, they have created an environment at ADM+S at QUT where collaboration and excellence thrive.

We are also delighted to announce that Taal Hampson and Tia Bayer from the QUT Digital Media Research Centre were also recognised for their outstanding service excellence. Taal and Tia were instrumental in providing critical support for our 2024 ADM+S Summer School at QUT and we congratulate them on their awards too.

RMIT University 2024 Research Service Awards – Collaboration (Special Commendation)
This category recognises and celebrates active collaboration with others to deliver outcomes.

After winning this award in 2023, the ADM+S RMIT Operations Team (Natalie Campbell, Leah Hawkins, Sally Storey, Julie Stuart, Lucy Valenta, Nick Walsh and Matthew Warren) were once again recognised for their outstanding efforts via a Special Commendation.

Prof Drummond said ‘this nomination recognised their exceptional ability to drive significant research outcomes through innovative cross-departmental initiatives; by fostering synergies across diverse teams, the team has created a dynamic ecosystem that amplifies research potential and enhances the impact of institutional research.’

These awards recognise the vital role that our professional staff play in supporting research excellence, innovation, and collaboration at ADM+S.

SEE ALSO

Unmaking AI Workshop – Engaging Critically and Creatively with GenAI

Participants sitting in groups at the Unmaking AI Workshop
Participants at the "Unmaking AI Workshop"

Unmaking AI Workshop – Engaging Critically and Creatively with GenAI

Author ADM+S Centre
Date 5 December 2024

On 1 December the “Unmaking AI” Workshop at OzCHI 2024 brought together a diverse group of researchers, practitioners, and AI enthusiasts at The University of Queensland, Brisbane. 

With AI technologies rapidly reshaping industries and societal norms, the workshop addressed the pressing need to critically analyse their impacts. Moving beyond surface-level discussions of “bias,” participants explored the social, cultural, political, and environmental dimensions of GenAI. 

Facilitated by Dr Luke Munn, Dr Danula Hettiachchi, and Dr Awais Hameed Khan alongside AI developers, scholars, and industry partners from Microsoft, Google, and Canva, the workshop invited participants to engage with generative AI (GenAI) in creative and critical ways.

An Inspiration Showcase demonstrated how people are using GenAI in innovative ways across very different domains, from drama and art history to journalism and qualitative research.

A core part of the workshop was “HelloAI”, a custom card deck featuring Action, Reflection, and Contemplation cards. Participants chose their cards and were guided through activities in a thoughtful and playful way, discussing questions ranging from accessibility and authorship to gender and race.  

The tools and resources developed for the workshop—including slide decks, a reading library, and inspiration videos—have sparked ideas for future applications.

Workshop organisers: Dr Luke Munn, Dr Awais Hameed Khan, Dr Danula Hettiachchi, Samar Sabie, Dr Lida Ghahremanlou, Saarim Saghir, Nicholas Lambourne and Assoc Prof Liam Magee

SEE ALSO

ADM+S members showcase research at AANZCA 2024 in Melbourne

AANZCA at RMIT University, November 2024. Image: T.J. Thomson

ADM+S members showcase research at AANZCA 2024 in Melbourne

Author Natalie Campbell
Date 5 December 2024

The Australian and Aotearoa New Zealand Communication Association (AANZCA) 2024 conference took place at RMIT University on 25-28 November, with a jam-packed program featuring a diverse display of ADM+S research.

AANZCA is a premier academic association that brings together researchers, students, and teachers from an array of communication disciplines to promote scholarship, inform social policy, and encourage progress in the broad field of media and communications.

ADM+S Associate Investigator James Meese organised the 2024 conference.

“This was a fantastic opportunity to share all of the wonderful work conducted by ADM+S researchers with the wider media and communication discipline.

“It was also great to see our ADM+S community contribute to the running of the event by chairing sessions, and leading insightful panels,” he said.

The theme for the 2024 conference was ‘Pause’, inviting attendees to consider the benefits of pausing during moments of global crisis and disciplinary change.

The Call for Papers preceding the event invited submissions addressing the conceptualisation of a ‘pause’ in relation to media consumption and production; historic or contemporary junctures in media, communications or relevant sub-fields; the impact of GenAI tools on professional communication, creative practice and everyday digital cultures; findings, outcomes or reflections from activist, advocacy, social change and community-based research projects and their stakeholders; and more.

ADM+S researchers presented their work in the following sessions:

Session: Local and international journalism
Presentation: “Ink in our veins”: Insights from Australia’s small-town family newspaper dynasties
Alison McAdam, Kristy Hess, Angela Blakston, Matthew Ricketson

Session: Audience research in the age of the user panel
Presentation: Audience research in the age of the user
Djoymi Baker, Jessica Balanzategui, Ramon Lobato, Alexa Scarlata, Ashleigh Dharmawardhana, Sean Redmond, Shweta Kishore

Session: Governance ‘of’ and ‘by’ digital media in authoritarian states: China and Singapore
Presentation: Governance ‘of’ and ‘by’ digital media in authoritarian states: China and Singapore
Jian Xu, Terence Lee, Andy (Xinyu) Zhao, Howard Lee, Xiyao Liu

Session: Crip Space/Time and Media
Presentation: Crip time and social virtual reality: Frictions of synchroneity and chrononormativity
Wenqi Tan

Session: Democracy disrupted: perspectives on politics and media in turbulent times
Presentation: Democracy disrupted: perspectives on politics and media in turbulent times
Ella Simone Chorazy, Stephen Harrington, Kurt Sengul, Agata Stepnik, Cameron McTernan, Caroline Fisher, Aljosha Karim Schapals, Timothy Graham, Kristy Hess

Session: Investigating and responding to everyday experiences with misinformation
Presentation: Investigating and responding to everyday experiences with misinformation
Tanya Notley, Sora Park, Aimee Hourigan, T.J. Thomson, Michael Dezuanni, Simon Chambers

Session: New tools and methods for observing digital platforms via the Australian Internet Observatory
Presentation: New tools and methods for observing digital platforms via the Australian Internet Observatory
Amanda Sarah Lawrence, Daniel Angus, Abdul Karim Obeid, Shanika Karunasekera, Patrik Wikstrom, Dang Nguyen

Session: AI and image generation
Presentation: “NO TO AI GENERATED IMAGES”: practices of authorship and user-generated content in fandom
Jiaru Tang, Xiyao Liu

Presentation: Adaptation, multimodality and intermediality in Generative AI: image prompts as creative work and commodity
Christopher Bradford Chesher, César Alberto Albarrán-Torres

Session: Disability and/in the media
Presentation: Pause and delay: Understanding smart TV accessibility in Australia
Alexa Scarlata, Tessa Dwyer

Session: Social Media and politics
Presentation: Sunshine, or a shady visual signature? Examining coordinated sharing of problematic climate information on Instagram through logo detection
Caroline Rachel Gardam, Guangnan Zhu

Session: Mediatised national identities
Presentation: Chinese and South Asian Migrants’ Political and Media Literacy: Reflecting on Trust and Engagement before the 2025 Election
Sukhmani Khorana, Fan Yang, Hao Zheng

Session: Studying Platforms
Chair: Kieran Hegarty
Presentation: Untangling the Furball: A Practice Mapping Approach to the Analysis of Multimodal Interactions in Social Networks
Axel Bruns, Kateryna Kasianenko, Vishnuprasad Padinjaredath Suresh, Ehsan Dehghan, Laura Vodden

Session: AI: User Perspectives
Chair: Daniel Angus
Presentation: Re-Framing AI: The Weird and the Workaday
Daniel Binns

Session: Disability and Communication Policy
Presentation: Disability and Digital Citizenship
Gerard Goggin, Wayne Hawkins, Aaron Schokman

Session: Media and Democracy
Presentation: Power, Publics, and Parasites: Towards a Communicative Model of Online Radicalisation
Vishnuprasad Padinjaredath Suresh

Session: Advocacy for Indigenous Australians
Presentation: Mapping News Media Polarisation during the Voice to Parliament Referendum
Katharina Esau, Axel Bruns, Michelle Riedlinger, Samantha Vilkins, Laura Vodden

Presentation: Community-created, client-centric digital innovation by and for Indigenous Australians
Bernadette Hyland-Wood

Session: Mediated migration discourses
Presentation: Governance by Data: Visualization and Migration Policy Regimes
Verity Anne Trott, William L Allen, Kathryn Nash, Elsa Gomis

Session: Digital pause in the cultural sector: reflections and future possibilities
Presentation: Digital pause in the cultural sector: reflections and future possibilities
Caitlin Aithne McGrane, Jasmine Aslan, Larissa Hjorth, Ingrid Richardson, Vince Dziekan, Indigo Holcombe-James

Session: Digital Platforms and Regulation
Presentation: Digital Platforms and Regulation
Terry Flew, Timothy Koskie, Agata Stepnik, Justine Humphry, Jonathon Hutchinson, Catherine Page Jeffery, Joanne E Gray, Marcus Carter, Ben Egliston

Session: AI Development
Presentation: Gold Standards in Computer Science: Influences and Practices in Algorithm Development
Daniel Angus

Session: Ageing and Data: Pausing with older adults in a digitised world
Presentation: Ageing and Data: Pausing with older adults in a digitised world
Caitlin Aithne McGrane, Bernardo Figueiredo, Diana Bossio, Michael Doneman, Larissa Hjorth, Anthony McCosker

Session: News Media Regulation
Presentation: Facebook without the News: Link-Sharing Patterns during the Meta’s Australian and Canadian News Bans
Axel Bruns, Daniel Angus, Laura Vodden, Ashwin Nagappa

Session: 5G: Domestic media infrastructures
Chair: Gerard Goggin
Presentation: 5G Home Internet: Preliminary findings from an Australian household study
Rowan Wilken, Catherine Middleton, Stephanie Livingstone, James Meese

Session: Online cultures: gender and feminism
Presentation: Mapping Network Actions and Interactions of Fan and Anti-Fan Subreddit Responses to Taylor Swift at Peak Saturation
Samantha Vilkins, Sebastian F. K. Svegaard, Katherine M. FitzGerald, Axel Bruns

Session: Pause for translation: the opportunities and challenges of interdisciplinary research collaboration with health organisations
Presentation: Pause for translation: the opportunities and challenges of interdisciplinary research collaboration with health organisations
Kath Albury, Daniel Reeders, Anthony McCosker, Benjamin Hanckel, Alan McKee

Session: Pausing on posthumanism
Presentation: Pausing on Posthumanism
Mark Nicholas Gibson, Mark Andrejevic, Marsha Berry, Fotini Toso, Toija Cinque, Luke Heemsbergen

Session: Communications infrastructure in regional and remote Australia: enduring challenges and possible futures
Presentation: Communications infrastructure in regional and remote Australia: enduring challenges and possible futures
Jessa Rogers, Jenny Kennedy, Sharon Parkinson, Lyndon Ormond-Parker, Daniel Featherstone, Kieran Hegarty

Session: Digital and online pedagogies
Presentation: Pausing before producing: Structural alignment as the key to communication efficacy
Ella Simone Chorazy, Rowan Wilken

In addition, the following sessions were Chaired by ADM+S researchers:

Session: Audience Research and consumption patterns
Chair: Ramon Lobato

Session: Democracy, counterpublics and populist media
Chair: Verity Anne Trott

Session: Creative Industries: Screen and beyond
Chair: James Meese

Session: Understanding Games Cultures
Chair: César Albarrán Torres

Session: Communities, crises and communication
Chair: Rowan Wilken

Session: Online News Consumption
Chair: Ashwin Nagappa

Session: Online disinformation and trolling
Chair: Axel Bruns

Session: AI in the classroom
Chair: Alexa Scarlata

Session: Digital engagement and inclusion
Chair: Jenny Kennedy

Session: Communicating Health and Sexuality
Chair: Kath Albury

Learn more about this conference.

SEE ALSO

ADM+S research informs latest ACCC Digital Platform Services Inquiry

Report cover for the ACCC Digital platform services inquiry

ADM+S research informs latest ACCC Digital Platform Services Inquiry

Author ADM+S Centre
Date 5 December 2024

On 4 December 2024, the Australian Competition and Consumer Commission (ACCC) released the ninth interim report for its Digital Platform Services Inquiry 2020–25.

The report revisits competition and consumer issues arising in the supply of general search services in Australia. It examines technology, regulatory and industry change since the ACCC last considered general search services in the September 2021 interim report of this Inquiry, including the impact of generative AI on general search and trends in search quality.

Key findings include the evolution of search engine business models, the integration of AI-driven tools, and ongoing challenges in ensuring competition and consumer protection. While the report makes no new recommendations, it reiterates the critical need for regulatory reforms proposed in earlier reports, including the September 2022 interim report.

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) and the School of Computing Technologies, RMIT University, contributed significant insights to the ACCC’s report through their submission, particularly via the Australian Search Experience project, which tracked over 350 million search results from 1,000 volunteers.

Key contributions and citations from the ADM+S submission

  • User Priorities in Search: A key consumer demand of a search service is that it responds very quickly to any query issued.
  • Personalisation and Targeting: Findings showed minimal personalisation for generic queries, with search engines primarily tailoring results to geographic relevance. 
  • Construction of search queries: The extent of differences in how users construct search queries, and how those differences impact what results a user is served, is the focus of the next stage of the ADM+S’ Australian Search Experience project.
  • Diversity in Results: When evaluating the quality of general search services, diversity of search results is valued by searchers.
  • Impact of AI on Search: The integration of generative AI in search services is reshaping advertising models, for instance, where product placements are injected or integrated into conversational chat interfaces.

ADM+S researchers stressed the importance of search engines maintaining transparency and fairness in how they present results, ensuring users can access a broad range of reliable and diverse information sources.

Professor Mark Sanderson, expert in information retrieval, led the submission from ADM+S and the RMIT School of Computing Technologies.

He said, “Search is an important research priority for ADM+S. Evidence points to a growing distrust in society of information pushed out by others and a consequent desire to seek answers proactively.

“We were delighted to collaborate with colleagues at RMIT School of Computing Technologies to contribute to the ACCC’s inquiry.” 

The ACCC’s findings and ADM+S contributions serve as a call to action for policymakers, industry leaders, and researchers to work together in shaping a digital ecosystem that is equitable and consumer-focused.

The ADM+S submission was led by Prof Mark Sanderson with contributions from Prof Daniel Angus, Dr Aaron Snoswell, Dr Dana McKay and Dr Johanne Trippas.

SEE ALSO

New report reveals the experiences of Australians encountering misinformation online

Report: ‘Online Misinformation in Australia’

New report reveals the experiences of Australians encountering misinformation online

Author Natalie Campbell
Date 6 December 2024

The report ‘Online Misinformation in Australia’, a collaboration between Prof Sora Park (University of Canberra), Assoc Prof Tanya Notley and Dr Aimee Hourigan (Western Sydney University), Prof Michael Dezuanni (QUT) and ADM+S Affiliate Dr T.J. Thomson (RMIT University) reveals a significant gap between how people perceive their ability to verify information online, compared to their actual ability.

The study included two nationally representative surveys of 3,852 and 2,115 adult Australians, as well as a week-long digital diary study with 55 participants to provide an intimate look into everyday experiences with misinformation online.

Results revealed that more than half of Australians encounter misinformation online in a typical week, and this proportion increases the more active a person is on social media.

“Misinformation has many harms, from the individual effects of bullying or scams to broader societal risks around legislation and deliberative democracy,” explains Dr Thomson.

“Understanding more about online misinformation and adult Australians’ abilities to respond to it is vital so that we can protect our mental health, protect our assets during the current cost-of-living crisis, and ensure we have important societal conversations that are based on facts and verifiable evidence rather than rumours, conjecture, and false or misleading information.”

Key findings from the study include:

  1. Most adult Australians are not able to identify misinformation online
  2. Many adults overestimated their ability to identify misinformation online
  3. Online news and information consumption habits are related to people’s ability to verify information online
  4. Economics, celebrity and crime/crisis are the top topics of misinformation people encountered during the diary study
  5. Text-based content is the most prevalent form of misinformation people identified during the diary study
  6. Adults want to develop their media literacy abilities

While misinformation research is often topical – in the context of COVID-19, political referenda, or elections – this study focuses on the way misinformation manifests in everyday online communication and social media use.

“Through the project’s in-depth diary study, we’re able to explore the false, misleading, or untrustworthy claims adult Australians encounter on a daily basis and better understand the assumptions these people have about news and information sources, information attributes, and communication platforms,” explains Dr Hourigan, who co-led the diary study component with Dr Thomson.

Of the 80 percent of Australians who think misinformation on social media needs to be addressed, 93 percent of these agree that people need to be taught how to identify misinformation, 91 percent think social media platforms should monitor, label, and remove misinformation, and 87 percent think the government should introduce laws to make social media platforms remove harmful misinformation.

“Eight in ten Australians think the spread of misinformation online needs to be addressed and this starts with your own media literacy knowledge and skills,” says Assoc Prof Notley.

“We can all hone our media literacy by asking three simple questions when encountering potentially dodgy claims online: Who made the claim? What is the evidence? And what do trusted sources say?”

This research was supported by the Australian Government through the Australian Research Council’s Linkage Projects funding scheme – LP220100208 Addressing Misinformation with Media Literacy through Cultural Institutions.

You can access this report via APO, as well as an infographic with study results.

SEE ALSO

Booking a summer holiday deal? Beware ‘drip pricing’ and other tactics to make you pay more than you planned

wichayada suwanachun/Shutterstock

Booking a summer holiday deal? Beware ‘drip pricing’ and other tactics to make you pay more than you planned

Author  Jeannie Marie Paterson
Date 2 December 2024

Have you ever spotted what looked like a great deal on a website, added it to your “basket” and proceeded to checkout – only to find extra fees added on at the last minute?

It’s frustratingly common when making airline, hotel and many other kinds of bookings to see an advertised price get ratcheted up at checkout with additional fees – perhaps “shipping insurance”, “resort fees” or just “taxes”.

The practice is known as “drip pricing” and it can distort consumer decision-making and affect competition. Nonetheless, there is no specific ban on this conduct in Australia.

Some companies have, however, effectively been prosecuted for it under the Australian Consumer Law, which contains some strict rules about misleading consumers through advertising.

Many of us have already begun booking flights, hotels and more as we head into the summer holiday season. Here’s what the law says about companies changing prices in the lead-up to checkout, and how you can protect yourself as a consumer.

What’s wrong with drip pricing?

The tactic that underpins drip pricing is to draw a customer in with an attractive “headline” price but then add in other fees as the customer approaches the checkout.

It’s reasonable to ask whether there’s anything wrong with this practice: after all, the customer still sees the final price at checkout. Why might that be seen as misleading conduct under Australian Consumer Law?

Close up of a mouse on a buy button
Drip pricing aims to capture a customer’s interest with a good looking deal, then add extra fees before checkout.
Gorodenkoff/Shutterstock

The reasons lie in views about consumer buying behaviour and the nature of the statutory prohibition.

Typically, the closer a consumer gets to a sale, the less likely they are to pull out or even fully notice any additional fees.

They may then end up paying more than they intended and also have lost the opportunity to deal with other suppliers of the same product at a better price.

In the relevant section of Australian Consumer Law, there’s no requirement of an intention to mislead. It’s also not necessarily relevant that the true pricing situation is eventually revealed to the consumer or that it’s in the “fine print”.

Thus, in the eyes of the law, it can be enough that consumers were enticed by an attractive headline price.

Price surprises

This legal position is well illustrated by a case settled by the High Court in 2013, after the Australian Competition and Consumer Commission (ACCC) took on telecom provider TPG Internet in 2010, alleging misleading conduct.

In this case, TPG had been advertising broadband internet services for $29.99 per month.

But on reading the fine print, you’d have discovered this deal was only available with a landline service costing an additional $30 per month.

Internet router on working table with blurred man in background
One important case centred on telecom provider TPG’s advertisements for a broadband internet deal.
BritCats Studio

The case moved up through Australia’s court system, but ultimately, the High Court majority held that the telco had engaged in misleading conduct.

The High Court recognised that the very point of advertising is to draw consumers into “the marketing web”. It is therefore not enough to disclose the true (higher) price only at the point the transaction is concluded.

TPG was fined $2 million in this case. Since then, the maximum penalties have increased, now the higher of:

  • $50 million
  • three times the value obtained from the contravention, or
  • if the benefit cannot be determined, 30% of the business’s adjusted turnover during the breach period.
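Read as a rule, the cap is simply the greatest of the applicable figures. A minimal sketch of that logic in Python (purely illustrative; it simplifies how “value obtained” and “adjusted turnover” are actually assessed, and is not legal advice):

```python
# Illustrative sketch only: the maximum-penalty rule as described above,
# not the statutory drafting itself and not legal advice.
from typing import Optional

def max_penalty(benefit: Optional[float], adjusted_turnover: float) -> float:
    """Return the higher of the applicable caps: $50 million; three times the
    value obtained from the contravention; or, if the benefit cannot be
    determined, 30% of adjusted turnover during the breach period."""
    caps = [50_000_000.0]
    if benefit is not None:
        caps.append(3 * benefit)
    else:
        caps.append(0.30 * adjusted_turnover)
    return max(caps)
```

So a contravention yielding a determinable $30 million benefit would face a cap of $90 million, while one with an undeterminable benefit and $1 billion in adjusted turnover would face a cap of $300 million.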

Dynamic pricing

Other pricing complaints have been in the news recently, including concerns about point-of-sale dynamic pricing.

Basically, this means using an algorithm that adjusts ticket prices in response to demand, as consumers wait in a virtual purchasing queue.
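Purely as an illustration of the mechanism (a toy sketch, not any ticketing platform’s actual system), such an algorithm can be as simple as scaling the advertised base price by how many buyers are queuing relative to the tickets remaining:

```python
# Toy sketch of demand-responsive pricing; every name and number here is
# hypothetical, not the algorithm of any real ticketing platform.

def dynamic_price(base_price: float, queue_length: int, tickets_left: int,
                  max_multiplier: float = 3.0) -> float:
    """Scale the base price by demand pressure (queue size vs. remaining stock),
    capped at max_multiplier times the base price."""
    if tickets_left <= 0:
        raise ValueError("no tickets left to price")
    demand_pressure = queue_length / tickets_left
    multiplier = min(1.0 + demand_pressure, max_multiplier)
    return round(base_price * multiplier, 2)

# Quiet demand: the price stays near the advertised base.
print(dynamic_price(100.0, queue_length=10, tickets_left=1000))    # 101.0
# Heavy demand at the point of sale: the price climbs while buyers queue.
print(dynamic_price(100.0, queue_length=5000, tickets_left=1000))  # 300.0
```

The consumer waiting in the queue never sees the demand calculation; the price simply moves by the time they reach checkout, which is what makes this practice contentious.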

Recent media reporting has centred on concerns about the use of point-of-sale dynamic pricing in the events ticketing industry.

A form of dynamic pricing is used by hotels and airlines. They increase prices seasonally and according to demand. But these “dynamic” prices are clearly visible to consumers as they start looking for a deal. Some bodies even publish helpful tables of likely prices at different times.

Crowd of concert goers enjoying a live show with their hands up
Dynamic pricing has become a common practice for many ticket sales – such as concerts.
KRxMedia/Shutterstock

The kind of dynamic pricing that happens at the very point consumers are waiting to buy is very different and arguably creates an “unfair surprise”.

Whether these kinds of practices also fall within the category of misleading conduct remains to be seen.

But it is arguable that consumers could reasonably expect the real-time movement of prices to be disclosed upfront.

Earlier this year, the government announced plans to address both drip pricing and dynamic pricing as part of a broader ban on unfair trading practices.

What can consumers do?

While all this law reform and litigation is playing out, here are some things you can do to avoid pricing shock.

1. Slow down. One of the strategies that online markets often rely on is “scarcity signalling” – those clocks or numbers you see counting down as you move through a website.

The very purpose of these is to make a consumer rush – which can mean failing to notice additional fees that make the deal less attractive than it first appeared.

2. Take screenshots as you progress. Remember what it is you thought you were getting. Doing this also provides a basis for lodging a complaint if the headline and actual price don’t match up.

3. Check. Take a close look at the final bill before pressing pay.

4. Report. Tell your local Fair Trading Office or the ACCC if the advertised deal and the final price don’t match up.

A recent action taken by the ACCC against Woolworths and Coles alleging “illusory” discounts was launched because of consumer tip-offs.

Jeannie Marie Paterson, Professor of Law, The University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S Chief Investigator elected Fellow of the Australian Academy of the Humanities

Jackie Leach Scully

ADM+S Chief Investigator elected Fellow of the Australian Academy of the Humanities

Author ADM+S Centre
Date 2 December 2024

Congratulations to ADM+S Chief Investigator Prof Jackie Leach Scully from UNSW who is one of 41 new Fellows elected to the Australian Academy of the Humanities (AAH) in 2024.

Prof Leach Scully is an internationally recognised bioethicist specialising in disability and feminist bioethics, and is a Professor of Bioethics and Director of the Disability Innovation Institute at UNSW.

In the 21 November announcement, President of the Academy Prof Stephen Garton said, “Each of our Fellows are working at the forefront of issues of national and international importance and exemplify why ethical, historical, creative and cultural knowledge and expertise is critical to better decision making for a resilient society.

“Fellows elected today are exemplary leaders working in critical spaces where Australia needs to be — building our understanding of Asia and the Pacific, truth-telling and shedding light on a shared history and shaping our national artistic and cultural identity.”

Prof Leach Scully’s research is influenced by feminist theory and investigates the socio-ethical impacts of technological innovation, especially for people with disability and other marginalised communities.

“I began my academic life as a molecular biologist, so it’s a special honour to be recognised by colleagues in the humanities, the area that has since become my home,” said Prof Leach Scully.

“It illustrates the open and interdisciplinary approach that I believe we need to flourish as a global community in the future.”

Learn more about Jackie’s research.

SEE ALSO

ADM+S Students present work at Australian Science and Technology Studies 2024 Conference

L-R: Vishnu Padinjaredath Suresh, Dante Aloni, Berwyn Kwek, Trang Le, Fan Yang and Emma Finlay

ADM+S Students present work at Australian Science and Technology Studies 2024 Conference

Author Natalie Campbell
Date 28 November 2024

ADM+S PhD Students from Monash University, QUT and the University of Melbourne recently travelled to Canberra to present their work at the Australian Science and Technology Studies 2024 (AusSTS 2024) Conference.

The 2024 event took place at Australian National University from 18-20 November.

The theme ‘(De-)Territorialising STS: Discipline, Place, Power’ invited participants to build on conversations convened in last year’s conference, thinking with the theory and practice of STS through the lens of ‘territory’.

ADM+S students from Monash University, Dante Aloni, Trang Le and Berwyn Kwek attended the conference and presented their work in its unique ‘paper workshop’ format.

Dante explains, “In this format, each conference attendee is assigned a specific workshop to attend, where they are encouraged to provide feedback on works in progress.”

Dante and Berwyn presented their draft paper ‘Towards a theorisation of State-based decentralised verification systems’ in the ‘State and policy making’ workshop group. The paper asks why certain nation-states, such as Singapore and Korea, are seeking to incorporate blockchain techniques of decentralised verification into their digital identity management assemblages.

“In contrast to other identity management technologies, such as facial recognition technology, that seek to digitally mediate a frictionless, unobservable, and unaccountable relationship between individual identity and state apparatuses, state-backed blockchain technologies organise the verification of identity around the active enrolment and participation of their citizenry,” says Dante.

Berwyn adds, “While blockchain technology promises individual control over one’s data, our work raises critical questions about how institutions, who often prioritize centralised oversight and control, re-appropriate these principles.”

Trang Le also presented a draft paper about her PhD research which explores how the pursuit of gender justice and the fight against gendered violence are increasingly shaped by the logic of smartness and the securitisation of the smart city.

This work is forthcoming and has been developed for a special issue on ‘Automated Space’ to be published in the ‘Environment and Planning F’ journal.

Trang reflected on the workshop format, explaining “The open flowing conversational format of the workshop was a refreshing change, and I found this setting incredibly generative.”

“I got some very valuable feedback from the session, which prompted me to explore further certain aspects of my research—such as the transnational logics underlying the phenomenon I’m examining.”

The conference’s unique workshop format offers presenters the opportunity to receive constructive criticism and advice on their ongoing research.

Berwyn said, “One key takeaway for me was recognising the immense potential of our work in studying how socio-technical systems shape the world.

“I was particularly grateful for the input from Dr Courtney Addison of Victoria University of Wellington, who provided valuable guidance on how we might shape our nascent work into something more substantial and suitable for publication.”

ADM+S members Emma Finlay and Dr Fan Yang from the University of Melbourne, and Vishnu Padinjaredath Suresh from QUT, also attended the conference.

AusSTS 2024 was sponsored by Science, Technology, & Human Values, the Deakin Science and Society Network, and the Research School of Social Sciences at the ANU.

This experience was supported by the ADM+S Research Training and Development program.

SEE ALSO

ADM+S researcher awarded Best Director at Melbourne Queer Film Festival

Green fields with "Outpicker" text


Author Kathy Nickels
Date 28 November 2024

Lesley Luo, a PhD candidate at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) researching the intersection of recommender systems and queer representation, has been awarded the prestigious VicScreen Best Director of Australian Short Film at the Melbourne Queer Film Festival (MQFF).

The film, Outpicker, tells the poignant story of a queer immigrant navigating themes of belonging and community building through the act of litter picking across regional Victoria. An Official Selection of MQFF 2024, the film has been celebrated for its authentic representation of queer women from diverse backgrounds and its compelling narrative of connection and resilience.

“I’m absolutely thrilled about this. Being awarded Best Director made me feel so encouraged, and reminded me how important representation is for queer multicultural communities – it’s a reminder of the power of storytelling to create visibility and connection,” said Ms Luo.

Lesley’s creative work is informed by her research: recommender systems not only shape the way she consumes content on digital platforms, but also influence her own content creation and engagement with queer audiences and communities online.

Her research focuses on the role of recommender systems and platform mechanisms in shaping the production and consumption of queer women’s narrative short films and web series. It explores how platforms mobilise or moderate queer content, influencing cultural representation and community building.

The Melbourne Queer Film Festival, one of Australia’s premier LGBTQIA+ cultural events, showcases the best in queer cinema from around the globe. 

The VicScreen Best Director award celebrates Lesley’s innovative approach to filmmaking, her dedication to representation, and her ability to tell stories that resonate across cultural and geographic boundaries.

SEE ALSO

FILM RELEASE: ‘The Australian Ad Observatory: A web of invisible influence’


Author ADM+S Centre
Date 28 November 2024

“Digital platforms play a critical role in Australia’s economy and society. Yet our capacities to understand and observe their activities are very limited. Australians see hundreds of ads a day, but who’s taking note of what they’re seeing?” – Dr Abdul Karim Obeid, Data Engineer on The Australian Ad Observatory

On 28 November 2024 the ARC Centre of Excellence for Automated Decision-Making and Society released ‘The Australian Ad Observatory: A web of invisible influence’, the second instalment in a series of project films offering an inside look at major research projects underway at the Centre.

The first phase of the Ad Observatory pioneered a way to observe the targeting of social media advertising across populations of users. With 1,909 participants, the team generated the largest known collection of Facebook ads in Australian history – 328,107 unique ads – and built world-first citizen-science research infrastructure.

In The Australian Ad Observatory: A web of invisible influence you’ll hear from project leaders Prof Christine Parker, Prof Daniel Angus, Prof Mark Andrejevic and Assoc Prof Nicholas Carah, data engineer Dr Abdul Karim Obeid, PhD student Lauren Hayden, and industry partners Dr Aimee Brownbill (Foundation for Alcohol Research and Education) and Chandni Gupta (Consumer Policy Research Centre). 

Prof Daniel Angus explains, “We hear a lot of debates around the role of digital media within our society, but the frustration is that these debates are not always evidenced by real findings about real-world experiences of people online.”

“Existing archives for digital advertising were made by the platforms and they had some serious limitations to them that hindered our ability to look at the ads being shown,” says Lauren Hayden.

“We had no ability to understand how ads were targeted, or who was actually seeing them.”

The first stage of research was conducted across key case studies, including political advertising, environmental claims and greenwashing, alcohol advertising, unhealthy food advertising, scam ads, and consumer finance advertising, allowing researchers to dig deep into the targeted algorithms behind automated advertising within a large platform like Facebook.

Collaborating with key industry partners such as ABC, CPRC, FARE, VicHealth and CHOICE, research findings have informed evidence-based policy submissions around illegal advertising practices, the advertising of harmful commodities, social media reforms, and more.

“Misleading and deceptive advertising by a business is unlawful,” says Prof Christine Parker.

“The problem is that when you’re advertising on social media there’s no way for a regulator to see what’s being advertised on people’s personal feeds.”

Phase 2 of the project kicked off in June 2024 and aims to address higher level research challenges around the observability of dark, ephemeral and synthetic digital media, with two significant changes to the research approach – but you’ll have to watch to find out.

“The role that we play is not in a sense just finding the evidence one time, but in engaging continuously in this conversation so that when Australians from different groups and sectors get together to talk about the power of digital platforms, we’re doing it from a much more informed point of view,” says Assoc Prof Nicholas Carah.

Watch ‘The Australian Ad Observatory: A web of invisible influence’ on YouTube.

SEE ALSO

What is Bluesky? Why tens of millions of people are heading for a ‘decentralised’ social media platform

Image: Anadolu/GettyImages


Author Jean Burgess
Date 27 November 2024

After Elon Musk bought Twitter (now rebranded X) in 2022, disaffected users began to seek alternatives. Alongside Meta’s Threads and the open source project Mastodon, Bluesky was one of several contenders.

Threads benefited from Meta’s existing user base but has failed to capture the popular imagination. Mastodon has proven complicated and difficult to grasp for most ordinary users and so use remains fragmented. Bluesky seemed promising but was in invite-only mode at the time and growth was muted.

But in recent weeks, the migration to Bluesky from X seems to have reached a tipping point, as large parts of the user community finally got fed up with X’s toxic culture and management. Following the recent US presidential election, in which Musk appeared to manipulate X’s algorithms to increase his own influence, these users found Bluesky’s doors wide open.

Since then, the user base has grown to more than 20 million users, a number that continues to climb. As others have noted, at least for the moment it feels a bit like early Twitter – a sandpit to explore new tools, a playful connection to the broader internet, and a relatively safe place to share personal thoughts and experiences, or to connect with friends and colleagues.

How is Bluesky different from X?

Bluesky looks very similar to X. Its azure butterfly icon bears an obvious resemblance to Twitter’s blue bird, which Musk replaced with a stark white-on-black X.

Bluesky uses hashtags and users address one another using the @ symbol. Replies, quotes and reposts all work much as they do on X. This comforting resemblance is likely one explanation for the remarkable popularity of Bluesky in comparison to other decentralised platforms such as Mastodon.

Bluesky distinguishes itself from X through a rich set of features through which users can control their experience and shape the culture of the platform as a whole.

You can build multiple custom feeds based on your own interests and relationships then publicly share these feeds with others. This is a powerful mechanism to avoid the one-way “push” of algorithmic feeds and represents a more democratic approach to content curation.

Bluesky offers the ability to create custom “starter packs” – curated lists of suggested accounts related to topics, interests or locations. Starter packs can be shared publicly to help new users find people to follow. This is a novel feature that feels friendly and welcoming, and again doesn’t really rely on a top-down algorithm.

Bluesky’s settings menu also includes powerful content moderation tools that users can control. For example, you can create custom keyword lists to mute some types of content, and control who can interact with you.

This means if you don’t want to listen to certain political views, you don’t have to. It also means you can have a pleasant and sociable time without being subject to hate speech, bullying and harassment.

Critics argue these kinds of user controls will lead to “echo chambers” so the overall public sphere (or public square) is no longer a place for an exchange of differing views. But as I have previously argued, a public square owned by a billionaire that is full of shouting bullies does nothing to enable equal participation either.

How decentralised is Bluesky?

Bluesky began life in 2019 as an experimental project within Twitter, led by co-founder and former CEO Jack Dorsey.

The idea was to implement for social media a decentralised protocol – a system that prevents complete control by a single organisation and enables developers or users to build improvements. This would also enable Twitter to connect, or “federate”, with other decentralised platforms and services such as Mastodon.

Rather than being adopted by Twitter, Bluesky eventually became a standalone project and then corporation (Dorsey is no longer involved). There are debates as to how truly decentralised and interoperable it is: Bluesky uses its own AT Protocol (ATP), rather than the ActivityPub protocol commonly used throughout the broader “fedisphere” of decentralised social media. Critics argue this choice could limit Bluesky’s reach and hinder interaction across platforms. For example, a “bridge” is needed to connect Bluesky and Mastodon accounts.

Still, like other federated platforms, it is possible for users to host their accounts on their own servers or nodes. At least in principle, the platform, content, users and their relationships could continue to exist even if the Bluesky company were to disappear, or “exit”, in technical terms.

This is a big shift away from one private company owning all the servers, controlling all the algorithms and making all the rules, and so the next phase of Bluesky’s development will depend substantially on the actions of its users.

Blue skies ahead?

As Bluesky rapidly grows larger, familiar questions are beginning to emerge.

How will a small team relying primarily on community-led content moderation handle adversarial swarms of political bots or child sexual abuse material? Will it accept responsibility for the spread of harmful misinformation or manipulation of political opinion? The company is already investing more in trust and safety, but more will be needed if Bluesky’s popularity continues to grow.

The organisation’s funding largely comes from libertarian-leaning cryptocurrency investors. The company has been clear advertising will not be part of the mix and has mentioned introducing paid services as an alternative revenue source. It is unclear whether such strategies will be enough to support a far larger operation, and whether the investors will remain neutral as difficult decisions on platform governance have to be made.

Growth may also bring more government interest. If Bluesky reaches more than 45 million EU users per month, it may be categorised as a “very large online platform”, and will face increased scrutiny.

Questions also remain about whether the “Xodus” to Bluesky will stick. A growth in new sign-ups is one thing, but a vibrant community that is actively posting, sharing and commenting is another matter entirely.

It is doubtful we will ever see a full replacement for Twitter in its heyday, and maybe that’s OK. As long as there is some interoperability between platforms and a healthy exchange of ideas, it may be better if we never again put all our little blue eggs in one basket.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S members secure funding in 2025 ARC Discovery Project round

System of neurons with glowing connections on black background


Author ADM+S Centre
Date 26 November 2024

The Australian Research Council (ARC) today announced more than $342 million in funding for 536 new projects under the 2025 ARC Discovery Projects scheme. 

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is proud to announce that twelve of the funded projects include contributions from its members, showcasing the Centre’s commitment to advancing impactful, multidisciplinary research.

ARC Acting Chief Executive Officer, Dr Richard Johnson, said the ARC Discovery Projects scheme supports excellent basic and applied research to expand Australia’s knowledge base and research capability. 

“Discovery grants support individual researchers and research teams in research projects that provide economic, commercial, environmental, social and/or cultural benefits to the Australian community,” Dr Johnson said. 

The twelve projects involving ADM+S members reflect research excellence across diverse fields, including AI ethics, digital citizenship, sustainable energy, healthcare transparency, and diversity studies.

Projects involving ADM+S members include the following (ADM+S researchers in bold):

A new “Treating Customers Fairly” law for Australia’s financial industry
Assoc Prof Andrew Schmulow; Ms Nicola Howell; Prof Jeannie Paterson; Prof Elise Bant; Prof Therese Wilson; Prof Jason Harris

Disability and Digital Citizenship
Prof Gerard Goggin; Prof Kathleen Ellis; Prof Jennifer Smith-Merry; Prof Simon Darcy; Prof Paul Harpur; Prof Bree Hadley; Prof Michael Kent; Assoc Prof Dinesh Wadiwel; Dr Natasha Layton; Assoc Prof Mary-Ann O’Donovan; Prof Scott Avery; Prof Karen Soldatic; Prof Lorenzo Dalvit; Dr Kuansong Victor Zhuang; Assoc Prof Meryl Alper

Embedding Net Zero Carbon Emissions in Northern Australia
Assoc Prof Timothy Neale; Dr Kari Dahlgren; Prof Matthew Kearnes; Prof Teresa Lea; Dr Christopher Mayes; Assoc Prof Gisa Weszkalnys

Generative AI and Creative Industries: Ethical, Legal and Work Implications
Prof Paul Formosa; Assoc Prof Rita Matulionyte; Assoc Prof Sarah Bankins; Dr Raphaël Millière; Prof Dr Alain Strowel

Generative AI and the future of academic writing and publishing
Assoc Prof Michelle Riedlinger; Prof Peta Mitchell; Dr Jake Goldenfein; Prof Jean Burgess; Dr Aaron Snoswell

‘No’ to Black Box: Towards Transparent and Safe AI in Healthcare
Assoc Prof Rita Matulionyte; Prof Farah Magrabi; Prof Amin Beheshti; Prof Jyh-An Lee; Dr Daria Kim

Proactive harm prevention for virtual and augmented reality technologies
Dr Joanne Gray; Assoc Prof Marcus Carter; Dr Ben Egliston

Public Understandings of Immunity Systems and Human-Microbial Relations
Prof Deborah Lupton; Assoc Prof Mark Davis; Dr Kerryn Drysdale

The Australian experience of automated advertising on digital platforms
Assoc Prof Nicholas Carah; Dr Thao Phan; Prof Mark Andrejevic; Dr Scott Wark

Transborder Electricity Infrastructures and Geopolitics
Prof Brett Neilson; Prof Ned Rossiter; Prof Teresa Lea; Prof Anna Cristina Pertierra; Dr Sean Dockray; Prof Jack Qiu; Prof Tetz Hakoda; Dr Myung Ho Hyun; Prof Sandro Mezzadra; Prof Manuela Bojadzijev

Understanding Children’s Mobile Gamble-Play Cultures: Gateways to Gambling
Dr Jessica Balanzategui; Dr César Albarrán Torres; Prof Ingrid Richardson; Assoc Prof Jordy Kaufman

What does ‘doing diversity’ do, and how can it be done differently?
Prof Bronwyn Carlson; Assoc Prof Debbie Bargallie; Assoc Prof David Nolan; Dr Archie Thomas

For a full list of funded Discovery Projects for 2025, including a snapshot of funding by state and territory, please view the Grant Announcement Kit.

For more information on the Discovery Projects scheme, please visit the ARC website.

SEE ALSO

End of the Line: A short film on connectivity challenges in a remote First Nations community

First Nations woman Lala Gutchen holds a mobile phone to the sky


Author Kathy Nickels
Date 25 November 2024

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) and the Torres Strait Islander Media Association (TSIMA) are proud to announce the release of End of the Line, a compelling short film that explores the impact of patchy mobile and internet connectivity on the lives of First Nations people in remote Australia.

The film captures the unique challenges faced by communities on Erub (Darnley Island), located at the eastern edge of Zenadth Kes (the Torres Strait Islands), where digital access plays a crucial role in preserving culture, ensuring safety, and enabling communication.

End of the Line follows Erub Meuram woman and NAIDOC Award winner Lala Gutchen, a First Language educator and cultural leader, as she navigates the challenges of maintaining cultural practices—such as fishing and language revitalisation—in a place where mobile access is unreliable. 

The film highlights how connectivity issues shape not only cultural traditions but also everyday safety and community life, especially in isolated areas where communication can be a lifeline.

The production of End of the Line is a collaboration between ADM+S and TSIMA as part of the ongoing Mapping the Digital Gap project, an initiative funded by Telstra and the ADM+S. This project seeks to understand and address the digital inclusion needs in remote First Nations communities across Australia. Lala Gutchen has served as a co-researcher since 2022, contributing her invaluable insights into the connectivity challenges faced by her community.

Launched at the ADM+S Symposium on October 16, the film’s premiere featured a panel discussion with Lala Gutchen and fellow Erub community member Nixon Mye, offering a deeper look into the realities captured on screen.

End of the Line was also screened on Wed 20 November at the 2024 Humanitech Summit in Melbourne, as part of a series of short films and case studies covering topics related to humanitarian action, innovation and beyond.

ADM+S and TSIMA extend their gratitude to the many contributors to this film, including Daniel Featherstone, who led the project with invaluable vision; Nixon Mye, for his participation; and Jimmy Thaiday, whose drone shots brought Erub’s breathtaking landscape to life. Special thanks also to editor Leah Hawkins, Jenny Kennedy, Lyndon Ormond-Parker and the Erub field team for their roles in production.

As this film reaches audiences nationwide, ADM+S hopes to raise awareness of the urgent need for reliable connectivity in First Nations communities, where limited access to mobile broadband and internet services continues to affect every aspect of daily life. End of the Line is now publicly available on YouTube for viewing and aims to inspire action and further advocacy for digital inclusion in remote Australia.

The Mapping the Digital Gap project is also hosting an online launch event for its 2024 Outcomes Report on Tuesday 3 December, which presents the most significant changes in digital inequity found in remote First Nations communities between 2022 and 2024. More details and registrations can be found at the Humanitix event listing Three years on: How the digital gap is changing in remote First Nations communities.

Watch End of the Line on YouTube.

SEE ALSO

The government has introduced laws for its social media ban. But key details are still missing

Young child looking at mobile phone screen
Ron Lach/Pexels


Author Daniel Angus
Date 21 November 2024

The federal government today introduced into parliament legislation for its social media ban for people under 16 years.

Communications Minister Michelle Rowland said:

This is about protecting young people, not punishing or isolating them, and letting parents know we’re in their corner when it comes to supporting their children’s health and wellbeing.

Up until now, details of how the ban would actually work have been scarce. Today’s bill provides a more complete picture.

But many ambiguities – and problems – still remain.

What’s in the bill?

Today’s bill is an amendment of the Online Safety Act.

It introduces a new definition for an “age-restricted social media platform” whose sole or significant purpose is to enable users to post material online and interact socially with other users.

This includes platforms such as Facebook, Instagram, TikTok and Snapchat, but also many more minor platforms and services. It includes an exclusion framework that exempts messaging apps such as WhatsApp, online gaming platforms and services with the “primary purpose of supporting the health and education of end-users” (for example, Google Classroom).

The bill will attempt to force owners of newly defined age-restricted platforms to take “reasonable steps” to prevent people under 16 from having a user account. This will include young people who have an existing account. There are no grandfather provisions so it is unclear how platforms will be required to manage the many millions of existing users who are now set to be excluded and deplatformed.

The bill is also vague in specifying how social media platforms must comply with their obligation to prevent under 16s from having an account – only that it “will likely involve some form of age assurance”.

Oddly, the bill won’t stop people under 16 from watching videos on YouTube or seeing content on Facebook – it is primarily designed to stop them from making an account. This also means that the wider ecology of anonymous web-based forums, including problematic spaces like 4chan, are likely excluded.

Age-restricted platforms that fail to prevent children under 16 accessing their platforms will face fines of nearly A$50 million.

However, the government acknowledges that it cannot completely stop children under 16 from accessing platforms such as Instagram and Facebook.

Australia should be prepared for the reality that some people will break the rules, or slip through the cracks.

The legislation will take effect “at least” 12 months after it has passed parliament.

How did we get to this point?

The government’s move to ban under 16s from social media – an idea other countries such as the United Kingdom are now considering – has been heavily influenced by News Corp’s “Let Them Be Kids” campaign. This campaign included sensitive news reports about young people who have used social media and, tragically, died by suicide.

The government has also faced pressure from state governments and the federal opposition to introduce this bill.

The New South Wales and South Australian governments last month held a summit to explore the impact of social media on the mental health of young people. However, Crikey today revealed that the event was purposefully set up to create momentum for the ban. Colleagues who attended the event were shocked at the biased and unbalanced nature of the discussion.

The announcement and tabling of the bill today also preempts findings from a parliamentary inquiry into the impact of social media on Australian society. The inquiry only tabled its report and recommendations in parliament this week. Notably, it stopped short of recommending a ban on social media for youth.

There are evidence-based alternatives to a ban

The government claims “a minimum age of 16 allows access to social media after young people are outside the most vulnerable adolescent stage”.

However, multiple experts have already expressed concerns about banning young people from social media platforms. In October more than 140 experts, me included, wrote an open letter to Prime Minister Anthony Albanese in which we said “a ‘ban’ is too blunt an instrument to address risks effectively”.

The Australian Human Rights Commission has now added its voice to the opposition to the ban. In a statement released today it said:

Given the potential for these laws to significantly interfere with the rights of children and young people, the Commission has serious reservations about the proposed social media ban.

In its report, the parliamentary inquiry into the impact of social media on Australian society made a number of recommendations to reduce online harm. These included introducing a “duty of care” onto digital platforms – a measure the government is also moving ahead with, and one which is more in line with best evidence.

The inquiry also recommended the government introduce regulations which ensure users of social media platforms have greater control over what content they see. This would include, for example, users having the ability to change, reset, or turn off their personal algorithms.

Another recommendation is for the government to prioritise the creation of the Children’s Online Privacy Code. This code will better protect the personal information of children online.

Taken together, the three measures above manage the risks and benefits of children’s digital media. They build from an evidence base, one that critically includes the voices and perspectives of children and parents. The concern then is how a ban undermines these efforts and possibly gives platforms a hall pass to avoid obligations under these stronger media policies.

Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

The ADM+S Centre joins the conversation on Bluesky


Author ADM+S Centre
Date 19 November 2024

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) has joined the Bluesky social media platform.

On Bluesky, we’ll be continuing to share:

  • Insights from our research on automated decision-making.
  • Updates on events, workshops, and opportunities to get involved.
  • Thought-provoking discussions about AI, Automated Decision-Making, and their impacts on society.

Join Us on Bluesky
If you’re already on the platform, follow us at @admscentre.org.au. If not, we’d love for you to join our growing community.

There are some great set-up tips in these articles:

SEE ALSO

Safeguarding Fair Elections: Experts convene to address emerging threats of AI

A digital rendering of a hand dropping a voting card into a ballot box
AI Generated/Adobe Stock


Author ADM+S Centre
Date 19 November 2024

In an era of threats ranging from cyber attacks and co-ordinated disinformation to AI-generated deep fakes, fair elections in Australia and around the world are facing unprecedented and complex challenges.

What do we need to know? What actions should citizens, journalists, policymakers, researchers, and politicians take to safeguard the integrity of our elections?

To address these challenges, the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) has partnered with the University of Southern California’s (USC) Election Cybersecurity Initiative to convene a one-day workshop. This event will bring together leading experts from Australia and the United States to share insights and compare recent experiences in tackling election security threats in both countries.

Professor Daniel Angus, Director of the Digital Media Research Centre at QUT and Chief Investigator at ADM+S, will be speaking at the event, emphasising that safeguarding elections is foundational for democracy.

“Safeguarding fair elections is vital for democracy; it requires a collaborative effort that draws on the insights of diverse stakeholders and internal experts. By bringing together a wide range of voices—researchers, industry leaders, and policymakers—we can create a comprehensive understanding of the challenges and develop effective, actionable strategies. 

“When we protect the vote we are protecting the collective voice of the people. While emerging technologies challenge election security in new ways, it is also essential that we recognise and address underlying social factors as well. With insights from a range of fields we are far better placed to tackle the human and technological sides of election security, and protect the integrity of our democratic institutions,” said Professor Angus.

Speakers include Adam Clayton Powell III, Executive Director of the USC Election Cybersecurity Initiative; Jeffrey Cole, Director of the Center for the Digital Future; ABC’s US elections analyst Casey Briggs; as well as ADM+S researchers in the field.

Together, they will investigate the capabilities of current AI systems, the dynamics of digital media platforms, and the institutional, technical, and regulatory strategies necessary to protect elections now and in the future.

The ADM+S and USC invite government (policymakers), industry (electoral officials and workers, regulators), and journalists (with a political, election and AI focus), as well as interested members of the public, to attend this workshop and engage in this crucial dialogue.

Speakers will be joined by researchers from the ADM+S Centre: Prof Mark Andrejevic, Prof Daniel Angus, Assoc Prof Timothy Graham, Prof Christopher Leckie, Dang Nguyen, Prof Mark Sanderson, Distinguished Prof Julian Thomas, Fan Yang, and Prof Haiqing Yu.

Speakers include:

Dr Michelle Blom – Senior Research Fellow in the AI and Autonomy group of the School of Computing and Information Systems at The University of Melbourne. Dr Blom has diverse research interests that include election integrity (with a focus on post-election audits), combinatorial optimisation (with a focus on algorithms for solving large problems through decomposition, local search, and the use of mathematical programming), applications of reinforcement learning, and Explainable AI.

Casey Briggs – Data journalist and presenter with ABC News, and the ABC’s US Elections Analyst. He covers elections in Australia and around the world. His series ‘America, Are You OK?’ examined how US voters were feeling about their own democracy in the lead-up to the presidential election. He is also a partner in the ARC Centre of Excellence for Automated Decision-Making & Society through the Australian Ad Observatory.

Jeffrey Cole – An expert in the field of technology and emerging media, Cole serves as an adviser to governments and leading companies around the world as they craft digital strategies. Jeffrey Cole has been at the forefront of media and communication technology issues both in the United States and internationally for the past three decades. 

Jung-hwa (Judy) Kang – Special Project Manager at the University of Southern California’s Center on Communication Leadership and Policy, where she oversees program development, research, and event management for initiatives based at USC’s Washington, D.C. campus. Her work includes the USC Election Cybersecurity Initiative, which has held workshops worldwide and in all 50 U.S. states, as well as the Africa-U.S. Initiative, the Democratic Resilience series, and high-level discussions with officials from the Department of Defense and prominent journalists. Kang also leads public diplomacy forums in partnership with Public Diplomacy of America, where she serves on the Board.

Devi Mallal – Founding member and the Media and Research Lead at RMIT ABC Fact Check. In the lead-up to the 2022 Federal Election, Devi co-directed the Mosaic Project, a collaboration between RMIT FactLab, the Judith Neilson Institute for Journalism and Ideas, and global leaders in misinformation detection, the Institute for Strategic Dialogue.

Adam Clayton Powell III – Executive Director of the USC Election Cybersecurity Initiative, in association with USC’s schools of business, engineering, law and public policy and the USC Dornsife College of Letters, Arts and Sciences. With support from Google, this bipartisan initiative provides in-state training in all 50 states to reinforce election integrity and build defences against digital attacks.

For further information and registration visit Are Fair Elections Possible In The Age Of AI?


Research reveals Facebook, alcohol and gambling companies target ads at Australians most at risk of harm

Author ADM+S Centre
Date 14 November 2024

New research funded by VicHealth and the Foundation for Alcohol Research & Education and supported by the Australian Research Council Centre of Excellence for Automated Decision-Making and Society (ADM+S) has revealed how Facebook targets people most at risk with alcohol and gambling advertising.

The study piloted a novel digital data donation method, the Australian Ad Observatory Mobile Toolkit, developed at the ADM+S Centre, which enabled citizen scientists to collect “dark” ads that would otherwise be hidden from public view. Citizen scientists also provided a list of “ad interests” and a list of advertisers who had targeted their profile, and participated in a co-analysis interview in which they were asked to interpret their flows of targeted advertising.

Findings from the report How alcohol and gambling companies target people most at risk with marketing for addictive products on Facebook revealed that these companies shared substantial data with Facebook to aid in targeting users based on detailed advertising interests, despite many individuals actively trying to reduce their alcohol or gambling consumption.

Key findings:

  • Facebook tags people at risk of harm and trying to reduce their use of alcohol and gambling as interested in these addictive products to target them with advertising.
  • Alcohol and gambling companies uploaded data on people at risk of harm and trying to reduce their use of alcohol or gambling to fuel targeted marketing on Facebook. 
  • People who are trying to reduce their alcohol use or gambling are constantly faced with advertisements for these addictive products on social media.
  • People who are trying to reduce their alcohol use or gambling don’t want to be profiled and targeted for alcohol and gambling and can find it impossible to escape this advertising when they are on social media.

This method enabled researchers to gather insights from people who had experienced, or were at risk of, alcohol or gambling harm, and to observe alcohol, gambling and social media companies’ digital advertising targeting practices on Facebook.

Dr Giselle Newton, Research Fellow from the ADM+S at the University of Queensland and Chief Investigator on the report, said, “Technology in development, such as the Australian Ad Observatory Mobile Toolkit, provides ways to further understand dark marketing of harmful and addictive products that otherwise remain hidden from sight”.

 “This report is the tip of the iceberg in terms of what we know about how alcohol and gambling companies collect and use people’s data to then target them with their harmful and addictive products.

“People who are trying to reduce their alcohol use or gambling don’t want to be targeted with ads selling these products, and can find it difficult to escape this advertising when they are on social media platforms like Facebook.”

Example of Facebook Advertising received on the mobile phone of "Miles" - a participant in the study. Image from the report: How alcohol and gambling companies target people most at risk with marketing for addictive products on Facebook.

Oliver, who participated in the research, said he’s frustrated that he sees so many alcohol ads when he’s using Facebook, and there’s no way to stop it.

“It’s everywhere, and it’s not just billboards, it follows me into my home through my phone. When I’m just trying to look at things – like I’m on Facebook Marketplace a lot – it even follows me there.

“The fact I’m being force-fed alcohol ads everywhere is really frustrating, and there’s no opt out,” Oliver said.

Martin Thomas, CEO of the Alliance for Gambling Reform (AGR) said, “This report provides further evidence of the predatory marketing practices of gambling companies, and how platforms like Facebook enable them.

“Australians expect the Federal Government to do more to ensure people who are most at risk of harm aren’t constantly bombarded with ads for harmful and addictive products.”

Caterina Giorgi, CEO of the Foundation for Alcohol Research and Education (FARE), said the report released today further highlights the need for reform.  

“People should not be profiled and targeted for advertising based upon their vulnerabilities. It’s concerning to see alcohol, gambling and social media companies deliberately prey on people who are most susceptible to harm.

“We are calling on the Federal Government to implement protections that put the health and wellbeing of families and communities ahead of the interests of alcohol and gambling companies.”

If you (or someone you know) would like to take part in the second phase of The Australian Ad Observatory project, you can register your interest here: Expression of Interest Form.  The research team are particularly interested in the experiences of voters in marginal electorates, older Australians who may be targeted with scam ads, young people who received alcohol and gambling ads, and people planning pregnancy or parenting young children.


ADM+S Members at the AI Safety Forum
ADM+S members (left to right) Dr Henry Fraser, Prof Kimberlee Weatherall, Dr Aaron Snoswell and Mariam Nadeem attend the 2024 AI Safety Forum. Photo by Melodie Heart Photography

Australian AI Safety Forum 2024: A Landmark Event in Shaping AI Safety and Governance

Author Kathy Nickels
Date 12 November 2024

The inaugural Australian AI Safety Forum 2024, hosted at and supported by the University of Sydney last week, marked a significant step in Australia’s growing involvement in the global dialogue around AI safety and governance. 

The two-day interdisciplinary event built on the momentum created by the establishment of state-backed AI Safety Institutes in the UK, US, and other countries, and the recent release of the Interim International Scientific Report on the Safety of Advanced AI.

The Forum took the Interim International Scientific Report on the Safety of Advanced AI as its scientific foundation, using its technical findings to frame and advance discussions on policy and governance within the Australian context.

The event brought together researchers, policymakers, and industry leaders to address the critical challenges posed by advanced artificial intelligence. 

Professor Kimberlee Weatherall, ADM+S Chief Investigator from The University of Sydney and one of the event organising committee members said, “introductory talks on the state of AI, AI safety and AI governance sought to create a common ground among researchers coming from very different disciplines and – if the enthusiasm and energy of the conversations in and outside the formal sessions is anything to go by – activated strong interest in understanding different perspectives on AI safety and governance questions.”

Distinguished speakers included:

  • Marcus Hutter, Australian National University
  • Hoda Heidari, Carnegie Mellon University
  • Johanna Weaver, Tech Policy Design Centre, Australian National University
  • Atoosa Kasirzadeh, Carnegie Mellon University
  • Ryan Kidd, ML Alignment & Theory Scholars (MATS)
  • Seth Lazar, Australian National University
  • Nitarshan Rajkumar, University of Cambridge
Audience at the 2024 AI Safety Forum
Researchers, policymakers, and industry leaders discuss the global challenges of AI safety and governance at the 2024 AI Safety Forum. Photo by Melodie Heart Photography

The Forum provided a platform for rigorous discussions on technical aspects of AI safety, including the science of AI alignment, safety engineering, and risk assessment. Participants also actively explored the governance challenges around regulating advanced AI technologies and how Australia specifically can contribute to the development of international AI safety frameworks.

By creating a space where technical researchers, legal experts, industry leaders, and policymakers can come together, the event sought to help Australia and Australians play a key role in the development of a safer, more responsible AI future.

The inaugural event was organised by a committee including ADM+S members Dr Sarah Erfani, Dr Henry Fraser, Mariam Nadeem, Dr Aaron Snoswell, Simon Taylor and Prof Kimberlee Weatherall, who made significant contributions through their participation in the event.

The Forum was supported by the University of Sydney Digital Sciences Initiative IGNITE scheme, the Faculty of Engineering and the Faculty of Arts and Social Sciences, as well as Open Philanthropy. 


ADM+S members present evidence for the Inquiry into Workplace Surveillance

Author Natalie Campbell
Date 8 November 2024

On 1 November 2024 ADM+S members Dr Jake Goldenfein and Lauren Kelly presented evidence to the Victorian Legislative Assembly Economy and Infrastructure Committee for the final hearing of the Inquiry into Workplace Surveillance.

The Inquiry was established in May 2024 to examine the extent to which surveillance data is being collected, shared, stored, disclosed, sold, disposed of and otherwise utilised in Victorian workplaces.

ADM+S Chief Investigator Dr Jake Goldenfein from the University of Melbourne presented evidence on behalf of ADM+S and as a co-author on the NTEU submission that made three key recommendations:

Recommendation 1: The Office of the Victorian Information Commissioner audit Victorian universities for their compliance with the Privacy and Data Protection Act 2018 (Vic), and be adequately resourced for this purpose.

Recommendation 2: The Victorian Parliament enacts a statute dedicated to regulating workplace surveillance.

Recommendation 3: The statute in Recommendation 2 be based on six Workplace Privacy Principles:

  1. Comprehensiveness
  2. Transparency
  3. Freedom of association and the centrality of trade unions
  4. Legitimate purpose and proportionality
  5. Governance and accountability
  6. Effective compliance and enforcement

Dr Goldenfein said, “The commissioners were deeply concerned about the range of harms that surveillance causes to workers and Victoria’s inadequate regulatory regime.”

ADM+S PhD Student Lauren Kelly from RMIT University also spoke to the Committee as an author of the United Workers Union submission, which highlighted case studies of workplace surveillance ranging from covert and biometric monitoring to working-from-home and medical surveillance, demonstrating both beneficial and unacceptable uses.

The Committee’s final report, including findings and recommendations, will be tabled in parliament in 2025.


Finalists from the Young ICT Explorers competition showcase their tech projects at ADM+S at QUT

Author ADM+S Centre
Date 8 November 2024

On 11 October, finalists in the 2024 Young ICT Explorers state competition from East Brisbane State School (EBSS) presented their innovative technology projects, including the Baby Seat Booster and the Memory Keeper, at the QUT node of ADM+S.

The Young ICT Explorers (YICTE) is a non-profit competition supported by CSIRO Digital Careers, The Smith Family, Kinetic IT and School Bytes. The annual competition encourages primary and high school students from years three to 12 to use their imagination and passion to create an invention that could change the world using the power of technology.

While at QUT, the students presented the following projects virtually to the YICTE judges.

Baby Seat Booster

The Baby Seat Booster is designed to prevent young children from being injured or fatally harmed while travelling in a vehicle. The team’s research uncovered that 38% of children who die in car accidents do so because they were unbuckled, and that approximately 5,000 Australian children are left unattended in cars each year. The device alerts the parent not only when their child is unbuckled, but also when the parent’s mobile device leaves the vicinity of the vehicle while a child is still in the seat. Using a temperature sensor, the device also detects when the car is becoming overheated – sending an alert to the parent’s phone when the reading reaches 35°C. To develop the device the team used an Arduino with a load cell, load cell amplifier, reed switch, Bluetooth module and temperature sensor. They prototyped the sensors and a mobile phone app, calibrating the system so it can accurately detect when a child is placed in, or removed from, the booster seat.
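The alert conditions described above can be sketched in a few lines. This is a minimal illustration only – the function name and the 5 kg presence threshold are assumptions for the example, not the students’ actual Arduino code; only the 35°C overheating alert comes from the report of their project:

```python
def booster_alerts(child_weight_kg, buckled, cabin_temp_c, phone_in_range,
                   min_child_weight_kg=5.0, max_temp_c=35.0):
    """Return the alerts the described device would send to a parent's phone.

    Hypothetical sketch: thresholds other than the 35 C overheat alert
    are illustrative assumptions.
    """
    alerts = []
    # Load cell reading: treat any weight above the threshold as a seated child.
    child_present = child_weight_kg >= min_child_weight_kg
    if child_present and not buckled:
        # Reed switch reports the buckle is open while a child is seated.
        alerts.append("child unbuckled")
    if child_present and cabin_temp_c >= max_temp_c:
        # Temperature sensor indicates the car is overheating.
        alerts.append("car overheating")
    if child_present and not phone_in_range:
        # Bluetooth link to the parent's phone has left the vehicle's vicinity.
        alerts.append("child left unattended")
    return alerts

print(booster_alerts(9.0, buckled=False, cabin_temp_c=36.0, phone_in_range=False))
# → ['child unbuckled', 'car overheating', 'child left unattended']
```

On the actual device, each condition would be polled from the sensors in the Arduino loop and pushed to the phone app over Bluetooth.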

The Memory Keeper

The Memory Keeper project is a part of the 125-year celebration of East Brisbane State School, integrated into a larger website project commemorating the school’s history. The team focused on preserving daily life at EBSS by capturing audio recordings, photographs, and interviews. They recorded playground activities, the oval, the senior choir, and other school sounds to develop online materials to create a rich and immersive experience. 

The team conducted interviews with various individuals connected to EBSS, including five past students, teacher aides, the old groundsman, the former library monitor, the principal, and their local member of Parliament Dr Amy MacMahon. They documented their memories and stories, highlighting the school’s evolution from approximately 1951 to the 1980s.

To aid accessibility and engagement with the materials, they implemented QR codes at key locations during the 125-year celebration. These codes link to the website, allowing visitors to access audio and visual content. Speakers placed under the undercroft will replicate typical lunchtime sounds, immersing visitors in the school’s daily environment during the celebration event. This project addresses the lack of public awareness about the school’s internal activities, providing a comprehensive view of EBSS’s vibrant history and daily life. By preserving these memories digitally, the team has ensured that the legacy of EBSS is available for future generations to explore and appreciate.

Students from EBSS discuss research with researchers including Ned Watt and Daniel Whelan-Shamy from ADM+S

EBSS 125 Year Celebration

East Brisbane State School has an incredibly rich and powerful 125-year history, yet it is easy to walk past the school and not realise the stories and memories it holds. To make this history visible, the team’s solution was to build a website about 125 years of EBSS, to be gifted to the school at its 125-year celebration. The website includes a timeline of important moments in the school’s past, historical records, and a link to a virtual walkthrough with photos that allow anyone, regardless of mobility, to see inside the school, including areas such as the heritage-listed bell tower.

The team said, “we hope that by creating this website we can help people acknowledge the past of our school. We also hope our work can be used in future for useful purposes and extended and updated.”

The Young ICT Explorers program encourages students to use creativity and innovation to gain a greater understanding of the diverse capabilities of technology, which was evident amongst the students visiting ADM+S. 

Professor Daniel Angus, Director of the Digital Media Research Centre at QUT and Chief Investigator at the ADM+S at QUT hosted the students during their visit. 

“It’s inspiring to see students taking an active role in learning about and shaping digital technology. Engaging young minds in ICT projects like these not only builds their skills but gives them a voice in the future of digital technology. Children bring fresh insights and creativity, reminding us all of the importance of involving them in the evolution of technology. They’re not passive users of digital tools, they’re potential innovators who can teach us a great deal about where technology should and could head if we care to listen.”

The students met with ADM+S higher degree researchers and learned about the Centre’s work in creating responsible, ethical and inclusive automated decision-making systems and how such research can shape future technologies.

Finalists of the 2024 competition were announced on 7 November, with the Baby Seat Booster team taking home second place.


Getty: Samuel Corum / Stringer

Elon Musk’s flood of US election tweets may look chaotic. My data reveals an alarming strategy

Author Timothy Graham
Date 6 November 2024

As voting booths in the United States close and the results of the presidential election trickle in, tech billionaire Elon Musk has been posting a flurry of tweets on his social media platform, X (formerly Twitter). So too has Republican presidential nominee Donald Trump.

At first glance these tweets might appear chaotic and random. But if you take a closer look, you start to see an alarming strategy behind them – one that’s worth paying very close attention to in order to understand the inner workings of the campaign to return Trump to the White House.

The strategy has two immediate aims. First, to overwhelm the information space and thereby manage attention. Second, to fuel the conspiracy theory that there is a coordinated campaign among Democrats, the media and big tech to steal this election.

But it’s important to understand that the strategy on X is part of a master strategy of Trump’s campaign: a backup plan in case of a Trump loss, designed to encourage the public to participate in a grand re-wiring of reality via the meta-narrative of widespread voter fraud.

Overwhelm the information space

Musk has long been a prominent user of X, even before he became the owner, chief technology officer and executive chairman of the platform.

But as I reported last week, since he endorsed Trump in July, engagement with his account has seen a sudden and anomalously large increase, raising suspicions as to whether he has tweaked the platform’s algorithms so his content reaches more people.

This trend has continued in recent days.

As well as posting on X, earlier today Musk also held a “freeform” live discussion on the platform about the election. It lasted for nearly one and a half hours, and around 1.3 million people tuned in. This is one of many live discussions he has hosted about the election over the past months, including, notably, one with Trump.

In an information war, everything is about attention management. Platforms are designed to maximise engagement and user attention above and beyond anything else. This core logic of social media is highly exploitable: who controls attention controls the narrative. In Australia, the “Vote No” campaign during last year’s referendum on Indigenous representation in government was a masterclass in attention management.

By bombarding audiences, journalists, and other key stakeholders with a constant supply of allegations, rumours, conspiracy theories and unverifiable claims, Musk and the Trump campaign eat up all the oxygen of attention. When everyone is focussed on you and what you’re saying, they are distracted from what the other side is saying.

And Musk and Trump want people to focus on the idea that the election is going to be stolen.

Tech billionaire Elon Musk (right) has been a pivotal figure in the campaign to return former president Donald Trump to the White House. Evan Vucci/AP

Fuel the election fraud narrative

From the beginning of the year, the narrative that the US presidential election is at risk of being defrauded has been steadily gaining steam. But in the past week leading up to election day, it has gone gangbusters.

For example, on October 27 Trump began posting on X using the #TooBigToRig hashtag. This refers to the idea that Trump will win the election by such a large margin that the result will be incontestable. Up to this point, the #TooBigToRig campaign was driven by Trump supporters. Now, Trump has officially joined – giving it the ultimate legitimacy.

There has also been a dramatic spike over the last week in posts using similarly themed hashtags such as #ElectionFraud, #ElectionInterference, #VoterFraud and #StopTheSteal.

Musk himself hasn’t been using these hashtags very much (although replies to him from other users are riddled with them). But he has been posting material that aligns with them. For example, earlier today he retweeted a post which claimed the electronic voting system in the US was insecure. Musk added: “Absolutely”.

He has also falsely accused Google of encouraging Americans to vote for Democratic nominee Kamala Harris.

And as some early results have started trickling in, Musk has posted about Trump’s odds of winning being nearly 70%.

“The prophecy has been fulfilled,” Musk wrote.

Participatory disinformation

In many ways this has all the hallmarks of participatory disinformation. This concept, developed by computer scientist Kate Starbird and colleagues, explains how both ordinary people as well as politicians and influential actors become active participants in spreading false narratives.

Unlike the top-down model of propaganda, participatory disinformation describes how grassroots activists and regular people – often with strong convictions and genuine intentions – contribute to spreading and evolving narratives that are not grounded in facts. It is a collaborative feedback loop involving both elite framing of issues and collective sensemaking and “evidence” gathering.

Before war breaks out, there are clear signs of what’s about to unfold, even if a country publicly denies they are preparing for battle. Blood supplies, troops and weaponry are transported to the border in preparation for an invasion.

The same thing is at play here, except the weapon is us.

The flood of tweets by Musk and Trump, in particular, is setting the stage for a full-blown participatory disinformation campaign to undermine the election results.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Tech billionaire Elon Musk’s social media posts have had a ‘sudden boost’ since July, new research reveals

Author Mark Andrejevic and Timothy Graham
Date 1 November 2024

On July 13, shortly after Donald Trump was targeted by an assassination attempt, Elon Musk, the billionaire owner of X (formerly Twitter), tweeted to his more than 200 million followers:

“I fully endorse President Trump and hope for his rapid recovery.”

Musk’s efforts to influence who wins next week’s US presidential election have continued. For example, over the past three months, he has donated more than US$100 million to a political action committee called America PAC that’s promoting Trump.

But our new research (currently available in preprint form) indicates Musk may be wielding influence in other more subtle ways as well. However, the platform’s increasing opacity to researchers makes this difficult to say for certain.

This raises suspicions as to whether Musk has tweaked the platform’s algorithm to increase the reach of his posts in advance of the US presidential election. It also demonstrates the problems with how social media platforms like X are currently regulated around the world.

Not the first time

Musk has history when it comes to tweaking X’s algorithms so his own content reaches more people.

Last year, he reportedly mobilised a team of around 80 engineers to algorithmically boost his posts. This came after his tweet supporting the Philadelphia Eagles during the Super Bowl was outperformed by a similar one from President Joe Biden. Musk seemed to confirm this happened, posting a picture depicting one woman labelled “Elon’s tweets” forcibly bottle feeding another woman labelled “Twitter”.

To see whether Musk has done this again in the lead-up to the US election, we compared Musk’s engagement metrics – such as the number of views, retweets and likes – with those of a set of other prominent political accounts on the social media platform. The data spans the period from January 1 2024 to October 25 2024.

Other political accounts that served as a basis of comparison include those of right-wing commentators Jack Posobiec, Tucker Carlson and Donald Trump Jr. Our study also examined accounts at the other end of the political spectrum, including those of US Representative Alexandria Ocasio-Cortez, US Senator Bernie Sanders and Vice-President Kamala Harris.

Shortly after Musk endorsed Trump’s presidential campaign, there was a statistically anomalous boost in engagement with his X account. Suddenly, his posts were getting much higher views, retweets and likes in comparison to other prominent political accounts on the platform.

A sudden and significant increase

Since July, engagement with Musk’s X account has seen a sudden and significant increase.

The view counts for his posts increased by 138%, retweets by 238%, and likes by 186%.

In contrast, other prominent political accounts on X saw more moderate increases: 57% in view counts, 152% in retweets, and 130% in likes.

This suggests that while engagement went up across all accounts after July, Musk’s metrics saw a notably larger boost, particularly in retweets and likes.
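The percentage increases above are ordinary relative changes between a pre-endorsement and post-endorsement average. As a minimal illustration (the engagement figures below are made up for the example; only the 138% change mirrors the reported result):

```python
def pct_increase(before: float, after: float) -> float:
    """Percentage change from a pre-period average to a post-period average."""
    return (after - before) / before * 100

# Hypothetical figures: average views per post rising from 1.0M to 2.38M
# corresponds to the kind of 138% increase reported for view counts.
print(round(pct_increase(1_000_000, 2_380_000)))  # → 138
```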

The myth of neutrality

The findings raise the question of the extent to which Musk’s influential social media platform is reinforcing its owner’s political agenda.

Musk, whose businesses have extensive government contracts, has made a public and financial spectacle of his unabashed support of Trump. The billionaire tech tycoon is reportedly Trump’s second-biggest financial donor. He also promoted Trump in a glitchy live interview on X and authored a stream of tweets promoting Trump’s campaign.

Musk is also handing out $1 million a day to selected registered voters. This plan (which has met with questions over its legality) apparently aims to boost voter registration among sympathisers in swing states.

Musk’s actions have torpedoed the fantasy that social media platforms such as X are neutral. Given he has previously tweaked X’s algorithm to amplify the reach of his posts, it would be surprising if he were not tilting the platform in favour of Trump, whom he believes is “the path to prosperity”.

For too long, social media platforms have enjoyed immunity for the information they selectively inject into users’ feeds. It’s time for governments to reconsider their approach to regulating the oligopolistic power over our information environment wielded by a handful of tech billionaires.

The research further found that since July, other conservative and right-wing X accounts have performed better in terms of visibility of posts compared to progressive and left-wing accounts.

The Conversation sought comment from X about the research, but did not receive a reply before deadline.

Without backstage access to the workings of the company, it is impossible to know for sure whether changes to its curation system are boosting its owner’s posts. The platform has limited the access it provides to researchers since Musk took over. This means there are restrictions on the amount of data we were able to collect for this study.

However, the Washington Post recently found that tweets from Republicans are far more likely to go viral, receiving billions more views than those from Democrats. Similarly, an investigation by the Wall Street Journal revealed that new users to the platform “are being blanketed with political content” that disproportionately favours Trump.

Since Musk’s purchase of the platform, it has become more congenial to figures on the right, including people who were previously banned for spreading harmful and false information.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Deaths linked to chatbots show we must urgently revisit what counts as ‘high-risk’ AI

Child lying in bed at night and looking at the glowing screen of smartphone.

Deaths linked to chatbots show we must urgently revisit what counts as ‘high-risk’ AI

Author Henry Fraser
Date 31 October 2024

Last week, the tragic news broke that US teenager Sewell Seltzer III took his own life after forming a deep emotional attachment to an artificial intelligence (AI) chatbot on the Character.AI website.

As his relationship with the companion AI became increasingly intense, the 14-year-old began withdrawing from family and friends and getting into trouble at school.

In a lawsuit filed against Character.AI by the boy’s mother, chat transcripts show intimate and often highly sexual conversations between Sewell and the chatbot Dany, modelled on the Game of Thrones character Daenerys Targaryen. They discussed crime and suicide, and the chatbot used phrases such as “that’s not a reason not to go through with it”.

A screenshot of a chat exchange between Sewell and the chatbot Dany. 'Megan Garcia vs. Character AI' lawsuit

This is not the first known instance of a vulnerable person dying by suicide after interacting with a chatbot persona. A Belgian man took his life last year in a similar episode involving Character.AI’s main competitor, Chai AI. When this happened, the company told the media they were “working our hardest to minimise harm”.

In a statement to CNN, Character.AI has stated they “take the safety of our users very seriously” and have introduced “numerous new safety measures over the past six months”.

In a separate statement on the company’s website, they outline additional safety measures for users under the age of 18. (In their current terms of service, the age restriction is 16 for European Union citizens and 13 elsewhere in the world.)

However, these tragedies starkly illustrate the dangers of rapidly developing and widely available AI systems anyone can converse and interact with. We urgently need regulation to protect people from potentially dangerous, irresponsibly designed AI systems.

How can we regulate AI?

The Australian government is in the process of developing mandatory guardrails for high-risk AI systems. A trendy term in the world of AI governance, “guardrails” refer to processes in the design, development and deployment of AI systems. These include measures such as data governance, risk management, testing, documentation and human oversight.

One of the decisions the Australian government must make is how to define which systems are “high-risk”, and therefore captured by the guardrails.

The government is also considering whether guardrails should apply to all “general purpose models”. General purpose models are the engine under the hood of AI chatbots like Dany: AI algorithms that can generate text, images, videos and music from user prompts, and can be adapted for use in a variety of contexts.

In the European Union’s groundbreaking AI Act, high-risk systems are defined using a list, which regulators are empowered to regularly update.

An alternative is a principles-based approach, where a high-risk designation happens on a case-by-case basis. It would depend on multiple factors such as the risks of adverse impacts on rights, risks to physical or mental health, risks of legal impacts, and the severity and extent of those risks.

Chatbots should be ‘high-risk’ AI

In Europe, companion AI systems like Character.AI and Chai are not designated as high-risk. Essentially, their providers only need to let users know they are interacting with an AI system.

It has become clear, though, that companion chatbots are not low risk. Many users of these applications are children and teens. Some of the systems have even been marketed to people who are lonely or have a mental illness.

Chatbots are capable of generating unpredictable, inappropriate and manipulative content. They mimic toxic relationships all too easily. Transparency – labelling the output as AI-generated – is not enough to manage these risks.

Even when we are aware that we are talking to chatbots, human beings are psychologically primed to attribute human traits to something we converse with.

The suicide deaths reported in the media could be just the tip of the iceberg. We have no way of knowing how many vulnerable people are in addictive, toxic or even dangerous relationships with chatbots.

Guardrails and an ‘off switch’

When Australia finally introduces mandatory guardrails for high-risk AI systems, which may happen as early as next year, the guardrails should apply to both companion chatbots and the general purpose models the chatbots are built upon.

Guardrails – risk management, testing, monitoring – will be most effective if they get to the human heart of AI hazards. Risks from chatbots are not just technical risks with technical solutions.

Apart from the words a chatbot might use, the context of the product matters, too. In the case of Character.AI, the marketing promises to “empower” people, the interface mimics an ordinary text message exchange with a person, and the platform allows users to select from a range of pre-made characters, which include some problematic personas.

The front page of the Character.AI website for a user who has entered their age as 17. C.AI

Truly effective AI guardrails should mandate more than just responsible processes, like risk management and testing. They also must demand thoughtful, humane design of interfaces, interactions and relationships between AI systems and their human users.

Even then, guardrails may not be enough. Just like companion chatbots, systems that at first appear to be low risk may cause unanticipated harms.

Regulators should have the power to remove AI systems from the market if they cause harm or pose unacceptable risks. In other words, we don’t just need guardrails for high risk AI. We also need an off switch.

If this article has raised issues for you, or if you’re concerned about someone you know, call Lifeline on 13 11 14.

Henry Fraser, Research Fellow in Law, Accountability and Data Science, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

What is AI superintelligence? Could it destroy humanity? And is it really almost here?

What is AI superintelligence? Could it destroy humanity? And is it really almost here?

Author Flora Salim
Date 29 October 2024

In 2014, the British philosopher Nick Bostrom published a book about the future of artificial intelligence (AI) with the ominous title Superintelligence: Paths, Dangers, Strategies. It proved highly influential in promoting the idea that advanced AI systems – “superintelligences” more capable than humans – might one day take over the world and destroy humanity.

A decade later, OpenAI boss Sam Altman says superintelligence may only be “a few thousand days” away. A year ago, Altman’s OpenAI cofounder Ilya Sutskever set up a team within the company to focus on “safe superintelligence”, but he and his team have now raised a billion dollars to create a startup of their own to pursue this goal.

What exactly are they talking about? Broadly speaking, superintelligence is anything more intelligent than humans. But unpacking what that might mean in practice can get a bit tricky.

Different kinds of AI

In my view the most useful way to think about different levels and kinds of intelligence in AI was developed by US computer scientist Meredith Ringel Morris and her colleagues at Google.

Their framework lists six levels of AI performance: no AI, emerging, competent, expert, virtuoso and superhuman. It also makes an important distinction between narrow systems, which can carry out a small range of tasks, and more general systems.

A narrow, no-AI system is something like a calculator. It carries out various mathematical tasks according to a set of explicitly programmed rules.

There are already plenty of very successful narrow AI systems. Morris gives the Deep Blue chess program that famously defeated world champion Garry Kasparov way back in 1997 as an example of a virtuoso-level narrow AI system.

Levels of AI

Some narrow systems even have superhuman capabilities. One example is Alphafold, which uses machine learning to predict the structure of protein molecules, and whose creators won the Nobel Prize in Chemistry this year.

What about general systems? This is software that can tackle a much wider range of tasks, including things like learning new skills.

A general no-AI system might be something like Amazon’s Mechanical Turk: it can do a wide range of things, but it does them by asking real people.

Overall, general AI systems are far less advanced than their narrow cousins. According to Morris, the state-of-the-art language models behind chatbots such as ChatGPT are general AI – but they are so far at the “emerging” level (meaning they are “equal to or somewhat better than an unskilled human”), and yet to reach “competent” (as good as 50% of skilled adults).

So by this reckoning, we are still some distance from general superintelligence.

How intelligent is AI right now?

As Morris points out, precisely determining where any given system sits would depend on having reliable tests or benchmarks.

Depending on our benchmarks, an image-generating system such as DALL-E might be at virtuoso level (because it can produce images 99% of humans could not draw or paint), or it might be emerging (because it produces errors no human would, such as mutant hands and impossible objects).

There is significant debate even about the capabilities of current systems. One notable 2023 paper argued GPT-4 showed “sparks of artificial general intelligence”.

OpenAI says its latest language model, o1, can “perform complex reasoning” and “rivals the performance of human experts” on many benchmarks.

However, a recent paper from Apple researchers found o1 and many other language models have significant trouble solving genuine mathematical reasoning problems. Their experiments show the outputs of these models seem to resemble sophisticated pattern-matching rather than true advanced reasoning. This indicates superintelligence is not as imminent as many have suggested.

Will AI keep getting smarter?

Some people think the rapid pace of AI progress over the past few years will continue or even accelerate. Tech companies are investing hundreds of billions of dollars in AI hardware and capabilities, so this doesn’t seem impossible.

If this happens, we may indeed see general superintelligence within the “few thousand days” proposed by Sam Altman (that’s a decade or so in less sci-fi terms). Sutskever and his team mentioned a similar timeframe in their superalignment article.

Many recent successes in AI have come from the application of a technique called “deep learning”, which, in simplistic terms, finds associative patterns in gigantic collections of data. Indeed, this year’s Nobel Prize in Physics was awarded to John Hopfield and the “Godfather of AI” Geoffrey Hinton for their invention of Hopfield networks and the Boltzmann machine, which are the foundation of many powerful deep learning models used today.

General systems such as ChatGPT have relied on data generated by humans, much of it in the form of text from books and websites. Improvements in their capabilities have largely come from increasing the scale of the systems and the amount of data on which they are trained.

However, there may not be enough human-generated data to take this process much further (although efforts to use data more efficiently, generate synthetic data, and improve transfer of skills between different domains may bring improvements). Even if there were enough data, some researchers say language models such as ChatGPT are fundamentally incapable of reaching what Morris would call general competence.

One recent paper has suggested an essential feature of superintelligence would be open-endedness, at least from a human perspective. It would need to be able to continuously generate outputs that a human observer would regard as novel and be able to learn from.

Existing foundation models are not trained in an open-ended way, and existing open-ended systems are quite narrow. The paper also argues that novelty or learnability alone is not enough: a new type of open-ended foundation model would be needed to achieve superintelligence.

What are the risks?

So what does all this mean for the risks of AI? In the short term, at least, we don’t need to worry about superintelligent AI taking over the world.

But that’s not to say AI doesn’t present risks. Again, Morris and colleagues have thought this through: as AI systems gain greater capability, they may also gain greater autonomy. Different levels of capability and autonomy present different risks.

For example, when AI systems have little autonomy and people use them as a kind of consultant – when we ask ChatGPT to summarise documents, say, or let the YouTube algorithm shape our viewing habits – we might face a risk of over-trusting or over-relying on them.

In the meantime, Morris points out other risks to watch out for as AI systems become more capable, ranging from people forming parasocial relationships with AI systems to mass job displacement and society-wide ennui.

What’s next?

Let’s suppose we do one day have superintelligent, fully autonomous AI agents. Will we then face the risk they could concentrate power or act against human interests?

Not necessarily. Autonomy and control can go hand in hand. A system can be highly automated, yet provide a high level of human control.

Like many in the AI research community, I believe safe superintelligence is feasible. However, building it will be a complex and multidisciplinary task, and researchers will have to tread unbeaten paths to get there.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S Research Fellow presents research at King’s College London

Dang and colleagues at KCL
L-R: Jonathan Gray, Dang Nguyen, Elisa Oreglia

ADM+S Research Fellow presents research at King’s College London

Author Natalie Campbell
Date 29 October 2024

Dr Dang Nguyen from RMIT University recently returned from the United Kingdom having completed a two-week research visit with colleagues at King’s College London.

Hosted by Dr Jonathan Gray and the Department of Digital Humanities, Dr Nguyen was also invited to spend time with the Centre for Digital Culture and the Department of Media, Culture, & Creative Industries, and to learn about the work being done at the new Digital Futures Institute.

“The welcome I received at King’s College was wonderful from day one. Everyone was so generous and approachable, and it was a pleasure to connect over our shared research interests while discovering the impressive digital humanities research happening here,” said Dr Nguyen.

On 27 September the Department of Digital Humanities invited Dr Nguyen to present her research on Automated Informality in a talk titled ‘Anatomy of a phone farm: hardware, platforms, infrastructure’.

Drawing on preliminary fieldwork in Southeast Asia, the presentation demonstrated how phone farms form an integral part of the global platform economy, highlighting the extensive networks and infrastructures necessary for these farms to function.

The visit provided an opportunity to exchange current research projects and findings, and to explore potential collaborations with colleagues across the various social science departments.

“The connections I’ve built with colleagues at King’s College are incredibly valuable for my current research.

“Over the next few years, I’ll be collaborating closely with Dr. Jonathan Gray, Director of the Centre for Digital Culture at the Digital Futures Institute, to develop innovative research methods for studying platform economic informality.

“I’ll also be working with Dr. Elisa Oreglia, Dr. Funda Ustek Spilda, and Dr. Niki Cheong to expand emerging networks focused on digital Southeast Asia research,” she said.

This research visit was supported by ADM+S.

SEE ALSO

The ADM+S annual symposium shares insights on the future of AI and automated decision-making systems across the Mobilities focus area

Afsaneh Hasanebrahimi presenting her research at the 2024 ADM+S Symposium.

The ADM+S annual symposium shares insights on the future of AI and automated decision-making systems across the Mobilities focus area

Author ADM+S Centre
Date 28 October 2024

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) annual symposium brought together Centre researchers and collaborators from across nine Australian universities to discuss and plan future research in the evolving landscape of AI and automated decision-making within the Centre’s Mobilities focus area.

The main symposium featured insights on the current state and future of automated mobilities. Over the course of the three-day event, participants engaged in workshops and panel discussions focused on various mobilities sectors impacted by automation, including public transport, active transportation, and migration services. Attendees examined both the benefits—such as increased efficiency and accessibility—and the risks, including potential inequities and ethical challenges.

The symposium was led by ADM+S Mobilities focus area co-leaders Prof Flora Salim from UNSW and Prof Sarah Pink from Monash University.

One of the symposium’s key aims was to consolidate ongoing research efforts within the Mobilities Focus Area of ADM+S. Presenters showcased findings from the first phase of their research, highlighting innovative applications of automated decision-making across personal, shared, commercial, and public systems. 

“Collectively we have delivered, and continue to deliver an excellent range of projects to offer an unprecedented level of interdisciplinary insight into Australian automated and AI mobilities technologies and systems and the benefits, dangers and future possibilities associated with them,” said Prof Pink.

The symposium presented research across themed areas:

  • Privacy and Accountability
  • Data Capture and Behaviours
  • Accessibility and Inclusion
  • Systems and Deployment
  • Futures

Prof Salim said, “We designed each session to account for both the humanities and social science research papers and the technical computer science and engineering research papers thematically, bringing out a very interesting and engaging sociotechnical panel discussion at the end of each session.”

In addition to traditional academic outputs, the symposium encouraged creative formats for sharing research findings, including poster presentations, ethnographic documentary films and multimedia contributions. 

Wilson Wongso presenting at the ADM+S HDR/ECR poster presentation
Wilson Wongso presenting at the ADM+S HDR/ECR poster presentation

The symposium included research poster contributions from higher degree research students (HDR), early career researchers (ECR) and ADM+S researchers.

The Judges Award for the HDR/ECR poster competition was awarded to:

  • Leveraging LBSN check-ins for natural language user profiles to enhance next point-of-interest recommendation with Large Language Models – presented by Wilson Wongso from UNSW.

The People’s Choice Award was awarded to:

  • The role of human oversight in AI-assisted decision-making: Navigating the conceptual and regulatory complexities – presented by Emma Finlay from the University of Melbourne (HDR/ECR category)
  • Optimising electric vehicle charging capability in NSW – presented by Lihuan Li from UNSW (ADM+S research category)

The symposium’s diverse contributions will be compiled into a publication offering valuable insights into the future of automated mobilities. This special issue is set to play a crucial role in fostering dialogue and collaboration within the field, paving the way for innovative advancements and shared understandings.

The event provided the opportunity for interdisciplinary researchers from across partner universities to connect, collaborate, and share research methods and tools that will continue to benefit advancing research in automated decision-making and AI conducted at the Centre.

We would like to thank all the researchers who made the effort to actively participate in the symposium. We also thank UNSW Engineering for co-sponsoring the event.

The Centre acknowledges the symposium organising committee: Prof Flora Salim, Prof Sarah Pink, Assoc Prof Michael Richardson, Dr Hao Xue, Kathy Nickels, Mathew Warren, Hanne Bjellaanes and Thi-Nga Ho.

View a selection of publicly available recorded sessions from the 2024 ADM+S Symposium

View event highlights.

SEE ALSO

Prof Deborah Lupton elected Fellow of the Australian Academy of Health and Medical Sciences

Deborah Lupton elected to AAHMS 2024
Prof Deborah Lupton, UNSW

Prof Deborah Lupton elected Fellow of the Australian Academy of Health and Medical Sciences

Author Natalie Campbell
Date 24 October 2024

On 24 October the Australian Academy of Health and Medical Sciences (AAHMS) announced 31 new Fellows, including ADM+S Chief Investigator and Health Focus Area leader, Prof Deborah Lupton.

Established in 2014, the AAHMS is an expert representative and independent voice for health and medical sciences in Australia, dedicated to engaging with the community, industry and government on pressing issues related to health.

Made up of Australia’s most influential experts in health and medicine – from universities, medical research institutes, health services, industry bodies, charities and public service – AAHMS Fellows are elected by their peers for outstanding achievements and contributions to the health and medical sciences in their respective fields.

2024 Fellows were welcomed in a ceremony at the Academy’s Annual Meeting in Adelaide on 24 October.

Academy President Prof Louise Baur said, “Our Fellowship represents the breadth and diversity of Australia’s health and medical expertise, allowing us to draw on independent, expert and evidence-based advice to drive change and improve health for all.

“Our new Fellows have a truly exceptional body of work, with each of them considered international leaders in their respective fields.”

Prof Deborah Lupton is SHARP Professor in the Faculty of Arts & Social Sciences, UNSW Sydney, working in the Centre for Social Research in Health and the Social Policy Research Centre, and leading the Vitalities Lab. She is the author/co-author of 20 books and editor/co-editor of a further ten book collections, as well as over 240 book chapters and articles.

“As leader of the Health Focus Area of ADM+S, I am looking forward to productive discussions and collaborations on automation and AI in health with other Fellows of AAHMS,” said Prof Lupton.

Read more about Deborah’s research.

SEE ALSO

ADM+S submission cited in Australian Government’s Second Interim Report into Social Media and Australian Society

ADM+S submission cited in Australian Government’s Second Interim Report into Social Media and Australian Society

Author Natalie Campbell
Date 23 October 2024

The Joint Select Committee on Social Media and Australian Society, established by the Federal Government, has published its Second Interim Report on Digital Platforms and the Traditional News Media, citing contributions from the 28 June ADM+S submission.

The 22 October second interim report focusses on “the decision of Meta to abandon deals under the News Media and Digital platforms Mandatory Bargaining Code (Code) and the important role of Australian journalism, news, and public interest media on a healthy democracy in countering mis- and disinformation on digital platforms.”

The report regularly cites ADM+S research when discussing algorithmic transparency, fact-checking, misinformation, and the role and function of the News Media Bargaining Code.

It provides 11 recommendations informed by 217 submissions, including establishing a Digital Affairs Ministry, exploring alternative revenue mechanisms to supplement the Code, developing protocols for transparent distribution of revenue, introducing new legislation to combat mis- and disinformation, and establishing a short-term transition fund to help news media businesses diversify and strengthen alternative income streams and news product offerings.

ADM+S Associate Investigator Assoc Prof James Meese from RMIT University, lead author on the ADM+S submission, said, “We’re pleased that our research has assisted the committee in their study of these important issues.

“ADM+S members from a variety of disciplines contributed to our submission, and it is outcomes like this which highlight the value of cross-disciplinary research and collaboration.”

The Committee is due to present its final report on or before 18 November 2024.

View the Interim Report

View the ADM+S Submission

SEE ALSO

Could a recent ruling change the game for scam victims? Here’s why the banks will be watching closely

hands typing

Could a recent ruling change the game for scam victims? Here’s why the banks will be watching closely

Author Jeannie Marie Paterson and Nicola Howell
Date 18 October 2024

In Australia, it’s scam victims who foot the bill for the overwhelming majority of the money lost to scams each year.

A 2023 review by the Australian Securities and Investments Commission (ASIC) found banks detected and stopped only a small proportion of scams. The total amount banks paid in compensation paled in comparison to total losses.

So, it was a strong statement this week when it was revealed the Australian Financial Complaints Authority (AFCA) had ordered a bank – HSBC – to compensate a customer who lost more than $47,000 through a sophisticated bank impersonation or “spoofing” scam.

This decision was significant. An AFCA determination is binding on the relevant bank or other financial institution, which has no direct right of appeal. It could have implications for the way similar cases are treated in future.

The ruling comes amid a broader push for sector-wide reforms to give banks more responsibility for detecting, deterring and responding to scams, as opposed to simply telling customers to be “more careful”.

Here’s what you should know about this landmark ruling, and what it might mean for consumers.

A highly sophisticated ‘spoofing’ scam

You might be familiar with “push payment” scams that trick the victim into paying money to a dummy account. These include the “mum I’ve lost my phone” scam and some romance scams.

The recent case concerned an equally noxious “bank impersonation” or “spoofing” scam. The complainant – referred to as “Mr T” – was tricked into giving the scammer access to his HSBC account, from which an unauthorised payment was made.

The scammer sent Mr T a text message, purportedly asking him to investigate an attempted Amazon transaction.

In an effort to respond to the (fake) unauthorised Amazon purchase, Mr T revealed security passcodes to the scammer, enabling them to transfer $47,178.54 from his account and disappear with it.

The fact Mr T was dealing with scammers was far from obvious – scammers had information about him one might reasonably expect only a bank would know, such as his bank username.

On top of this, the scam text message appeared in a thread of other legitimate text messages that had previously been sent by the real HSBC.

AFCA’s ruling

HSBC argued to AFCA that having to pay compensation should be ruled out under the ePayments Code, a voluntary code of practice administered by ASIC.

Under this code, a bank is not required to compensate a customer for an unauthorised payment if that customer has disclosed their passcode. The bank argued the complainant had voluntarily disclosed these codes to the scammer, meaning the bank didn’t need to pay.

AFCA disagreed. It noted the very way the scam had worked was by creating a sense of urgency and crisis. AFCA considered that the complainant had been manipulated into disclosing the passcodes and had not acted voluntarily.

AFCA awarded compensation covering the vast majority of the disputed transaction amount, lost interest charged to a home loan account, and $5,000 towards Mr T’s legal costs.

It also ordered the bank to pay compensation of $1,000 for poor customer service in dealing with the matter, including communication delays.


HSBC argued the complainant had given over his passcodes voluntarily, but AFCA disagreed. Mick Tsikas/AAP

Other cases may be more complex

In this case, the determination was relatively straightforward. It found Mr T had not voluntarily disclosed his account information, so was not excluded from being compensated under the ePayments Code.

However, many payment scams fall outside the ePayments Code because they involve the customer directly sending money to the scammer (as opposed to the scammer accessing the customer’s account). That means there is no code to direct compensation.

Still, AFCA’s jurisdiction is broader than merely applying a code. In considering compensation for scam losses, AFCA must consider what is “fair in all the circumstances”. This means taking into account:

  • legal principles
  • applicable industry codes
  • good industry practice
  • previous AFCA decisions.

Relevant factors might well include whether the bank was proactive in responding to known scams, as well as the challenges for individual customers in identifying scams.

Broader reforms are on the way

At the heart of this determination by AFCA is a recognition that, increasingly, detecting sophisticated scams can be next to impossible for customers, which can mean they don’t act voluntarily in making payments to scammers.

Similar reasoning has informed a range of recent reform initiatives that put more responsibility for detecting and responding to scams on the banks, rather than their customers.

In 2023, Australia’s banking sector committed to a new “Scam-Safe Accord”. This is a commitment to implement new measures to protect customers, including a confirmation of payee service, delays for new payments, and biometric identity checks for new accounts.

Changes on the horizon could be more ambitious and significant.

The proposed Scams Prevention Framework legislation would require Australian banks, telcos and digital platforms to take reasonable steps to prevent, detect, report, disrupt and respond to scams.

It would also include a compulsory external dispute resolution process, like AFCA’s, for consumers seeking compensation for when any of these institutions fail to comply.

Addressing scams is not just an Australian issue. In the United Kingdom, newly introduced rules make paying and receiving banks responsible for compensating customers, for scam losses up to £85,000 (A$165,136), unless the customer is grossly negligent.

Jeannie Marie Paterson is a Professor of Law at The University of Melbourne

Nicola Howell is a Senior Lecturer at Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Is big tech harming society? To find out, we need research – but it’s being manipulated by big tech itself

A woman works on her computer in front of a Facebook sign


Author Timothy Graham
Date 4 October 2024

For almost a decade, researchers have been gathering evidence that the social media platform Facebook disproportionately amplifies low-quality content and misinformation.

So it was something of a surprise when in 2023 the journal Science published a study that found Facebook’s algorithms were not major drivers of misinformation during the 2020 United States election.

This study was funded by Facebook’s parent company, Meta. Several Meta employees were also part of the authorship team. It attracted extensive media coverage. It was also celebrated by Meta’s president of global affairs, Nick Clegg, who said it showed the company’s algorithms have “no detectable impact on polarisation, political attitudes or beliefs”.

But the findings have recently been thrown into doubt by a team of researchers led by Chhandak Bagchi from the University of Massachusetts Amherst. In an eLetter also published in Science, they argue the results were likely due to Facebook tinkering with the algorithm while the study was being conducted.

In a response eLetter, the authors of the original study acknowledge their results “might have been different” if Facebook had changed its algorithm in a different way. But they insist their results still hold true.

The whole debacle highlights the problems caused by big tech funding and facilitating research into their own products. It also highlights the crucial need for greater independent oversight of social media platforms.

Merchants of doubt

Big tech has started investing heavily in academic research into its products. It has also been investing heavily in universities more generally. For example, Meta and its chief Mark Zuckerberg have collectively donated hundreds of millions of dollars to more than 100 colleges and universities across the United States.

This is similar to what big tobacco once did.

In the mid-1950s, cigarette companies launched a coordinated campaign to manufacture doubt about the growing body of evidence linking smoking with a number of serious health issues, such as cancer. It was not about explicitly falsifying or manipulating research, but about selectively funding studies and drawing attention to inconclusive results.

This helped foster a narrative that there was no definitive proof smoking causes cancer. In turn, this enabled tobacco companies to keep up a public image of responsibility and “goodwill” well into the 1990s.

Vintage magazines with tobacco advertising from the sixties.
Big tobacco ran a campaign to manufacture doubt about the health effects of smoking.
Ralf Liebhold/Shutterstock

A positive spin

The Meta-funded study published in Science in 2023 claimed Facebook’s news feed algorithm reduced user exposure to untrustworthy news content. The authors said “Meta did not have the right to prepublication approval”, but acknowledged that The Facebook Open Research and Transparency team “provided substantial support in executing the overall project”.

The study used an experimental design where participants – Facebook users – were randomly allocated into a control group or treatment group.

The control group continued to use Facebook’s algorithmic news feed, while the treatment group was given a news feed with content presented in reverse chronological order. The study sought to compare the effects of these two types of news feeds on users’ exposure to potentially false and misleading information from untrustworthy news sources.

The experiment was robust and well designed. But during the short time it was conducted, Meta changed its news feed algorithm to boost more reliable news content. In doing so, it changed the control condition of the experiment.

The reduction in exposure to misinformation reported in the original study was likely due to the algorithmic changes. But these changes were temporary: a few months later in March 2021, Meta reverted the news feed algorithm back to the original.

In a statement to Science about the controversy, Meta said it made the changes clear to researchers at the time, and that it stands by Clegg’s statements about the findings in the paper.

Unprecedented power

In downplaying the role of algorithmic content curation for issues such as misinformation and political polarisation, the study became a beacon for sowing doubt and uncertainty about the harmful influence of social media algorithms.

To be clear, I am not suggesting the researchers who conducted the original 2023 study misled the public. The real problem is that social media companies not only control researchers’ access to data, but can also manipulate their systems in a way that affects the findings of the studies they fund.

What’s more, social media companies have the power to promote certain studies on the very platform the studies are about. In turn, this helps shape public opinion. It can create a scenario where scepticism and doubt about the impacts of algorithms can become normalised – or where people simply start to tune out.

This kind of power is unprecedented. Even big tobacco could not control the public’s perception of itself so directly.

All of this underscores why platforms should be mandated to provide both large-scale data access and real-time updates about changes to their algorithmic systems.

When platforms control access to the “product”, they also control the science around its impacts. Ultimately, these self-research funding models allow platforms to put profit before people – and divert attention away from the need for more transparency and independent oversight.

Timothy Graham, Associate Professor in Digital Media, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Animals in the machine: why the law needs to protect animals from AI


Author Lev Bromberg, Christine Parker & Simon Coghlan
Date 1 October 2024

The rise of artificial intelligence (AI) has triggered concern about potentially detrimental effects on humans. However, the technology also has the potential to harm animals.

An important policy reform now underway in Australia offers an opportunity to address this. The federal government has committed A$5 million to renewing the lapsed Australian Animal Welfare Strategy. Consultation has begun, and the final strategy is expected in 2027.

While AI is not an explicit focus of the review, it should be.

Australians care about animals. The strategy could help ensure decision-makers protect animals from AI’s harms in our homes, on farms and in the wild.

Will AI harms to animals go unchecked?

Computers are now so advanced they can perform some complex tasks as well as, or better than, humans. In other words, they have developed a degree of “artificial intelligence”.

The technology is exciting but also risky.

Warnings about the risks to humans include everything from privacy concerns to the collapse of human civilisation.

Policy-makers in the European Union, the United States and Australia are scrambling to address these issues and ensure AI is safe and used responsibly. But the focus of these policies is to protect humans.

Now, Australia has a chance to protect animals from AI.

Australia’s previous Animal Welfare Strategy expired in 2014. It’s now being revived, and aims to provide a national approach to animal welfare.

So far, documents released as part of the review suggest AI is not being considered under the strategy. That is a serious omission, for reasons we outline below.

Powerful and pervasive technology in use

Much AI use benefits animals, such as in veterinary medicine. For example, it may soon help your vet read X-rays of your animal companion.

AI is being developed to detect pain in cats and dogs. This might help if the technology is accurate, but could cause harm if it’s inaccurate by either over-reporting pain or failing to detect discomfort.

AI may also allow humans to decipher animal communication and better understand animals’ point of view, such as interpreting whale song.

It has also been used to discover which trees and artificial structures are best for birds.

But when it comes to animals, research suggests AI may also be used to harm them.

For example, it may be used by poachers and illegal wildlife traders to track and kill or capture endangered species. And AI-powered algorithms used by social media platforms can connect crime gangs to customers, perpetuating the illegal wildlife trade.

AI is known to produce racial, gender and other biases in relation to humans. It can also produce biased information and opinions about animals.

For example, AI chatbots may perpetuate negative attitudes about animals in their training data – perhaps suggesting their purpose is to be hunted or eaten.

There are plans to use AI to distinguish cats from native species and then kill the cats. Yet, AI image recognition tools have not been sufficiently trained to accurately identify many wild species. They are biased towards North American species, because that is where the bulk of the data and training comes from.

Algorithms using AI tend to promote more salacious content, so they are likely to also recommend animal cruelty videos on various platforms. For example, YouTube contains content involving horrific animal abuse.

Some AI technologies are used in harmful animal experiments. Elon Musk’s brain implant company Neuralink, for instance, was accused of rushing experiments that harmed and killed monkeys.

Researchers warn AI could estrange humans from animals and cause us to care less about them. Imagine AI farms almost entirely run by smart systems that “look after” the animals. This would reduce opportunities for humans to notice and respond to animal needs.

The unexpected impact of AI on animals with author Professor Peter Singer.

Existing regulatory frameworks are inadequate

Australia’s animal welfare laws are already flawed and fail to address existing harms. They allow some animals to be confined to very small spaces, such as chickens in battery cages or pigs in sow stalls and farrowing crates. Painful procedures (such as mulesing, tail docking and beak trimming) can be legally performed without pain relief.

Only widespread community outrage forces governments to end the most controversial practices, such as the export of live sheep by sea.

This has implications for the development and use of artificial intelligence. Reform is needed to ensure AI does not amplify these existing animal harms, or contribute to new ones.

Internationally, some governments are responding to the need for reform.

The United Kingdom’s online safety laws now require social media platforms to proactively monitor and remove illegal animal cruelty content from their platforms. In Brazil, Meta (the owner of Facebook and WhatsApp) was recently fined for not taking down posts that had been tagged as illegal wildlife trading.

The EU’s new AI Act also takes a small step towards recognising how the technology affects the environment we share with other animals.

Among other aims, the law encourages the AI industry to track and minimise the carbon and other environmental impact of AI systems. This would benefit animal as well as human health.

The current refresh of the Australian Animal Welfare Strategy, jointly led by federal, state and territory governments, gives us a chance to respond to the AI threat. It should be updated to consider how AI affects animal interests.

Lev Bromberg, PhD Candidate and Research Fellow, The University of Melbourne; Christine Parker, Professor of Law, The University of Melbourne, and Simon Coghlan, Senior Lecturer in Digital Ethics, Centre for AI and Digital Ethics, School of Computing and Information Systems, The University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S launches new open access publications library


Author Kathy Nickels
Date 27 September 2024

The ARC Centre of Excellence for Automated Decision-Making and Society is proud to announce the launch of its new publications library, a digital resource designed to make the Centre’s research in automated decision-making and AI more accessible.

The ADM+S Publications Library features a user-friendly interface with advanced search capabilities, making it easier for researchers, policymakers, and the public to access and interact with the Centre’s vast body of knowledge. 

Developed by Dr Jake Goldenfein, Chief Investigator at ADM+S at the University of Melbourne, and Dr Amanda Lawrence, an Affiliate of ADM+S from RMIT University, the publications collection reflects the Centre’s commitment to making its work more accessible and impactful. 

Dr Lawrence, an expert in library and information management, emphasises the collection’s role in fostering greater engagement with research.

“I’m thrilled to see the launch of the new ADM+S publications library, which not only demonstrates the amazing quality and diversity of research occurring at the Centre, but also our commitment to research access and impact. 

“Many of our publications are open access and we have worked hard to adapt the Centre’s Zotero library into an easy to use collection available directly from our website. Now anyone can browse and search the latest cutting-edge research on automated decision-making and society in one place,” said Dr Lawrence.

The Library provides a searchable collection of the Centre’s extensive work, including research articles, reports, government submissions, and books.

By enhancing transparency and accessibility, it aims to bridge the gap between academic research and practical application, underscoring the Centre’s dedication to advancing the field through innovation and collaboration.

The ARC Centre of Excellence for Automated Decision-Making and Society is committed to addressing the challenges and opportunities presented by automated decision-making technologies. The Publications Library stands as a testament to this commitment, offering a crucial resource for those seeking to understand and engage with this evolving field.

Explore the new ADM+S Publications Library

SEE ALSO

Digital Platform Regulators Forum paper on Multimodal Foundation Models: Implications for regulation and consumer safety


Author Kathy Nickels
Date 20 September 2024

The Australian Government’s Digital Platform Regulators Forum (DP-REG) has published their latest working paper on multimodal foundation models (MFMs) used in generative artificial intelligence (AI).

The paper “Examination of technology – Multimodal Foundation Models” examines MFMs. Unlike large language models (LLMs), which focus on text, MFMs are a type of generative AI capable of processing and producing various data types, including images, audio, and video. 

This extension from more commonly used LLMs to MFMs broadens the potential use cases for generative AI, allowing it to be used for a wider range of tasks. 

The paper discusses some of the implications of this advancing technology for consumer protection, competition, the media and information environment, privacy, and online safety within the digital platform context and assesses how these technologies affect the regulatory roles and responsibilities of each DP-REG member.

The working paper was prepared by representatives from the Australian Competition and Consumer Commission (ACCC), the Australian Communications and Media Authority (ACMA), eSafety and the Office of the Australian Information Commissioner (OAIC) in their capacity as members of the Digital Platform Regulators Forum.

Concerns around the use of MFMs raised by the DP-REG include:

  • scams and misleading conduct exacerbated by deepfake images and videos
  • the spread of misinformation and disinformation in Australia
  • the generation of potentially harmful and illegal content
  • the potential use of personal information in unexpected ways outside the control of the individual.

With the rapid expansion of generative AI into diverse fields, including image and video generation, the paper seeks to explore the broader implications of these technologies.  

This paper supports DP-REG’s 2024–26 strategic priorities, which include a focus on understanding, assessing and responding to the benefits, risks and harms of technology, including AI models. It aims to complement and inform broader government work on AI that is underway.

The Digital Platform Regulators Forum (DP-REG) is an important information-sharing and collaboration initiative between Australian independent regulators focused on fostering a safe, trusted, fair, innovative and competitive digital economy in Australia.

The paper acknowledges the contribution made by experts from the ARC Centre of Excellence for Automated Decision-Making and Society stating that, “their insights on generative AI have been instrumental in preparing this paper.”

The following ADM+S members are acknowledged for their contributions to the paper. 

SEE ALSO

‘Side job, self-employed, high-paid’: behind the AI slop flooding TikTok and Facebook

Shutterstock/AI Generated


Author Jiaru Tang and Patrik Wikström
Date 19 September 2024

TikTok, Facebook and other social media platforms are being flooded with uncanny and bizarre content generated with artificial intelligence (AI), from fake videos of the US government capturing vampires to images of shrimp Jesus.

Given its outlandish nature and tenuous relationship with reality, you might think this so-called “AI slop” would quickly disappear. However, it shows no sign of abating.

In fact, our research suggests this kind of low-quality AI-generated content is becoming a lucrative venture for the people who make it, the platforms that host it, and even a growing industry of middlemen teaching others how to get in on the AI gold rush.

When generative AI meets profiteers and platforms

The short explanation for the prevalence of these baffling videos and images is that savvy creators on social media platforms have worked out how to use generative AI tools to earn a quick buck.

But the full story is more complex. Platforms have created incentive programs for content that goes viral, and a whole ecosystem of content creators has arisen using generative AI to exploit these programs.

Much of the conversation around generative AI tools focuses on how they enable ordinary people to “create”. Many earlier digital technologies have also made it easier to participate in creative activities, such as how smartphones made photography ubiquitous.

But generative AI takes this a step further, as it can generate tailored images or videos from a simple text prompt. It makes content creation more accessible – and also opens the floodgates to mass production on social media.

To take just one example: if you search “pet dance motorcycle” on TikTok, you will find hundreds of AI-generated videos of animals doing the “motorbike dance”, all animated using the same AI template. Some accounts post dozens of videos like this every day.

Creators and platforms are making money

You may wonder why such repetitive, unimaginative content can go viral on TikTok. The answer lies in the platform’s own advice to aspiring creators: if you want your videos to be promoted, you should “continuously share fresh and diverse content” that “doesn’t require a big production budget”.

You may also wonder why some platforms don’t ban AI accounts for polluting their content streams. Platforms such as Spotify and YouTube, which police intellectual property rights more aggressively than TikTok, invest considerable resources to identify and remove AI-generated content.

TikTok’s community guidelines do ban “inaccurate, misleading, or false content that may cause significant harm”, but AI-generated content – at least for now – does not qualify as causing “significant harm”.

Instead, this kind of content has become important for platforms. Many of those “pet dance motorcycle” videos, for example, have been viewed tens of millions of times. As long as users are scrolling through videos, they are getting exposed to the ads that are the platforms’ primary source of income.

Inside the AI ‘gold rush’

There is also a growing industry of people teaching others how to make money using cheap AI content.

Take Xiaonan, a social media entrepreneur we interviewed who runs six different TikTok accounts, each with more than 100,000 followers. As he revealed in a live-streaming tutorial with more than 1,000 viewers, Xiaonan earned more than US$5,500 from TikTok in July alone.

Xiaonan also hosts an exclusive chatting group where, for a fee, he reveals his most effective AI prompts, video headlines and hashtags tailored for different platforms including YouTube and Instagram. Xiaonan also reveals tricks for standing out in the platforms’ recommendation game and avoiding platform regulations.

Xiaonan says he established his “AI side job” after being laid off by an internet company. He now works with two partners selling classes and tutorials on making AI-generated videos and other types of spam for profit.

Creators posting AI content may not be the kind of people we expect. As Xiaonan told us, many of the people taking his AI tutorial – entitled “Side job, self-employed, high-paid” – are housewives, unemployed people and college students.

“Some of us also do Uber driving or street vending,” one creator told us. AI-generated content has become the latest trend for earning side income.

The rise of AI has coincided with global unemployment trends and the growth of the gig economy in the post-pandemic era.

Making AI-generated content is more pleasant work than driving passengers or delivering food, according to a creator who is also a stay-at-home mother. It’s easy to learn, almost zero cost, and can be done any time at home with just a phone.

As Xiaonan says, his method is to use AI to “earn from productivity gap” – that is, by producing far more content than people who don’t use AI.

The global AI-generated content factory

Our observations indicate many of these creators are from non-Western countries, such as India, Vietnam and China.

As one Chinese social media influencer told us:

China’s short video market is nearing saturation, which means you need to seek data traffic [viewers] on overseas platforms.

For these entrepreneurs, AI is the secret sauce not only for creating viral content but also for circulating already-viral videos across different countries and platforms.

An effective strategy mentioned by one creator is a kind of platform arbitrage involving popular videos from Douyin, the counterpart of TikTok in mainland China.

A creator will take one of these videos, add AI-generated translation, and post the result on TikTok. Despite clunky AI dubbing and error-riddled subtitles, many of these videos garner hundreds of thousands or even millions of views.

Creators often mute the original video and add AI-generated narration, translating the content into various languages, including French, Spanish, Portuguese, Indonesian and Swedish. These creators often manage several or even dozens of accounts, targeting viewers in different countries in a strategy known as an “account matrix”.

This is only the beginning

We are only at the dawn of mainstream AI-generated content culture. We will soon face a situation in which content is effectively infinite, but human attention is still limited.

For platforms, the challenge will be balancing the engagement these AI-driven trends bring with the need to maintain trust and authenticity.

Social media platforms will soon respond. But before that, AI-generated content will continue to grow wildly – at least for a while.

Jiaru Tang, PhD student, Digital Media Research Centre, Queensland University of Technology and Patrik Wikström, Professor of Computational Communication, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

A new law aims to tackle online lies – but it ignores expert advice and doesn’t go nearly far enough

Blue scales with computer coding terms


Author Daniel Angus
Date 13 September 2024

The federal government this week introduced a new bill into parliament aimed at cracking down on the spread of misinformation and disinformation on the internet.

The government also this week announced plans to ban young people from social media platforms and improve privacy protections. These moves have been criticised by experts, who say bans are ineffective, and privacy reforms fall short of what is required in the digital age.

The government published a draft of the new misinformation and disinformation bill last year for public consultation. It received more than 24,000 responses (including from my colleagues and me).

The new version of the bill suggests the government listened to some expert recommendations from the consultation process, but ignored many others.

What’s in the bill?

The government has adopted an “information disorder” definition of misinformation and disinformation.

Misinformation is content that contains information that is reasonably verifiable as false, misleading or deceptive. It’s spread on a digital service and reasonably likely to cause or contribute to serious harm.

What makes disinformation different is the intent behind it. If there are reasonable grounds to suspect a person disseminating it intends to deceive, or if there is “inauthentic behaviour” such as the use of fake accounts, it may be disinformation.

Speaking to the ABC, Minister for Communications Michelle Rowland said the new bill:

goes to the systems and processes of the platforms and says they need to have methods in place to be able to identify and do something about [misinformation and disinformation].

The design of social media platforms means misinformation and disinformation can spread rapidly. The new bill, which is yet to be voted on, requires platforms to publish a report which assesses this inherent risk. It also requires them to publish a media literacy plan and their current policies about misinformation and disinformation.

The bill also provides stronger powers for the Australian Communications and Media Authority (ACMA). These powers would enable ACMA to make specific directives to platforms and impose penalties if they do not comply.

For example, ACMA could require platforms to implement media literacy tools and submit reports on their efforts to combat harmful content.

The new bill does not aim to regulate all misinformation and disinformation. Instead, its focus is on the kind of misinformation and disinformation which is “reasonably likely to cause or contribute to serious harm”.

The definition of serious harm includes:

  • harm to the operation or integrity of the electoral or referendum process
  • harm to public health
  • vilification of a group or individual based on factors such as race, religion, sex or disability
  • intentionally inflicted physical injury to an individual in Australia
  • imminent damage to critical infrastructure or disruption of emergency services
  • imminent harm to the Australian economy.

If a platform breaches the bill, it could face civil penalties of up to 5% of its annual global turnover. For a company such as Meta, which owns Facebook, this could easily run to billions of dollars.

What’s good about the bill?

It is good to see a focus on improving transparency and accountability for social media platforms. However, there is no explicit provision that data platforms share with ACMA be made available to researchers, academics or civil society.

This limits the potential for transparency and accountability.

One significant criticism of the draft legislation was that it had real potential to limit free speech. The bill remains cautious, with protections for political discourse and public interest communication. For example, there are protections for satire and humour, professional news content, and content for academic, artistic, scientific or religious purposes.

The reasonable application of these powers will also be reviewed regularly to assess the impact of the bill on freedom of expression.

Proposed limitations which would have meant the bill did not apply to electoral and referendum matters have also been removed.

This is a vitally important change. Misleading information played a significant role in the recent Voice referendum, and in other elections.

The bill also better addresses instances of coordinated activity under a definition of inauthentic behaviour. This begins to address circumstances where problematic activity is less about the truthfulness of the individual content and more about collective action to artificially amplify its reach.

What’s bad about the bill?

The bill maintains a distinction between misinformation, which is spread by accident, and disinformation, which is spread deliberately.

As my colleagues and I argued in our submission to the government’s draft legislation last year, this distinction isn’t helpful or necessary. That’s because intent is very hard to prove – especially as content gets reshared on digital platforms. Regardless of whether a piece of false, misleading or deceptive content is spread deliberately or not, the result is usually the same.

The bill also won’t cover mainstream media. This is a problem because some mainstream media outlets such as Sky News are prominent contributors to the spread of misinformation.

Notably this has included climate change denial, which is a widespread and pressing problem. The bill does not include climate misinformation in its scope. This greatly diminishes its relevance in addressing the harm done by misinformation.

This bill makes many of the same mistakes as the government’s other recent attempts to reduce online harms. It goes against expert advice and neglects important issues. As a result, it’s unlikely to achieve its goals.

Daniel Angus, Professor of Digital Communication, Director of QUT Digital Media Research Centre, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Tips for spotting a fake image: Insights from ABC News Story Lab’s guide

Image by Dr T.J. Thomson


Author Kathy Nickels
Date 16 September 2024

Struggling to determine if images online are fake or real? As technology behind image manipulation becomes increasingly sophisticated, distinguishing between authentic and fabricated images is more challenging than ever.

To help spot what’s real and what’s fake, the ABC News Story Lab team has created an interactive guide titled How to spot a fake. The guide features insights from Dr T.J. Thomson, a senior lecturer at RMIT and affiliate of the ARC Centre of Excellence for Automated Decision-Making and Society.

Dr Thomson demonstrates key methods for identifying image manipulation, including using the digital forensics tool WeVerify, running reverse image searches, examining contextual clues, and spotting logical inconsistencies.

Dr Thomson also warns that as AI tools evolve, detecting fake images may become even more difficult, with generative AI increasingly capable of creating convincing images without obvious flaws. 

Whether images have been doctored using technology invented last century or last week, the same rules apply.

In addition to visual cues in the image itself, it is also helpful to consider the broader context, such as who posted the image and whether other sources corroborate the event depicted.

Read the full article, How to spot a fake, for more detailed tips and insights.

Advance Australia uses old news to spread malinformation through advertising campaigns on Facebook

Author Kathy Nickels
Date 12 September 2024

For the past two months, conservative lobby group Advance Australia’s “Election News” Facebook page has used targeted advertising to publish hundreds of links to older news articles that cast the Greens party in a deeply negative light.

ARC Centre of Excellence for Automated Decision-Making and Society digital communications expert Professor Daniel Angus said the practice is a textbook example of a “malinformation campaign”, where true information is removed from its appropriate context and then spread with the intent to cause reputational harm or sow confusion.

Professor Angus said that the use of old news articles is a very common tactic in spreading malinformation.  

“Common tactics can be to share news about older crimes, past scandals, or other negative news, presenting them as current to inflame emotions, stoke fear, or manipulate public perception. 

“The strategy capitalises on how audiences may only read headlines and lead paragraphs but fail to check publication dates and appreciate that these issues may have long been resolved.”

“The decontextualisation of the original information makes it highly insidious, as it uses facts in ways that seek to erode trust and foster harm in the target, and there are no straightforward antidotes,” he said.

Hidden advertising on digital platforms

The links from Advance Australia are being posted as ads on Facebook, meaning they only appear on the news feeds of target audiences and are largely hidden from public scrutiny. 

While Meta provides “transparency” libraries where the ads placed by advertisers on the platform can be viewed, the accessibility, searchability and durability of these archives vary, with extremely limited information available for advertising that falls outside the category of political advertising.

Improving observability of online advertising

Professor Daniel Angus has been working with fellow researchers at the ADM+S to tackle this issue through the Australian Ad Observatory project.

Through the project ADM+S researchers have developed a range of computational methods for improving the observability of platform-based advertising that extend existing ad data transparency initiatives. This enables forms of observation that could facilitate advertising accountability by industry, regulators, researchers, and civil society. 

The latest work of the ADM+S Australian Ad Observatory team was published this week in the field-leading Journal of Advertising, as part of a special issue that examines the latest advances in approaches to study computational advertising. 

“No single method alone solves the limitations of platform-provided transparency initiatives, so we have designed a suite of tools to address advertising observability across four key benchmarks.”

The four key benchmarks for methods of advertising observability:

  • Enabling the systematic observation of ads published;
  • Collecting advertiser information and meta-data; 
  • Assembling user demographic information associated with each ad; 
  • Observing patterns of ads as they unfold within social media feeds.

Professor Angus urged Meta to “strengthen its policies regarding the placement of ‘misleading and illegal advertising’”.

“These and countless other examples that we have discovered through our work on the Australian Ad Observatory reveal how Meta’s ad environment is regularly used to spread scam, misleading, and illegal advertising.

“If Meta are unwilling to clean up their advertising ecosystem, the government needs to step in with regulation and enforcement actions to force their hand,” he said.

Co-researcher Floyd King conducting survey with elder John Duggie in Tennant Creek

ADM+S research delivers value for remote First Nations community partners

Author Leah Hawkins and Assoc Prof Daniel Featherstone
Date 11 September 2024

The Mapping the Digital Gap research project has released its final community report from 2023, showcasing findings on communications access and digital inclusion for First Nations people in Tennant Creek in the Northern Territory.

The report is an update of the Tennant Creek 2022 Outcomes Report and is the 21st in a series of comprehensive outcomes and update reports tailored to the 11 remote First Nations communities and local partner organisations visited since 2022.

The reports are part of the project’s commitment to Indigenous data sovereignty, providing survey data and analysis of interviews and research findings back to the participating communities.

The reports identify barriers to digital inclusion in each site and outline suggestions of strategies to address these barriers, and support local advocacy, planning and partnerships with government and industry stakeholders.

Over three years, the research team has engaged deeply with the unique context of each community, developing mutually beneficial partnerships with local agencies and stakeholders to deliver reciprocal outcomes.

“The partnership with RMIT for [Mapping the Digital Gap research] has been awesome,” said Madeline Gallagher-Dahn, CEO of Kalumburu Aboriginal Corporation.

“It’s basically highlighted a lot of areas that [the] majority of us wouldn’t think of on a day-to-day basis.  It also highlights the remoteness of Kalumburu and what is needed for these parts of the world and what can be done, what needs to be improved, and what still is a possibility.”

The community reports propose Digital Inclusion Plans for each community, tracking agency involvement and progress on place-based solutions to digital inclusion challenges.

Kerry Legge, CEO of Laynhapuy Homelands Aboriginal Corporation in East Arnhem Land, said “I found it really valuable because it feels like [the research team is] not just focused on this project, [they’re] actually helping Laynhapuy”.

“We really need technology here to run the organisation and then we’ve also got members who want it for their own purposes as well… that’s the story we’ve got to tell Australia [about the challenges in] closing the gap … to recognise that it is a gap.”

In a world that is experiencing accelerating digital transformation in many aspects of economic and social life, reliable and affordable communication technologies are necessary for accessing many services and managing daily life.

Through planning, advocacy, and local knowledge, First Nations community organisations are critical to the successful delivery of place-based digital inclusion initiatives in remote communities.

“Communication is really the vital key in any community,” said Darrin Atkinson, REDI.E team leader in Wilcannia, “especially Indigenous remote communities where we’ve got to bring ourselves up to date now and take advantage of the new technology that’s available …  we’ve got to grab it with both hands.”

Mapping the Digital Gap is a supplementary project of the Australian Digital Inclusion Index (ADII), established through the ADM+S Centre in partnership with Telstra since 2021. It seeks to address a lack of quantitative and qualitative data on digital inclusion in remote First Nations communities in Australia.

In 2023, the ADII and the Mapping the Digital Gap project found a significant digital gap of 7.5 between First Nations people and other Australians, a gap that widens with remoteness to 24.4 for remote First Nations people and 25.4 for those in very remote areas.

All community reports published so far are accessible via the Mapping the Digital Gap webpage. Mapping the Digital Gap will be releasing another 11 community update and outcomes reports from 2024 research trips, as well as a 2024 annual outcomes report later this year.

Mapping the Digital Gap has been renewed for another four years and will be tracking digital inclusion outcomes in an additional eight remote First Nations communities from 2025-2028.

Tegan Cohen

Dr Tegan Cohen wins prestigious Outstanding Doctoral Thesis Award at QUT

Author ADM+S Centre
Date 6 September 2024

Dr Tegan Cohen, postdoctoral research fellow from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) has been awarded the prestigious Outstanding Doctoral Thesis Award (ODTA) at QUT.

The awards are given annually to the top five per cent of HDR doctoral graduates in the year after they graduate, with nominations from examiners reviewed by faculty committees and ultimately decided by the Research Degrees Committee.

Dr Tegan Cohen, a Wiradjuri woman, was awarded for her thesis “The datafied polity: Voter privacy in the age of data-driven political campaigning” which spans the regulation of digital platforms and artificial intelligence, privacy law and theory, and the laws of democracy and electoral politics. It was completed with the Faculty of Business and Law.

Since completing her thesis in 2023, Dr Cohen has been a member of the ADM+S, researching the governance and regulation of AI.

Her focus is on legal responses to automation in the private rental sector. Her research also explores the ‘democratisation’ of AI governance, particularly the development of legal rights and mechanisms for effective public participation and contestability.

Australian Government proposes ‘Mandatory Guardrails’ for High-Risk AI

Author ADM+S Centre
Date 6 September 2024

The Australian Government has proposed ‘mandatory guardrails’ for high-risk AI, including human oversight and mechanisms to challenge AI decisions, in a discussion paper developed by the Department of Industry, Science and Resources with assistance from a leading AI Expert Group.

Professor Kimberlee Weatherall from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at the University of Sydney is a member of the AI Expert Group.

“I’m really pleased to see this paper out now for discussion, with the government recognising that to make the most of the potential benefits of AI, we also need to ensure it is used responsibly”, said Professor Kimberlee Weatherall. 

The Proposals paper for introducing mandatory guardrails for AI in high-risk settings, released by Minister for Industry and Science Ed Husic, outlines the options the Australian Government is considering to mandate guardrails for those developing and deploying AI in high-risk settings in Australia.

“Australians know AI can do great things, but people want to know there are protections in place if things go off the rails,” said the Minister for Industry and Science Ed Husic.

The proposal provides a risk-based approach with emphasis on measures including testing, transparency and accountability, consistent with developments in other jurisdictions. 

It includes the following key elements:

  • A proposed definition of high-risk AI.
  • Ten proposed mandatory guardrails.
  • Three regulatory options to mandate these guardrails.

The three regulatory approaches could be: 

  • Adopting the guardrails within existing regulatory frameworks as needed;
  • Introducing new framework legislation to adapt existing regulatory frameworks across the economy;
  • Introducing a new cross-economy AI-specific law (for example, an Australian AI Act).

The AI Expert Group, which led the development of this proposal, was appointed earlier this year following an undertaking in the Government’s interim response to the Safe and Responsible AI in Australia report.

The group includes members from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) Professor Kimberlee Weatherall, Professor Nicolas Suzor and Professor Jeannie Paterson as well as Bill Simpson-Young from ADM+S collaborating organisation Gradient. 

Consultation on the Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings is open for four weeks, closing 5pm AEST on Friday 4 October 2024. The Government has also released a Voluntary AI Safety Standard designed to help businesses ensure safe and responsible AI.

For more information on the Proposals Paper, including how to have your say, go to consult.industry.gov.au/ai-regulatory-guardrails

We asked Melburnians about shared e-scooters. Their responses point to alternatives to the city council’s ban

Author Hiruni Nuwanthika Kegalle, Danula Hettiachchi, Flora Salim & Mark Sanderson
Date 4 September 2024

Melbourne City Council recently decided to ban shared e‑scooters. The council cited concerns for the safety of e‑scooter riders, other road users and pedestrians. The city still permits private e‑scooters.

However, another major concern for many has been where riders park the scooters, often blocking the footpath. Our recent analytical study in Melbourne showed a large proportion of e‑scooter trips start and end on footpaths, which pedestrians also use.

Research on e‑scooter trip data in two US cities found a strong link between e‑scooter use and busier urban areas, particularly in commercial districts.

In our ongoing research, we have interviewed e‑scooter riders, pedestrians, cyclists, the service provider Lime and local council members from four councils that are part of the e‑scooter trial with commercial operators. (These interviews were conducted before the ban was announced.) We wanted to gather their opinions and experiences – including about where e‑scooters should be parked.

This article explores these responses. Based on what the study participants told us, we recommend designated parking points as a condition of permitting shared e‑scooters to operate. The allocation of parking zones should be dynamic, so locations can change as local conditions and needs change.

Pedestrians want clear footpaths

Keeping footpaths clear is the priority for pedestrians.

When e‑scooters are abandoned on footpaths, they become obstacles that make it difficult for people to move freely. The problem particularly affects children, the elderly, the vision-impaired and those who walk while looking at their phones.

Perhaps surprisingly, pedestrians also recognised that if parking zones for e‑scooters were too far away, it could make the scooters less practical for riders and harder to find.

Riders value ease and convenience

Riders prefer to park e‑scooters close to their destinations.

Our analysis of trip data indicates e‑scooters are often used for morning commutes. A higher percentage of trips start in residential areas and end in office zones during peak commuting times. This trend also suggests riders are using e‑scooters as a first-mile/last-mile solution at either end of their commute.

Further analysis of e‑scooter parking patterns across different path types —footpaths, cycle lanes and shared paths — reveals footpaths are the most heavily used for parking.

E‑scooter companies driven by data

E‑scooter companies want to make it convenient and easy for potential riders to find and use their e‑scooters. That makes sense, as more uses mean more profit.

Guided by data analysis of past patterns of use, the companies deploy their e‑scooters in areas of high demand to ensure they’re available where trips most commonly start. They also work with local event organisers, such as those hosting sporting matches, to position e‑scooters at key locations when needed and ensure they are safely parked.

The operators accept that rider behaviour presents a major challenge. They run programs to educate riders on local rules and encourage them to obey these rules.

They have also proposed the use of designated parking areas. This would mean riders are allowed to park only at certain locations. Preferably, these would be places with enough space such as wide street corners, near public transport stops and close to existing bike racks.

According to service providers, a trial of designated parking zones over eight streets in Melbourne was successful.

A Lime employee told us:

The trial, we’ve had 98% compliance and 78% of people on their first try trying to end the ride on the pin, which is huge compared to the rest of the world. So, Melbourne riders are definitely very compliant and they’re willing to do that.

E-scooters lined up on the edge of a wide footpath in Melbourne
Operators say trials of designated e‑scooter parking areas in Melbourne were successful.
ben bryant/Shutterstock

Local councils concerned about parking

Local council employees, responding to pedestrian complaints, suggest using underutilised urban spaces for e‑scooter parking. They propose these areas should be easy to access but carefully positioned to avoid causing new congestion issues.

A City of Port Phillip employee told us:

Parking is something that we will need to consider – how we allocate space for proper parking for these devices going forward. And that will help resolve a lot of issues for pedestrians using that footpaths space, [including] persons with a disability that may be finding some difficulties. We may have geofenced an area and said that this is a no‑parking or this is a no‑riding zone. So we try to adapt and learn from our community as well as just from our own instinct.

They added:

We are all learning; this is new. It’s all about finding the right balance.

Cyclists see the parallels

Cyclists suggested setting up designated parking zones similar to bike racks.

Cities overseas, such as Berlin (in its Mitte district) in Germany, Vancouver in Canada and San Francisco in the United States, have introduced designated parking points for e‑scooters. These cities insist on a docking system as a condition of permitting shared e‑scooters.

E-scooters at a designated parking point in San Francisco
Some cities overseas, including San Francisco, make designated parking points and docking systems a condition of operating shared e‑scooters.
Daniel L. Locke/Shutterstock

Finding a balance

It is important to strike a balance between having parking zones close to popular destinations and keeping footpaths and public spaces clear.

To achieve this, we need to look beyond just patterns of e‑scooter use. There’s a need to investigate factors like how people and vehicles move, nearby attractions and public transport links. By considering all these elements, we can choose parking spots that are both convenient and safe for everyone.

We also recommend allocated parking zones be changeable in response to factors like the time of day, weekdays, special events and seasonal changes. A dynamic system can better respond to riders’ varying needs, providers’ operational requirements and pedestrians’ safety concerns.

The mobile app could then guide riders to these designated parking zones. This will ensure e‑scooter parking remains both convenient and safe for everyone.

Hiruni Nuwanthika Kegalle, PhD Candidate in Computer Science, RMIT University; Danula Hettiachchi, Lecturer, School of Computing Technologies, RMIT University; Flora Salim, Professor, School of Computer Science and Engineering, inaugural Cisco Chair of Digital Transport & AI, UNSW Sydney, and Mark Sanderson, Dean of Research and Professor of Information Retrieval, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Dr Walkowiak with colleagues Dr Kobi Leins, Dr Natalie Sheard and Dr Mehdi Rajaeian at the 2 September public hearing

ADM+S researcher presents evidence for the inquiry into the Digital Transformation of Workplaces

Author ADM+S Centre
Date 4 September 2024

ADM+S Affiliate Dr Emmanuelle Walkowiak recently presented evidence to the House Standing Committee on Employment, Education and Training for the Inquiry into the Digital Transformation of Workplaces.

The public hearing took place on 2 September.

Dr Walkowiak was invited to sit on the expert academic panel alongside colleagues Dr Kobi Leins, Dr Natalie Sheard and Dr Mehdi Rajaeian, to discuss the opportunities and risks of implementing AI in the workplace.

At the hearing, Dr Walkowiak shared key points from her 19 June submission to the Inquiry. She has provided the following statement (via RMIT University). 

“Generative AI has properties that are very different from other technologies and will transform our labour market in new ways.

The time to act to ensure that GenAI works for workers is now. My previous research has demonstrated that once technological and organisational choices are implemented, firms do not reverse their choices. Once GenAI is adopted and AI risk mitigation is decided, these choices will not be reversed.

There are two important aspects that I want to flag which differentiate GenAI from other technologies.

Firstly, during previous waves of technology adoption, we observed a polarisation of income and opportunities in the labour market. What is different this time is that my research shows GenAI can enhance the productivity of less skilled and less experienced workers, positively impacting all workers. Such immediate productivity gains for less experienced or less skilled workers were not observed with earlier technologies. This is a unique opportunity presented by AI; appropriate policies need to be designed to unlock it.

Secondly, my recent research into AI risks shows that the transformation of jobs is driven by the inseparability between productivity gains and new AI risks. I analysed the exposure of the Australian Labour Market to eight AI risks (such as privacy or cybersecurity). These AI risks are new types of occupational risks. Mapping of AI risks exposure can help prioritise policy intervention to mitigate these risks.

What does it mean for the labour market? GenAI is a new type of economic agent, involving new types of productivity gains, new forms of learning and new types of risks. The major impact of technological change is a creative destruction of jobs and skills. In the US, research shows that 60% of job titles that existed in 2018 did not exist in the 1940s. The creation of new jobs reflects not just automation, but the enabling of new capabilities and services that were not previously feasible without these technologies. Policies must be designed and implemented to shape the adoption of GenAI so that it works for all workers and favours the creation of these new jobs. Upskilling is central to preparing the workforce for the next generation of jobs.

In conclusion, GenAI is about to bring transformational changes in the workplace. These changes can be shaped by adequate policy-making to ensure adoption of AI works for all workers, and there is an opportunity to make AI work for less skilled workers. To ensure that we maximise the benefits of AI adoption, we need a better understanding of emerging risks for workers, which need to be managed with adequate regulations and policies in order to protect workers.”

Dr Walkowiak is a Vice-Chancellor’s Senior Research Fellow in Economics in the Department of Economics, Finance and Marketing, and a research affiliate of the Blockchain Innovation Hub.

Her research primarily focuses on technology driven inclusion at work and the changing nature of work in a digital economy.

Learn more.

Participants and mentor at Rural Women Online in Shepparton

ADM+S partners with Victorian Women’s Trust for Rural Women Online

Author Natalie Campbell
Date 30 August 2024

In 2024, the ADM+S Australian Digital Inclusion Index team is partnering with the Victorian Women’s Trust for Rural Women Online, a series of free, public events designed in consultation with community representatives to develop digital skills and confidence for women living in regional Victoria.

The program, taking place in Shepparton and then Yackandandah, features hands-on workshops, drop-in digital support services and presentations from local organisations to develop digital literacy skills. Sessions are run by local facilitators to encourage community learning and ongoing support.

The ADM+S Australian Digital Inclusion Index (ADII) research team is collaborating with the Victorian Women’s Trust to study the impact of the program and its ability to help close the digital inclusion gap in regional areas, through surveys and interviews with participants.

ADM+S research fellow and ADII team member Dr Kieran Hegarty explains, “We’re speaking to participants about how they found the programs, how it impacted their confidence and motivation, and their level of skills, and how these kind of place based programs can be developed and strengthened and rolled out in other communities across Australia.”

Citizens’ feedback is central to the methodology of the ADII: it is crucial for determining the impact of initiatives aimed at improving digital inclusion and for identifying areas that need improvement.

“As a researcher, these kind of programs and getting involved are really critical because it helps translate the research we do around digital inclusion into tangible actions that can benefit communities,” he said.

As part of the Shepparton program, on 8 August ADM+S director Prof Julian Thomas presented a keynote titled ‘Challenges and Opportunities of the Digital Era’, which focussed on digital inclusion with reference to the ADM+S Mapping the Digital Gap project, and the ADII.

Considering the crucial role digital technologies play in everyday life, Prof Thomas’ talk framed digital inclusion as a human right in the information age, highlighting both the opportunities this presents, as well as the risks posed by barriers to digital society, especially for women in regional and rural areas.

Chair of the Victorian Women’s Trust Alana Johnson said, “Nothing’s going to happen to reduce that digital divide unless we take action. We can’t sit back and expect the NBN, or the government, or the local council or whoever, to make it all right for us.

“We have to do that for community, with community and by community.”

Established in 1985, the Victorian Women’s Trust (VWT) is a proudly independent feminist organisation which supports women, girls and gender diverse people through social change projects and campaigns, thought-provoking events, mentorship opportunities, and grants for vital grassroots projects.

Learn more about the initiative in this video.

Watch Prof Thomas’ keynote.

DECRA funding awarded to examine Australia’s hydrogen hub model and its impact on regional communities

Author Kathy Nickels
Date 26 August 2024

Dr Kari Dahlgren, an Associate Investigator at the ADM+S at Monash University, will critically examine the hydrogen hub model and its impact on regional communities with funding received through the Australian Research Council’s (ARC) Discovery Early Career Researcher Award (DECRA) scheme.

The DECRA scheme supports innovative and high-impact research and is awarded to Australia’s leading early-career researchers with demonstrated capacity for high-quality research and emerging capability for leadership and supervision.

The Hydrogen Hub project aims to assist Australia’s developing hydrogen industry to deliver its potential decarbonisation, economic and social benefits by critically examining the hydrogen hub model and its impact on regional communities.

“The hydrogen hub model is seen as a key way to quickly scale hydrogen’s development, but this model also promises significant concentration of benefits and impacts within regional communities,” said Dr Dahlgren. 

This research will generate new knowledge by being the first ethnographic study of Australia’s emerging hydrogen industry. 

Key outcomes of this project include enhanced understanding of the consequences of the hydrogen hub model and its impacts for regional communities, theoretical development in the social sciences of industrial decarbonisation, a documentary film for research dissemination, and policy recommendations for hydrogen development planning that take into account community concerns and desires.

“Too often community consultation happens too late in the development process to significantly influence it. This research aims to generate diverse possibilities early on in hydrogen’s development, so that it can better align with communities’ visions for the future of their regions,” said Dr Dahlgren.

Kaixin Ji visits Northeastern University, Boston

ADM+S Student establishes future collaborations with North American colleagues

Author Natalie Campbell
Date 26 August 2024

ADM+S PhD student Kaixin Ji has returned from a research visit to North America, collaborating with peers and industry experts in Maryland, Washington DC, Boston and New York.

Kaixin travelled to the SIGIR conference in Washington DC to present her paper ‘Characterizing Information Seeking Processes with Multiple Physiological Signals’, which aims to characterise user emotions and cognitive changes, as measured by physiological signals, during the search process.

This is the first study to explore user behaviour during search through nuanced quantitative analysis of physiological signals.

After SIGIR, Kaixin was invited to visit the University of Maryland, where she reconnected with Prof Doug Oard, who visited ADM+S at RMIT in early 2024, and had the opportunity to present her research to the Maryland IR network.

“Extending on my SIGIR paper, the presentations at Maryland focused on my overall thesis progress, measuring confirmation bias in information seeking with multi-modal physiological sensors,” said Kaixin.

“Because the audiences were not the communities I usually present to, I received many good questions and discussions – especially related to implications – that I haven’t thought of before and was really appreciated.”

Through the networks of colleagues at ADM+S, Kaixin also had the opportunity to visit the Ubiquitous Computing for Health and Well-being (UbiWell) Lab at Northeastern University in Boston, where she was hosted by Dr Varun Mishra, and lucky enough to attend Prof Gregory Abowd’s SIGCHI research lifetime award talk.

This visit prompted discussion of future collaborations, integrating Kaixin’s expertise in information retrieval into an ‘LLM for sensemaking in healthcare’ project with Assoc Prof Varun Mishra and Akshat Choube.

While in Boston, Kaixin also visited Whoop Inc., a company which specialises in wearable sensors for tracking health, and another connection she made through SIGIR.

“I thought there was no intersection between Information Retrieval and Ubiquitous Computing (for healthcare) communities, and was surprised there is, because of LLM.

“I visited their office in Boston, and we discussed the potential of having them as a collaborator on the UbiWell lab project,” she said.

From a connection made at UbiWell, Kaixin was then invited to visit ADM+S partner organisation Cornell Tech in New York City to further discuss synergies between her work and projects underway at Cornell. These conversations led to a new collaboration, ‘Cognitive Bias in Health Insurance LLM Assistant’, with PhD student Dan Adler.

“Apart from making new friends which is always a highlight, being exposed to different communities and different working environments from industry to research, computer science and schools – I gained career and life advice from researchers of different generations.

“More importantly, I learned how Information Retrieval is increasingly integrating with healthcare, and how my research aligns with this trend. These discussions help me rethink the implications of my research and what I want to do in the future.”

This research visit was supported by ADM+S and Google Conference Scholarships.

SEE ALSO

Investigating artificial intelligence risks for the Australian workforce

Investigating artificial intelligence risks for the Australian workforce

Author Kathy Nickels
Date 26 August 2024

Dr Robbie Fordyce, an Affiliate of the ARC Centre of Excellence for Automated Decision-Making and Society at Monash University, has been awarded funding through the Australian Research Council’s (ARC) Discovery Early Career Researcher Award (DECRA) scheme to investigate the risks artificial intelligence (AI) poses to the Australian workforce.

Titled Investigating artificial intelligence risks for the Australian workforce, this research will examine the way office software is increasingly used to gather data from Australian workers to train the artificial intelligence that may replace them. Through surveys and interviews of digital workers and businesses, the project expects to produce new knowledge on the consequences of artificial intelligence for workers and businesses.

Expected outcomes include a report identifying the risks to workers’ jobs in sectors most dependent on office software, and recommendations for potential retraining needs for affected workers. Benefits include a better understanding of potential social and economic consequences of artificial intelligence driven job losses.

“The ARC Discovery Program has an impressive track record in generating new knowledge that addresses a significant problem or gap in knowledge, and it offers exciting opportunities for Australia’s promising early career researchers to develop in supportive environments,” ARC Acting Chief Executive Officer, Dr Richard Johnson said. 

The DECRA scheme supports innovative and high-impact research and is awarded to outstanding early-career researchers with demonstrated capacity for high-quality research and emerging capability for leadership and supervision.

SEE ALSO

Human Touch in a Digital World: Public film screening event

Human Touch in a Digital World: Public film screening event

Author Kathy Nickels
Date 26 August 2024

The intersection of technology and everyday life takes centre stage at an upcoming event hosted by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) that promises to captivate and inspire. 

“Human Touch in a Digital World” presents four compelling short films that will be screened at the Science Theatre at UNSW on Wednesday 16 October. 

This event offers a unique opportunity for the public to engage with thought-provoking documentary films that explore how people who do not usually have a voice in technology design and roll-out live with automated technologies. How do diverse people and communities seek to ensure that digital and automation technologies are shaped to support their real-life needs?

Professor Sarah Pink, an ADM+S Chief Investigator at Monash University, has led the curation of the event. Professor Pink is a futures anthropologist and documentary filmmaker who investigates ethical, sustainable and inclusive futures, with attention to people, emerging technologies and the environment.

“There is clear evidence that dominant visions which treat automated technologies and AI as solutions to individual and societal problems are often built on unrealistic hype and can cause more harm than benefit. 

“These outstanding films tell more realistic stories, showing us where the gaps are and how everyday people are seeking to turn around our technology futures towards inclusivity, respect and support,” said Professor Pink.

This free, public event will feature a curated selection of four compelling short films, each providing a distinct perspective on the role of technology in life. Following each screening, attendees will have the chance to participate in a Q&A session with the filmmakers, delving deeper into the stories and themes presented.

The evening’s lineup includes:

Superbots (8 min)
This inspiring film showcases students from Brentwood Secondary College participating in the ‘Superbots’ program, where they design and test their own voicebot personalities. Co-designed by Monash Tech School and Monash University’s Faculty of IT, the program challenges students to consider ethics and gender stereotypes in tech. The film is part of the ADM+S AI Rewired project, highlighting community-driven uses of AI for social justice.

Signal (15 min)
A poignant look at the challenges faced by First Nations communities in remote Australia due to unreliable connectivity. The film follows Lala Gutchen, a NAIDOC Award-winning linguist and educator, as she navigates the intersection of cultural preservation and modern communication hurdles on Erub Island in the Torres Strait.

Non-Human Supports Used by Autistic People for Connection, Health, and Wellbeing (10 min)
This film sheds light on how autistic individuals use various digital and non-digital supports to enhance their wellbeing. From high-tech gadgets to beloved pets, the film features personal stories from Meg, Yssy, and Sophie about their creative and comforting strategies for connection and care.

I Am Not a Number (20 min)
Explore the complexities of interacting with digital government systems through the experiences of seven individuals affected by algorithm-driven support planning. This film, created by Jeni Lee in collaboration with Georgia van Toorn, reveals the often overlooked challenges and impacts of digital governance on real lives. The film is part of the ADM+S AI Rewired project, highlighting community-driven uses of AI for social justice.

The event will take place at the Science Theatre at UNSW Kensington Campus. Refreshments will be available from 6pm and screenings begin at 6:30pm. 

Join us for an evening of thought-provoking documentary films and meaningful dialogue as we explore the profound ways technology intersects with our everyday lives.

Book your seat here

SEE ALSO

Prof Flora Salim joins Day of AI’s inaugural Dolphin Tank competition to judge AI Proposals for Climate Solutions

Image credit: Day of AI

Prof Flora Salim joins Day of AI’s inaugural Dolphin Tank competition to judge AI Proposals for Climate Solutions

Author Natalie Campbell
Date 23 August 2024

ADM+S CI Prof Flora Salim recently joined Day of AI Australia to judge students’ ideas for AI interventions in climate change, in the inaugural Dolphin Tank competition.

As part of National Science Week, the Dolphin Tank took place on 14 August 2024.

The Dolphin Tank competition invited students to get creative and propose ideas for how AI could help tackle climate change and the UN sustainability goals.

Earlier in the year, students from across Australia were invited to submit ideas, with six teams selected to collaborate with experts from Rokt and UNSW’s AI Institute and School of Computer Science and Engineering to refine their ideas and pitch a prototype to an expert panel as part of the Day of AI program.

ADM+S CI Prof Flora Salim joined experts Claire Southey from Rokt and Scientia Professor Toby Walsh from UNSW to judge the final pitches.

“The teams who made it through to the final round presented very clear and passionate pitches on their innovative ways to tackle climate change using AI,” said Prof Salim.

“We received more than 190 entries from students from all across Australia. From rubbish sorting systems, smart greenhouses to ideas for improving commercial fish stocks to predicting drought – innovative, creative and practical solutions where students had really thought about how AI could be used for positive impact,” said Day of AI Australia Program Director Natasha Banks.

“The six finalists who stepped into the Dolphin Tank were all exceptional.”

The innovative ideas of Fountain College students (junior competition) and North Sydney Boys School (senior competition) earned first prize.

The student entries were complemented by lightning talks delivered by the experts, inspiring students with stories of creative ways people are using AI in everyday life, work, and mobility.

“Having experts involved as both expert mentors for the six finalist teams, as judges, and delivering lightning talks really makes it evident for students that the opportunities for interesting, challenging and meaningful work is something they can be part of,” said Ms Banks.

Open to students from years 5–10, the Day of AI initiative encourages students to develop critical skills and knowledge for their increasingly digital future. The program covers topics including ‘What is AI?’, ‘How do machines learn?’, AI in careers and industries, and the ethics and responsible use of AI. So far, more than 60,000 primary and high school students have participated in the Day of AI Australia program.

Day of AI’s Dolphin Tank 2024 was supported by Rokt, UNSW Sydney and CSIRO.

SEE ALSO

What is ‘model collapse’? An expert explains the rumours about an impending AI doom

Image credit: Virinaflora/Shutterstock

What is ‘model collapse’? An expert explains the rumours about an impending AI doom

Author Aaron Snoswell
Date 19 August 2024

Artificial intelligence (AI) prophets and newsmongers are forecasting the end of the generative AI hype, with talk of an impending catastrophic “model collapse”.

But how realistic are these predictions? And what is model collapse anyway?

Discussed in 2023, but popularised more recently, “model collapse” refers to a hypothetical scenario where future AI systems get progressively dumber due to the increase of AI-generated data on the internet.

The need for data

Modern AI systems are built using machine learning. Programmers set up the underlying mathematical structure, but the actual “intelligence” comes from training the system to mimic patterns in data.

But not just any data. The current crop of generative AI systems needs high quality data, and lots of it.

To source this data, big tech companies such as OpenAI, Google, Meta and Nvidia continually scour the internet, scooping up terabytes of content to feed the machines. But since the advent of widely available and useful generative AI systems in 2022, people are increasingly uploading and sharing content that is made, in part or whole, by AI.

In 2023, researchers started wondering if they could get away with only relying on AI-created data for training, instead of human-generated data.

There are huge incentives to make this work. In addition to proliferating on the internet, AI-made content is much cheaper than human data to source. It also isn’t ethically and legally questionable to collect en masse.

However, researchers found that without high-quality human data, AI systems trained on AI-made data get dumber and dumber as each model learns from the previous one. It’s like a digital version of the problem of inbreeding.

This “regurgitive training” seems to lead to a reduction in the quality and diversity of model behaviour. Quality here roughly means some combination of being helpful, harmless and honest. Diversity refers to the variation in responses, and which people’s cultural and social perspectives are represented in the AI outputs.

In short: by using AI systems so much, we could be polluting the very data source we need to make them useful in the first place.
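The degradation loop described here can be sketched with a toy simulation (purely illustrative, and not drawn from the research discussed in this article): the “model” is just an empirical token distribution, retrained each generation only on its own synthetic output. Tokens lost to sampling noise can never reappear, so diversity ratchets downward generation by generation.

```python
import random
from collections import Counter

def train(corpus):
    """'Train' a toy generative model: the empirical token distribution."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def generate(model, n, rng):
    """Sample n tokens from the model. It can only emit tokens it has seen."""
    tokens = list(model)
    weights = [model[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=n)

rng = random.Random(0)

# Generation 0: diverse "human" data over a 20-token vocabulary.
corpus = [f"tok{i}" for i in range(20)] * 5
rng.shuffle(corpus)

vocab_sizes = []
for _ in range(50):
    model = train(corpus)
    vocab_sizes.append(len(model))
    # The next generation trains only on this generation's synthetic output.
    corpus = generate(model, 60, rng)

# Rare tokens dropped by sampling can never return, so diversity
# (vocabulary size) can only shrink or stay flat with each generation.
print(vocab_sizes[0], "->", vocab_sizes[-1])
```

Real generative models are vastly more complex, but the same one-way loss of rare behaviours is the mechanism behind the “regurgitive training” findings.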

Avoiding collapse

Can’t big tech just filter out AI-generated content? Not really. Tech companies already spend a lot of time and money cleaning and filtering the data they scrape, with one industry insider recently sharing they sometimes discard as much as 90% of the data they initially collect for training models.

These efforts might get more demanding as the need to specifically remove AI-generated content increases. But more importantly, in the long term it will actually get harder and harder to distinguish AI content. This will make the filtering and removal of synthetic data a game of diminishing (financial) returns.

Ultimately, the research so far shows we just can’t completely do away with human data. After all, it’s where the “I” in AI is coming from.

Are we headed for a catastrophe?

There are hints developers are already having to work harder to source high-quality data. For instance, the documentation accompanying the GPT-4 release credited an unprecedented number of staff involved in the data-related parts of the project.

We may also be running out of new human data. Some estimates say the pool of human-generated text data might be tapped out as soon as 2026.

It’s likely why OpenAI and others are racing to shore up exclusive partnerships with industry behemoths such as Shutterstock, Associated Press and NewsCorp. They own large proprietary collections of human data that aren’t readily available on the public internet.

However, the prospects of catastrophic model collapse might be overstated. Most research so far looks at cases where synthetic data replaces human data. In practice, human and AI data are likely to accumulate in parallel, which reduces the likelihood of collapse.

The most likely future scenario will also see an ecosystem of somewhat diverse generative AI platforms being used to create and publish content, rather than one monolithic model. This also increases robustness against collapse.

It’s a good reason for regulators to promote healthy competition by limiting monopolies in the AI sector, and to fund public interest technology development.

The real concerns

There are also more subtle risks from too much AI-made content.

A flood of synthetic content might not pose an existential threat to the progress of AI development, but it does threaten the digital public good of the (human) internet.

For instance, researchers found a 16% drop in activity on the coding website StackOverflow one year after the release of ChatGPT. This suggests AI assistance may already be reducing person-to-person interactions in some online communities.

Hyperproduction from AI-powered content farms is also making it harder to find content that isn’t clickbait stuffed with advertisements.

It’s becoming impossible to reliably distinguish between human-generated and AI-generated content. One method to remedy this would be watermarking or labelling AI-generated content, as I and many others have recently highlighted, and as reflected in recent Australian government interim legislation.

There’s another risk, too. As AI-generated content becomes systematically homogeneous, we risk losing socio-cultural diversity and some groups of people could even experience cultural erasure. We urgently need cross-disciplinary research on the social and cultural challenges posed by AI systems.

Human interactions and human data are important, and we should protect them. For our own sakes, and maybe also for the sake of the possible risk of a future model collapse.

Aaron J. Snoswell, Research Fellow in AI Accountability, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

A world-first law in Europe is targeting artificial intelligence. Other countries can learn from it

Image credit: VanderWolfImages/Shutterstock

A world-first law in Europe is targeting artificial intelligence. Other countries can learn from it

Author Rita Matulionyte
Date 14 August 2024

Around the world, governments are grappling with how best to manage the increasingly unruly beast that is artificial intelligence (AI).

This fast-growing technology promises to boost national economies and make completing menial tasks easier. But it also poses serious risks, such as AI-enabled crime and fraud, increased spread of misinformation and disinformation, increased public surveillance and further discrimination of already disadvantaged groups.

The European Union has taken a world-leading role in addressing these risks. In recent weeks, its Artificial Intelligence Act came into force.

This is the first law internationally designed to comprehensively manage AI risks – and Australia and other countries can learn much from it as they too try to ensure AI is safe and beneficial for everyone.

AI: a double edged sword

AI is already widespread in human society. It is the basis of the algorithms that recommend music, films and television shows on applications such as Spotify or Netflix. It is in cameras that identify people in airports and shopping malls. And it is increasingly used in hiring, education and healthcare services.

But AI is also being used for more troubling purposes. It can create deepfake images and videos, facilitate online scams, fuel massive surveillance and violate our privacy and human rights.

For example, in November 2021 the Australian Information and Privacy Commissioner, Angelene Falk, ruled that the facial recognition tool Clearview AI breached privacy laws by scraping people’s photographs from social media sites for training purposes. However, a Crikey investigation earlier this year found the company is still collecting photos of Australians for its AI database.

Cases such as this underscore the urgent need for better regulation of AI technologies. Indeed, AI developers have even called for laws to help manage AI risks.

Clearview AI breached privacy laws in Australia by scraping photographs from social media profiles.
Image credit: Ascannio/Shutterstock

The EU Artificial Intelligence Act

The European Union’s new AI law came into force on August 1.

Crucially, it sets requirements for different AI systems based on the level of risk they pose. The more risk an AI system poses to people’s health, safety or human rights, the stronger the requirements it has to meet.

The act contains a list of prohibited high-risk systems. This list includes AI systems that use subliminal techniques to manipulate individual decisions. It also includes unrestricted, real-time facial recognition systems used by law enforcement authorities, similar to those currently used in China.

Other AI systems, such as those used by government authorities or in education and healthcare, are also considered high risk. Although these aren’t prohibited, they must comply with many requirements.

For example, these systems must have their own risk management plan, be trained on quality data, meet accuracy, robustness and cybersecurity requirements and ensure a certain level of human oversight.

Lower risk AI systems, such as various chatbots, need to comply with only certain transparency requirements. For example, individuals must be told they are interacting with an AI bot and not an actual person. AI-generated images and text also need to contain an explanation they are generated by AI, and not by a human.

Designated EU and national authorities will monitor whether AI systems used in the EU market comply with these requirements and will issue fines for non-compliance.

Other countries are following suit

The EU is not alone in taking action to tame the AI revolution.

Earlier this year the Council of Europe, an international human rights organisation with 46 member states, adopted the first international treaty requiring AI to respect human rights, democracy and the rule of law.

Canada is also discussing an AI and data bill. Like the EU law, this would set rules for various AI systems, depending on their risks.

Instead of a single law, the US government recently proposed a number of different laws addressing different AI systems in various sectors.

Australia can learn – and lead

In Australia, people are deeply concerned about AI, and steps are being taken to put necessary guardrails on the new technology.

Last year, the federal government ran a public consultation on safe and responsible AI in Australia. It then established an AI expert group which is currently working on the first proposed legislation on AI.

The government also plans to reform laws to address AI challenges in healthcare, consumer protection and creative industries.

The risk-based approach to AI regulation, used by the EU and other countries, is a good start when thinking about how to regulate diverse AI technologies.

However, a single law on AI will never be able to address the complexities of the technology in specific industries. For example, AI use in healthcare will raise complex ethical and legal issues that will need to be addressed in specialised healthcare laws. A generic AI Act will not suffice.

Regulating diverse AI applications in various sectors is not an easy task, and there is still a long way to go before all countries have comprehensive and enforceable laws in place. Policymakers will have to join forces with industry and communities around Australia to ensure AI brings the promised benefits to Australian society – without the harms.

Rita Matulionyte, Associate Professor in Law, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Prof Jason Potts presents ‘The Origin and Nature of Digital Economies’

Prof Jason Potts presenting at the RMIT Distinguished Lecture series
Image credit: RMIT Professional Academy

Prof Jason Potts presents ‘The Origin and Nature of Digital Economies’

Author Natalie Campbell
Date 14 August 2024

On Monday 5 August, ADM+S CI Distinguished Prof Jason Potts delivered a lecture, ‘The Origin and Nature of Digital Economies’, as part of the RMIT Professional Academy’s Distinguished Lecture Series.

“We are today in the early phases of a profound transition to a digital economy,” said Prof Potts.

“My argument is that a digital economy does not mean computers everywhere, but is the transition to digital institutions.”

In his talk, Prof Potts offered provocations about a new type of economy – a digital economy, fundamentally different from an industrial economy in the way its institutions (digital money and assets, digital markets, contracts and platforms) can be composed to coordinate economic actions and compute value.

“The cheap new resource in a digital economy is not data per se, but rather the ability to spin-up a full stack economy from within civil society.

“This new institutional capability is the most disruptive factor of our time.”

Prof Potts is co-founder and Director of the RMIT Blockchain Innovation Hub. His research examines the institutional causes of technological change and innovation, with a current focus on crypto-economics and the economics of generative AI.

The RMIT Professional Academy was established in 2018 to bring together RMIT’s best minds in research, education, and engagement with community, business, government, and the public, to provide strategic advice, stimulate important discussions, and advocate for impactful value creation.

Watch the lecture on YouTube.

SEE ALSO

ADM+S Research Fellow secures one of three 2024 ACCAN grants

ACCAN 2024 grants announced

ADM+S Research Fellow secures one of three 2024 ACCAN grants

Author Natalie Campbell
Date 14 August 2024

Congratulations to ADM+S Research Fellow Dr Kieran Hegarty, co-investigator on the recently announced ACCAN-funded project, Social infrastructure for digital skills development.

Dr Hegarty’s project is one of only three successful grants from 69 applications in the 2024 funding round, and will be led by Dr Ellen van Holstein from RMIT University. Dr Nicky Dulfer from the University of Melbourne is also a co-investigator.

Social infrastructure for digital skills development builds on research funded by ACCAN in 2021, which analysed digital inequalities amongst public housing residents.

The 2021 research revealed the pivotal role neighbourhood centres play in digital skill acquisition and troubleshooting for people who face barriers to being digitally included.

Social infrastructure for digital skills development will develop insights into best practices for training, identify barriers to digital inclusion, and develop strategies to overcome these barriers.

In an 11 July media release, ACCAN CEO Carol Bennett said, “Grants projects inform ACCAN’s work and contribute to the broader evidence base for consumers, regulators and service providers in the telecommunications market.

“This year’s grantees will make a real difference to the experience of Australian consumers, and we look forward to working with the successful applicants as they undertake these exciting projects,” Ms Bennett concluded.

The research team is partnering with Neighbourhood Houses Victoria, Farnham Street Neighbourhood Learning Centre, and Carlton Neighbourhood Learning Centre to understand the role played by neighbourhood houses in supporting digital inclusion.

Dr Hegarty explains, “Along with public libraries, neighbourhood houses form part of the social infrastructure needed for connected communities, and play a key role in supporting an inclusive digital society.

“These organisations provide internet access and digital skill development for those at risk of being digitally excluded, including public housing residents and low-income households.”

The project will commence in 2025 and will culminate with the delivery of a digital pedagogies workshop to neighbourhood centre staff to communicate insights from the project to help strengthen the crucial role they play in supporting digital inclusion in their communities.

The Australian Communications Consumer Action Network (ACCAN) is Australia’s peak communications consumer advocacy group, working towards achieving trusted, inclusive and accessible communications services.

The ACCAN Independent Grants program funds projects to enable research on telecommunications issues, represent telecommunications consumers, or create educational tools which empower consumers to understand telecommunications products and services and make decisions in their own interests.

SEE ALSO

Dang Nguyen inaugural scholar of Yale Law School’s Majority World Initiative

Dr Dang Nguyễn, RMIT University

Dang Nguyen inaugural scholar of Yale Law School’s Majority World Initiative

Author Natalie Campbell
Date 13 August 2024

Dr Dang Nguyen from RMIT University is one of only eight scholars in the inaugural cohort of Yale Law School’s Majority World Initiative (MWI), which supports social media scholars from the Global South by amplifying their work and thinking, and drawing them into the global scholarly community.

The MWI was launched in November 2022 by the Information Society Project (ISP), an intellectual centre at Yale Law School supporting a community of interdisciplinary scholars who explore issues at the intersection of law, technology, and society.

“Being part of the inaugural cohort has allowed me to connect with, and learn from, so many scholars doing really important work in this area. This initiative is a vital step toward ensuring that Majority World perspectives are not just included but are central to global discussions on social media and its impacts,” said Dr Nguyen.

‘Majority World’ (a term coined by Bangladeshi photographer Shahidul Alam) refers to the regions traditionally known as the Global South or developing world, which encompass the majority of humankind.

“To understand or discuss the global networked public sphere, we need global thought leadership that is focused, context-driven and detail-oriented,” said Chinmayi Arun, Executive Director of the ISP.

“This is only possible if Majority World scholars join minority world scholars and work with them on an equal footing on thinking through the networked public sphere.”

The inaugural cohort includes lawyers, academics and policymakers collaborating to understand what resources will enable scholars in the region to focus on key issues.

The group recently published a series of essays on propaganda and social media governance in their respective Majority World areas of expertise.

Dr Nguyen’s essay, Automated Propaganda as Platform Imperative? The Case of Instant Articles, examines propaganda as a persisting media-dependent phenomenon and argues that it is time we moved beyond examining isolated instances of automated propaganda.

“Instead, we should orient our collective efforts around understanding how automated communication is being shaped by the broader political economy, technological development, and regulatory environment in which media industries and systems operate,” Dr Nguyen explains.

“As we build on the work we’ve started, I’m eager to see how our collective insights will push the boundaries of current scholarship. The opportunity to collaborate with such a diverse group of thinkers has the potential to drive meaningful change in how social media governance is understood and implemented globally.”

Dr Nguyen’s research investigates the social implications of technology by bringing together methods from a range of disciplines and by looking beyond Western contexts.

Her current research examines the digitality of knowledge-making and its implications on the information environment, the conditions of possibility of contemporary technological cultures, and automated informality and its moral economies.

SEE ALSO

ADM+S at RMIT welcomes DIGITAUS from Tsinghua University

Members from Tsinghua University’s DIGITAUS visit RMIT University

ADM+S at RMIT welcomes DIGITAUS from Tsinghua University

Author Natalie Campbell
Date 9 August 2024

On Friday 2 August, ADM+S members from the RMIT School of Computing Technologies (SCT) welcomed a delegation of Tsinghua University’s DIGITAUS Social Practice Team.

The 14 DIGITAUS representatives included esteemed faculty members, doctoral candidates and undergraduate students.

“DIGITAUS are seeking to understand the digital transformation practices of Australia’s key industries, with a specific interest in how university laboratories are contributing to the technological innovation in industry,” said ADM+S Associate Investigator Dr Damiano Spina, who coordinated the visit.

RMIT hosts showcased the RMIT AWS Supercomputing (RACE) Hub, and the Virtual Experiences Lab (VXLab), where the group saw the NOVA Helicopter Simulator (HeliSim) in action.

Prof George Buchanan (Deputy Dean of Research, SCT) gave a presentation about the research capabilities and existing industry engagement at the School, and ADM+S PhD student Sachin Cherumanal introduced the group to Walert – a customised chatbot designed to answer questions related to SCT programs.

“The Walert interaction promoted awareness about the limitations and risks of LLM-based conversational assistants, including intent-based and Retrieval-Augmented Generation (RAG) systems,” said Dr Spina.

Visiting the ADM+S office, the group was excited to hear about the impacts of research being carried out by ADM+S stakeholders in academia, industry and communities, flicking through the ADM+S annual reports to understand the expanse of our network and engagement.

The visit wrapped up with a display and discussion of ‘Information Retrieval on Country’ (2023), a commissioned Indigenous artwork by Dr Treahna Hamm (Firebrace). The work seeks to bridge heritage and innovation, fostering appreciation for the enduring connection between Elders, land, and the knowledge embedded within their intertwined stories.

SEE ALSO

Federal Government considers major reforms in gambling advertising: insights from research

Federal Government considers major reforms in gambling advertising: insights from research

Author Kathy Nickels
Date 9 August 2024

The Australian federal government is due to respond to a parliamentary inquiry into online gambling that recommended phasing out gambling advertising over three years, leading to a total ban.

Reports suggest that the government is considering a ban on gambling advertising across social media and other digital platforms, alongside imposing caps on such ads in broadcast media.

Dr César Albarrán-Torres, an Affiliate of the ARC Centre of Excellence for Automated Decision-Making & Society (ADM+S) from Swinburne University, has been studying gambling advertising on social media platforms as part of the Australian Ad Observatory project.

His research highlights significant concerns regarding the prevalence and regulation of these advertisements and the complexities of enforcing a ban on gambling advertising on digital media.

“In order to enact such a ban, the social media platforms need to be willing, and have the mechanisms, to enact it,” said Dr Albarrán-Torres.

“Through the Ad Observatory project we observed that gambling advertisements don’t just come from main players such as Sportsbet but they also come from overseas casinos advertising illegally in Australia.”

After publishing these findings, Dr Albarrán-Torres found that there was little that authorities could do without digital platforms such as Meta having the mechanisms to identify and stop this advertising.

The research also found other types of gambling ads appearing on social media platforms, such as giveaways and rewards clubs, which are not legally classified as gambling but have gambling components.

Dr Albarrán-Torres emphasised that this is not just a regulatory challenge but a significant social and political issue. 

“It requires coordinated commitment and action from the government, digital platforms, media companies, and the gambling industry to effectively address the problem.”

Dr Albarrán-Torres recently discussed these issues on RTR FM 92.1 with Fiona Bartholomaeus. Listen to the full interview from 29 minutes.

SEE ALSO

A US judge just called Google the ‘highest quality search engine’. But how do we determine ‘quality’?

A US judge just called Google the ‘highest quality search engine’. But how do we determine ‘quality’?

Author Mark Sanderson
Date 8 August 2024

In his landmark ruling against Google earlier this week, United States district judge Amit Mehta said the tech giant has built “the industry’s highest quality search engine”.

Judge Mehta made clear this was partly because Google had an illegal monopoly over the market. Nonetheless, Google was keen to promote the praise it received for its flagship product. Its president of global affairs, Kent Walker, said:

This decision recognizes that Google offers the best search engine, but concludes that we shouldn’t be allowed to make it easily available.

But is the Google search engine as good as the company (and Judge Mehta) says it is? And by what metric do we measure whether Google has the “best” search engine in the world?

To answer these thorny questions, it’s important to think about the broader context of the internet – and, in particular, the powerful place of advertising.

Search engines are an expensive business

On September 4 1998, computer scientists Larry Page and Sergey Brin launched Google. In the 26 years since, the company has radically transformed our ability to find information.

Its search engine currently processes 8.5 billion queries per day – 15% of which have never been made before.

People expect the search engine to rapidly deliver accurate answers to every one of those queries. To fulfil this expectation, Google must keep the index up to date by regularly scanning and re-scanning the internet.

This huge task requires thousands of staff – and is therefore very expensive.

One edge Google has over its competitors in delivering relevant results is its large customer base: it can tune its algorithms based on customer clicks to be more accurate and to cover a broader range of queries.

Crucially, however, it wouldn’t have as large a customer base were it not for its illegal monopoly over the market.

Advertising is key

A good way to measure the quality of Google’s search engine is to track the presence of advertisements and see how much they affect people’s ability to find the information they are looking for.

Advertising has long been a key part of Google.

The company doesn’t appear to keep copies of its search result pages. However, with some sleuthing, examples of how ads in search have changed over the years can be found on the Internet Archive’s Wayback Machine. The picture that emerges indicates the line between high-quality search results and sponsored content is increasingly blurred.

The first page captured in the year 2000 shows only two adverts at the very top of the page. These are clearly identified by different coloured boxes and the prominently displayed message “sponsored link”.

Screen capture from 2000 of Google search results for the query ‘hotel’.
Supplied

The next example, taken from 2013, shows many more ads. But they are clearly labelled in a coloured box and in a separate column on the right.

Screen capture from 2013 of Google search results for the query ‘hotel’.
Supplied

In 2016, the column has disappeared, and the ads at the top lose their distinctiveness from Google’s main result list (the results for which Google receives no money).

Screen capture from 2016 of Google search results for the query ‘hotel’.
Supplied

Finally, a capture of Google results today shows sponsored links occupying much of the screen before the main results appear at the bottom of the page.

Screen capture from 2024 of Google search results for the query ‘hotel’.
Supplied

There are other problems impacting the quality of Google’s search engine, as well as its competitors’. In a study published earlier this year, German researchers found that spam and other low-quality content is highly prevalent among the top results for product review searches on Google, Bing and DuckDuckGo.

They concluded:

We find that search engines do intervene and that ranking updates, especially from Google, have a temporary positive effect, though search engines seem to lose the cat-and-mouse game that is [search engine optimisation] spam.

So, what’s the fix?

Forcing Google to give up some of its market share might increase competition, which could push Google to improve the search experience by reducing the volume and prominence of advertising.

However, reducing the search engine’s customer base too much might harm its ability to deliver high-quality results, because the number of customer clicks that help tune its algorithm would drop.

Apart from breaking up a monopoly, are there other ways to improve search quality?

The most promising approach at the moment is to incorporate artificial intelligence (AI) behind the scenes.

A recent leak of documents describing how the Google algorithm works revealed that a generative AI system is being used to judge the quality of web pages.

Microsoft has also applied an “AI model to our core Bing search ranking engine, which led to the largest jump in relevance in two decades.”

Hopefully this works. Because with multiple disruptions from the courts and AI innovations such as chatbots, the sedate pace of change in the quality of search results is about to accelerate.

Mark Sanderson, Dean of Research and Professor of Information Retrieval, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Developing best practice strategies for Australian Centres of Excellence

COE PD day
CoE staff at the communications and outreach workshop

Developing best practice strategies for Australian Centres of Excellence

Author Natalie Campbell
Date 6 August 2024

On 23-25 July 2024, professional staff members working across the Australian Research Council’s Centres of Excellence scheme gathered at QUT Kelvin Grove for three days of professional development, best-practice discussion, and networking.

Bringing together more than 120 staff members from 19 Centres around Australia, the event was facilitated by the Queensland-based ARC Centres of Excellence.

On Tuesday 23 July, Centre Directors and Chief Operating Officers kicked off proceedings with a program designed to share valuable knowledge on research translation, program governance, and Equity, Diversity and Inclusion (EDI) strategies.

The leadership group was joined by Prof Alistair McEwan, Executive Director for Centres of Excellence at the Australian Research Council, and Prof Kerrie Wilson, Queensland’s Chief Scientist, who provided insights and ideas from their respective roles in the research sector.

ADM+S Chief Operating Officer Nicholas Walsh shared his experience of the ARC mid-term review as part of a panel designed to equip other Centre leaders with expectations of the process.

ADM+S Director Prof Julian Thomas was also featured in the program, speaking about the ADM+S Centre’s EDI strategy and initiatives, alongside colleagues from OzGrav and CEVAW.

The annual all-staff program took place on Wednesday 24 July and aimed to build connections across the CoE network so staff could co-develop best-practice strategies for the challenges and opportunities unique to Centres of Excellence.

“The week presented lots of terrific opportunities for sharing insights, developing practical skills, and building valuable networks to enhance our contributions to the field,” said ADM+S Chief Operating Officer Nick Walsh.

Members from different Centres presented on topics within their expertise, including First Nations engagement, leadership development, and workplace wellbeing.

Additional workshops were run on Thursday 25 July, addressing specialised topics such as finance management, communications and outreach, and research infrastructure optimisation.

SEE ALSO

I studied how rumours and misleading information spread on X during the Voice referendum. The results paint a worrying picture

Entry to polling place for the 2023 Voice Referendum at Ballina Coast High School
Aliceinthealice, CC BY-SA 4.0 via Wikimedia Commons

I studied how rumours and misleading information spread on X during the Voice referendum. The results paint a worrying picture

Author Timothy Graham
Date 7 August 2024

When the Australian public voted last year on whether to change the Constitution to establish an Indigenous Voice to parliament, it came after months of intense and sometimes bitter campaigning by both the “yes” and “no” camps.

Polling conducted 12 months before the referendum showed majority public support for the proposed constitutional change. But ultimately the polls flipped and 60.06% of Australians voted “no”.

Why? Factors included a lack of bipartisan support, a growing distrust in government, confusion about the proposal’s details and enduring racism in Australian society.

However, my research, published in the journal Media International Australia, highlights how misinformation and conspiratorial narratives on social media platforms – in particular, X (formerly known as Twitter) – also played a key role.

The findings paint a striking picture. There is a new type of political messaging strategy in town – and it needs urgent attention.

A bird’s-eye view of campaign messaging

I collected 224,996 original posts on X (excluding reposts) containing search terms relevant to the referendum (for example, “Voice to Parliament” or #voicereferendum). The data collection spanned all of 2023 up to the referendum date on October 14. It included more than 40,000 unique user accounts.

First, I categorised posts based on the presence of partisan hashtags. This enabled the identification of the top 20 keywords associated with each campaign.
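The hashtag-based categorisation step can be sketched in Python. The hashtag lists, function names and filtering rules below are hypothetical stand-ins for illustration, not the study’s actual code:

```python
from collections import Counter
import re

# Hypothetical partisan hashtag sets; the study's actual lists differ.
YES_TAGS = {"#voteyes", "#yes23"}
NO_TAGS = {"#voteno", "#noreferendum"}

def classify(post: str) -> str:
    """Label a post 'yes', 'no', or 'unaligned' by its partisan hashtags."""
    tags = {t.lower() for t in re.findall(r"#\w+", post)}
    if tags & YES_TAGS and not tags & NO_TAGS:
        return "yes"
    if tags & NO_TAGS and not tags & YES_TAGS:
        return "no"
    return "unaligned"

def top_keywords(posts: list[str], label: str, n: int = 20) -> list[str]:
    """Return the n most frequent words in posts carrying one label."""
    words = Counter()
    for p in posts:
        if classify(p) == label:
            # Crude tokenisation; short words dropped as a stand-in
            # for proper stopword removal.
            words.update(w for w in re.findall(r"[a-z]+", p.lower())
                         if len(w) > 3)
    return [w for w, _ in words.most_common(n)]
```

A production pipeline would handle co-occurring opposing hashtags, stopwords and stemming more carefully, but the two-step shape (partisan labelling, then per-label keyword counts) is the same.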

The results provide an aerial view of each campaign’s messaging strategy. They also reveal that keywords associated with the “no” campaign dominated on the platform.

Keywords from the “yes” campaign included, for example, “constitutional recognition”, “inclusive”, “closing the gap” and “historic moment”.

Keywords from the “no” campaign included, for example, “division”, “expensive”, “bureaucratic”, “Marxist”, “globalist” and “Trojan”.

I found the “no” campaign keywords occurred more than four times as often in the dataset as the “yes” campaign’s, with the “not enough details” and “voice of division” narratives most prevalent of all.

Only two of the top ten keywords in campaign messaging on the platform came from the “yes” campaign.

How did the ‘no’ campaign manage attention on X?

I categorised each post in the dataset according to its dominant theme or topic. The top ten most prevalent topics covered the majority of the dataset (64.1% of all posts).

Next, I examined which of the top ten topics gained most attention on X – and which X users were the most influential.

Across the board, the posts that received the most engagement (that is, the number of replies and reposts with an attached original message) were from politicians, news media and opinion leaders – not bots, and not trolls.

In line with the keyword analysis, the “no” campaign messaging dominated the topics of discussion, but not because everyone agreed with it.

Several of the topics featured core “yes” campaign messaging, emphasising First Nations representation and equality, opportunities to make a difference and historical facts.

But most of the discussion from “yes” campaigners was drawing attention to and critiquing the “no” campaign’s core messaging around fear, distrust and division.

Rather than blatant falsehoods or full-blown conspiracy theories, the most widely discussed posts from “no” campaigners were characterised by rumours, unverified information and conspiratorial assertions.


Prominent “no” campaigners portrayed the Voice as divisive, implying or arguing it would lead to drastic social changes such as apartheid. It was positioned as part of an alleged secret agenda to consolidate elite privilege and erode Australian democracy through risky constitutional changes.

Such claims are indisputably conspiratorial because they assert that powerful actors are hiding malevolent agendas, and because they lack credible and verified empirical evidence.

These claims were supported by collaborative efforts among “no” campaigners to find what they believed to be evidence.

This type of “just asking questions” and “do your own research” approach stood in contrast to the journalistic fact-checking and traditional expertise predominantly drawn on by the “yes” campaign.

Yet, my study’s results show that the more the “yes” campaign tried to counter misrepresentations and confusion around the Voice proposal, the more they fuelled it.

A post-truth referendum

What Australia witnessed in October 2023 was a thoroughly post-truth referendum.

To be clear: it was not a referendum that lacked truth, but one in which traditional political messaging simply didn’t cut it in a fast, free-flowing and predominantly online media environment.

The “no” campaign’s messaging strategy was all about constructing a “truth market” in the public sphere. In other words, they created an environment where multiple – often conflicting – versions of the truth competed for dominance and where emotional resonance received more attention than reasoned debate.

We can’t really call it “disinformation” because most of it didn’t involve outright falsehoods.

Instead, a near-constant supply of contrived media events and rumour bombs attracted 24/7 news attention and fostered participatory discussions on platforms such as X from actors across the partisan divide. Rumours spread during the campaign included claims that the Indigenous Voice to parliament would divide Australia and that it was a land grab for globalist elites.

In trying to counter the “no” campaign’s messaging on X, many “yes” campaigners entered into a “defensive battle”. This drowned out their core message. It also amplified the fear and division narratives of the “no” campaign.

It’s a classic example of the oxygen of amplification.

Targeted messaging designed to exploit social media and elicit reliable outrage from different segments of the population is not new. It has a name: propaganda.

Propaganda is not a bad word, despite the reputation it has developed since the second world war. It is simply a more accurate and principled way to understand what happened during the Voice referendum debate, and for political campaigning more generally.

What is new, however, is the current information environment: the speed of digital networks and the collaborative and social dimensions of how people engage with information.

Properly diagnosing the problem is the first step to remedying it.

Timothy Graham, Associate Professor in Digital Media, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Social media algorithms are shrouded in secrecy. We’re trying to change that


Social media algorithms are shrouded in secrecy. We’re trying to change that

Author Daniel Angus
Date 6 August 2024

Over the past 20 years, social media has transformed how we communicate, share information and form social connections. A federal parliamentary committee is currently trying to come to grips with these changes, and work out what to do about them.

The social media platforms where we spend so much time are powered by algorithms that exercise significant control over what content each user sees. But researchers know little specific detail about how they work, and how users experience them.

This is because social media companies closely guard information about their algorithms and operations. However, in recent weeks my colleagues and I announced a new national infrastructure project to help us find out what they are up to.

Our project, the Australian Internet Observatory, will investigate how social media users interact with platforms and the content on their feeds. But the federal government can also help, by forcing tech companies to let some light into the closed black boxes that power their business.

Resistance to data access

To understand the impact of social media, we need to first understand its inner workings. This requires observing the content shared by users and the algorithms that control what content is visible and recommended.

We must also observe how users interact with these platforms in an everyday setting.

This is important because social media is personal and increasingly ephemeral. Content differs for every user and quickly disappears from feeds.

This makes it challenging to draw general conclusions about the experiences of users and the broader impact of social media on society.

But the companies behind social media platforms refuse to let the public peer under the hood. They often cite privacy concerns and competitive interests as reasons for limiting data access.

These concerns are possibly valid. But they are often cynically deployed. And they should not preclude the possibility of more transparent and ethical research data access.

As a result, my colleagues and I have had to be inventive to gain insights into the inner workings of social media. We use methods such as scraping public data, platform audits and other forensic methods.

However, these methods are often limited and fraught with legal risk.

The Australian Internet Observatory

In the absence of direct platform data access, we are also using other methods, such as data donation, to understand how social media platforms operate.

Data donation enables people to voluntarily share specific parts of their social media experience for independent study conducted under strict ethical guidelines. This provides invaluable insights while respecting user privacy and autonomy.

Two data donation projects have already improved our understanding of internet search and targeted advertising in Australia.

Over the next four years we will rapidly expand the scope of data donation through the new Australian Internet Observatory. This research infrastructure will collect and analyse the data of users of social media platforms such as Facebook, TikTok and YouTube.

This will shed new light not just on how people interact on social media platforms, but also on what content they see and how it is distributed. This enhanced visibility will improve our knowledge of the algorithms that power social media platforms – and their impact on society.

For example, since its launch in 2021, the Australian Ad Observatory has amassed nearly 800,000 Facebook ad donations from over 2,100 ordinary Australians.

This significant corpus of Facebook advertising data has allowed us to uncover illegal gambling advertising and track the prevalence of scam ads. We have also used this evidence to inform inquiries into unhealthy food advertising and “greenwashing”.

More than just being able to uncover what forms of advertising are prevalent and to whom they are targeted, this work has also helped us uncover details about the algorithmic targeting process itself.

The Australian Internet Observatory aims to further deepen our understanding of this and similar processes across many more platforms. We will soon be inviting members of the public to donate data from their social media platforms to help us achieve this.

Legislating data access in Australia

The Australian government has attempted to regulate various aspects of the internet and social media.

The Online Safety Act and recently proposed legislation targeting misinformation and disinformation illustrate the government’s concern over the influence of digital platforms.

However, these regulatory efforts have been flawed. Crucially, they are often proceeding without a comprehensive understanding of the actual activities and interactions taking place online.

Without this knowledge, regulations risk being either too broad, impacting legitimate expression and access, or too narrow, failing to address the root causes of online harms.

To strengthen the efforts of researchers to understand the impact social media platforms are having on society, it’s essential the Australian government follow the lead of the European Union by passing legislation which compels social media platforms to provide access to crucial data.

This would allow increased platform accountability. It would also empower researchers to conduct vital, independent, public-interest research with the transparency and support necessary to safeguard our digital future.

Daniel Angus, Professor of Digital Communication, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Australian social media inquiry: Researchers highlight key issues ahead of Government’s interim report


Australian social media inquiry: Researchers highlight key issues ahead of Government’s interim report

Author Kathy Nickels
Date 5 August 2024

The Joint Select Committee on Social Media and Australian Society, established by the Federal Government, is due to release its interim report highlighting the influence and effects of social media on Australian society.

As part of its inquiry, the ARC Centre of Excellence for Automated Decision Making and Society (ADM+S) submitted research findings to the Committee, presenting insights into the impacts of social media in Australia.

ADM+S Associate Investigator at RMIT University and lead author on the submission, Associate Professor James Meese, said, “The launch of the inquiry provided a great opportunity to synthesise all the critical research happening across our centre, and provide the committee with a series of evidence-based findings to consider.”

The ADM+S submission provided research on the limited effectiveness of facial recognition technologies for age verification and on Meta’s decision to abandon deals under the News Media Bargaining Code, and discussed the relationship between Australian journalism and the presence of mis- and disinformation on digital platforms.

It also provided an overview of the impact of algorithms, recommender systems and corporate decision making of digital platforms in influencing what Australians see, describing the use of novel research data donation methods to identify a range of harmful, and potentially illegal, advertising practices.

Professor Kimberlee Weatherall, Chief Investigator at the University of Sydney node of the ADM+S and a contributing author on the submission, said, “ADM+S research is providing much-needed transparency, and rigorous evidence on what advertising Australians encounter in their social media feeds, and how personalised feeds mean different people see very different kinds of ads. It highlights the importance of independent, interdisciplinary research to inform public policy.”

The Joint Select Committee on Social Media and Australian Society was formed in May 2024 in an effort to enable Parliament to make social media companies more transparent and accountable to the Australian public. 

“Parliament needs to understand how social media companies dial up and down the content that supports healthy democracies, as well as the anti-social content that undermines public safety,” said the Minister for Communications, the Hon Michelle Rowland MP.

The Committee is due to present an interim report on or before 15 August 2024, and its final report on or before 18 November 2024.

The ADM+S submission was a collaborative process involving contributions from RMIT University, Swinburne University, QUT, University of Melbourne, University of Queensland and the University of Sydney.

The entire ADM+S submission to the Joint Select Committee on Social Media and Australian Society can be viewed online (submission number 120).

Its contributing researchers were: Assoc Prof James Meese, Dr Cesar Albarran-Torres, Prof Kath Albury, Prof Daniel Angus, Prof Axel Bruns, Prof Jean Burgess, Assoc Prof Nicholas Carah, Dr Robbie Fordyce, Dr Jake Goldenfein, Assoc Prof Timothy Graham, Lauren Hayden, Dr Ariadna Matamoros Fernandez, Prof Christine Parker, Dr Zahra Stardust, Prof Nicolas Suzor, and Prof Kimberlee Weatherall.

View the ADM+S Submission to the Joint Select Committee on Social Media and Australian Society

SEE ALSO

Australians like facial recognition for ID but don’t want it used for surveillance, new survey shows

Australians like facial recognition for ID but don’t want it used for surveillance, new survey shows

Author Mark Andrejevic
Date 30 July 2024

Automated facial recognition is becoming widespread in Australia. The technology has already been used by retail outlets, sport stadiums and casinos around the country. And in November, the Australian government’s digital identification system will be expanded, after new laws passed parliament earlier in the year.

As the technology becomes less expensive and more powerful, it will lend itself to a growing range of applications, such as a proposed age estimation tool.

To find out what Australians think about this fast-growing technology, my colleagues and I conducted a representative national survey of more than 2,000 people. Our results, which we have just launched, indicate an overall lack of knowledge about the technology – and a range of attitudes towards it.

Crucially, these findings can help policymakers ensure the benefits of facial recognition technology are maximised – and the harms limited.

How does facial recognition technology work?

There are two main uses of facial recognition technology. The first, known as one-to-one use, ensures someone is who they say they are – as in the case of unlocking a smartphone. This can make life much more convenient. Instead of carrying around multiple forms of ID, people might simply submit to a face scan.

By contrast, one-to-many uses of the technology enable the identification of an unknown suspect or a face in the crowd.

In both cases, the technology works by creating a template from a photograph of a known individual. New photos can then be compared to the template to see if there is a match.

This match is given as a probability, not as a definitive yes or no – as with other forms of biometric identification. The technology can still be fooled by masks or disguises, but its ability to overcome these challenges is improving.
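A minimal sketch of this template-matching step, assuming face images have already been reduced to embedding vectors (the vectors and the threshold below are illustrative only, not taken from any production system):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two face-embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(template: list[float], probe: list[float],
             threshold: float = 0.8) -> bool:
    """One-to-one verification: accept if the similarity score clears
    a threshold. The score is probabilistic evidence, not a definitive
    yes or no; the threshold trades false accepts against false rejects."""
    return cosine_similarity(template, probe) >= threshold
```

One-to-many identification runs the same comparison against every template in a database, which is why accuracy and bias concerns compound at scale.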

Attitudes vary, depending on how facial recognition is used

Overall, our research revealed that almost three quarters of Australians say they know little about facial recognition technology. Only one in 20 felt they knew “a lot” about it.

Our survey also found Australians are more comfortable with one-to-one uses of the technology. For example, a majority of respondents said they supported the use of the technology for accessing government services (57%).

This support might be good news for the country’s new digital ID system. It envisions a role for biometric technologies – most likely facial recognition – to allow Australians to prove their identity when accessing government and financial services.

A majority of respondents (75.2%) also supported the use of facial recognition technology for identifying criminal suspects. And there was strong support (80%) among respondents for using facial recognition technology to help verify the identities of people who lose their credentials during disasters or war.

There was, however, much less support for other uses.

For instance, the majority (60%) of survey respondents did not support its use in the workplace for tracking the location of workers. They also did not support its use for tracking and targeting shoppers. There was a strong sense facial recognition technology should not be used for commercial benefit.

We also surveyed perceptions of the accuracy of facial recognition tech. A majority of survey respondents felt facial recognition technology is either “accurate” or “very accurate”. In reality, however, there is a range of different systems in use and accuracy can vary widely.

For example, the technology has been shown to be less accurate when used on certain demographic groups, raising issues of racial bias. Misidentification can have serious consequences for those who are wrongly arrested and treated as criminals.

Notification and consent were crucial for survey respondents: 90% of Australians said they wanted to know when and where the technology was being used on them. They also wanted the opportunity to consent to its use.

Governments need to listen to the public – and respond

Automated facial recognition technology is a powerful form of surveillance that raises significant questions around privacy and liberty.

In 2019, the federal parliament proposed the use of a national face recognition database for law enforcement. This plan was deferred in part because of concerns that public response to its more widespread use might limit enrolment in digital ID programs.

More recent legislation restricts one-to-many matching using the national facial recognition database. However, individual states have their own databases from public records. The Australian Federal Police reportedly continue to rely on an agency that uses facial recognition provided by the controversial company Clearview AI.

Given the recent history of data breaches, there should be concern about the capability of both the government and private sector to safely store and manage people’s data.

But automated facial recognition technologies can undoubtedly be useful. We urgently need better public education about the technology and the issues it raises, to ensure the responsible and democratic use of facial recognition tech.

And, as former Australian Human Rights Commissioner Edward Santow argues, we also need legislation dedicated to minimising the risk of creating an automated surveillance society.

Mark Andrejevic, Professor, School of Media, Film, and Journalism, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Recordings available from RE/FRAMING – Creativity / Culture / Computation

Author Natalie Campbell
Date 1 August 2024

On 2-4 July 2024, researchers, artists, industry partners and collaborators joined forces at RMIT University to explore the transformation of creativity and the creative fields by generative artificial intelligence tools at RE/FRAMING – Creativity / Culture / Computation.

Event organiser and ADM+S Affiliate Dr Daniel Binns said, “The key objectives of Re/Framing were to bring together a bunch of interesting and interested people to talk about generative AI and creativity, but also its impacts on creative industries and art practice.

“I wanted to demystify generative AI systems, to play and experiment with them and to use these experiences to think pragmatically and realistically about what we can achieve with these tools, and what new knowledge we can generate in this space.”

The roundtables, workshops and experimental modes of idea generation encouraged participants to incubate ideas around AI’s capabilities to solve complex creative and cognitive challenges, and provided space for groups to devise new methods for considering, reading, using, and analysing AI-generated media.

The program featured Bhautik Joshi, Senior Research Engineer from Canva, Jessie Hughes, Senior Creative Technologist and artist-in-residence at Leonardo.AI, as well as a number of expert academics working in the space.

“We played with Suno, the music generation tool, we also played with Leonardo.Ai, the image and motion generator, and we heard from a diverse group of people around the very real creative, economic, industrial, material and environmental challenges that this technology presents us with,” said Dr Binns.

“There’s definitely an appetite for this kind of work and collaboration. We’re currently organising two online seminars (details to come), where we’ll hear from some of the Re/Framing attendees about their research and interests, and in November one of the Re/Framing delegates, Meg Herrmann, has organised a spiritual follow-up event called Artificial Visionaries.”

Recordings from the RE/FRAMING event are now available to stream on ADM+S YouTube.

Joel Stern/Machine Listening performance 2023

ADM+S Artist-Researchers Featured in 2024 Now or Never Festival

Author Natalie Campbell
Date 26 July 2024

ADM+S Associate Investigator Dr Joel Stern, ADM+S Affiliate Assoc Prof James Parker, and Machine Listening collaborator Dr Sean Dockray will showcase a new exhibition and performance in Melbourne as part of the upcoming 2024 Now or Never Festival.

On 31 August Machine Listening will perform a newly commissioned work titled Songbook (5-x), the first Australian iteration of a project premiered at Unsound 2023 in Krakow, with support from ADM+S.

The collective will present a suite of new songs exploring techniques of automatic reading, writing, recitation, composition, and decomposition as part of the Soft Centre Program at the Trades Hall Building in Carlton.

“The Machine Listening Songbook performance at Soft Centre is a chance for us to continue developing the work we began last year at Unsound in Krakow,” said Dr Stern.

“We want to think playfully and critically about what a ‘song’ might mean in the context of generative AI, platform economies and data capitalism. We’re interested in exploring how automated technologies around sound and music might be used in politically reflexive, subversive, contradictory and revealing ways.”

Established in 2020, Machine Listening is a platform for collaborative research and artistic experimentation, focused on the political and aesthetic dimensions of the computation of sound and speech.

Dr Stern and Dr Dockray have also curated ‘This Hideous Replica’, an experimental project featuring artworks, performances, screenings, workshops, a ‘replica school’ and other uncanny encounters to be exhibited at RMIT Gallery, Capitol Theatre, and other Melbourne venues.

The exhibition will be open from 22-31 August as part of the Now or Never program and will continue at RMIT Gallery until 16 November.

Lifting its title from a misheard line in a 1980 song by The Fall about a reclusive dog breeder whose ‘hideous replica’ haunts industrial Manchester, this experimental project adopts monstrous replication as a tactic, condition, and curatorial framework for exploring algorithmic culture, simultaneously alienating, seductive and out-of-control.

It features works by Debris Facility, Heath Franco & Matthew Griffin, Josh Citarella, Liang Luscombe, Mochu, Diego Ramírez, Masato Takasaka, Anna Vasof, Loren Adams and many more.

Dr Stern explains, “This Hideous Replica is an opportunity to bring together artists, writers, researchers, musicians and others to share ideas, methods and creative practices for dealing with a world that is increasingly bewildering and basically weird as it overflows with content, information, perspectives, and things.”

Registration for various exhibits in “This Hideous Replica” is now open:

  • Mochu: Great Chain of Stains or Incompatible Rationalities on the Web reading group
    1:00pm – 4:00pm, 28 Aug 2024, First Site Gallery
    An unscripted conversation, watching-and-reading group with artist and writer Mochu exploring the possibilities and impossibilities of experimental writing after the internet.
  • Jennifer Walshe: 13 Ways of Looking at AI, Art and Music workshop
    11:00am – 1:00pm, 4 Sep 2024, First Site Gallery
    “AI is not a singular phenomenon. We talk about it as if it’s a monolithic identity, but it’s many, many different things – the fantasy partner chatbot whispering sweet virtual nothings in our ears, the algorithm scanning our faces at passport control, the playlists we’re served when we can’t be bothered to pick an album. The technology is similar in each case, but the networks, the datasets and the outcomes are all different.”
  • A Hacker Manifesto at 20: A reading group with McKenzie Wark
    2:00pm – 4:00pm, 4 Sep 2024, First Site Gallery
    Writer, theorist, and raver McKenzie Wark leads a reading and discussion group on her influential text, A Hacker Manifesto, 20 years after its publication by Harvard University Press in 2004.
  • This Hideous Replica: McKenzie Wark and Jennifer Walshe at The Capitol
    6:00pm – 8:00pm, 5 Sep 2024, the Capitol
    McKenzie Wark: From Automatic to Automated Writing
    A public lecture by writer and theorist McKenzie Wark rethinking historical avant-garde debates on the ‘conceit of the author’ through the prism of AI and generative text.

This Hideous Replica is produced by RMIT Culture with support from the ADM+S Centre, RMIT Design and Creative Practice Enabling Impact Platforms.

The Now or Never Festival celebrates creativity, inquiring minds, and exploration, with a focus on art, ideas, sound and technology.

The theme for the 2024 event is ‘Look through the Image’, inviting audience members to interrogate what’s in front of them, explore deeper meanings, contemplate layers of symbolism and question reality, from AI-generated narratives and visual distortion works to cinematic and augmented reality experiences.

Research finds Large Language Models are biased – but can still help analyse complex data

Author Kathy Nickels
Date 18 July 2024

In a pilot study, researchers have found evidence that Large Language Models (LLMs) have the ability to analyse controversial topics such as the Australian Robodebt scandal in similar ways to humans – and sometimes exhibit similar biases.

The study found that LLM agents (GPT-4 and Llama 2) could be prompted to align their coding results with human assignments, through thoughtful instructions: ‘Be Sceptical!’ or ‘Be Parsimonious!’. At the same time, LLMs can also help identify oversights and potential analytical blindspots for human researchers.

LLMs are promising analytical tools. They can augment human philosophical, cognitive and reasoning abilities, and support ‘sensemaking’ (making sense of a complex environment or subject) by analysing large volumes of data with a sensitivity to context and nuance absent in earlier text-processing systems.

The research was led by Dr Awais Hameed Khan from the University of Queensland node of the ARC Centre of Excellence for Automated Decision-Making & Society (ADM+S).

“We argue that LLMs should be used to assist — and not replace — human interpretation.

“Our research provides a methodological blueprint for how humans can leverage the power of LLMs as iterative and dialogical, analytical tools to support reflexivity in LLM-aided thematic analysis. We contribute novel insights to existing research on using automation in qualitative research methods,” said Dr Khan.

“We also introduce a novel design toolkit — the AI Sub Zero Bias cards, for researchers and practitioners to further interrogate and explore LLMs as analytical tools.”

The AI Sub Zero Bias cards help users structure prompts and interrogate bias in the outputs of generative AI tools such as Large Language Models. The toolkit comprises 58 cards across categories relating to structure, consequences and output.

Drawing on creativity principles, these provocations explore how reformatting and reframing the generated outputs into alternative structures can facilitate reflexive thinking.

An example of the 58 AI Sub Zero Bias cards developed for researchers and practitioners to further interrogate and explore LLMs as analytical tools.

This research was conducted by ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) researchers Dr Awais Hameed Khan, Hiruni Kegalle, Rhea D’Silva, Ned Watt, Daniel Whelan-Shamy, under the guidance of Dr Lida Ghahremanlou, Microsoft Research, and Associate Professor Liam Magee, from the ADM+S node at Western Sydney University.

This research group began their collaboration at the 2023 ADM+S Hackathon, where they developed the winning project Sub-Zero: A Comparative Thematic Analysis Experiment of Robodebt Discourse Using Humans and LLMs.

Associate Professor Liam Magee has been mentoring the group since first meeting them at the Hackathon.

“The ADM+S Hackathon was instrumental in bringing together these researchers from across multiple disciplines and universities,” said Associate Professor Magee.

“The research has been a tremendous group contribution, and I’d like to acknowledge both the efforts of the team and the logistical support of Sally Storey and ADM+S in making this possible.”

The paper Automating Thematic Analysis: How LLMs Analyse Controversial Topics has been accepted into the Microsoft Journal for Applied Research (MSJAR), an industry publication, and will be published in Volume 21, in August 2024.

Access the AI Sub Zero Bias toolkit here

2024 Future You Summit features interactive Ad Observatory workshop

Author Kathy Nickels
Date 16 July 2024

The Future You Summit, hosted by the Queensland University of Technology (QUT), attracted over 300 Year 11 and 12 students from 175 schools, making this year’s summit the largest yet. Attendees engaged in transformative learning experiences, exploring diverse academic disciplines and gaining a firsthand look into university life.

Among the standout features of this year’s summit was the workshop Behind the Screen: A Critical Workshop on Online Advertising Practices. With social media advertising shaping much of the world’s digital landscape, this workshop provided students with a unique opportunity to explore the underlying mechanisms of online advertising and its impact on everyday life.

The workshop was created by ADM+S Chief Investigator and Director of QUT’s Digital Media Research Centre, Professor Daniel Angus, based on research conducted by the Australian Ad Observatory project at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S).

Through the Behind the Screen workshop, students learned how social media advertising influences their personal experiences and what role data plays in the process. 

Students sorting advertising cards on table
An advertising sorting task where students developed personas based on targeting cues such as visual style, message type, and sector.

The workshop allowed participants to analyse their own social media feeds, using both personal data and insights from the Australian Ad Observatory, to understand how advertising practices are tailored to specific audiences. Students discovered the ways advertisers target consumers through social media platforms such as Facebook, Instagram, and TikTok.

“The session was engaging, informative, and insightful,” said one student. “The practical activities helped me understand how ads are tailored to my personal interests, and I was fascinated by how the data was used to target different audiences.”

Cybernetic heart symbol hologram in electric circle on digital background.
Skorzewiak / Shutterstock

Why an ‘AI health coach’ won’t solve the world’s chronic disease problems

Author Jathan Sadowski
Date 12 July 2024

Last week, two big names in the artificial intelligence (AI) and wellness industries announced a collaboration to develop a “customised, hyper-personalised AI health coach that will be available as a mobile app” to “reverse the trend lines on chronic diseases”.

Sam Altman (head of OpenAI, maker of ChatGPT) and Arianna Huffington (a former media executive who runs a high-tech wellness company called Thrive Global) announced their new company, Thrive AI Health, in a Time magazine advertorial.

Health is an appealing direction for an AI industry that has promised to transform civilisation, but whose huge growth of the past couple of years is beginning to look like it’s stalling. Companies and investors have pumped billions into the technology, but it is still often a solution looking for problems.

Meanwhile, venture capitalists Sequoia and the investment bank Goldman Sachs are wondering out loud whether enough revenue and consumer demand will ever emerge to make this bubble feel more solid.

Enter the next big thing: AI that will change our behaviour, for our own good.

Personalised nudges and real-time recommendations

Altman and Huffington say Thrive AI Health will use the “best peer-reviewed science” and users’ “personal biometric, lab and other medical data” to “learn your preferences and patterns across the five behaviours” that are key to improving health and treating chronic diseases: sleep, food, movement, stress management and social connection.

Whether you are “a busy professional with diabetes” or somebody without “access to trainers, chefs and life coaches” — the only two user profiles the pair mention — the Thrive AI Health coach aims to use behavioural data to create “personalised nudges and real-time recommendations” to change your daily habits.

Soon, supposedly, everybody will have access to the “life-saving benefits” of a mobile app that tells you — in a precisely targeted way — to sleep more, eat better, exercise regularly, be less stressed and go touch grass with friends. These “superhuman” technologies, combined with the “superpowers” of incentives, will change the world by changing our “tiny daily acts”.

Despite claims that AI has unlocked yet another innovation, when I read Altman and Huffington’s announcement I was struck by a sense of déjà vu.

Insurance that manages your life

Why did Thrive AI Health and the logic behind it sound so familiar? Because it’s a kind of thinking we are seeing more and more in the insurance industry.

In fact, in an article published last year I suggested we might soon see “total life insurance” bundled with “a personalised AI life coach”, which would combine data from various sources in our daily lives to target us with prompts for how to behave in healthier, less risky ways. It would of course take notes and report back to our insurers and doctors when we do not follow these recommendations.

In a related article, my colleagues Kelly Lewis and Zofia Bednarz and I took a close look at the theories of behavioural risk that might power such products. A model of insurance based on managing people’s lives via digital technology is on the rise.

We examined a company called Vitality, which makes behavioural change platforms for health and life insurance. Vitality frames itself as an “active life partner with […] customers”, using targeted interventions to improve customer well-being and its own bottom line.

Similar projects in the past have had questionable results. A 2019 World Health Organization report on digital health intervention said:

The enthusiasm for digital health has also driven a proliferation of short-lived implementations and an overwhelming diversity of digital tools, with a limited understanding of their impact on health systems and people’s wellbeing.

Hyper-personalisation

Altman and Huffington say AI-enabled “hyper-personalisation” means this time will be different.

Are they right? I don’t think so.

The first problem is there is no guarantee the AI will work as promised. There is no reason to think it won’t be plagued by the problems of bias, hallucination and errors we see in cutting-edge AI models like ChatGPT.

However, even if it does, it will still miss the mark because the idea of hyper-personalisation is based on a flawed theory of how change happens.

An individualised “AI health coach” is a way to address widespread chronic health problems only if you envision a world in which there is no society – just individuals making choices. Those choices turn into habits. Those habits, over time, create problems. Those problems can be rooted out by individuals making better choices. Those better choices come from an AI guardian nudging you in the right direction.

And why do people make bad choices, in this vision? Perhaps, like middle-class professionals, they are too busy. They need reminders to eat a salad and stretch in the sunshine during their 12-hour workday.

Or – again from the AI health coach perspective – perhaps, like disadvantaged people, they make bad choices out of ignorance. They need to be informed that eating fast food is wrong, and they should instead cook a healthy meal at home.

The social determinants of healthcare apps

But individual lifestyle choices aren’t everything. In fact, the “social determinants of health” can be far more important. These are the social conditions that determine a person’s access to health care, quality food, free time and all the things needed to have a good life.

Technologies like Thrive AI Health are not interested in fundamental social conditions. Their “personalisation” is a short-sighted view that stops at the individual.

The only place society enters Altman and Huffington’s vision is as something that must help their product succeed:

Policymakers need to create a regulatory environment that fosters AI innovation […] Health care providers need to integrate AI into their practices […] And individuals need to be fully empowered through AI coaching to better manage their daily health […]

And if we don’t bend society to fit the AI models? Presumably we will only have ourselves to blame.

Jathan Sadowski, Senior Research Fellow (ARC DECRA), Emerging Technologies Research Lab and CoE for Automated Decision-Making and Society, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Junk food is promoted online to appeal to kids and target young men, our study shows

Authors Tanita Northcott & Christine Parker
Date 12 July 2024

The Australian government has been investigating whether we should ban unhealthy food advertising online, and how it could work. In the United Kingdom, a ban on unhealthy food and drink advertising online will start in October 2025.

We recently used the Australian Ad Observatory to investigate targeted junk-food ads on Facebook in Australia. Our study finds that unhealthy food and drinks are promoted in ways designed to appeal to parents and carers of children, and children themselves. Additionally, young men in our study were being targeted by fast-food ads.

Kids, young people and parents should be aware of the strategies online advertisers use to normalise unhealthy eating patterns. We should all demand a healthier digital environment.

Our work supports ongoing calls for a ban on junk food advertising online.

What did we see in the ads?

The Australian Ad Observatory has created the world’s largest known collection of the targeted ads people encounter on Facebook. Our 1,909 volunteers have donated 328,107 unique ads from their social media feeds. This gives researchers an unprecedented opportunity to examine what ads Australians see on social media and how they are being targeted.

We searched the database for ads promoting the top-selling unhealthy food and drink brands. These are “discretionary” or “sometimes” foods that tend to be high in fats and sugars. They include fast-food meals, confectionery, sugary drinks and snacks. (To identify unhealthy food and drink categories, we used government guidance on healthy food and drinks.)

We also looked at online food delivery companies because of their popularity on digital platforms. They play a likely role in promoting unhealthy foods.

We found nearly 2,000 unique ads from 141 separate advertisers, observed about 6,000 times in total by our participants. Ads for fast-food brands made up half of the unhealthy food ad observations in our study.

Fast-food giants KFC and McDonald’s combined accounted for roughly 25% of all unhealthy food ad observations. Snack and confectionery brands, like Cadbury, featured in a third of the ad observations. Soft drink brands such as Coca-Cola were promoted in 11% of observations.

About 9% of ads promoted online food delivery companies, and typically promoted fast-food options. Other advertisers we might not think of as junk food brands, such as Coles supermarkets and 7-Eleven convenience stores, also regularly promoted junk foods.

The power of junk food

The vulnerability of children to junk food ads is well established. Children’s exposure to food marketing has been associated with what types of food they prefer and ask their parents to purchase. When they develop preferences for unhealthy foods, this contributes to unhealthy habits and related health concerns.

But it’s not only children who are susceptible to unhealthy food marketing. Junk food advertising also shapes the food norms and attitudes of young people aged 18 to 24.

Our experiences online and digital technologies more generally can impact our health. These are known as “digital determinants of health”.

Food advertisers use the vast amounts of data collected about individuals to target specific audiences. They can seamlessly integrate advertising into everyday life.

Our study shows junk food advertising is disproportionately served to young people, especially young men. Young men are seeing a much higher proportion of fast food ads (71%) compared to the sample overall (50%), suggesting fast food is marketed to them more aggressively. Many ads promoted special “app-only” deals, including free delivery, especially for fast food.

The ‘halo effect’

We also found examples of ads aimed at busy parents, painting fast food as something that saves parents time, quietens children and feeds families.

Even though Facebook accounts are available only to people 13 and over, junk-food ads still use child-oriented themes, such as characters and games. Many appear to be designed to appeal directly to children. This included ads promoting “healthy” foods, such as vegetables, in kids’ meals.

The most insidious marketing tactics we found connect junk foods, and the brands synonymous with junk foods, to wholesome or popular activities. This creates a “halo effect”.

For example, many ads use “sports-washing” to associate unhealthy foods with healthy sports activities or pleasurable spectator sports. Sports in junk-food marketing can appeal to a broad audience, including young people.

While not all of these sport-related ads promoted or displayed unhealthy food products directly, the sport provided the focal point of ads with strong brand-specific elements, therefore forging the connection.

Other ads used “mental health-washing”, including ads for chocolate bars, packaged snacks or fast food co-promoting community mental health organisations.

A grid of junk food ad images featuring sports alongside several major brands.
Examples of online ads found during our research.
Author provided

Unhealthy food advertising should be banned

Last week a Parliamentary Inquiry into Diabetes in Australia repeated calls for the government to restrict the marketing and advertising of unhealthy food to children on television, radio, in gaming and online.

The federal government should soon issue its report on how best to limit unhealthy food marketing to children. Our study supports the government’s proposal to ban all unhealthy food and drink advertising online.

The proposed ban should cover not just unhealthy food itself, but also any mention of the brands synonymous with those foods. This is because mentioning these brands brings such foods instantly to mind.

We also recommend the government should include all types of promotions. This includes ads from online food delivery companies, supermarkets and sports clubs that cross-promote unhealthy foods.

Many are concerned about the impact of social media and its algorithmic content feeds on children and young people. Our study highlights that the food and drink ads targeting children, young people and harried parents can also create an unhealthy digital environment.

Tanita Northcott, Research Fellow, Melbourne Law School, The University of Melbourne and Christine Parker, Professor of Law, The University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ADM+S researchers to reveal findings from the largest collection of online targeted advertisements

Author Kathy Nickels
Date 8 July 2024

The Australian Ad Observatory at the ARC Centre of Excellence for Automated Decision-Making and Society has generated the most extensive known collection of targeted ads that people encounter on Facebook in Australia and built world-first research infrastructure that involved citizens in doing so. 

The project has uncovered harmful or illegal advertising content that includes unlawful scam content, harmful and, in some cases, potentially unlawful gambling advertising, and concerning patterns of alcohol and unhealthy food advertising.

This project pioneered a way to observe the targeting of social media advertising across populations of users, drawing on 357,849 unique ads donated by more than 2,000 participants between July 2021 and December 2023.

The Australian Ad Observatory benefits our understanding of platform-based advertising and has enabled independent research into the role that algorithmically targeted advertising plays in society.  

While many social media platforms have policies against serving unlawful and harmful advertising content to Australians, advertising that breaches those policies and is, or may be, unlawful (such as scam content and some gambling advertising) has still been served to Australians. In other cases, potentially harmful advertising, such as for alcohol, gambling and unhealthy food, has been targeted at vulnerable consumers such as young people.

Professor Daniel Angus is a Chief Investigator at the QUT node of the ADM+S and co-researcher on the Australian Ad Observatory project. 

“The project has enabled observability and accountability of online advertising in a way that has not been possible through reliance on platform-provided transparency tools,” said Professor Angus.

In collaboration with the ABC, CHOICE, Centre for AI and Digital Ethics (CAIDE), Consumer Policy Research Centre (CPRC), Foundation for Alcohol Research and Education (FARE), and VicHealth, the Australian Ad Observatory has led to significant findings and impacts as it has uncovered hidden advertising practices on Facebook.

“Our work is revealing how sequences of ads are ‘tuned’ to work in tandem with people’s identities and daily rhythms, leading to a more sophisticated understanding of potential issues that may result within this computational advertising ecosystem.”

Professor Christine Parker is a Chief Investigator at the University of Melbourne node of the ADM+S and co-researcher on the Australian Ad Observatory. 

“The recommender systems that drive targeted advertising on digital platforms affect the wellbeing of Australians. 

“For example, alcohol advertising has a significant impact on our wellbeing. Greenwashing affects consumer capacity to reduce their impact on the environment. And unhealthy food ads can normalise unhealthy eating patterns,” said Professor Parker.

With the emergence of new forms of automated advertising, including Generative AI, this project continues to play a crucial role in the observability and accountability of online advertising. 

Responding to significant recent and ongoing developments in automated advertising the Australian Ad Observatory has developed new approaches for studying contemporary media and information environments, where there are no longer either shared flows of content, nor stable texts.

At this upcoming webinar event, The Australian Ad Observatory: Key Insights and Future Plans, the Australian Ad Observatory research team will share new methods and approaches behind their research and how they managed to uncover hidden patterns of advertising on Facebook. 

The team will share next steps in their research as they combine citizen science with data collection to provide visibility into the targeting of harmful products to particular groups and further explore experiences of advertising to understand its cultural impact.

Key findings from the Australian Ad Observatory: 

  • Alcohol companies publish almost 40,000 unique ads on Meta platforms per year and use the algorithmic advertising models of digital platforms to more frequently target Australians who drink at high-risk levels;
  • Unhealthy food advertising frequently uses child-oriented themes and appears to be designed to appeal directly to children, while other ads are designed to appeal to parents and carers who need a quick, convenient snack or meal for their children;
  • Up to 40 commercial sectors were identified as making environmental claims via social media ads. A substantial proportion of these claims were false, unsubstantiated or vague;
  • The Australian Ad Observatory has identified more than one hundred scam ads featuring photoshopped images of celebrities and promoting unlawful ‘get rich quick’ style schemes;
  • The Australian Ad Observatory uncovered illegal offshore gambling advertising in Australia and ‘grey zone’ gambling ads that have an uncertain degree of compliance with Australian law; and
  • Some credit related financial advertising appears to target Australians on the basis of protected characteristics, with women disproportionately targeted by Buy Now Pay Later ads while men are disproportionately targeted with credit card ads. Targeted advertising of particular credit products also demonstrates concerning trends for financially vulnerable cohorts. 

The Australian Ad Observatory: Key Insights and Future Plans will be held on Thursday 11 July, 11am to 1pm. Register here to join this webinar event.

SEE ALSO

ADM+S Associate Investigator Sarah Erfani receives Women in AI Award for Defence and Intelligence


Author Natalie Campbell
Date 4 July 2024

Congratulations to Assoc Prof Sarah Erfani from the University of Melbourne, who was awarded the 2024 Women in AI Award for the ‘Defence and Intelligence’ division.

Announced at the 28 June ceremony, the awards are the most prestigious recognition for Women in Artificial Intelligence (AI) in the Asia-Pacific region, honouring those working, leading, researching, creating, or innovating in the field of AI.

“I am honoured to receive this award. This recognition reinforces my commitment to fostering trust and confidence in AI systems among the general public, policymakers and stakeholders,” Associate Professor Erfani said.

“When individuals feel assured that AI technologies are developed and deployed responsibly, they are more likely to embrace their use and adoption.”

Sarah is an ARC DECRA Fellow in the School of Computing and Information Systems, where her research focuses on promoting transparency in AI systems and guaranteeing their accuracy, enabling stakeholders to trust and validate the reasoning behind AI-driven outcomes, and confidently use them in their daily tasks.

Her work on safe and reliable AI has made important theoretical and practical contributions that are used by practitioners in domains such as telecommunications, health, and energy.

At ADM+S, Sarah is a researcher on the Generative Authenticity project, and the GenAI Sim project.

Women in AI is a non-profit organisation founded in 2016 to work towards inclusive AI that benefits global society and promotes empowerment, knowledge and active participation through education, research, events and blogging.

SEE ALSO

If Meta bans news in Australia, what will happen? Canada’s experience is telling


Author Axel Bruns
Date 2 July 2024

At a parliamentary hearing late last week, Meta once again suggested it could ban links to news on Facebook and Instagram in Australia.

This would repeat the ban it enacted for more than a week in February 2021. That ban was in response to the introduction of the News Media Bargaining Code, an Australian law designed to force digital platforms to pass on some of their advertising earnings to news publishers.

A similar law – based on this code – was passed in Canada last year. As a result, in Canada news has been blocked from Meta platforms since August 2023.

This has produced strongly negative results for Canadian news outlets. Not only has the Canadian law failed to produce revenue flows from Meta to news producers, it has also severely reduced incoming user traffic to their websites from Meta’s social media platforms.

What happened after the news ban in Canada?

The ongoing news ban in Canada has had several key effects. First, the removal of direct links to news articles meant a collapse in user visits to news sites. Those who once occasionally clicked on a news link in their feed can no longer do so.

This has especially affected regional and local news sites, for whom Facebook is often a key source of audience traffic. At a time when regional and rural areas of both Canada and Australia are already in danger of turning into “news deserts”, this is particularly concerning.

News outlets and audiences have worked around the bans to some extent. They’ve found circumvention techniques, such as posting article content without links, or article screenshots.

But such tricks can never fully replace the audience attention that has been lost. They also don’t help news outlets generate revenue for their content (as website traffic does through ads).

Instead, the main replacement for news coverage on Facebook has been political discussion that doesn’t directly reference or link to the news it draws on. This disconnection also opens the door for the circulation of well-meaning misinformation or deliberate disinformation.

Ultimately, the users of Meta’s platforms who suffer the most are those who are least interested in the news and who believe “news will find them”.

Highly invested news consumers will always find the news somewhere else. Those who see news only when people in their networks share articles will miss out, and may not even notice what they’re missing.

News is already hard to find on social media

Social media users are on these platforms for many purposes other than following the news. Most Australians don’t actually care much for news in the first place.

According to this year’s Digital News Report Australia, 68% of Australians actively avoid the news, and 41% suffer from news fatigue. After years of wall-to-wall reporting about pandemic, ecological, domestic violence, financial and military crises, this is hardly surprising.

Australia’s News Media Bargaining Code was conceived with a flawed assumption that social media play a central role as a conduit to news content, and that Facebook wouldn’t follow through on its threats to ban news.

But Facebook’s parent company Meta did exactly that, and shows no signs of changing that approach. Indeed, even where it doesn’t actively ban news content altogether, it is now substantially reducing news visibility in the feeds of its users.

This is because news has long tended to be more trouble for Meta than it’s worth. Not only is news a minute subset of all Facebook content, but it also generates an out-sized amount of unhappiness and controversy that requires costly moderation.

Meta also knows that reducing the visibility of news on its platforms doesn’t substantially impact on user experience. By its own calculations, only some 3% of the posts Facebook users see in their feeds contain links of any kind.

This can’t be independently verified without greater data access for independent researchers than the company currently provides, but certainly aligns with the everyday experience of ordinary Facebook users. Even of these 3% of posts, only a fraction link to news sources, let alone Australian news sources.

Our own analysis during the brief Australian news ban in February 2021 showed only a very minor impact on the posting and engagement patterns on Australian Facebook pages. Many users may not even have noticed news was suddenly missing from their feeds.

What can Australia do now?

In 2021, the news ban was temporarily resolved by Meta agreeing to voluntarily make some payments to a select few Australian news organisations.

In exchange, the then Morrison government elected to not “designate” Meta under the bargaining code, meaning the provisions didn’t apply to Meta’s platforms. These agreements are now coming to an end and Meta has already stated it has no interest in renewing them.

This gives the Albanese government the choice between applying the code to Meta after all, or allowing the agreements to expire without consequence. The latter would effectively kill off the News Media Bargaining Code as a meaningful piece of legislation.

Formally “designating” Meta to make it pay news publishers is likely to backfire. Meta is building an obvious argument here: if its platforms carry only a limited amount of Australian news content, why should it be forced to share revenue with Australian news publishers?

Both in the court of public opinion and in any legal proceedings it may pursue, such an argument is likely to prove highly persuasive.

A smarter solution to support local news

Australian news media need financial support, but the bargaining code was always severely flawed legislation. It should be abandoned at the earliest opportunity.

There is a better way for the Albanese government to tackle the real issue at stake: media revenue.

Right now, most Australian news media outlets are struggling to survive. Since news media moved online, audiences now expect news for free and most readers are not willing to pay. That leaves many publications without a sustainable business model and in need of public subsidy.

But we don’t usually provide subsidies by forcing profitable companies to negotiate directly with unprofitable ones, like the News Media Bargaining Code does. An alternative model is needed.

One option could be to use the corporate tax generated from digital platforms to support public-interest journalism by Australian media organisations. This would mean taxing the platforms’ revenues appropriately and fairly in the name of Australian citizens and in the national interest.

However, this would also require a stronger quality framework for what constitutes public-interest journalism. The latest round of journalism lay-offs in Australia shows we are rapidly running out of alternatives if we want to sustain quality, diverse Australian news content into the future. The Conversation

Axel Bruns, Professor, Digital Media Research Centre, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

The ARC Centre of Excellence for Automated Decision-Making and Society announces new key research projects


Author ADM+S Centre
Date 1 July 2024

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) has launched new key projects responding to the complex challenges and opportunities that emerging automated decision-making and artificial intelligence systems present. These projects mark the second half of the Centre’s life.

Drawing on international perspectives from academic partners, organisations, and collaborators, the new research projects will address high-level challenges of automated decision-making in society, from generative authenticity, regulation, and cultural curation to inclusive AI.

Director of the ADM+S Centre Distinguished Professor Julian Thomas said, “The Centre has moved from the initial work of discovery and investigation to a new set of larger projects, integrating research across the humanities, social sciences, computing and data sciences.

We will be investigating the social, cultural, regulatory, and industry aspects of new technologies and thinking through how Australia can best respond to ensure that those technologies are deployed in ethical, responsible, and inclusive ways.”

Since 2020, researchers at the ADM+S Centre have been working to map the expanding reach of automated systems and gauge their impacts across Australia. 

We have established a range of sociotechnical approaches, including tools and frameworks, for understanding and addressing the impacts of automation on society.

Building on existing tools and research, these innovative projects will use synthetic data to simulate and predict policy outcomes, map the ecological impacts of ADM use, and provide unprecedented observability of platform operations, news content curation, and accessibility, among other aims.

“These projects represent the knowledge, expertise, collaborations and capabilities generated during the first phase of the Centre’s research program and will enable us to work more closely with some of our key partners across industry, technology, government and the community sector.”

Using a participatory approach to engage communities, organisations, and civil society impacted by these technologies, our research brings an independent perspective to the development and application of AI and ADM tools to ensure they are responsible, ethical, and inclusive.

The new research program seeks to provide transformational insight into addressing higher-level challenges of automated decision-making in society. 

“The Centre’s new projects will enable us to make sure that the work we do has the best possible impact in terms of influencing Australia’s response to this emerging new tech landscape,” said Professor Thomas.

Hear from our project leads about the new key projects in this video


Visit the Project Pages for more information:

SEE ALSO

ADM+S Research Featured at the 74th Annual International Communication Association Conference


Author Natalie Campbell
Date 1 July 2024

The 74th Annual International Communication Association (ICA) conference was held on the Gold Coast in Queensland from 20-24 June, where ADM+S members from across Australia showcased their research around the 2024 theme of ‘Communication and Global Human Rights’.

The ICA conference is the premier annual event for scholars and professionals in the field of communication, where hundreds of research papers are presented to an average attendance of over 2,000 academics.

The ARC Centre of Excellence for Automated Decision-Making and Society held a strong presence throughout the program. 

ADM+S Associate Director Distinguished Professor Jean Burgess was invited to deliver the annual ICA Steve Jones lecture. Prof Burgess’ talk, titled ‘Why the GenAI Moment Needs Communication and Media Studies’, covered the widespread integration of artificial intelligence (AI) tools in everyday apps and services, highlighting the importance of inclusivity, accessibility, transparency, and explainability.

“It was wonderful, if intimidating, to have such a prestigious opportunity and such a large platform,” said Prof Burgess.

“I used the occasion to share my thoughts on how our field might respond to and help shape the unfolding configurations of GenAI in our communication and media environment, and how the various projects, Centres and labs I’m involved in are beginning to do so.

“I was really touched to have so many colleagues from the QUT Digital Media Research Centre and ARC Centre of Excellence for Automated Decision-Making and Society come along in support. I couldn’t be prouder to be a member of this community.”

ICA Awards

ADM+S Affiliate Dr T.J. Thomson was awarded a Top Faculty Paper in the Journalism Studies division for his paper ‘Generative Visual AI in Newsrooms: Challenges, Opportunities, Perceptions, and Policies’ co-authored by Assoc Prof Ryan Thomas and Phoebe Matich.

The paper explores how photo editors perceive and/or use generative visual AI in their editorial operations and outlines the challenges and opportunities they see for the technology.

Additionally, Ehsan Dehgan, Dominique Carlon, Ashwin Nagappa and Kateryna Kasianenko received a Top Paper award in the Intergroup Communication division for their paper ‘A Culture War without a Battlefront: Sedimented Polarisation across Political Subreddits’, which analysed 16 years’ worth of submissions across 11 political subreddits.

ICA Presentations

The following ICA sessions featured ADM+S researchers. To view all speakers and session abstracts, see the 2024 ICA program.

  • Aging with Technology: Multiple Interfaces for Social Connections
    ADM+S presenter: Anthony McCosker (chair)
  • Business and Pleasure: Queer Perspectives on Work, Health and Desire
    ADM+S presenters: Kathy Albury and Zahra Stardust (From Commodified Pleasures to Improvisational Desires: Countersexual Uses and Experiences of Sextech by LGBTQ+ People)
  • Communication and Knowledge in an Age of AI Philosophy
    ADM+S presenter: Mark Andrejevic (Automated Parasociality: From Personalization to Personification)
  • Covering the Climate Crisis
    ADM+S presenter: Michelle Riedlinger (Medialization Works Both Ways: Describing the Scientization of Journalism)
  • Critical Perspectives on Health and Popular Media
    ADM+S presenter: Wenqi Tan (Representations of Cyborgs and Disability in the Worlds of Cyberpunk 2077 and Citizen Sleeper)
  • Cultural Production and Generative Artificial Intelligence: A Matchpoint for Creativity
    ADM+S presenter: Jonathon Hutchinson (chair; The Match Point for Creative Work: Generative AI in China’s Live E-Commerce Industry)
  • Datafication: Ethical, Political, and Cultural Questions Philosophy
    ADM+S presenter: Mark Andrejevic (chair)
  • Disability Rights are Human Rights: Disability Research Across Communication Studies
    ADM+S presenter: Gerard Goggin (chair)
  • Disability Rights, Communications and Technology
    ADM+S presenters: Gerard Goggin (chair), Alexa Scarlata (Disability Rights, Media Accessibility and Smart TVs) and Wenqi Tan (Interrogating the Autonomous Dream: An Instrumentalization Theory Approach to Examining the Inclusion of People with Ambulatory Disabilities in Singapore’s Autonomous Public Transport Development)
  • Disability Rights, Social Justice and Activism
    ADM+S presenter: Gerard Goggin (chair)
  • Exploring New Strategies, Methods and Technologies to Track and Counter Mis- /Disinformation
    ADM+S presenters: Daniel Angus, Ashwin Nagappa, Axel Bruns, Nadia Jude (“What Else Are They Talking About?”: A Large-Scale Longitudinal Analysis of Misinformation Super-Spreader Communities on Facebook), and Damiano Spina (Human-AI Cooperation for Tackling Misinformation).
  • Follow the Money: Markets and Monetization in Media Industries
    ADM+S presenter: Ramon Lobato (chair)
  • Generating Trust through Generative AI?
    ADM+S presenters: Ned Watt, Silvia Montana-Nino and Michelle Riedlinger (Generative AI and Fact Checking in the Southern Hemisphere: Insights from a Regional Comparison of Meta-Affiliated Fact Checkers)
  • Global Media Witnessing and the Struggle for Human Rights
    ADM+S presenter: Michael Richardson (Witnessing Aftermaths)
  • High-Tech Journalism
    ADM+S presenter: Wiebke Loosen (From Innovation Labs to Innovation Systems in Public-Service Media)
  • Histories and Archaeologies of Digitization
    ADM+S presenter: Gerard Goggin (chair)
  • Journalism: Theories and Paradigms
    ADM+S presenters: Silvia Montana-Nino, Michelle Riedlinger and Ned Watt (Understanding Contemporary Verification Cultures: Informing a Theory of Institutionalized Fact-Checking Values in Times of News Platformization)
  • Living in a Datafied Society: Surveillance, Algorithms, and the Transformation of Everyday Materiality in China
    ADM+S presenter: Haiqing Yu (discussant) 
  • NZCA – Australia’s Media and Communication Ecology and the 2023 Voice Referendum
    ADM+S presenters: Timothy Graham and Bronwyn Carlson (panel participants)
  • Observing the Cultural Practices of TikTokers in Views of Platform Algorithm and Global Human Rights
    ADM+S presenter: Haiqing Yu (Claiming Identity and Nationalism on TikTok: A Case Study of Rohingya Digital Diaspora)
  • Online Deliberation, and Media in Civic Engagement
    ADM+S presenter: Lucinda Nelson (Depp v Heard: Cancel Culture and Online Discourses about Violence against Women)
  • Polarization and Partisanship
    ADM+S Presenter: Axel Bruns (Polarised Media Framing of Climate Protests: A Comparative Mixed-Methods Analysis of Australia and Germany)
  • Prompting Progress or Generating Problems? AI in News Construction Processes
    ADM+S presenters: Axel Bruns (chair), T.J. Thomson (Generative Visual AI in Newsrooms: Challenges, Opportunities, Perceptions, and Policies)
  • Questions and Research Directions in Communication Studies and Global Human Rights, ICA Closing Keynote
    ADM+S presenter: Gerard Goggin (panelist)
  • Storytelling on Steroids? Video and Audio Technologies for Journalism
    ADM+S presenter: T.J. Thomson (Visual News and Journalistic Practice in Urban and Regional Areas: A Comparative Australian-Chinese Perspective)
  • Streaming Diversity? On and Off-Screen Diversity in an Era of Automated Media Culture
    ADM+S presenters: Kylie Pappalardo (Policy and Regulatory Challenges for Improving Representation Diversity on Our Screens), Alexa Scarlata (Streaming Women: Gendering SVOD Curation from Netflix to Passionflix), and Verity Trott (Defining and Doing Diversity)
  • The CAP Roundtable on 30 Years of the Chinese Internet and Beyond
    ADM+S presenter: Haiqing Yu (participant)
  • The Possibilities and Perils of Generating News with Generative AI
    ADM+S presenters: Ned Watt and Michelle Riedlinger (The Fact Checkers’ “Helper”: Fact-Checking Imaginaries for Generative AI Technologies)
  • The Very Picture of Health: Images and Well-Being
    ADM+S presenter: T.J. Thomson (chair)
  • Top Papers in Media Industry Studies
    ADM+S presenters: Ramon Lobato and Alexa Scarlata (Smart TV Users and Interfaces: Who’s in Control?)
  • Understanding Laws and Regulations for Children’s Media Use
    ADM+S presenter: Jonathon Hutchinson (Social Digital Dilemmas: Young People’s and Parents’ Negotiation of Emerging Online Safety Issues)
  • ICA24 Sunday Fellows’ Session
    ADM+S presenter: Jean Burgess (panelist)

Lastly, following the recent launch of the Australian Internet Observatory, Program Director and ADM+S Research Fellow Amanda Lawrence spoke about the new initiative in a panel organised and chaired by ADM+S Chief Investigator Prof Daniel Angus, titled ‘Supporting the Stack: Considerations in the Ongoing Development, Deployment and Maintenance of Computational Communication Research Infrastructure’.

This panel focussed on the importance of larger scale software infrastructure for research, and also featured ADM+S members Dr Laura Vodden, Dr Abdul Karim Obeid, Dr Elizabeth Alpert, alongside Jane Tan (QUT) and international colleagues Megan Brown, Dr Josephine Lukito, and Jason Greenfield (New York U).

The significant contributions made by ADM+S researchers at ICA 2024 exemplified the Association and ADM+S’ shared commitment to advancing theoretical frameworks and strategies for communication studies.

Learn more about the International Communication Association.

SEE ALSO

ADM+S Research Fellow featured in the World Association for Sexual Health’s Sexual Rights Webinar


Author Natalie Campbell
Date 21 June 2024

ADM+S Research Fellow Dr Zahra Stardust recently joined the World Association for Sexual Health as a panellist for their 10-year anniversary of the WAS Declaration of Sexual Rights Webinar.

Speaking alongside Faysal El Kak, Sharful Islam Khan, Mauro Cabral Grinspan, Anne Philpott, and moderator and Chair of the WAS Sexual Rights Committee, Eszter Kismödi, the panel reflected on the past decade of challenges and triumphs in sexual health.

Speakers discussed critical themes such as the influence of technology on sexual rights, the effects of migration and war, and the ongoing challenges of criminalisation.

“Over the last decade we’ve seen rapid advances in the pace of technology, and in some contexts, it’s played a role in facilitating access to healthcare,” said Dr Stardust.

“Digital tools are being developed as alternative methods in environments where there are criminal legal frameworks, government neglect or limited infrastructure such as apps for abortion self-care, chatbots for sex education, or tools to screen and predict health issues like infertility.

“However, there remain a lot of concerns about the design, data and governance of such technologies, including risks around breaches of sensitive data.”

Dr Stardust is a socio-legal scholar working at the intersections of sexuality, technology, law and social justice.

Over the last 15 years Zahra has worked in policy, advocacy, legal and research capacities with community organisations, NGOs and UN bodies on human rights in Australia and internationally.

WAS is a confederation representing thousands of people who work in sexual health globally, including healthcare professionals, educators and activists, actively creating a world in which all people have access to sexual health, rights, justice and pleasure.

View the webinar on YouTube.

SEE ALSO

New major infrastructure initiative for social data and digital platform research in Australia


Author ADM+S Centre
Date 17 June 2024

Announced today, the Australian Internet Observatory (AIO) is a major new research infrastructure initiative that will open up the ‘black box’ of digital platforms and their algorithms.

Digital platforms play a critical role in Australia’s economy and society, yet our capacities to collect and analyse data from digital platforms and observe their activities are very limited.

The Australian Internet Observatory will develop the tools and capabilities required to gather and analyse online user experience data, algorithms, and interactions. It will support innovative approaches to the collection and analysis of digital social data and internet platforms and the analytical tools and governance required to support cutting-edge research on social, economic, health and environmental issues.

Distinguished Professor Julian Thomas, AIO Program Lead and Director of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) said, “Over the last decade there’s been a dramatic transformation in how Australians use digital platforms, how they interact with automated systems and the digital economy, and how they communicate with machines and each other. Every day, we are now using more platforms, more intensively, for a wider range of activities.”

“But as researchers we’ve had very little visibility of how digital platforms work.”

The Australian Internet Observatory comprises a range of new tools that give researchers visibility, for the first time, over how people use critical everyday services such as search engines, social media, video on demand services, messaging systems, and other digital services.

“We realised that there was a real need for new research infrastructure when we were developing some of our projects in the ADM+S Centre.

“We were interested in particular problems, such as the kinds of ads Australians see when they use online platforms, and the lack of regulatory oversight in areas such as gambling, alcohol, or unhealthy foods. We knew that there were few reliable or accurate tools for gathering that kind of information, and that better tools would be useful for many researchers.

“With the support of the Australian Research Data Commons, the Australian Internet Observatory brings together a group of researchers and universities with the capabilities to assemble the necessary techniques and systems. It’s not a problem that can be solved within any one discipline or research centre. It requires a collaborative, co-operative effort.”


The AIO is a four-year national research infrastructure project that will create an interconnected ecosystem of people, data and tools to support innovative approaches to the collection and analysis of digital social data across a range of disciplines and sectors. It will enable researchers to explore topics such as the distribution of misinformation, the patterns of everyday engagement with business, culture and science, flows of communication in emergencies and humanitarian crises, and the dynamics of political conflict and consensus.

The AIO is an initiative of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) in collaboration with researchers and research centres, university partners and organisations across Australia and internationally.

The facility will be developed and led by RMIT University in partnership with QUT, The University of Queensland, University of Melbourne, Swinburne University of Technology and Deakin University. The AIO is supported by the Australian Research Data Commons (ARDC), enabled by the Australian Government’s National Collaborative Infrastructure Strategy as part of the Humanities and Social Science (HASS) and Indigenous Research Data Commons. 

Jenny Fewster, Director, ARDC HASS and Indigenous Research Data Commons said, “The Australian Internet Observatory is building on a strong foundation of digital research infrastructure to establish a national, joined-up ecosystem that will enable exciting research. It is a vital new part of the suite of research-accelerating national infrastructure within the HASS and Indigenous Research Data Commons.”

Key deliverables include:

  • Data governance, ethical and legal frameworks and guides 
  • A national research training program
  • Citizen science data donation program
  • An integrated suite of data sourcing and data donation tools including browser extensions, data donation packages and APIs
  • Generative AI models for text, audio and image generation
  • Test environments and simulation tools
  • An integrated suite of open source machine learning tools and data visualisations

Outcomes for the wider community include better-informed decision-making and public policy; greater democratisation, participation and public debate in the digital sphere; improved digital capabilities and inclusion; and greater platform accountability and transparency.

Visit the Australian Internet Observatory website.
View the Australian Internet Observatory video.

SEE ALSO

Age verification for pornography access? Our research shows it fails on many levels


Authors Zahra Stardust & Alan McKee
Date 11 June 2024

The Australian government has announced a A$6.5 million trial of “age assurance” technology to restrict minors’ access to pornography. It’s part of a $1 billion package to address gendered violence. And it now comes alongside a proposal to ban people under 16 from social media.

The government will consider various types of “age assurance” methods, such as matching drivers’ licences, credit cards or passports against government databases. It may also explore analysing biometric information (such as faces, fingerprints or voices), and profiling online behaviour (like username, browsing history and cookie data). Each has different privacy risks.

While the government refers to these tools as “age assurance”, many of them are more accurately called “age estimation”.

Published in Big Data and Society, our new study of one common facial age estimation tool shows such technologies are unreliable and exhibit racial and gender bias.

They are also undesirable – they make pornography a political scapegoat for gendered violence and divert resources from evidence-based strategies that can actually help.

Framing pornography as the problem

The link between pornography and sexual violence is tenuous. In part, this is because existing research often conflates kink with violence and assumes porn causes misogyny.

Pornography is not a homogeneous category. It includes horror, comedy, romance and documentary, and porn creators are highly diverse.

Sexually explicit media can play a role in affirming bodies and desires of people excluded from mainstream media.

Despite this, various narratives are used to justify the increasing regulation of pornography. This includes construing porn as a public health crisis. The idea of “porn addiction” has also been shown to lack methodological rigour.

The idea to “face scan people watching porn” was first raised by then-Minister for Home Affairs Peter Dutton in 2019, the same year the government tried to introduce a national facial recognition scheme to match people’s identities across government agencies.

Furthermore, research into pornography consumption shows that young adults are media literate, critical consumers. Pornography can be a source of arousal, laughter, bonding or stress relief.

The technical limits of age estimation

Civil society groups have cited privacy and feasibility concerns about age estimation tech. These include:

  • accessibility issues for people without identity documents
  • the potential burden on small, low-income websites
  • queries about what data could be collected, sold or exploited
  • and the likelihood of circumvention.

In the eSafety Commissioner’s own research, young people “expressed their right to safe, autonomous sexual development and exploration”. They were concerned age assurance is of limited efficacy and comes with privacy and security issues.

Age estimation software that uses facial recognition relies on stereotypical indicators of age, such as hair, wrinkles and jawlines. These are highly variable – for example, wrinkles can be altered by cosmetics or injectables.

Studies also indicate that facial recognition software often has a significant racial and gender bias.

In our research, our colleague Abdul Obeid used a neural network to analyse a data set of 10,139 images. He found the model was most accurate in estimating age in the “Caucasian” category and least accurate in the “African” category.

Boys were more likely to be misclassified than girls, especially in the 0–12 age bracket. People aged 26 and over were generally misclassified as younger, sometimes by as much as 40 years.

Age estimation is already a fraught task when done by humans, who regularly misjudge age. It is no better when done by machines.

Supporting healthy sexual development

Overall, age-based restrictions on access are unlikely to stop people from viewing porn. Teenagers can easily avoid age verification and may even get around age checks using the dark web, putting them at greater risk of encountering child abuse images.

Young people often think about harm very differently from their parents. Sometimes, blurry understandings of “harm” from the media and angry responses from parents bother young people more than the actual porn they encounter.

The best approach to supporting healthy sexual development for young people is to “talk soon, talk often” with them about sex, especially if they can do so openly with trusted adults.

Part of healthy sexual development is understanding how sexual representations are shaped through media and culture. Porn literacy – a subset of media literacy – is about reading porn well rather than taking an abstinence-based approach.

Evidence-based alternatives

Restricted-access approaches make a crude distinction between people over or under 18. But the various age groups under 18 have very different needs in relation to sex and relationships. Importantly, this includes 16- to 17-year-olds who can legally consent to sex.

For pre-pubescents, the biggest risk factor involving pornography is when adults use these materials to commit sexual assault. This shows governments must invest in community-led prevention and frontline services.

Meanwhile, post-pubescents need comprehensive sex and relationship education appropriate for their development. Its focus should be on providing the information they actually want, including about consent, communication, gender diversity, non-monogamy, sexual experimentation and sexual autonomy.

Instead of barring under-18s from all porn, a more impactful approach would be to facilitate access to diverse sexual representations. This includes measures such as preventing media monopolies from dominating the pornography market and supporting worker-owned platform cooperatives to flourish. It includes ending financial discrimination against sex workers and decriminalising porn production.

Importantly, addressing gendered violence requires actioning the recommendations of First Nations women, who remain the most affected by family, police and carceral violence.

Age estimation for pornography access is not an easy fix for gendered violence. It will not support young people to contextualise the sexual media they come across. It will not address structural factors behind gendered homicide and sexual violence, including racism and misogyny. In reality, it will only introduce more problems, and at great cost – political and financial.

Zahra Stardust, Postdoctoral Research Fellow, ARC Centre of Excellence for Automated Decision-Making and Society, Queensland University of Technology and Alan McKee, Head of School of Art, Communication and English, Faculty of Arts and Social Sciences, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

New podcast episodes from an ADM+S/PERN collaboration


Author Natalie Campbell
Date 31 May 2024

The ADM+S podcast has released five new episodes following the 25-26 April event ‘Digital Platform Economies: Value from Data’, in collaboration with PERN.

The event was held at The New School in New York City, featuring a program of speakers from both the ARC Centre of Excellence for Automated Decision-Making and Society, and PERN.

Designed to stimulate discussion about value forms and valuation processes through the lenses of digital assets, Web3 tokenization, digital twins, automated optimization, and generative AI, each session considers the question: how do platforms produce value and monetize those value forms?

New episodes:

 

  • Web 3: Creating Economies in Digital Worlds
    Featuring Kean Birch (PERN), Fabio Mattioli (PERN/ADM+S), Ellie Rennie (ADM+S), Kelsie Nabben (ADM+S), and moderated by Janet Roitman.
    This session examines the following questions: How are Web3 digital economies designed? What processes, infrastructures, and practices are implicated in these designs? What forms of ‘new’ value are emerging? What forms of value are increasingly irrelevant? And what methods are applicable to the examination of these domains?

 

  • Digital Twins
    Featuring Michael Richardson (ADM+S), Zoe Horn (ADM+S), Mark Andrejevic (ADM+S), and moderated by Seyram Avle (PERN).
    This panel asks: How do digital twins generate value? How are they imagined to reshape labour, logistics, and future planning? What regulatory interventions are needed as government and industry are increasingly drawn to the lure of digital platforms for modelling futures and modulating the real? What multidisciplinary methods of analysis and lines of inquiry are relevant to this emerging domain?
  • Value Propositions in Platform Regulation
    Featuring Jake Goldenfein (ADM+S/PERN), James Meese (ADM+S/PERN), Thao Phan (ADM+S), Angela Xiao Wu (PERN), and moderated by Linda Huber (PERN).
    This session addresses the following questions: How do particular value propositions justify specific governance and managerial interventions? How do market-framing narratives (e.g., the data market) become dominant? What are their expressions in different contexts? How do these approaches embed diverse strategies for distributing regulatory and civic functions between private and public actors?

 

  • Concept Work for Platform Economies
    Featuring Na Fu (PERN), Koray Çalışkan (PERN), Franziska Cooiman (PERN), Silvia Lindtner (PERN), Janet Roitman (ADM+S/PERN), and moderated by Emma Park (PERN).
    The session examines how core concepts, such as commodity, capital, labor, rent, data, and information, operate with reference to specific platform contexts. The aim is to consider how each case either challenges or confirms conventional understandings of particular concepts and to stimulate general discussion of theoretical challenges and research methods.

Listen on the ADM+S Podcast.

View photos from this event on Flickr.

SEE ALSO

ADM+S Student presents at the 2024 World Wide Web Conference in Singapore


Author Natalie Campbell
Date 31 May 2024

ADM+S PhD Student Chenglong Ma recently presented at the 2024 World Wide Web Conference (WWW’24) in Singapore.

Chenglong presented a poster and oral presentation on Temporal Conformity-aware Hawkes Process on Recommendations, which challenges the assumption that user behaviour in recommender systems is solely driven by personal interests, and highlights the influence of peer effects and conformity behaviour.

His work criticizes existing solutions that overlook this influence and introduces the TCHN model which employs attentional Hawkes processes to separate user self-interest from conformity, and temporal graph attention networks to capture users’ changing dynamics.

“I’m thrilled that my work on the Temporal Conformity-aware Hawkes Process on Recommendations received significant attention and valuable feedback,” said Chenglong.

“It was a wonderful experience, and I had the opportunity to meet many outstanding researchers.”

WWW’24 is an annual academic conference on the future direction of the World Wide Web. It remains the premier venue for presenting and discussing progress in research, development, standards, and applications of topics related to the Web.

In addition to making important connections during the conference, Chenglong also connected with leading researchers in his field while visiting Nanyang Technological University, including Prof Aixin Sun.

“We share common views on research issues in recommender systems, and I greatly admire his rigorous and critical thinking.

He criticised the simplification of research task definitions for overemphasising modelling decision outcomes rather than the decision-making process. This approach hinders the ability to predict users’ decisions in subsequent interactions within a dynamic, evolving, and application-specific context.”

Prof Sun praised Chenglong’s ability, as a PhD student, to discover and identify valuable research questions in addition to merely solving the problems, noting that some studies overly focus on improving recommendation accuracy, neglecting research questions that are more valuable and worthy of exploration.

Chenglong’s WWW’24 experience was supported by ADM+S.

SEE ALSO

Decoding Canada’s Directive on Automated Decision-Making: A blueprint for AI ‘guardrails’?


Author ADM+S Centre
Date 31 May 2024

The pace at which advances in generative AI are being made accessible by companies, without perceived oversight, has sharpened the focus of governments worldwide on ensuring there are sufficient ‘guardrails’ for the development and deployment of AI. 

It’s a good time to assess potential approaches to AI regulation in Australia.

The Safe and Responsible AI in Australia discussion paper released by the Federal Government last year proposed a risk-based approach, focused on setting up additional guardrails to reduce the likelihood of harms occurring in high-risk settings in the development and deployment of AI.

The discussion paper gave extensive consideration to Canada’s Directive on Automated Decision-Making (ADM) which focuses on processes that encourage fairness, accountability and transparency in government decision-making, rather than prohibiting particular use cases or outcomes.

In a recent article Decoding Canada’s Directive on Automated Decision-Making: A blueprint for AI ‘guardrails’? Research Fellow Dr Henry Fraser from the ARC Centre of Excellence for Automated Decision-Making and Society and QUT Law students Jacqueline McIlroy and Sara Luck discuss the Canadian Directive on ADM and what it reveals about the strengths and limitations of a ‘guardrails’ approach to AI regulation.

“Deciding which risks from AI are acceptable, and which are not, is incredibly challenging,” write Fraser, McIlroy and Luck.

“The Directive offers a solid starting point for AI regulation, engaging deep policy questions about rights, safety, efficiency, public interests, and social justice.”

To illustrate the practical application of these guardrails, the authors imagine how the ADM Directive’s requirements might have impacted the notorious Robodebt system. 

As Australia moves forward in shaping its AI regulatory framework, the lessons from Canada’s ADM Directive provide a valuable blueprint. 

Read the full article here.

SEE ALSO

ADM+S research informs Senate report on Bank Closures in Regional Australia


Author ADM+S Centre
Date 30 May 2024

On Friday 24 May 2024 the Senate Rural and Regional Affairs and Transport References Committee delivered its final report on Bank closures in regional Australia, citing the ADM+S submission, as well as evidence provided by Centre Director Prof Julian Thomas.

The inquiry addressed the current extent of bank closures in regional Australia, including reasons for closure, the economic and welfare impacts on communities, the effectiveness of government banking statistics in capturing and reporting regional service levels, and possible solutions.

The ARC Centre of Excellence for Automated Decision-Making and Society’s submission to the inquiry detailed findings from two key programs at the Centre: the Australian Digital Inclusion Index (ADII) and the Mapping the Digital Gap project.

The submission is referenced throughout the Committee’s report in relation to digital connectivity; scams and fraud; impacts on older Australians; service access in remote communities; and the pace of closures and the digital divide.

The report also includes quotes from Centre Director Prof Julian Thomas, who provided evidence to the inquiry at a public hearing in February 2024.

In the report, Prof Thomas is quoted: “As our society and economy transitions more to digital services, those who are somewhat behind can fall further behind very quickly.

“That’s really the difficulty, that what would have been an adequate internet service 10 or 15 years ago is no longer really sufficient for the provision of the sorts of digital services which governments and organisations like banks are now providing. 

“It’s that moving target problem which is the issue here.”

View the full report.

SEE ALSO

New report highlights the environmental benefits and costs of generative AI


Author Kathy Nickels
Date 23 May 2024

Generative AI technology is hyped for its promise in contributing to sustainability and environmental health – but what are the real costs of manufacturing, training and using these technologies on the environment?

A new report Generative AI Technologies Applied to Ecosystems and the Natural Environment, released today from the ARC Centre of Excellence for Automated Decision-Making and Society provides a scoping review of the literature on the ways that novel generative AI tools are being applied to living things and other elements of ecosystems and the natural environment. 

Authored by ADM+S Chief Investigator Professor Deborah Lupton and former ADM+S postdoctoral fellow Dr Ella Butler from the University of New South Wales, this report details the deployment of generative AI in sustainability projects and biodiversity conservation as well as how this use creates environmental impacts such as increased carbon emissions, energy use and water consumption. 

Professor Lupton and Dr Butler suggest that while there are many potential benefits, the vested interests underlying major commercial initiatives to apply generative AI to ecosystems should be closely examined.

“As these generative AI and LLM tools continue to develop, detailed examination of the assumptions underpinning their design and applications in relation to ecosystems is crucial, to limit human tendencies towards exploitative, extractivist, and potentially deeply harmful approaches to other species and the natural world,” said Dr Butler.

The report outlines various uses of generative AI aimed towards animals, plants, biodiversity, conservation and climate change. It concludes with some considerations of the impacts on the natural environment of these tools and the need for ethical consideration of how they are deployed.

As with all AI tools, social, geographical and political context is everything when considering the potential benefits and harms of generative AI used for ecological purposes. Professor Lupton emphasised that, “unless these situated experiences and differences are acknowledged in future research, the possibilities, risks, accessibility and impacts of generative AI when applied to ecosystems and the natural environment will not be fully recognised or addressed.”

Read the full report on the APO.

SEE ALSO

ADM+S members recognised at RMIT Annual Research Awards


Author Natalie Campbell
Date 23 May 2024

Congratulations to ADM+S researchers and the ADM+S operations team, who have been recognised for their impact and engagement at the 2023 RMIT Vice Chancellors Awards, RMIT Research Awards, and Research Service Excellence Awards, held at RMIT University on Tuesday 30 April 2024.

  • Mapping the Digital Gap research team, Vice Chancellors Award for Research Engagement and Impact (team category). 

The Mapping the Digital Gap team, consisting of Dr Daniel Featherstone, Dr Lyndon Ormond-Parker, Professor Julian Thomas, Dr Indigo Holcombe-James and Dr Jenny Kennedy, received this award in recognition of their significant research impact and engagement in their work, addressing the lack of longitudinal digital inclusion data in remote First Nations communities.

Mapping the Digital Gap project lead Dr Daniel Featherstone said, “I want to acknowledge the community organisations and local co-researchers that make the Mapping the Digital Gap project possible. Those partnerships allow us to engage and have an impact on the ground, as well as through policy. 

“We’re grateful to Telstra for their ongoing support, to the ADM+S community, and to the policy commitment from States, Territories, and industry in supporting progress on Closing the Gap target 17.”

The award was received by Daniel Featherstone on behalf of the Mapping the Digital Gap team.

 

  • Kelsie Nabben, Vice Chancellors Prize for Research Engagement and Impact (Higher Degree by Research category).

Kelsie was awarded this prize for demonstrating research engagement and impact in the social implications of emerging technologies. The award acknowledges her interdisciplinary collaboration and innovation at the confluence of economics, engineering, law, and online communities.

Throughout her PhD program, Kelsie’s work has attracted attention from industry leaders who recognize the practical application of her work in shaping technology governance frameworks relevant to their sectors, allowing her to engage with industry and demonstrate impact in her work.

Kelsie Nabben receiving the Vice Chancellors Prize for Research Engagement and Impact.

Deputy Vice-Chancellor of Research and Innovation Calum Drummond said, “The nomination recognises your exceptional support for research and innovation across every aspect of the ADM+S Centre’s work. Your solutions-oriented, innovative approach in supporting initiatives for the research community, strengthening relationships, and thereby enhancing the research ecosystem is remarkable.

“Further, the nomination recognises you as a team that demonstrates an unwavering commitment to contribute to building RMIT’s research and innovation reputation.”

ADM+S Chief Operating Officer Nick Walsh said the award was a terrific honour for the research service team at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and thanked RMIT for its continued support.

“We are grateful to all the professional staff that provide support for ADM+S across the nine Australian university nodes, as well as our fellow colleagues in RMIT’s service areas who have helped build ADM+S into a highly successful, world class research centre.”

Absent: Leah Hawkins, Julie Stuart, Kathy Nickels and Lucy Valenta.

 

  • Kieran Hegarty, RMIT Prize for Research Engagement and Impact (Higher Degree by Research category).

Kieran was awarded the prize for Research Engagement and Impact in recognition of his research on the changing role of public libraries in an era of digital and social media. 

His interdisciplinary research not only makes significant contributions to knowledge, but also informs practice change within the library and information profession. During his PhD, Kieran has embedded himself within a practice setting, enabling him to engage with professionals and better understand their perspectives to inform his work.

Kieran Hegarty receiving the RMIT Prize for Research Engagement and Impact.

 

  • Sally Storey, Special Commendation for Service Excellence Award.

Sally received multiple nominations for the Service Excellence Award, which detailed Sally’s strong commitment to the betterment of student experiences, professional growth, and wellbeing. 

The nominations demonstrated Sally’s dedicated performance as Research Training Coordinator at the ADM+S Centre of Excellence. 

Sally Storey receiving the Special Commendation for Service Excellence Award.

SEE ALSO

New ADM+S Project Film: Automation and Public Space


Author Natalie Campbell
Date 22 May 2024

On Wednesday 22 May the ARC Centre of Excellence for Automated Decision-Making and Society released the first short-film in a new outreach series providing a look into the inner workings of research projects underway at the Centre.

The ADM+S Project Films initiative will span across ongoing phase 1 and phase 2 projects, highlighting the breadth of topics covered across the four focus areas, disciplines, institutions and researchers.

How is Automation Impacting Public and Shared Space?, based on the ADM+S project Automation and Public Space, features project co-lead AI Michael Richardson alongside Research Fellow Thao Phan, CI Jake Goldenfein, Affiliate Andrew Brooks and PhD Student Zoe Horn, who provide critical insights into their work on Drone Delivery, Automated Crowd Control, and Digital Twins.

The film identifies key research questions, methodologies and findings so far, including:

  • Research participants in our testbed project tell us that drone delivery is a convenient solution to traffic congestion, unsafe roads and poor public transport. But if the success of this new marketplace relies on the failures of local infrastructure, what are commercial actors really investing in?
  • Predictive policing tools using AI and machine learning are often presented as neutral and objective solutions to the problem of the crowd. However, issues arise when the models are trained on existing police data that may already contain discriminatory bias.
  • Digital Twins allow you to transform a space, environment or process via a feedback loop of sensors between the real and the virtual, and these hidden systems are often informing the ecological, economic, social and cultural decisions that govern everyday life and space.

The speed at which these technologies are emerging means that many are under-regulated and require substantial regulatory modernisation.

The multidisciplinary and cross-institutional project team at ADM+S is working to understand how automated spatiality leads to the reconfiguring of public space, how commercial operators like digital platforms are mediating our experience of shared space, and how policy settings, industrial demands and defence priorities shape the development and application of automated technologies.

SEE ALSO

ADM+S partner on the 2024-33 Decadal Plan for Social Science Research Infrastructure in Australia


Author Australian Academy of the Social Sciences
Date 20 May 2024

On 10 April 2024 the Academy of the Social Sciences launched a 10-year strategy for transforming national social science research infrastructure in Australia.

Led by the Academy with the support of five partner organisations including the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), Connected, Innovative and Responsive: Decadal Plan for Social Science Research Infrastructure 2024-33 sets out a compelling vision for a framework of connected and integrated social science researchers across universities, government research and data agencies, and private and not-for-profit organisations.

It includes three broad goals for the sector over the 10-year timeframe, with nine priority actions that will help achieve those goals, and five decision-making principles to guide investments and priorities and ensure the biggest return for Australians on our research investment.

ADM+S researcher and expert working group member on the project Prof Daniel Angus explains, “a national commitment to digital platform National Research Infrastructure (NRI), on the scope and scale often provided for science and medical infrastructure, is likely to be one of the most cost-effective and sustainable approaches.”

Speaking at the launch in Canberra, the Academy’s project lead Dr Isabel Ceron noted that the plan has been developed at the right time to take advantage of an enormous step change in the amount of social data that’s becoming available to researchers.

‘In a similar way large telescopes inaugurated a new era of discovery for astrophysics and space science, in the same way that peeking into our genes forever changed the way we understood life and its determinants, we are now starting to see masses of social, human data, pouring in from all corners of society.’

By building and connecting the infrastructure, protocols, governance and people support needed to make this data accessible, social science researchers will be able to gain a much more detailed and nuanced understanding of social systems, structures and trends and provide more valuable insights and advice to decision makers.

One of the central considerations of the plan is the need to facilitate Aboriginal and Torres Strait Islander peoples’ leadership of and sovereignty over their own data, with priority actions focused on embedding principles of Indigenous Data Governance, Indigenous Data Sovereignty and Indigenous Cultural and Intellectual Property principles and processes into future research infrastructure.

Another key consideration is to encourage greater awareness and understanding of the value of investing in and utilising a cohesive and functional research infrastructure ecosystem across both the research and policy sectors.

The Decadal Plan is the result of a partnership between the Academy of the Social Sciences in Australia, the ARC Centre of Excellence for Automated Decision-Making and Society, the ANU Centre for Social Research and Methods, the ARC Centre of Excellence for Children and Families over the Life Course, the ARC Centre of Excellence in Population Ageing Research and the UQ Institute for Social Science Research (ISSR).

It was developed over two years and in consultation with hundreds of social science researchers, technical experts and stakeholder organisations.

Read more on the Academy website.

SEE ALSO

ADM+S researchers finalists in UC Berkeley Prosocial Ranking Challenge


Author Kathy Nickels
Date 17 May 2024

A team led by ADM+S researchers Dr Aaron Snoswell, Distinguished Professor Jean Burgess and William He from the new GenAI lab at QUT, in collaboration with ADM+S Associate Investigator Dr Damiano Spina (RMIT University) and Dr Tariq Choucair from QUT, has been announced as one of nine finalists in the Prosocial Ranking Challenge.

The Prosocial Ranking Challenge, hosted by the Center for Human-Compatible AI at UC Berkeley, awards $60,000 in prizes to build better social media algorithms.

The challenge tests ways to mitigate problems or harms associated with social media algorithms; it also seeks to demonstrate new ways to design systems toward socially desirable ends.

The team’s approach is based on the concept of Search Result Diversification.

Dr Snoswell said, “We aim to mitigate political and other forms of polarization by exposing users to diverse opinions and content. 

“To do this, we use a Large Language Model (LLM) to simulate personas with diverse political perspectives, rank the users’ social media feed according to each of these personas, and then combine these partisan rankings with an award-winning fairness-preserving algorithm that balances the range of opinions present in the news feed.”

As finalists, the team has one month to create a production-ready version of their algorithm that meets performance and security requirements. If successful, the algorithm will be selected to take part in a large-scale trial to evaluate the real-world performance of the algorithms, with results published in 2025.

SEE ALSO

Novel algorithmic assessment toolkit to guide the development of AI systems that focus on human wellbeing


Author Kathy Nickels
Date 16 May 2024

A team led by Prof Paul Henman from the ARC Centre of Excellence for Automated Decision-Making and Society at the University of Queensland has developed an internationally novel algorithmic assessment toolkit to guide the development and deployment of AI systems with human wellbeing front of mind.

This toolkit contributes to the important and urgent work of building human wellbeing in a world with AI. It moves beyond the focus on digital harms to one that is positively framed in being trauma aware.

“AI and automated decision making have enormous potential to improve society, but as Robodebt and many other examples show, they can also do great harm,” said Professor Henman. 

“This Toolkit was specifically designed for AI developers and deployers to guide them to harness AI and automation to enhance wellbeing.”

The toolkit and its methods of development are outlined in the new report: Building a Trauma-Informed Algorithmic Assessment Toolkit. 

Researchers co-designed the toolkit with social service professionals drawing together two fields of research and practice – trauma informed approaches and ethical and accountable AI/algorithms.

The toolkit aims to assist organisations to think through, document and review algorithmically supported services. It includes 100 prompt questions for design consideration across the categories of Empowerment and Choice; Collaboration; Trust and Transparency; Safety; and Intersectionality.

Prompt questions range from “are service users aware that an algorithmic system is used?” to “can the service user choose to interact with a human?”

While of particular use for social service organisations working with people who may have experienced past trauma, the tool will also be beneficial for any organisation wanting to ensure safe, responsible and ethical use of automation and AI.

Researchers have successfully piloted the toolkit across case studies including Robodebt, the Allegheny County Family Screening Tool and the chatbot Tessa, with plans to work with partner organisations to apply the toolkit to live case studies.

Researchers

  • Paul Henman, Professor of Digital Sociology and Social Policy at the University of Queensland
  • Suvradip Maitra, a practising lawyer and researcher in ethics of AI, data and algorithms and senior research assistant at the ADM+S Centre
  • Dr Lyndal Sleep, Senior Lecturer in the Queensland Centre for Domestic and Family Violence Research at Central Queensland University, and Affiliate at the ADM+S
  • Suzanna Fay, Associate Professor of Criminology at the University of Queensland

This research was supported by The University of Notre Dame-IBM Tech Ethics Lab 2022-23 Auditing AI funding (Award # 262812UQ) with additional support provided by the Australian Research Council’s Centre of Excellence for Automated Decision Making and Society (CE200100005). 

We acknowledge the contributions of Philip Gillingham to this project in its initial stages. We thank all our participants and their organisations for their time and invaluable insights into the formation of the project’s resulting Trauma Informed Algorithmic Assessment Tool.

SEE ALSO

OVIC and the ADM+S launch GenAI Concepts: A curated index for understanding AI terms

Digital generated image of abstract AI data chat icons flying over digital surface with codes

OVIC and the ADM+S launch GenAI Concepts: A curated index for understanding AI terms

Author ADM+S Centre
Date 9 May 2024

The Office of the Victorian Information Commissioner (OVIC) and the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) have launched GenAI Concepts, an online resource designed to explain fundamental concepts and terms related to Generative AI.

GenAI Concepts covers over 40 terms, unlocking the technical, operational, and regulatory aspects of AI systems. It is an accessible tool for individuals and organisations seeking to better understand the fundamental ideas, technologies, and risks behind Generative AI.

From ‘prompt and prompt engineering’, ‘machine learning’, ‘large language models (LLMs)’, to ‘privacy’ and ‘human oversight’, GenAI Concepts dives into the core elements of AI systems while offering insights into their operations, associated risks, and existing regulations.

With accessible descriptions, vivid examples, and engaging visuals, GenAI Concepts will be a valuable and inclusive resource designed to accommodate people with varying levels of digital literacy and interests in this emerging field.

Deputy Commissioner Privacy & Data Protection, Rachel Dixon said, “OVIC is very pleased to be associated with this resource, to help people better understand Generative AI.

“This is a complex field, and it has rapidly infiltrated many workplaces. Resources such as this will help interested people get a better grasp of the basics, and help them make more informed decisions when using these models”.

ADM+S researchers Dr Fan Yang and Dr Jake Goldenfein from the University of Melbourne have been leading the project.

Professor Goldenfein said, “We’ve created the Generative AI concepts web page with the hope that users at different levels of resources and sophistication will be able to use this tool to understand fundamental ideas and fundamental complexities associated with these emerging automated decision-making systems.

“It’s not just how these systems work or the commercial hype, but really some of the complexities they introduce: around data flows, around questions of open source versus proprietary models, around transparency and security, the different ways that these tools can be deployed, and the supply chains that they invoke.”

Key features of GenAI Concepts include:

  • Comprehensive Index of terms: Explore a curated collection of GenAI terms, each supported by clear definitions, examples, and literature (i.e., academic articles and industry reports) to enhance understanding.
  • Expert Collaboration: Benefit from insights provided by leading experts from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) and the Office of the Victorian Information Commissioner (OVIC), ensuring accuracy and relevance.
  • User-Friendly Interface: Navigate through the platform with an intuitive interface designed for ease of use and accessibility.
  • Inclusivity: People with varying levels of digital literacy and interests in this emerging field will find GenAI Concepts useful.

Visit GenAI Concepts to explore the collection of selected terms used in the field of Artificial Intelligence. It is also available in PDF format.

SEE ALSO

ADM+S Members Awarded 2024 ARC Early Career Industry Fellowships

ADM+S Members Awarded 2024 ARC Early Career Industry Fellowships

Author Natalie Campbell
Date 8 May 2024

Congratulations to Dr Jose-Miguel Bello y Villarino from the University of Sydney and Dr Jessica Balanzategui from RMIT University who are amongst just 50 recipients of the 2024 ARC Early Career Industry Fellowship grants.

Announced on 6 May 2024, ARC Acting Chief Executive Officer Dr Richard Johnson said that offering the opportunity for early career researchers to collaborate in an industry setting is critical to ensuring Australia’s capability in meeting future industry-identified challenges.

Dr Bello y Villarino’s project ‘Artificial Intelligence (AI) and Anticorruption’ is a collaboration with the Independent Commission Against Corruption (ICAC), and seeks to realise the revolutionary potential of artificial intelligence systems as an anticorruption tool, providing a legal and policy roadmap to ensure data and methods are properly designed and deployed.

“The appointment is, above all, an exceptional opportunity to work with a leading partner on how to use AI and ADM in government in ways that are effective and efficient in achieving social goals, while ensuring that those future tools are procured and deployed responsibly,” said Dr Bello y Villarino.

“The partnership with ICAC, Australia’s longest-standing anticorruption agency, is expected to build in-house capacity and knowledge diffusion within ICAC, as well as deliver a holistic approach to ensuring the sustainability and broader impact of the project in other Australian anticorruption agencies.”

Dr Balanzategui’s project ‘Enhancing Discoverability of Australian Children’s TV in the Streaming Era’, aims to protect the Australian children’s TV sector by developing an understanding of how children use video streaming platforms to access local and age-appropriate content.

In collaboration with The Australian Children’s Television Foundation, the project expects to generate new evidence to inform regulation, investment, and strategy around children’s TV, as well as develop an education program with additional partner ACMI.

Dr Balanzategui explains, “this Fellowship provides me the opportunity to contribute directly to the Australian children’s television sector at a time of significant flux and policy change for the industry.

“For over 40 years the Australian Children’s Television Foundation (ACTF) has been a pivotal strength of the sector and the policy settings that undergird it, but the structure of the sector has been overhauled in the streaming era.”

Working with an advisory board of representatives from the ABC, ACMA and Screen Australia, the project will develop a prototype platform showcasing child-centred design principles for the benefit of the broader sector.

ARC Early Career Industry Fellowships are funded for three years under the Industry Fellowships Programs to help build innovation in the industry, community, not-for-profit, and other government and publicly funded research sectors.

Read the ARC Media Release.

SEE ALSO

Short Course: Artificial Intelligence for Social Impact

Dang Nguyễn AI shortcourse

Short Course: Artificial Intelligence for Social Impact

Author Natalie Campbell
Date 1 May 2024

ADM+S Research Fellow Dr Dang Nguyen has collaborated with the Asian Development Bank Institute to develop an E-learning short course which explores the use of artificial intelligence (AI) for social impact.

The free online course aims to help policy makers, researchers and students gain foundational knowledge in Artificial Intelligence (AI) and its potential for driving positive societal change.

“It was a fantastic opportunity to collaborate with the ADBI in developing this resource.

“The course is intended to help policymakers, researchers, and non-specialists in emerging economies, particularly in the Asia-Pacific, to navigate the intersection of AI and social change with confidence,” explains Dr Nguyen.

The course consists of three units that examine the historical and social context of the emergence of AI, as well as the capabilities of the technologies and systems.

“Whether you’re seeking to shape policies, develop innovative solutions, or simply deepen your understanding of this rapidly evolving field, this course serves as a vital starting point for thinking about how AI can contribute to a more ethical, responsible, and inclusive future for all.”

Access Artificial Intelligence for Social Impact on our Learning Resources page.

SEE ALSO

Prof Mark Sanderson inducted into the 2024 SIGIR Academy

Mark Sanderson SIGIR Academy
2024 SIGIR Academy Inductee, Prof Mark Sanderson (RMIT)

Prof Mark Sanderson inducted into the 2024 SIGIR Academy

Author Natalie Campbell
Date 1 May 2024

Congratulations to ADM+S Chief Investigator Prof Mark Sanderson from RMIT University who has been recognised as a Special Interest Group on Information Retrieval (SIGIR) Academy Inductee in 2024.

Appointment to the SIGIR Academy honours individuals who have made significant, cumulative contributions to the development of the information retrieval field.

Each year, a small cohort of 3-5 new members is inducted into the SIGIR Academy, and election is considered an official Association for Computing Machinery (ACM) award.

Prof Mark Sanderson is the Dean of Research and Innovation at the STEM College at RMIT University. His research primarily focuses on search engines, recommender systems, user, data, and text analytics.

He has been an investigator on over $50 million worth of externally funded grants. He has published over 300 papers and has over 13,000 citations to his work.

“The SIGIR information retrieval research community means a great deal to me and to be recognised in such a way is an honour.

“This vibrant research community continues to be a nurturing environment to develop and present my research. The community has always provided feedback in a collegiate manner that has continually encouraged me,” said Prof Sanderson.

Inductees are recognised as principal leaders in information retrieval whose efforts have shaped the discipline through significant research, innovation, and/or service.

Other criteria for nomination include the development of new research directions and innovations, influence on the work of others, and active participation in the ACM SIGIR community.

“I would not be joining the class this year if it were not for the incredible collaboration I have enjoyed with the Undergraduate, Masters, and PhD students I have had the privilege to work with over the years.”

Prof Sanderson’s appointment will be formally celebrated at the 2024 SIGIR conference, to be held 14-18 July in Washington, USA.

SEE ALSO

2023 ADM+S Annual Report released

ADM+S 2023 Annual Report Cover

2023 ADM+S Annual Report released

Author Kathy Nickels
Date 29 April 2024

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is pleased to present its 2023 Annual Report.

The 2023 ADM+S Annual Report underscores the Centre’s commitment to creating knowledge and strategies necessary for responsible, ethical, and inclusive automated decision-making by showcasing its groundbreaking interdisciplinary research and outreach and engagement initiatives.

It details the Centre’s achievements and progress in groundbreaking research, in advising governments, business and community organisations, in connecting researchers and practitioners around compelling shared problems, in public outreach, and in developing new training programs and resources.

“The 2023 ADM+S Annual Report well describes the work of the Centre in mapping the expanding reach of automated systems, and in gauging their impacts across Australia,” said Deena Shiff, Chair of the ADM+S International Advisory Board.

“It shows how, as the Centre’s research has advanced, it is working increasingly closely with partners across industry and the public and not for profit sectors.”

While 2023 was a year of consolidation and convergence for many of the Centre’s projects, it was also a year of dramatic developments in our field. It was the year of the chatbot — the year the world began to come to grips with the remarkable potential of what we have come to call Generative AI.

ADM+S researchers continue to be guided by our shared commitment to responsible, ethical and inclusive automated systems, and in many ways the dramatic emergence and take-up of generative tools such as ChatGPT has underlined the importance of our approach.

View the 2023 ADM+S Annual Report online: admscentre.org.au/2023-annual-report/

SEE ALSO

Experts suggest evidence-based stories about harmful online marketing will drive policy action

Alcohol bottle in small trolley in front of computer screen with credit card options

Experts suggest evidence-based stories about harmful online marketing will drive policy action

Author Kathy Nickels
Date 24 April 2024

Researchers agree that compelling evidence-based stories about harmful digital marketing, and its impacts on people and society, are needed to prompt political action to address legal and regulatory gaps in online advertising.

This was one of the key learnings from Strategic Public Interest Litigation for Transparency and Accountability of Harmful Digital Marketing, a two-day workshop supported by the Academy of the Social Sciences in Australia (ASSA) Workshops Program.

Leading social science and socio-legal researchers, lawyers, and community advocacy groups joined the workshop to tackle issues of online advertising by harmful industries such as alcohol, unhealthy food, and gambling. 

Professor Christine Parker, Chief Investigator with the ADM+S at the University of Melbourne, said, “Our workshop surfaced some compelling stories about the harm that can be caused by targeted and personalised social media advertising of alcohol, gambling and unhealthy food. 

“But the overall feeling was positive because we were able to talk about ways in which communities impacted by harm can be empowered to address these challenges. Law reform and litigation are one of the tools available.”

The aim was to reflect on the regulatory and policy implications of harmful digital marketing and engage in dialogue about the potential benefits, challenges, and pitfalls of strategic public interest litigation to address these harms.

Discussions were directed towards informing and shaping practical strategies by regulators and community groups in advocating for greater transparency about harmful digital marketing, holding entities involved, such as platforms or advertisers, accountable and thereby reducing harm.

Professor Parker said, “It can take a while for law and regulatory practice to catch up with rapid technological innovation. 

“We were able to explore some great ideas for how to make them fit for purpose in the new world of digital and increasingly AI generated advertising.”

Participants explored the potential of public interest litigation to make digital marketing transparent and accountable, and to prompt further regulatory and policy action. 

Key discussion points

  • The place of litigation in pursuing public interest goals, its efficacy in responding to concerns about digital marketing and the influence of digital platforms, and its strengths and weaknesses as a regulatory tool.
  • What the research reveals about harmful digital marketing practices relating to alcohol, gambling, and unhealthy food, and the impacts of such practices: participants heard from, and discussed, the latest social science research and results, and how the evidence could be used.
  • The potential for test case complaints to, and litigation by, the Australian Competition and Consumer Commission (ACCC) regarding harmful digital marketing.
  • The potential for class actions by consumers/users of social media targeted by harmful digital marketing relating to alcohol, gambling, and unhealthy food.
  • Lessons learned from current public interest litigation against harmful digital marketing promulgating cryptocurrency investment scams.
  • The evidentiary, procedural, and technical hurdles to framing and proving cases of these kinds.
  • Identifying gaps in the research about harmful digital marketing that would need to be addressed for the purposes of public interest litigation.
  • The need for, and shape of, further regulatory reforms relating to harmful digital marketing that are indicated by the workshop’s analysis of the potentials and pitfalls of litigation.

It is anticipated that insights gained from the workshop will inform impactful community and regulatory actions, thereby shaping the future of the digital marketing landscape.

The workshop was funded by the Academy of the Social Sciences in Australia Workshops Program and co-hosted by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), the University of Melbourne’s Centre for AI and Digital Ethics (CAIDE) and the Health Ethics and Law Network (HELN) of Melbourne Law School, The University of Melbourne.

It was convened by Prof Christine Parker, Assoc Professor Paula O’Brien, Prof Jeannie Paterson (Melbourne Law School) and Prof Kim Weatherall (the University of Sydney Law School) and supported by Astari Kusumuwardani at the University of Melbourne node of ADM+S and Holly Jones from CAIDE.

SEE ALSO

Dr Zahra Stardust joins call for accountability for police online racial profiling

Social Media apps blurred

Dr Zahra Stardust joins call for accountability for police online racial profiling

Author Kathy Nickels
Date 22 April 2024

ADM+S researcher Dr Zahra Stardust recently joined with Dr Michael Bennett, the American Civil Liberties Union of Massachusetts, and the Innocence Project to submit an Amicus Brief to the Supreme Judicial Court of Massachusetts seeking increased transparency around racially-targeted, online police investigative techniques. Dr Stardust and Dr Bennett were represented by the Harvard Law School Cyberlaw Clinic.

Dr Stardust, a socio-legal scholar working at the intersections of sexuality, technology, law and social justice, contributed to the Amicus Brief as a Berkman Klein Center Affiliate at Harvard University.

“For law enforcement agencies, social media data provides a cost-effective form of out-sourced surveillance and an abundance of data for intelligence-gathering and entrapment. Police are consistently looking for ‘backchannels’ to access social media data without court approval,” said Dr Stardust. 

The brief was submitted on 16 April 2024 in support of defendant-appellee Richard Dilworth, Jr., who raised claims of racial profiling in the Boston Police Department’s (“BPD”) use of social media as an investigative tool.

This case arises out of the Boston Police Department’s practice of creating fake social media accounts, “friending” people of colour and then trawling their posts for evidence.

“Impersonating users on social media is just one of many net-widening tactics that subjects communities of colour to targeted surveillance and increased criminalisation. The tactics used in this case are part of the same legacy of racism and racial profiling endemic to law enforcement agencies,” said Dr Stardust.

The police refused to comply with the court-ordered discovery process, and the Supreme Judicial Court of Massachusetts sought briefs on the standard for discovery requests concerning racially-targeted investigative techniques. The police argued that social media profiling was less invasive than traffic stops or stop-and-frisk searches.

In their brief, Dr Zahra Stardust and colleagues argue that social media surveillance can impose particular harms to marginalised communities, and can be more intrusive than some offline investigations. 

The brief also recognises the reality that online spaces have been essential to the personal and political activities of communities of colour. Allowing police to infiltrate these spaces would have a chilling effect on these life-saving activities.

“Discovery is essential to expose the nature and scale of these racially-targeted investigative techniques, and to bring to light partnerships and relationships between police and social media platforms, including the extent of their data sharing and procurement practices.”

Dr Stardust has previously published about police misuse of dating app data in her research with ADM+S researchers Dr Rosalie Gillett and Professor Kath Albury.

Spring 2024 Cyberlaw Clinic students Jane Boettcher, Angie Cui, and David Poole worked on the brief alongside attorneys from ACLUM and the Innocence Project. The students were supervised by clinical instructors Wendy Chu and Mason Kortz, assisted by teaching fellow Isabel Sistachs. 

The SJC usually takes around four months to deliver its decisions.

Read more about the case in this blog by the Harvard Law School Cyberlaw Clinic

Read the full Amicus Brief submission

SEE ALSO

Visualising the 1800s or designing wedding invitations: 6 ways you can use AI beyond generating text

Midjourney image of Australian countryside in the 1800s
Midjourney image by T.J. Thomson

Visualising the 1800s or designing wedding invitations: 6 ways you can use AI beyond generating text

Author T.J. Thomson
Date 18 April 2024

As more than half of Australian office workers report using generative artificial intelligence (AI) for work, we’re starting to see this technology affect every part of society, from banking and finance through to weather forecasting, health and medicine.

Many people are now using AI tools like ChatGPT, Claude or Gemini to get advice, find information or summarise longer passages of text. But our recent research demonstrates how generative AI can be used for much more than this, returning results in different formats.

On the one hand, AI tools are neutral – they can be used for good or ill depending on one’s intent.

However, the models powering such tools can also suffer from biases based on how they were developed. AI tools, especially image generators, are also power hungry, ratcheting up the world’s energy usage.

And there are unresolved copyright claims surrounding AI-generated outputs, given the content used to train some of the models isn’t owned by the organisations developing the AI.

But ultimately, there’s no escaping generative AI. Learning more about what these tools can do will improve your digital literacy and help you understand their full impact, from benign to problematic.

1. Imagining what lies beyond the frame

Adobe’s recently developed “generative expand” tool allows users to expand the canvas of their photos and have Photoshop “imagine” what is happening beyond the frame. Nine News infamously experimented with this tool for a broadcast featuring Victorian politician Georgie Purcell.

Here’s a video that shows how that tool works:

But it can also be used more innocently to extend the borders of a landscape or still-life image, for example. You might do this when trying to edit a square Instagram photo to fit a 4×6 inch photo frame.

2. Visualising the past or the future

Photography was only invented within the past 200 years, and camera-equipped smartphones within the last 25.

That leaves us with plenty of things that existed before cameras were common, yet we might want to visualise them. This could be for educational purposes, entertainment or self-reflection.

One example is the writings of historical figures, like architect Robert Russell, who conducted the first survey of what is now Melbourne in 1836. He wrote at the time:

The soil is in this country superior to any in the colony, we have a good grazing land, and a fine supply of water: a fine harbour, a Town on which much capital (I am afraid to say how much) has been expended, enterprising settlers and flocks and herds increasing in all directions, a climate well fitted for Englishmen, and events hastening forward the necessity for some scheme of extended emigration from which we shall soon feel the benefit.

We can feed this text from Russell’s letters into a text-to-image generator and see what the area may have looked like.

Conversely, we might want to look ahead and see if AI can help us visualise what is to come.

For example, a probe is currently heading to a never-before-seen metal asteroid, 16 Psyche. It’s projected to reach the asteroid in 2029. We can feed an AI tool a description from NASA to get a rough sense of what the asteroid might look like.

NASA currently works with artists to illustrate concepts we can’t see, but artists could also draw on AI to help create these renderings.

3. Brainstorming how to visualise difficult concepts

Where we might have once turned to Google Images or Pinterest boards for visual inspiration, AI can also help with suggestions on how to show difficult-to-visualise subject matter.

Take the Mariana Trench, for example. As one of the deepest places on Earth, few people have ever seen it firsthand. It’s also pitch black and artificial light wouldn’t allow you to see very far.

But ask AI for suggestions on how to visualise this spot and it provides a number of ideas, including taking a more familiar landmark, such as the Burj Khalifa, the world’s tallest structure, and placing a scaled model next to the trench to better allow audiences to appreciate its depth.

Or creating a layered illustration that shows the flora and fauna that live at each of the ocean’s five zones above the trench.

4. Visualising data

Depending on the tool, you can prompt AI with numbers, not just text.

For example, you might upload a spreadsheet to ChatGPT 4 and ask it to visualise the results. Or, if the data is already publicly available (such as Earth’s population over time), you might ask a chatbot to visualise it without even having to supply a spreadsheet.

It’s a great way to speed up such tasks, as long as you keep in mind AI can “hallucinate”, or make things up, so you need to double check the accuracy of the results.

5. Creating simple moving images

You can create a simple yet effective animation by uploading a photo to an AI tool like Runway and giving it an animation command, such as zooming in, zooming out or tracking from left to right. That’s what I’ve done with this historical photo preserved by the State Library of Western Australia.

A historical photo of a ship that has been AI animated to appear like it is moving
Runway’s image animation with historical footage.
T.J. Thomson

Another way you can experiment with video is using Runway’s text-to-video feature to describe the scene you want to see and let it make a video for you. I used this description to create the following video:

Tracking shot from left to right of the snowy mountains of Nagano, Japan. Clouds hang low around the mountains and they are about 50m away.

An animated landscape scene with mountains and clouds moving left to right with parallax, based on Runway's AI text to video function
Runway’s text-to-video capabilities.
T.J. Thomson

6. Generating a colour palette or simple graphics

Maybe you’re creating a logo for your small business or helping a friend with the design of an event invitation. In these cases, having a consistent colour palette can help unify your design.

You can ask generative AI services like Midjourney or Gemini to create a colour palette for you based on the event or its vibe.

If you’re designing a website or poster and need some icons to represent certain parts of the message, you can turn to AI to generate them for you. This is true for both browser-based generators like Adobe Firefly, as well as desktop apps with built-in AI, like Adobe Illustrator.

Next time you’re interacting with a generative AI chatbot, ask it what it’s capable of. In addition to these six use cases, you might be surprised to know that generative AI can also write code, translate content, make music and describe images. This can be handy for writing alt-text descriptions and making the web more accessible for those with vision impairments.

T.J. Thomson is an Affiliate of the ADM+S Centre and Senior Lecturer in Visual Communication & Digital Media at RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Dr Kylie Pappalardo joins the Federal Government’s new copyright and AI reference group

Kylie Pappalardo

Dr Kylie Pappalardo joins the Federal Government’s new copyright and AI reference group

Author Kathy Nickels
Date 19 April 2024

Dr Kylie Pappalardo from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at QUT has been selected as one of 20 experts to join the Federal Government’s Copyright and Artificial Intelligence Reference Group (CAIRG) steering committee. 

The copyright and artificial intelligence (AI) reference group has been established to better prepare for future copyright challenges emerging from AI.

Dr Pappalardo brings research experience in how automation, digital distribution, and intellectual property laws shape the reach and diversity of our culture. 

“I’m very much looking forward to working with the Steering Committee on the next big challenge for copyright law and policy – generative AI,” said Dr Pappalardo. 

“The Attorney-General’s Department has brought together a great group of people to tackle the tricky questions in this space.”

AI gives rise to a number of important copyright issues, including the material used to train AI models, transparency of inputs and outputs, the use of AI to create imitative works, and whether and when AI-generated works should receive copyright protection.

The reference group will be a standing mechanism for ongoing engagement with stakeholders across a wide range of sectors, including the creative, media and technology sectors, to consider issues in a careful and consultative way.

It will complement other AI-related Government initiatives, including the work being led by the Minister for Industry and Science Ed Husic on the safe and responsible use of AI.

The establishment of the group is an outcome of a series of roundtables held by the Federal Government throughout 2023, with more than 50 peak bodies and other organisations raising issues of copyright reform.

SEE ALSO

Australian media need generative AI policies to help navigate misinformation and disinformation

Author Shu Shu Zheng, RMIT University
Date 16 April 2024

New research into generative AI images shows that just over a third of media organisations surveyed at the time of the research had an image-specific AI policy in place.

The study, led by RMIT University in collaboration with Washington State University and the QUT Digital Media Research Centre, interviewed 20 photo editors or related roles from 16 leading public and commercial media organisations across Europe, Australia and the US about their perceptions of generative AI technologies in visual journalism.

Lead researcher, RMIT Senior Lecturer and Affiliate of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), Dr TJ Thomson, said while most staff interviewed were concerned about the impact of generative AI on misinformation and disinformation, factors that compound the issue, such as the scale and speed at which content is shared on social media and algorithmic bias, were out of their control.

“Photo editors want to be transparent with their audiences when generative AI technologies are being used, but media organisations can’t control human behaviour or how other platforms display information,” said Thomson, from RMIT’s School of Media and Communication.

“Audiences don’t always click through to learn more about the context and attribution of an image. We saw this happen when AI images of the Pope wearing Balenciaga went viral, with many believing it was real because it was a near-photorealistic image shared without context.

“Photo editors we interviewed also said images they receive don’t always specify what sort of image editing has been done, which can lead to news sites sharing AI images without knowing, impacting their credibility.”

Thomson said having policies and processes in place that detail how generative AI can be used across different communication forms could prevent incidents of mis- and disinformation, such as the altered images of Victorian MP Georgie Purcell, from happening.

“More media organisations need to be transparent with their policies so their audiences can also trust that the content was made or edited in the ways the organisation says it is,” he said.

Banning generative AI use not the answer 

The study found five of the surveyed outlets barred staff from using AI to generate images, and three of those outlets only barred photorealistic images. Others allowed AI-generated images if the story was about AI.

“Many of the policies I’ve seen from media organisations about generative AI are general and abstract. If a media outlet creates an AI policy, it needs to consider all forms of communication, including images and videos, and provide more concrete guidance,” Thomson said.

“Banning generative AI outright would likely be a competitive disadvantage and almost impossible to enforce.

“It would also deprive media workers of the technology’s benefits, such as using AI to recognise faces or objects in visuals to enrich metadata and to help with captioning.”

Thomson said Australia was still at “the back of the pack” when it came to AI regulation, with the US and the EU leading.

“Australia’s population is much smaller, so our resources limit our ability to be flexible and adaptive,” he said.

“However, there is also a wait-and-see attitude where we are watching what other countries are doing so we can improve or emulate their approaches.

“I think it’s good to be proactive, whether that’s from government or a media organisation. If we can show we are being proactive to make the internet a safer place, it shows leadership and can shape conversations around AI.”

Algorithmic bias affecting trust 

The study found journalists were concerned about how algorithmic bias could perpetuate stereotypes around gender, race, sexuality and ability, leading to reputational risk and distrust of media.

“We had a photo editor in our study type a detailed prompt into a text-to-image generator to show a South Asian woman wearing a top and pants,” Thomson said.

“Despite detailing the woman’s clothing, the generator persisted with creating an image of a South Asian woman wearing a sari.”

“Problems like this stem from a lack of diversity in the training data, and it leads us to question how representative are our training data, and what can we do to think about who is being represented in our news, stock photos but also cinema and video games, which can all be used to train these algorithms.”

Copyright was also a concern for photo editors as many text-to-image generators were not transparent about where their source materials came from.

While there have been generative AI copyright cases making their way into the courts, such as The New York Times’ lawsuit against OpenAI, Thomson said it’s still an evolving area.

“Being more conservative and only using third-party AI generators that are trained on proprietary data or only using them for brainstorming or research rather than publication can lessen the legal risk while the courts settle the copyright question,” he said.

“Another option is to train models with an organisation’s own content and that way they have confidence they own copyright to resulting generations.”

Generative AI is not all bad 

Despite concerns about mis- and disinformation, the study found most photo editors saw many opportunities for using generative AI, such as brainstorming and generating ideas.

Many were happy to use AI to generate illustrations that were not photorealistic, while others were happy to use AI to generate images when they don’t have good existing stock images.

“For example, existing stock images of bitcoin all look quite similar, so generative AI can help fill a gap in what is lacking in a stock image catalogue,” Thomson said.

While there was concern about losing photojournalism jobs to generative AI, one editor interviewed said they could imagine using AI for simple photography tasks.

“Photographers who are employed will get to do more creative projects and less tasks like photographing something on a white background,” said the interviewed editor.

“One could argue that those things are also very easy and simple and take less time for a photographer, but sometimes they’re a headache too.”

“Generative Visual AI in News Organizations: Challenges, Opportunities, Perceptions, and Policies” was published in Digital Journalism (DOI: 10.1080/21670811.2024.2331769).

T.J. Thomson (RMIT University), Ryan Thomas (Washington State University) and Phoebe Matich (Queensland University of Technology) are co-authors.

Thomson was a visiting fellow at the German Internet Institute in Berlin, which allowed him to complete the European portion of this research.

SEE ALSO

2024 ADM+S Symposium: Call for Papers, Creative Practice Presentations and Posters

Author Kathy Nickels
Date 8 April 2024

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is thrilled to announce that the 2024 ADM+S Symposium: Automated Mobilities will be held at UNSW, Kensington campus, Sydney from Tuesday 15 – Thursday 17 October 2024. 

The main symposium will highlight the challenges and opportunities of AI and automated decision-making in mobilities. ADM+S members will share, explore, create and connect on related work developed by researchers, partners and communities across the Centre.

The 2024 symposium will advance and open up our Mobilities agenda, creating a vibrant new space for discussion, debate and the creation of new outcomes and impact for our ADM+S work. It brings together the new knowledge and insight generated through ADM+S Phase 1 projects to create a new interdisciplinary prism through which to view, comprehend and shape ADM in mobilities.

We are excited at the prospect of seeing the interfaces between ADM+S research into the diverse but interconnected questions of public and private transport and automobilities, active and micromobilities, drones and other airborne mobilities, waste trucks, logistics workers and materials, mobility test beds, surveillance tech, and migration.

At this symposium participants will:

  • Share new insights and knowledge from research connected to the Mobilities focus area conducted during Phase 1 of the Centre
  • Create and develop traditional and novel research outputs for publication (e.g. publications for special issue journals or an edited book, poster presentations)

The symposium will include collaborative workshops, interactive experiences, film screenings, tours and discussions.

The 3-day program includes:

  • Day 1: Satellite events, HDR/ECR workshop, CI meetings, evening event with research posters
  • Day 2: 2024 ADM+S Symposium: Automated Mobilities and evening public event
  • Day 3: Whole-of-Centre Phase 2 project workshops

The symposium working group is inviting submissions for the symposium in the form of working papers (traditional papers), non-traditional outputs (such as creative practice, documentary or other outputs), and research posters. Submissions are due 20 May 2024.

ADM+S HDR students are invited to submit posters contributing insights into AI and automation arising from their current ADM+S research theses. Submissions for the HDR student poster competition are due 23 September 2024.

For further information visit the 2024 ADM+S Symposium event page.

The Whole-of-Centre Phase 2 workshops will provide dedicated time for Phase 2 project teams to meet and make progress toward their objectives. These workshops are for CIs, PIs, AIs, directly involved postdocs/students, and representatives of partner organisations who are involved in the projects and attending the symposium. 

The 3-day symposium program will be available in August 2024.

SEE ALSO

ADM+S researchers share perspectives on the digital future with Queensland Parliament

Left to right: Prof Daniel Angus, Prof Debbie Terry AC, Prof Margaret Sheil AO, Prof Carolyn Evans, Assoc Prof Nic Carah and Dr Susan Grantham at Queensland Parliament House for the Queensland Future Conversations event

Author Kathy Nickels
Date 25 March 2024

ADM+S researchers Prof Daniel Angus and Assoc Prof Nicholas Carah, and communication scholar Dr Susan Grantham, were invited to share their perspectives on the evolving digital world with policy-makers and ministers in a lunchtime event held at Queensland Parliament house on 20 March.

In his address to members of the Legislative Assembly of Queensland, Prof Daniel Angus, Chief Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society at QUT, spoke about the challenges presented by AI and digital media for the Government.

“While it’s important to allow room for speculation on what futures we imagine and what role such technology can and should play, there are immediate and well-documented challenges that governments can address,” said Prof Angus.

“We know that AI systems, having been trained on data of varying quality, can replicate and perpetuate societal inequalities by replicating hegemonic structures.”

Prof Angus also spoke about the many opportunities for Government. 

“Enhanced planning and decision-making tools powered by AI can revolutionize governance and public service delivery.

“Queensland also has some of the world’s most respected minds and the potential to become a global leader in the development of socially-responsible AI, setting the standard for ethical and equitable AI implementation.”

In his address to parliament, Prof Angus said it’s crucial to recognise that technical disruption is inherently social disruption. 

“As we navigate the challenges and opportunities presented by AI and digital media, we must adopt a multidisciplinary approach that incorporates humanities and social sciences perspectives, and avoids technologically deterministic thinking that erases the role of humans within these systems.

“Each of us also has a role to play in safeguarding and improving the quality of democracy and our shared culture, particularly as leaders in our community. By closing the gaps in our society, we can also ensure that Queensland is prepared to seize the opportunities presented by new technologies and we can see our state thrive in the digital age.”

The event series Queensland Future Conversations is hosted by the University of Queensland, QUT, and Griffith University. The series offers policy-makers and members of Parliament an audience with academic experts to learn more about key topics and issues facing society. 

Read the full speech to parliament: Challenges and Opportunities for an Evolving Digital World.

SEE ALSO

Dr Jake Goldenfein invited to present on AI in government decision-making at Parliament House

Author Natalie Campbell
Date 15 March 2024

On 15 March, ADM+S Chief Investigator Dr Jake Goldenfein was invited by the Association of Australian Information Commissioners to speak at Victoria’s Parliament House about how software supply chains, outsourcing, and human oversight requirements affect transparency into automated decision-making systems.

The meeting was hosted by the Office of the Victorian Information Commissioner (OVIC), and was attended by Information Commissioners, Privacy Commissioners, and both state and federal Information Access Commissioners from Australia and New Zealand.

Dr Goldenfein was invited to talk about AI in government decision-making – specifically how AI software supply chains frustrate information access and transparency rules.

“It is always a privilege to address regulators who really understand the policy environment and how public service organisations operate. I appreciate the opportunity to translate some of the more conceptual work we do into that context,” he said.

Dr Goldenfein is a law and technology scholar at Melbourne Law School. His research focus spans the regulation of surveillance, law in cyber-physical systems, the relationship between data science and legal theory, and platform governance.

SEE ALSO

ADM+S research informs NSW Artificial Intelligence Inquiry

Author Kathy Nickels
Date 8 March 2024

ADM+S researchers have provided evidence on the use of automated decision-making (ADM) systems by state and local governments in NSW at the first hearing of the NSW Artificial Intelligence Inquiry held in Parliament today.

The inquiry is examining the impact of artificial intelligence on various aspects of people’s lives in NSW, both now and into the future, to ensure that New South Wales is well positioned to navigate the opportunities, risks and challenges this technology presents.

Evidence presented by Prof Kimberlee Weatherall and Dr José-Miguel Bello y Villarino from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) comes from research reported in Automated Decision-Making in NSW: Mapping and analysis of the use of ADM systems by state and local governments.

This research was the first attempt to undertake a systematic mapping of ADM in any jurisdiction in Australia and one of the very few attempts across the world. 

Through comprehensive surveys and targeted interviews with NSW state and local government entities, researchers found widespread and accelerating use of automation in NSW departments and agencies, and gained insights into ADM use in local councils, which had not been sufficiently explored to date.

“We found ADM systems involved across government services, from low to high stakes contexts”, said Professor Kimberlee Weatherall, Professor of Law, The University of Sydney Law School and Chief Investigator, ARC Centre of Excellence for Automated Decision-Making and Society.

Key findings:

  • The NSW government sector use of ADM systems is widespread and increasing
  • NSW government organisations are interested in AI, but simpler forms of automation and data linkage and matching are widespread
  • There is widespread use of sensors, computer vision and analysis, including use by local councils
  • Humans are mostly ‘in the loop’ for now, but further automation is a short step away
  • There may be a need for wider expertise and testing at the development stage of ADM systems

“We found that a mapping of this kind is challenging for a whole range of reasons, and so we also provide insights, learned through the process of conducting this mapping, about how to identify and record ADM system use in government.

“We believe this will be useful both for researchers, and for governments seeking to be transparent and accountable for their use of technology”.

This research was undertaken as a partnership between ADM+S and the NSW Ombudsman seeking better visibility on when and how ADM systems are being used to support or replace the work of NSW public servants in making decisions that affect the public in NSW.

NSW Ombudsman, Paul Miller PSM said, “we hope that all departments, agencies and local councils that have contributed to this research will find the analysis and insights in the report of value, and useful as they continue to consider and pursue their own current and future ADM projects”.

The project follows from a ground-breaking report on the use of technology in government decision-making published by the NSW Ombudsman in 2021.

This research is expected to impact the future deployment of AI and automation. More importantly, the research project has generated important conversations about systems already deployed.

SEE ALSO

Research nominated for prestigious UK National Health Service Innovation Award

Author ADM+S Centre & UNSW
Date 5 March 2024

Research undertaken in the UK in collaboration with the Disability Innovation Institute UNSW, directed by ADM+S researcher Prof Jackie Leach Scully has been nominated for the National Health Service (NHS) Innovation Award.

Organ Quality Assessment, referred to as OrQA, is a new technology in development that harnesses artificial intelligence (AI) to assess and evaluate the quality of donated kidneys and livers for human transplant. Uncertainty around the viability of donated organs is a contributing factor to increased waiting times as well as failed operations.

The technology, developed within the NHS in collaboration with the University of Bradford and Newcastle Upon Tyne Hospitals, revolutionises the transplant process, giving more information and support to medical staff in assessing organ suitability for patients.

Director of the Disability Innovation Institute UNSW, Professor of Bioethics and Chief Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), Jackie Leach Scully has been collaborating with the OrQA team for over a year.

Her role has been to investigate the responses of transplant surgeons, transplant recipients, and potential donors to the idea of AI involvement in organ transplantation. It is important work for Jackie, who is herself an organ transplant recipient and can offer valuable insights into the process alongside her academic contributions.

The awards hosted by independent healthcare company Medipex showcase the best new ideas and technologies to improve service delivery for patients of the National Health Service in the UK. Organisers of the awards look for innovative, practical solutions that address common medical problems.

The OrQA team are currently working on expanding the use of technology to include the assessment of more organ types.

The Medipex NHS Innovation Awards will take place online on 13 March 2024.

Read more about the OrQA project.

SEE ALSO

ADM+S researchers present evidence at Senate Committee hearing into Media Reform Bill

Author Natalie Campbell
Date 5 March 2024

On 23 February 2024 Assoc Prof Ramon Lobato, Dr Alexa Scarlata and Dr Jessica Balanzategui from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at RMIT University presented evidence at the Senate Communications and Environment Committee hearing into the Media Reform Bill (Prominence and Anti-Siphoning) at Parliament House in Canberra.

In November 2023, Communications Minister Michelle Rowland introduced legislation that requires smart TV manufacturers to preinstall iView, SBS On Demand, 9Now, 7Plus and 10Play on all smart TVs sold in Australia.

“The government’s proposed smart TV law is a light-touch change that will support our local content and public-service broadcasting ecosystem without compromising the user experience,” said Prof Ramon Lobato.

Assoc Prof Lobato and Dr Scarlata were invited to the hearing to present findings from their research on smart TV users, while Dr Balanzategui shared evidence on child audiences from her Australian Children’s Television Cultures project and study on Netflix and child/family audiences.

Assoc Prof Lobato and Dr Scarlata described findings from a nationally representative survey of over 1,000 Australian smart TV users which revealed that 33 per cent of users don’t know how to download apps on their smart TV, and 56 per cent of users don’t know how to change the order of apps on their TV.

Therefore, exposure to local content is heavily determined by manufacturers and their commercial partners.

“Since 2019, our research group at ADM+S, RMIT University, has been studying local content prominence on smart TVs, to understand what smart TVs mean for public policy, and to provide analysis and evidence to inform the regulatory options.

“Based on these findings, we support the bill, because it will rebalance what has become a structurally unequal marketplace,” said Assoc Prof Lobato.

Similarly, Dr Balanzategui’s evidence described that while children enjoy Australian content, they struggle to find and identify it on streaming platforms and smart TVs. Her research found that only 16.7 per cent of children in the study selected Australian content as a first choice when observed using streaming platforms.

This research shows that discoverability of local and age-appropriate content is a challenge for Australian children and their families which could be alleviated by the requirements introduced in the Prominence and Anti-siphoning Bill.

Assoc Prof Lobato said, “we appreciated the opportunity to share our findings on smart TV user behaviour with the Senate Committee and to participate in what were often detailed policy and technical discussions between the Senators, TV network heads, streaming services, and consumer electronics manufacturers.”

The meeting was chaired by Senator Karen Grogan, and attended by Senators Catryna Bilyk, Ross Cadell, Hollie Hughes and David Pocock. Amongst the expert witnesses were heads of all Australian free-to-air networks, as well as representatives from Netflix and Foxtel.


SEE ALSO

Researcher explores future of smart home technologies for older adults at the University of Amsterdam

Author Kathy Nickels
Date 4 March 2024

Miguel Gomez-Hernandez, from the ARC Centre of Excellence for Automated Decision-Making and Society at Monash University, has spent three weeks at AlgoSoc, a leading hub for the integration of public values in automated decision systems and the algorithmic society, hosted at the University of Amsterdam (UvA).

During the visit, hosted by UvA researcher Prof Julia van Weert, Miguel developed international perspectives on his research exploring the responsible integration of automated eHealth in the lives of older adults.

Miguel’s research focuses on future smart home technologies for older people, explored through the visions of both industry and older people themselves.

“By immersing myself in AlgoSoc, I have deepened my theoretical knowledge, refined my research approach, and established a solid foundation for contributing effectively to the discourse surrounding the responsible integration of automated eHealth in the lives of older adults”, said Miguel.

AlgoSoc leverages the expertise of some of the world’s highest-ranking groups in communication, law, governance, health, human-centric AI, computer and data science to develop solutions for the design of governance frameworks needed to complement technology-driven initiatives in the algorithmic society.

At AlgoSoc and UvA, Miguel met other health researchers and anthropologists from different parts of the world working on technology and futures, such as Dr Roanne van Voorst and her team.

“Seeing the growing interdisciplinary interest in futures and health at AlgoSoc and UvA, my research question is now more aligned with these fields as I find promising and important work to be done in this space”.

Miguel is continuing to explore collaborations with the researchers he met at UvA with plans to organise conference panels, workshops, and publications on futures, health, and ageing technology.

The University of Amsterdam is a Partner Organisation of the ARC Centre of Excellence for Automated Decision-Making and Society, fostering collaboration for ADM+S members in an international network. 

This research visit was supported by ADM+S and the University of Amsterdam.

SEE ALSO

New Media & Society special issue: Automated responses to the COVID-19 pandemic

Author Kathy Nickels
Date 1 March 2024

The New Media & Society journal has released its special issue: Automated Responses to the COVID-19 Pandemic edited by Prof Mark Andrejevic, and Dr Christopher O’Neill from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at Monash University.

This special issue examines the pandemic response as a mediated phenomenon – one that paired digital information technologies with automated logistical systems to address inter-related crises of circulation. 

In the logistical sphere, automated media were used to manage flows of people, commodities and even (in the case of ‘smart’ ventilation systems) air itself. In the media realm, automated systems played a role in circulating timely notifications and alerts and in detecting and responding to false information. 

This theme issue brings together an interdisciplinary group of researchers focused on the analysis of automated control and response systems, including the networked devices and infrastructures that supported them, and the digital forms of data collection and processing they enabled. 

“The special issue brings together remarkable contributions from researchers across the ADM+S,” said Dr Christopher O’Neill.

“The breadth and rigour of this research make clear both the challenge of understanding how automation has changed in the wake of the pandemic, as well as the contribution that the ADM+S has made to this project.”

New Media & Society engages in critical discussions of the key issues arising from the scale and speed of new media development, drawing on a wide range of disciplinary perspectives and on both theoretical and empirical research.

New Media & Society is a Q1 journal, ranked in the top 25 per cent of journals for impact in the international research community.

Some themes that emerge from the special issue contributions include:

  • the relationship between automation and the temporality of viral contagion
  • logics of pre-emptive intervention
  • forms of atmospheric and environmental control

Special issue contributions from ADM+S members:

SEE ALSO

ADM+S Partner Investigator invited to speak about responsible AI at the United Nations

Prof Julia Stoyanovich at CSocD62.

Author Natalie Campbell
Date 29 February 2024

On 7 February 2024, ADM+S Partner Investigator Prof Julia Stoyanovich from New York University was invited to speak on responsible artificial intelligence (AI) at the United Nations’ 62nd session of the Commission for Social Development (CSocD62).

The Commission took place from 5 to 14 February 2024 at the United Nations Headquarters in New York and focused on ‘fostering social development and social justice through social policies to accelerate progress on the implementation of the 2030 Agenda for Sustainable Development and to achieve the overarching goal of poverty eradication’.

Prof Stoyanovich was one of six panellists invited to discuss this meeting’s emerging issue, ‘the Influence of Digital Transformation on Inclusive Growth and Development: A Path to Achieving Social Justice.’

Defining AI as a system in which algorithms use data to make decisions on our behalf, or help humans make decisions, Prof Stoyanovich provided examples of successful AI implementation, where technology responds to a particular need to improve the status quo.

As an example, Prof Stoyanovich detailed how AI has significantly improved the efficiency of MRIs and other technical medical services.

However, referring to a recent study about the use of AI in hiring processes and its amplification of biases, Prof Stoyanovich demonstrated how such systems can fail.

“You cannot outsource the work of being human,” she said.

“For safe use we must have another factor that contributes to their successful use and that is decision-maker readiness.

“For example, a clinician or radiologist who is assisted by an AI, but understands they are ultimately responsible for treatment and diagnosis and knows when to trust AI predictions and when to challenge them.”

The panellists offered a range of perspectives on Responsible AI, AI Literacy and AI Governance to be considered by the commission, which is the advisory body responsible for the social development pillar of global development.

“We are technically ready to make [AI systems] in terms of data, software and hardware, and we know how to validate them. These are some of the hallmarks of responsible AI.

“However, for safe use we must have another factor that contributes to their success, and that is decision-maker readiness,” Prof Stoyanovich explained to the Commission.

View the full hearing on UNDESA YouTube. (Prof Stoyanovich: 1:00h-1:15h)

Read a recent opinion piece from Prof Stoyanovich on regulating responsible AI.

SEE ALSO

New online platform to improve disaster preparedness using community sourced data, resource mapping and AI: report

Author Kathy Nickels
Date 29 February 2024

A new online platform aims to better prepare communities for disasters with the use of community sourced data, resource mapping and artificial intelligence (AI) tools.

The report, Towards Resilient Communities, released today provides details of the platform developed by researchers at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at Swinburne University in partnership with Humanitech at the Australian Red Cross.

Prof Anthony McCosker, Chief Investigator at the ADM+S at Swinburne University and lead author of the report said,

“When disasters occur in Australia, particularly in remote areas, one of the biggest issues is access to reliable and useful information.”

“Post disaster reviews consistently tell us that local residents and community organisations need to be involved in generating that information and supported to access it with better tools and platforms. For us this is about building community data capability – or the skills and tools needed to support local decision-making.”

“The community resource mapping approach aims to achieve a step-change in disaster preparedness by embedding data-gathering and capability development in the community context as it evolves.”

How the platform works: using data collaboration, capability building, and AI-driven data processing and analysis to enable community disaster resilience.

Government reports and disaster management inquiries have consistently found that data and information are vital to disaster resilience, and that this is not only the domain of emergency services: community members themselves can actively contribute.

Katy Southall, Head of Humanitech at Australian Red Cross, said that in today’s rapidly evolving landscape of disaster preparedness, collaboration between research, technology, and humanitarian efforts is vital.

“By harnessing community-sourced data, resource mapping, and AI tools, we can further empower communities by adding innovative tools to their preparedness toolkit.

“This platform not only fills gaps in access to information but also ensures we’re putting communities first by ensuring local voices and knowledge are central to the process,” said Ms Southall.

Informal local sources of information and knowledge have been vital to emergency responses in rural and isolated small communities prone to bushfires, flooding, heat and storm damage.

Their importance was evident in the response to bushfires across southern New South Wales and Victoria in 2019–2020 and during the 2022 floods in Queensland and Lismore. As the waters rose, Lismore residents themselves identified social media posts from people seeking help, with collated addresses made available through a public Google spreadsheet. This information was used to coordinate local rescuers and resources such as boats and emergency accommodation.

The online platform was trialled in late 2023 for Ararat in regional Victoria, an area that includes parts of the Grampians damaged in recent bushfires. The first stage brings together relevant data about local organisations, amenities and infrastructure to help inform decision-making before, during and after a disaster event.

Historical bushfire data and overlay for Ararat.

Rather than producing more data about communities, the platform emphasises generating and mapping resources data with and within communities as an ongoing and iterative mode of resilience building and preparedness action.

While there is a long way to go to embed the work with specific communities, the model offers an approach to building context-responsive capabilities for resource mapping that puts local communities first.

Read the report Towards Resilient Communities: Data capability and resource mapping for disaster preparedness.

Media enquiries:
Breck Carter
Media Advisor, Swinburne University

breckcarter@swin.edu.au
0455 502 999

SEE ALSO

So, you’ve been scammed by a deepfake. What can you do?

Image: Tero Vesalainen/Shutterstock

So, you’ve been scammed by a deepfake. What can you do?

Author Jeannie Marie Paterson
Date 26 February 2024

Earlier this month, a Hong Kong company lost HK$200 million (A$40 million) in a deepfake scam. An employee transferred funds following a video conference call with scammers who looked and sounded like senior company officials.

Generative AI tools can create image, video and voice replicas of real people saying and doing things they never would have done. And these tools are becoming increasingly easy to access and use.

This can perpetuate intimate image abuse (including things like “revenge porn”) and disrupt democratic processes. Currently, many jurisdictions are grappling with how to regulate AI deepfakes.

But if you’ve been a victim of a deepfake scam, can you obtain compensation or redress for your losses? The legislation hasn’t caught up yet.

Who is responsible?

In most cases of deepfake fraud, scammers will avoid trying to fool banks and security systems, instead opting for so-called “push payment” frauds where victims are tricked into directing their bank to pay the fraudster.

So, if you’re seeking a remedy, there are at least four possible targets:

  1. the fraudster (who will often have disappeared)
  2. the social media platform that hosted the fake
  3. any bank that paid out the money on the instructions of the victim of the fraud
  4. the provider of the AI tool that created the fake.

The quick answer is that once the fraudster vanishes, it is currently unclear whether you have a right to a remedy from any of these other parties (though that may change in the future).

Let’s see why.

The social media platform

In principle, you could seek damages from a social media platform if it hosted a deepfake used to defraud you. But there are hurdles to overcome.

Platforms typically frame themselves as mere conduits of content – which means they are not legally responsible for the content. In the United States, platforms are explicitly shielded from this kind of liability. However, no such protection exists in most other common law countries, including Australia.

The Australian Competition and Consumer Commission (ACCC) is taking Meta (Facebook’s parent company) to court, testing whether digital platforms can be made directly liable for deepfake crypto scams if they actively target the ads at possible victims.

The ACCC is also arguing Meta should be liable as an accessory to the scam – for failing to remove the misleading ads promptly once notified of the problem.

At the very least, platforms should be responsible for promptly removing deepfake content used for fraudulent purposes. They may already claim to be doing this, but it might soon become a legal obligation.

The ACCC has sued Meta (Facebook’s parent company) to test if Facebook could be sued for targeting scam ads to victims. Jeff Chiu/AP

The bank

In Australia, it is not yet settled whether a bank is legally obliged to reimburse you in the case of a deepfake scam.

This was recently considered by the United Kingdom’s Supreme Court, in a case likely to be influential in Australia. It suggests banks don’t have a duty to refuse a customer’s payment instructions where the recipient is suspected to be a (deepfake) fraudster, even if they have a general duty to act promptly once the scam is discovered.

That said, the UK is introducing a mandatory scheme that requires banks to reimburse victims of push payment fraud, at least in certain circumstances.

In Australia, the ACCC and others have presented proposals for a similar scheme, though none exists at this stage.

Australian banks are unlikely to be liable for customer losses due to scams, but new schemes could force them to reimburse victims. TK Kurikawa/Shutterstock

 

The AI tool provider

The providers of generative AI tools are currently not legally obliged to make their tools unusable for fraud or deception. In law, there is no duty of care to the world at large to prevent someone else’s fraud.

However, providers of generative AI do have an opportunity to use technology to reduce the likelihood of deepfakes. Like banks and social media platforms, they may soon be required to do this, at least in some jurisdictions.

The recently proposed EU AI Act obligates the providers of generative AI tools to design these tools in a way that allows the synthetic/fake content to be detected.

Currently, it’s proposed this could work through digital watermarking, although its effectiveness is still being debated. Other measures include prompt limits, digital ID to verify a person’s identity, and further education about the signs of deepfakes.
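As a deliberately naive illustration of the detection idea (real provenance schemes, such as statistical watermarks for model output or embedded content credentials, are far more robust than this toy), a sketch might tag generated text with zero-width characters that a detector can later check for:

```python
# Toy illustration only: tag "generated" text with an invisible marker
# so a detector can flag it as synthetic. This is trivially stripped
# and is NOT how production watermarking works.
ZW_MARK = "\u200b\u200c\u200b"  # arbitrary zero-width character sequence

def watermark(text: str) -> str:
    """Append an invisible marker to machine-generated text."""
    return text + ZW_MARK

def is_watermarked(text: str) -> bool:
    """Check whether text carries the marker."""
    return text.endswith(ZW_MARK)

stamped = watermark("This paragraph was machine generated.")
print(is_watermarked(stamped))                    # True
print(is_watermarked("Human-written sentence."))  # False
```

The proposed statistical approaches instead bias the model's token choices so the signal survives copy-paste and light editing, which is precisely why their effectiveness is still debated.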

Can we stop deepfake fraud altogether?

None of these legal or technical guardrails are likely to be entirely effective in stemming the tide of deepfake fraud, scams or deception – especially as generative AI technology keeps advancing.

However, the response doesn’t need to be perfect: slowing down AI generated fakes and frauds can still reduce harm. We also need to pressure platforms, banks and tech providers to stay on top of the risks.

So while you might never be able to completely prevent yourself from being the victim of a deepfake scam, with all these new legal and technical developments, you might soon be able to seek compensation if things go wrong.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

The secret sauce of Coles’ and Woolworths’ profits: high-tech surveillance and control

Image credit: Jack Sparrow, Pexels

The secret sauce of Coles’ and Woolworths’ profits: high-tech surveillance and control

Author Lauren Kelly
Date 23 February 2024

Coles and Woolworths, the supermarket chains that together control almost two-thirds of the Australian grocery market, are facing unprecedented scrutiny.

One recent inquiry, commissioned by the Australian Council of Trade Unions and led by former Australian Competition and Consumer Commission chair Allan Fels, found the pair engaged in unfair pricing practices; an ongoing Senate inquiry into food prices is looking at how these practices are linked to inflation; and the ACCC has just begun a government-directed inquiry into potentially anti-competitive behaviour in Australia’s supermarkets.

Earlier this week, the two companies also came under the gaze of the ABC current affairs program Four Corners. Their respective chief executives each gave somewhat prickly interviews, and Woolworths chief Brad Banducci announced his retirement two days after the program aired.

A focus on the power of the supermarket duopoly is long overdue. However, one aspect of how Coles and Woolworths exercise their power has received relatively little attention: a growing high-tech infrastructure of surveillance and control that pervades retail stores, warehouses, delivery systems and beyond.

Every customer a potential thief

As the largest private-sector employers and providers of essential household goods, the supermarkets play an outsized role in public life. Indeed, they are such familiar places that technological developments there may fly under the radar of public attention.

Coles and Woolworths are both implementing technologies that treat the supermarket as a “problem space” in which workers are controlled, customers are tracked and profits boosted.

For example, in response to a purported spike in shoplifting, a raft of customer surveillance measures has been introduced that treats every customer as a potential thief. These include ceiling cameras which assign a digital ID to individuals and track them through the store, and “smart” exit gates that remain closed until a purchase is made. Some customers have reported being “trapped” by the gates despite paying for their items, causing significant embarrassment.

Woolworths surveillance cameras monitor the self-checkout area. Woolworths.

 

At least one Woolworths store has installed 500 mini cameras on product shelves. The cameras monitor real-time stock levels, and Woolworths says customers captured in photos will be silhouetted for privacy.

A Woolworths spokesperson explained the shelf cameras were part of “a number of initiatives, both covert and overt, to minimise instances of retail crime”. It is unclear whether the cameras are for inventory management, surveillance, or both.

Workers themselves are being fitted with body-worn cameras and wearable alarms. Such measures may protect against customer aggression, which is a serious problem facing workers. Biometric data collected this way could also be used to discipline staff in what scholars Karen Levy and Solon Barocas refer to as “refractive surveillance” – a process whereby surveillance measures intended for one group can also impact another.

Predicting crime

At the same time as the supermarkets ramp up the amount of data they collect on staff and shoppers, they are also investing in data-driven “crime intelligence” software. Both supermarkets have partnered with New Zealand start-up Auror, which shares a name with the magic police from the Harry Potter books and claims it can predict crime before it happens.

New Zealand startup Auror claims to predict crime before it happens.

 

Coles also recently began a partnership with Palantir, a global data-driven surveillance company that takes its name from magical crystal balls in The Lord of the Rings.

These heavy-handed measures seek to make self-service checkouts more secure without increasing staff numbers. This leads to something of a vicious cycle, as under-staffing, self-checkouts, and high prices are often causes of customer aggression to begin with.

Many staff are similarly frustrated by historical wage theft by the supermarkets that totals hundreds of millions of dollars.

From community employment to gig work

Both supermarkets have brought the gig economy squarely inside the traditional workplace. Uber and Doordash drivers are now part of the infrastructure of home delivery, in an attempt to push last-mile delivery costs onto gig workers.

The precarious working conditions of the gig economy are well known. Customers may not be aware, however, that Coles recently increased Uber Eats and Doordash prices by at least 10%, and will no longer match in-store promotions. Drivers have been instructed to dispose of the shopping receipt and should no longer place it in the customer’s bag at drop-off.

In addition to higher prices, customers also pay service and delivery fees for the convenience of on-demand delivery. Despite the price increases to customers, drivers I have interviewed in my ongoing research report they are earning less and less through the apps, often well below Australia’s minimum wage.

Viewed as a whole, Coles’ and Woolworths’ high-tech measures paint a picture of surveillance and control that exerts pressures on both customers and workers. While issues of market competition, price gouging, and power asymmetries with suppliers must be scrutinised, issues of worker and customer surveillance are the other side of the same coin – and they too must be reckoned with.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Research reveals the real cost of TikTok superstar economy

Social media influencer recording
Image credit: Blue Bird/Pexels

Research reveals the real cost of TikTok superstar economy

Author ADM+S Centre
Date 23 February 2024

TikTok has made a game out of human connection — and it’s making millions.  But how much are streamers actually making? And at what cost to TikTok fans?

In a joint investigation Prof Patrik Wikstrom from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) and ABC News set up a system to monitor the gifts received by a sample of 84 top Australian and New Zealand-based streamers.

On TikTok a gift looks like little icons or emojis, but each one is a micro-transaction involving real cash.

The investigation revealed that fans donated a combined total of $1.9 million in gifts to streamers; in one observed transaction, TikTok took 60 per cent of the gift’s initial value.
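As a back-of-the-envelope sketch (TikTok's fee structure is not public, the 60 per cent figure comes from the single transaction observed, and the function below is purely illustrative), the platform-cut arithmetic looks like:

```python
# Illustrative only: how a platform cut reduces the value of a fan's
# gift before it reaches the streamer. The default 60% cut reflects
# the one transaction observed in the investigation, not a published rate.
def streamer_payout(gift_value: float, platform_cut: float = 0.60) -> float:
    """Return the amount a streamer receives after the platform's cut."""
    if not 0 <= platform_cut <= 1:
        raise ValueError("platform_cut must be between 0 and 1")
    return gift_value * (1 - platform_cut)

# A fan spends $100 on gifts; at a 60% cut the streamer receives $40.
print(streamer_payout(100.0))  # 40.0
```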

“I would say that TikTok is probably the one that has been playing its cards closest to the chest,” says Prof Wikstrom. “So, we have to be creative and find other ways of studying the platform.”

Researchers found that up to 80 per cent of gifts were received during live “battles”, and the cut streamers receive from these battles is based on their rankings.

In a battle, streamers compete against another streamer to see whose fans spend the most on gifts in five minutes.

“The Internet is a superstar economy,” explains Prof Wikstrom, “which means that those on top earn lots more than those [further down].”

The top streamer made almost $64,000 in the month, while the streamer ranked 20th made $12,600 and the streamer ranked 50th only $3,500.

This phenomenon leaves streamers who are popular, but not at the pinnacle, with a much smaller slice of the money.

The gamification and monetisation of relationships on TikTok is also having a concerning impact on fans. 

This experiment found the top spender gave away $50,000 in the month. 

But far more surprising was what the sixth highest spender — with $27,000 in recorded spending — revealed when he shared his full TikTok purchase history.

He actually spent $30,000 in a single night. And over $300,000 in the month.

About this research

  • Professor Patrik Wikstrom, with financial support from QUT’s Digital Media Research Centre, ran the data collection project across 84 streamers from November 18, 2023 to December 18, 2023.
  • The software used to collect the data was written by ABC News, using the TikTok-Live-Connector library written by zerody.
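The monitoring system's analysis step can be imagined along these lines (a hypothetical sketch only, not the ABC News code; the event format, names and figures are invented for illustration): captured gift events are aggregated into per-streamer totals and ranked across the sample.

```python
from collections import defaultdict

# Hypothetical sketch: aggregate captured gift events
# (streamer handle, gift value in dollars) into per-streamer
# totals, then rank the sample from highest to lowest earner.
def rank_streamers(events):
    totals = defaultdict(float)
    for streamer, value in events:
        totals[streamer] += value
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Invented example data.
events = [("alice", 500.0), ("bob", 120.0), ("alice", 75.5), ("carol", 980.0)]
print(rank_streamers(events))
# [('carol', 980.0), ('alice', 575.5), ('bob', 120.0)]
```

Summing a month of such events across the 84 monitored streamers is what yields figures like the $1.9 million combined total reported above.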

Visit the ABC Story Lab interactive TikTok Superstar Economy Livestreaming Gifts 

SEE ALSO

ADM+S report finds regional bank closures are further disadvantaging remote First Nations communities

City building with Bank sign at front

ADM+S report finds regional bank closures are further disadvantaging remote First Nations communities

Author RMIT University Media
Date 16 February 2024

An ADM+S report has found remote First Nations communities are particularly disadvantaged by local bank closures due to the lack of affordable and reliable internet.

Led by RMIT and Swinburne University researchers in the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), the submission was made to the Senate inquiry into regional bank branch closures.

It found remote First Nations communities still rely on face-to-face interactions with their banks, despite the growing prevalence of online banking.

Project lead and Distinguished Professor Julian Thomas said in-person interactions were especially important for complex banking tasks and tackling elder abuse, scams and fraud.

“The quality, reliability and cost of internet in remote areas also pose challenges for these communities, making going to a branch to speak to a person even more crucial,” Thomas said.

“By removing banks in regional areas, it potentially disadvantages an already vulnerable community from accessing basic necessities such as financial services, impacting their independence.”

Thomas said better online safety was paramount to improving digital access and participation, but the infrastructure of reliable internet needs to be in place first.

“We can’t expect these communities to learn about online safety if they don’t have working internet to begin with,” he said.

Previous RMIT research for the ADM+S Centre found remote First Nations communities were among the most digitally excluded people in Australia.

The “Mapping the Digital Gap” 2023 Outcomes Report found a significant gap in digital inclusion for First Nations people compared with other Australians, which widens substantially with remoteness.

The research showed about 43% of the 1,545 First Nations communities and homelands across Australia have no mobile service – including some with only a shared public phone or no telecommunications access – highlighting a need for action to close the digital gap.

The study highlighted accessing digital technologies was most challenging in remote communities due to limited communications infrastructure, low household access and patchy, congested mobile services.

Residents in remote communities are typically on low incomes; 84% of respondents in these communities used or shared a mobile device, and 94% of these used pre-paid services.

The high cost of pre-paid data and low household uptake of fixed broadband also led to significant affordability issues.

Lead investigator and Senior Research Fellow, Dr Daniel Featherstone, said as banking, government and other services increasingly move online, it’s crucial that all Australians can effectively access and use digital technologies.

“Everyone should have the opportunity to benefit from digital technologies,” he said.

“We use these technologies to access essential services for health, welfare, finance and education, participate in social and cultural activities, follow news and media, as well as connect with family, friends, and the wider world.”

“Improving digital inclusion and access to services is critically important to ensure informed decision-making and agency among Aboriginal and Torres Strait Islander people.”

Read the original article Regional bank closures are further disadvantaging remote First Nations communities: report published by RMIT University News.

SEE ALSO

ADM+S/DMRC Summer School supports next generation researchers in digital media and automated decision-making

2024 Summer School participants standing under world globe

ADM+S/DMRC Summer School supports next generation researchers in digital media and automated decision-making

Author Natalie Campbell
Date 15 February 2024

The 2024 Centre of Excellence for Automated Decision-Making and Society / Digital Media Research Centre Summer School saw over 150 delegates and presenters gather at QUT in Brisbane for an intensive five-day program.

Higher degree research students (HDRs) and early career researchers (ECRs) from all nine ADM+S nodes were in attendance, networking and learning from over 50 DMRC and ADM+S senior researchers, industry guests and partner collaborators, through a range of workshops, panels and masterclasses.

ADM+S Manager of Research Training and Development, and member of the Summer School working group, Sally Storey, said “the program covered a wide territory of exciting research across the two Centres’ disciplines and topic areas.

“The Summer School showed us how much we can learn and benefit from both the DMRC and ADM+S’ extraordinary research community, and helped us sharpen questions, refine methods, enhance knowledge on cross-disciplinary research, and make connections both nationally and internationally.”

The program offered a week of inspiring and thought-provoking training by DMRC and ADM+S world-leading researchers.

Sessions ranged from ‘Hollywood’s Labour Crisis’, to ‘Dewesternising Research’, ‘Researching informal Media Industries’, ‘Creative Approaches to Research Translation’, and ‘Using Open AI to Classify, Annotate and Process Data’.

The program was designed to bring researchers, students and industry partners together, to share how they are confronting a range of Australian and global challenges in digital media industries and automated decision-making areas.

A key highlight of the program was keynote speaker Yoel Roth, former Head of Trust and Safety at Twitter (now X), whose talk emphasised that we can’t rely on technical solutions for social problems.

Another highlight of the week for many students and senior researchers was the one-on-one mentoring sessions tailored by our event organisers.

“It’s such a great opportunity to talk to people at the Centre who you don’t always have a chance to meet.

“It’s really reassuring to know that there’s a kind of model of work out there in the world that you might emulate, and that it might be just within your grasp,” said PhD student Zoe Horn.

The 2024 Summer School was designed to encourage participants to question, debate, and share their research with others.

“HDR students and ECRs are working at the forefront of media industry and ADM research, so fostering collaboration for meaningful discussions that will further collective knowledge is vital for offering fresh perspectives and insights,” said Dr Michelle Riedlinger, Chair of the 2024 Summer School.

We extend thanks to all our speakers, mentors, and student participants for making this event possible, and especially the DMRC and ADM+S working group for their hard work behind the scenes delivering this event so seamlessly.

View the 2024 Summer School photo library.

SEE ALSO

ADM+S researchers appointed to new Federal Government artificial intelligence expert group

Abstract image: blue and purple with code

ADM+S researchers appointed to new Federal Government artificial intelligence expert group

Author Kathy Nickels
Date 14 February 2024

Members of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) have been appointed to the Australian Government’s new Artificial Intelligence Expert Group, announced today by Minister for Industry and Science Ed Husic.

The group was established following the Federal Government’s interim response to the Safe and Responsible AI in Australia consultation. It will provide advice to the Department of Industry, Science and Resources on immediate work on transparency, testing and accountability, including options for AI guardrails in high-risk settings, to help ensure AI systems are safe.

ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) Chief Investigators Prof Nicolas Suzor from QUT and Prof Kimberlee Weatherall from the University of Sydney have been appointed to the group, alongside centre affiliate Prof Jeannie Paterson from the University of Melbourne, and Bill Simpson-Young from collaborating organisation the Gradient Institute.

“This Artificial Intelligence Expert Group brings the right mix of skills to steer the formation of mandatory guardrails for high-risk AI settings,” said Hon. Ed Husic, Minister for Industry and Science. 

“With expertise in law, ethics and technology, I’m confident this group will get the balance right.”

The Group has already started work and met for the first time on Friday 2 February 2024.

The twelve appointees to the Artificial Intelligence Expert Group are:

  • Professor Bronwyn Fox: CSIRO Chief Scientist, represents Australia on the panel overseeing the international Frontier AI State of the Science report.
  • Aurélie Jacquet: A leading figure in the development of responsible artificial intelligence systems, Chair of Australia’s national AI standards committee, OECD expert on AI risks, and advisor on international AI certification initiatives. 
  • Dr Terri Janke: An international authority on Indigenous Cultural and Intellectual Property (ICIP).
  • Angus Lang SC: A leading legal practitioner and sought-after contributor on intellectual property law and AI, addressing developments in Australia and Europe.
  • Professor Simon Lucey: Director of the Australian Institute for Machine Learning at the University of Adelaide, with a background in artificial intelligence, autonomous vehicles, and research spanning computer vision, machine learning, and robotics.
  • Professor Jeannie Paterson: Founding co-director of the Centre for AI and Digital Ethics, and leading contributor to legal and regulatory reform processes in Australia and internationally.
  • Professor Ed Santow: Co-founder of the Human Technology Institute, leading major initiatives to promote human-centred artificial intelligence.
  • Professor Nicolas Suzor: A Future Fellow at QUT, a Chief Investigator of the ARC Centre of Excellence for Automated Decision-Making & Society, and expert on the governance of digital technologies.
  • Professor Toby Walsh: Widely recognised voice on AI development, with leading roles at Data61 and UNSW, and numerous international fellowships.
  • Professor Kimberlee Weatherall: A Chief Investigator with the ARC Centre of Excellence for Automated Decision-Making and Society.
  • Professor Peta Wyeth: An internationally recognised researcher on human computer interaction, human-centred artificial intelligence, and design practice and management.
  • Bill Simpson-Young: Co-founder and CEO of Gradient Institute, founded to accelerate the ethical progress of AI-based systems, and leading technologist for the safe and responsible use of AI.

The group will be in place until 30 June 2024. The Government is considering longer-term arrangements as part of its work implementing the interim response to the safe and responsible AI consultation. 

SEE ALSO

ADM+S Research Fellow invited to provide evidence to the Federal Parliament

Jose Miguel Bello y Villarino and colleagues at the AI for education hearing on Jan 30
Dr Jose-Miguel Bello y Villarino alongside other panellists at the January 30 hearing.

ADM+S Research Fellow invited to provide evidence to the Federal Parliament

Author Natalie Campbell
Date 13 February 2024

On Tuesday 30 January 2024, ADM+S Research Fellow Dr Jose-Miguel Bello y Villarino from the University of Sydney was invited to present evidence to Federal Parliament on the use of generative artificial intelligence (AI) in the Australian education system.

Members of Parliament’s House Standing Committee on Employment, Education and Training, travelled to Sydney to meet with Jose-Miguel and colleagues who are working on an ARC Discovery project investigating the use and regulation of AI in education.

The 45-minute hearing focused on preliminary observations connected to two ongoing projects: the panel’s 2024–2026 ARC Discovery Project, ‘Artificial intelligence in education: Democratising policy’, and a 2023–2025 James Martin Institute Policy Challenge grant, ‘Governing AI, education, and equity together’.

The common objective of both projects is to find ways of involving people directly affected by the deployment of automation in the education sector – such as teachers and students – in its governance.

“The Committee was very interested in how we can do this, and the type of governance measures we can establish now, and in the future,” said Jose-Miguel.

Jose-Miguel’s contributions highlighted his expertise around regulatory and comparative experience, which he has developed as a Research Fellow with the ADM+S Centre.

He told the Committee of the Centre’s work around AI regulation, and when asked if AI could degrade human individuality by steering ideologies in a particular way, Jose-Miguel referred to the recent ADM+S 2023 Hackathon which explored bias in large language models.

Explaining that bias is embedded in such systems, Jose-Miguel advised that the real risk of using AI platforms is not being able to evaluate the system as a user.

After the formal discussion, Jose-Miguel engaged in further conversations with Committee members about the importance of AI infrastructure for equality and access.

Jose-Miguel’s focus on AI in education governance complements the Centre’s broader engagement with the Department of Industry, Science and Resources to support the responsible development of AI governance in general.

“This meeting indicates that the apex regulator in Australia, that is Parliament, is taking the disruption created by AI in diverse sectors seriously, and is willing to invest its resources in listening to what different actors have to say about it,” he said.

“My hope is that MPs listen to us when we insist that this is quite new and a trial-and-error approach is absolutely ok.

“Learning from other jurisdictions’ strengths and errors is much better than just adopting a policy or a regulation that ticks a box and is forgotten for the next few years. I hope that they are sceptical about those who say they have the silver bullet for the governance of AI.”

On the panel, Jose-Miguel was joined by Prof. Kalervo Gulson from the Education Futures Studio at the University of Sydney, Dr. Teresa Swist from Western Sydney University, and A/Prof. Simon Knight from the Centre for Research on Education in a Digital Society, University of Technology Sydney.

SEE ALSO

ADM+S researcher writes for international CUTE exhibition

Hello Kitty display from CUTE exhibition at Somerset House, UK

ADM+S researcher writes for international CUTE exhibition

Author Kathy Nickels
Date 9 February 2024

ADM+S researcher, Dr Megan Rose joins internationally respected scholars to share perspectives on contemporary artworks as part of CUTE, a landmark exhibition exploring the irresistible force of cuteness in contemporary culture.

The exhibition, taking place at Somerset House in the UK, examines the world’s embrace of cute culture and seeks to unravel cuteness’ emotive charge, revealing its extraordinary and complex power and potential.

Cute studies specialist, Dr Megan Rose from the University of NSW node at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) was chosen to write for the CUTE guide that accompanies the exhibition. Megan also assisted with the curation and sourcing of original works from Tokyo for the “cute cluster” section of the exhibition.

It features artworks by over 50 contemporary artists, presented alongside cultural phenomena from music, fashion and toys, to video games and social media.

The online guide, which accompanies the exhibition, unpacks the meanings behind a selection of objects in the exhibition.

In the catalogue Megan unpacks the cultural significance, meaning, and positive impacts of cute as it is intertwined in subcultures, video games such as Nintendo’s Animal Crossing, and a range of robots for the home, including QOOBO, a care robot and artificial companion.

“It was a pleasure to work with the curation team in selecting interesting objects to inspire the public to rethink their relationships with cute interfaces in technology design,” says Megan. A number of items were curated based on her portfolio of writing on cute cultures.

In writing about QOOBO Megan says “With great attention paid to the engineering of its tail, this robot aims to deconstruct the soothing experience of holding a cat by exaggerating its key cute and ‘iyashi’ (mentally restorative) properties.”

“In Japan there is an extensive body of research that tries to understand why cuteness is so attractive, and how it promotes wellbeing,” says Megan.

“Cuteness has a long history in Japanese culture, but in particular the “iyashi boom” in the aftermath of the 2009 recession and 2011 Tōhoku earthquake and tsunami points to the high demand for soothing media and environments. Research shows that cuteness not only allows us to connect with and experience art and design, but also with ourselves and each other in affective networks of belonging.”

Megan’s research at the ADM+S investigates the impact of social robotics in the health sector, including therapeutic animal robots and telepresence technologies.

Her current creative-practice research focuses on assemblages of animals, humans and technology in the future of robot pets and therapeutic aids, and the role that cute morphologies play in facilitating and hindering these connections.

As a cute studies specialist, Megan collaborates on a range of projects that look at the intersections of popular culture, media and creative practice to promote community wellbeing and inclusion. She is interested in the intersections between cute media, care, voice and precarity in contexts such as girls’ activism, neurodivergent sensory seeking, Japanese mascot characters and social simulation games like Animal Crossing.

Dr Megan Rose’s contributions to the CUTE guide and her research at ADM+S highlight the reach of cute studies across contemporary culture and technology design.

The exhibition runs from 25 January to 14 April 2024 at Somerset House, UK.

SEE ALSO

The Facebook trick online gambling is using to target Australians

Poker machine on laptop computer
Getty Images/ audioundwerbung

The Facebook trick online gambling is using to target Australians

Authors Christine Parker and César Albarrán-Torres
Date 6 February 2024

Gambling advertising has long been a contentious issue in Australia, with critics and regulators regularly raising concerns about the intensity and placement of ads in the media.

Online gambling is rife among Australian adults, with an estimated 44 per cent gambling on sports and racing using a smartphone or computer. Studies have associated online gambling with a range of harms, from financial distress to relationship breakdown and mental health issues.

An estimated 44 per cent of Australians gamble on sports and racing using a smartphone or computer. Picture: Getty Images

In fact, the COVID pandemic accelerated the growth of the online gambling market, with Australian expenditure reaching AU$9.56 billion in 2022.

While advertising on traditional outlets like television, radio or print is (relatively) easy to monitor and regulate, advertisers are bypassing these restrictions to reach potentially vulnerable populations using the internet and social media.

Because online advertising is hard to track and archive, and because there’s a high level of self-governance on social media platforms, advertisers can reach these online audiences – even if the law forbids them doing it.

Our new research, published with colleagues at the ARC Centre of Excellence for Automated Decision-Making and Society in Addiction Research & Theory, has uncovered gambling advertising to Australians by BitStarz.

This is an issue because BitStarz is an online offshore casino registered in the Dutch Caribbean island of Curaçao, which cannot legally operate or advertise in Australia. Our preliminary finding received significant coverage in the media.

These gambling ads were served to Australians on the social media platform, Facebook (owned by Meta).

This is just one example that shows us how gambling advertising can foster and sustain gambling cultures online.

The case is particularly relevant in the context of a June 2023 report on online gambling harm released by the Standing Committee on Social Policy and Legal Affairs of the House of Representatives.

The gambling ads were served to Australians on the social media platform, Facebook. Picture: Supplied

The ‘You win some, you lose more’ report offers 31 recommendations to lessen online gambling harm. Among these, the committee led by Labor MP Peta Murphy proposes a “phased, comprehensive ban on all gambling advertising on all media – broadcast and online, that leaves no room for circumvention”.

The proposal has prompted lobbyists from gambling companies, broadcasters, sporting codes and tech companies to request meetings with the communications minister, Michelle Rowland, to argue against the ban.

Our research finds that when it comes to social media advertising there may indeed be “room for circumvention”.

What makes our case study particularly timely is that it reveals a hidden economy of gambling ads that often fall under the radar of regulators.

Online casinos like BitStarz operate overseas with servers located in international jurisdictions that do not fall under Australian law.

Rather than advertising in online newspapers or on broadcast media (many Australians are used to seeing big gambling companies like Sportsbet or TAB on their TVs), these online casinos are promoted almost exclusively on social media – using targeted, personalised ads that are only seen by individual users and only in particular sessions.

The Australian Ad Observatory, which researches the challenges associated with monitoring online advertising and uncovering targeted ads, gives us an insight into how casinos like BitStarz target Australians.

A pixellated image of the Facebook advertisements featuring an Australian flag. Picture: Supplied

It also highlights how monitoring and enforcement of a ban on programmatic and personalised advertising would be borderline unfeasible without the collaboration of global social media platforms.

According to the Australian Communications and Media Authority (ACMA), the wording of the Australian law prohibiting the advertising of online casinos “in Australia” potentially limits the enforcement of the law against these global social media platforms.

This means that ACMA has no power to ask platforms like Facebook to block these advertisements, even though the ads themselves would otherwise be considered illegal in Australia and BitStarz’s own websites have been blocked by ACMA.

Facebook, and some other platforms, do have their own internal policies that prohibit online casinos from targeting advertising in jurisdictions where it is illegal, like Australia.

But as our research shows, these internal policies and procedures do not always work, despite the enormous information and resources social media platforms have at their disposal.

However, platforms like Facebook can and should do more to curb the presence of potentially harmful gambling ads and be transparent about how they do so.

Importantly, we need reforms to Australian law to close the loophole that makes platforms unaccountable for illegal advertising online. This could be done by following the lead of the European Union’s 2022 Digital Services Act and giving ACMA the power to issue social media platforms with ‘notice and takedown’ orders to remove unlawful advertising.

Platforms like Facebook can and should do more to curb the presence of potentially harmful gambling ads. Picture: Supplied

Platforms that do not expeditiously remove illegal ads, once put on notice, should be held liable and be required to make all ads available in an easily searchable public archive.

Only by ensuring accountability and transparency from social media platforms can we hope to give Australian regulators the power to prevent the harms of online gambling.

This research is a result of the Australian Ad Observatory project, part of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), which is a collaboration between researchers at the University of Melbourne, Swinburne University, Monash University, QUT and industry partners including the ABC, Choice and the Consumer Policy Research Centre (CE200100005).

The project encourages citizen scientists to contribute to the research by ‘donating’ their Facebook advertisements, which can be done by installing a plugin on their computers. To date, more than 1900 participants have donated more than 328,000 ads, resulting in over 737,000 observations. These collected ads can be viewed by citizen scientists in a personal dashboard and are accessible for search, filtering and sorting by researchers collaborating with the project.

This article was written by Prof Christine Parker and Dr César Albarrán-Torres.

We’d also like to acknowledge the co-authors of the research article which this piece discusses: Casey Briggs (Australian Broadcasting Corporation), Distinguished Prof Jean Burgess (QUT), Assoc Prof Nicholas Carah (UQ), Prof Mark Andrejevic (Monash University), Prof Daniel Angus (QUT) and Dr Abdul Obeid (QUT).

SEE ALSO

In loving memory of Arjun Srinivas

In loving memory of Arjun

In loving memory of Arjun Srinivas

Author  The ADM+S Centre
Date 6 February 2024

Friends and colleagues from QUT’s Creative Industries and Social Justice Faculty, the School of Communication, the Digital Media Research Centre (DMRC), and the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) have gathered with Arjun’s family to remember the extraordinary life of Arjun Srinivas, a much loved and valued member of our community.

Arjun joined the ADM+S Centre at QUT as a PhD student in April 2022. He was a scholarship recipient of the Centre, working with researchers on the Australian Search Experience Project, and independently investigating the role of YouTube in curating news content for Australian audiences.

Arjun’s research contributed to one of the most pressing questions of our time: how do social media platforms shape news consumption? 

“Researchers like Arjun are truly shooting stars. He had a rare mix of skills and knowledge that are hard enough to find in an entire research team, let alone a single person. His passion for his work, for its real-world impact and contributions, and his indomitable enthusiasm and generosity shone through in his PhD and in everything he did,” said Assoc Prof Timothy Graham, one of Arjun’s PhD supervisors.

His research provided critical insights into a range of areas. By interrogating the very concept of ‘news’ and what YouTube deems to be newsworthy, he highlighted deep questions about what it means to be an authoritative news source in the 21st century. 

In late 2023, Arjun completed a three-month internship with the ABC YouTube news desk under the supervision of Gary Kemble, Social Media Lead at ABC News. During this placement, Arjun investigated how a public service media organisation navigates the recommender systems of large commercial platforms, such as YouTube. This experience was the first PhD internship of its kind for both the ABC and the Queensland University of Technology (QUT).

Along with Arjun’s extraordinary record of academic achievements, he was also known across QUT and the ADM+S Centre for his enthusiasm, warmth, and kindness. Arjun was a highly engaged and caring colleague, often the first to put his hand up to help others or provide support for Centre activities. 

“Arjun connected in such a unique and genuine way with everyone he crossed paths with, finding shared interests that he would thoughtfully grow upon,” said PhD student Dominique Carlon.

Arjun’s close friend, Shubhangi Heda said “Arjun in many ways was the person who connected people with each other to give them a comfortable space to share and listen, and he so profoundly valued friendships and cared in a unique way that often left you in the presence of a warm hug.” 

More than 270 friends and colleagues attended the service in Brisbane and online on Wednesday 31 January with heartfelt tributes shared by his eldest sister Chaithanya Srinivas, who had travelled to Australia, as well as his sister Madhura Srinivas and mother Dr. Savithri Srinivas who joined the service from India. 

“We lovingly called him Ajju at home. He was the little one and we absolutely adored him,” said Chaithanya.

 “All the lovely stories that Arjun’s friends told me about their experiences with him are testament to the beautiful person he was, how loving and giving he was, how many lives he’s touched.

“He loved this line from a poem, he made it his status; it says ‘I am a part of all that I have met’. So each one of you who my brother was associated with, he is a part of all of you now and he will live on through you.”

Friends and colleagues Donnie Johannessen, Dennis Leeftink, Distinguished Professor Jean Burgess, and Ashwin Nagappa also spoke at the service.

“I feel very honoured that I was given the opportunity to speak and share at the service. It was heartwarming to see so many of you and hear your stories. But there is so much left unsaid still, and there is much to learn from what Arjun has brought to each of our lives,” said PhD student Dennis Leeftink.

Distinguished Prof Jean Burgess said that alongside fellow PhD supervisors Assoc Prof Timothy Graham and Prof Axel Bruns, they had absolutely no doubt that Arjun would have gone on to have a wonderful career after completing his PhD. And that they felt proud and lucky to have been his supervisors.

“Arjun had boundless energy and an irrepressible curiosity paired with creativity. Not only was he an excellent researcher and writer, but he also turned his research into an interactive theatre performance on the dynamics of hate speech that was performed in several venues in Europe in mid-2023. Such a broad range of talents and interests is very rare – Arjun was a true polymath”, said Distinguished Prof Jean Burgess. 

“We are all the poorer for being robbed of the opportunity to see him pursue and realise his dreams. But our community has been made richer by the time we did have together, and for that, we’re deeply grateful,” said Distinguished Prof Jean Burgess.

Arjun Srinivas passed away in the Royal Brisbane and Women’s Hospital on Saturday 27 January 2024, after an accident involving a motor vehicle in Brisbane on Friday 19 January 2024.

Read more about Arjun’s research and achievements on the ADM+S memorial page.

We invite family, friends and colleagues to share memories, thoughts and images on Arjun’s public memorial page.

SEE ALSO

Nine was slammed for ‘AI editing’ a Victorian MP’s dress. How can news media use AI responsibly?

Images of Georgie Purcell before and after image altered
Nine News/Georgie Purcell via X/The Conversation

Nine was slammed for ‘AI editing’ a Victorian MP’s dress. How can news media use AI responsibly?

Author T.J. Thomson
Date 1 February 2024

Earlier this week, Channel Nine published an altered image of Victorian MP Georgie Purcell that showed her in a midriff-exposing tank top. The outfit was actually a dress.

Purcell chastised the channel for the image manipulation and accused it of being sexist. Nine apologised for the edit and blamed it on an artificial intelligence (AI) tool in Adobe Photoshop.

Generative AI has become increasingly prevalent over the past six months, as popular image editing and design tools like Photoshop and Canva have started integrating AI features into their programs.

But what are they capable of, exactly? Can they be blamed for doctored images? As these tools become more widespread, learning more about them and their dangers – alongside opportunities – is increasingly important.

What happened with the photo of Purcell?

Typically, making AI-generated or AI-augmented images involves “prompting” – using text commands to describe what you want to see or edit.

But late last year, Photoshop unveiled a new feature, generative fill. Among its options is an “expand” tool that can add content to images, even without text prompts.

For example, to expand an image beyond its original borders, a user can simply extend the canvas and Photoshop will “imagine” content that could go beyond the frame. This ability is powered by Firefly, Adobe’s own generative AI tool.

Nine resized the image to better fit its television composition but, in doing so, also generated new parts of the image that weren’t there originally.

The source material – and whether it’s cropped – is of critical importance here.

In the above example where the frame of the photo stops around Purcell’s hips, Photoshop just extends the dress as might be expected. But if you use generative expand with a more tightly cropped or composed photo, Photoshop has to “imagine” more of what is going on in the image, with variable results.

Is it legal to alter someone’s image like this? It’s ultimately up to the courts to decide. It depends on the jurisdiction and, among other aspects, the risk of reputational harm. If a party can argue that publication of an altered image has caused or could cause them “serious harm”, they might have a defamation case.

How else is generative AI being used?

Generative fill is just one way news organisations are using AI. Some are also using it to make or publish images, including photorealistic ones, depicting current events. An example of this is the ongoing Israel-Hamas conflict.

Others use it in place of stock photography or to create illustrations for hard-to-visualise topics, like AI itself.

Many adhere to institutional or industry-wide codes of conduct, such as the Journalist Code of Ethics from the Media, Entertainment & Arts Alliance of Australia. This states journalists should “present pictures and sound which are true and accurate” and disclose “any manipulation likely to mislead.”

Some outlets do not use AI-generated or augmented images at all, or only when reporting on such images if they go viral.

Newsrooms can also benefit from generative AI tools. An example includes uploading a spreadsheet to a service like ChatGPT-4 and receiving suggestions on how to visualise the data. Or using it to help create a three-dimensional model that illustrates how a process works or how an event unfolded.

What safeguards should media have for responsible generative AI use?

I’ve spent the last year interviewing photo editors and people in related roles about how they use generative AI and what policies they have in place to do so safely.

I’ve learned that some media outlets bar their staff from using AI to generate any content. Others allow it only for non-realistic illustrations, such as using AI to create a bitcoin symbol or illustrate a story about finance.

News outlets, according to editors I spoke to, want to be transparent with their audiences about the content they create and how it is edited.

In 2019, Adobe started the Content Authenticity Initiative, which now includes major media organisations, image libraries and multimedia companies. This has led to the rollout of content credentials, a digital history of what equipment was used to make an image and what edits have been done to it.

This has been touted as a way to be more transparent with AI-generated or augmented content. But content credentials are not widely used yet. Besides, audiences shouldn’t outsource their critical thinking to a third party.

In addition to transparency, news editors I spoke to were sensitive to AI potentially displacing human labour. Many outlets strive to use only AI generators that have been trained with proprietary content. This is because of the ongoing cases in jurisdictions around the world over AI training data and whether resulting generations breach copyright.

Lastly, news editors said they are aware of the potential for bias in AI generations, given the unrepresentative data AI models are trained on.

This year, the World Economic Forum has named AI-fuelled misinformation and disinformation as the world’s greatest short-term risk. It placed this above even disasters like extreme weather events, inflation and armed conflict.

The top ten risks as outlined in the World Economic Forum’s Global Risk Report 2024.
World Economic Forum, Global Risks Perception Survey 2023–2024

Because of this risk and the elections happening in the United States and around the world this year, engaging in healthy scepticism about what you see online is a must.

As is being thoughtful about where you get your news and information from. Doing so makes you better equipped to participate in a democracy, and less likely to fall for scams.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S student paper accepted at prestigious Web Conference 2024

user on laptop

ADM+S student paper accepted at prestigious Web Conference 2024

Author  Natalie Campbell
Date 1 February 2024

ADM+S PhD Student Chenglong Ma from RMIT University will present his paper at the upcoming 2024 Web Conference in Singapore.

Chenglong’s PhD supervisor and co-author Prof Mark Sanderson said, “This is a very prestigious conference, arguably the most important conference in database and information systems. Getting a paper into that conference is very difficult; only about 20% of the submissions are accepted.”

The paper titled ‘Temporal Conformity-aware Hawkes Graph Network for Recommendations’, acknowledges that traditional recommender systems often overlook the impact of peer influence and conformity, assuming user behaviour is solely driven by individual interests.

However, indiscriminate bias elimination may lead to depersonalized recommendations, neglecting valuable information. The proposed TCHN model addresses this by distinguishing between two types of conformity behaviour: informational and normative.

Chenglong explains, “Leveraging attentional Hawkes processes and sequence graph attention networks, TCHN effectively models the interplay between user self-interest and conformity, providing personalized recommendations.

“Experiments on real-world datasets reveal TCHN’s superior performance in accuracy, diversity, and fairness across user groups, highlighting its potential in mitigating conformity biases in recommender systems.”
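The paper describes TCHN as combining attentional Hawkes processes with sequence graph attention networks. As a rough illustration of one underlying idea only – not the authors’ model – a Hawkes-style intensity treats recent peer interactions as an exponentially decaying “excitation” that can be blended with a user’s own interest score. All function names and parameter values here are illustrative:

```python
import math

def hawkes_intensity(peer_event_times, t, mu=0.1, alpha=0.5, beta=1.0):
    # Classic Hawkes intensity: a base rate (mu) plus exponentially
    # decaying excitation from each past peer event before time t.
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in peer_event_times if ti < t)

def blended_score(self_interest, peer_event_times, t, w=0.7):
    # Toy blend of a user's own interest score with a peer-driven
    # conformity signal; w weights personalisation against conformity.
    conformity = hawkes_intensity(peer_event_times, t)
    return w * self_interest + (1 - w) * conformity
```

In this sketch, a burst of recent peer activity raises an item’s score, but the user’s own interest still dominates when the weight `w` is high – a crude analogue of distinguishing self-interest from conformity rather than eliminating the latter outright.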

This research paper aligns with Chenglong’s PhD focus on enhancing recommender systems through the integration of social influence and conformity. It extends the theoretical framework of his PhD research, providing insights into the dynamics of user behaviour influenced by both individual interests and conformity factors.

Chenglong’s thesis topic was inspired by the changes in consumer behaviour during the COVID-19 pandemic. He observed that people stopped thinking about their short-term needs and instead adopted a herd-like mentality, panic-buying items such as toilet paper that they thought might run out.

Chenglong’s research examines whether automated systems, such as the recommender systems that drive shopping websites, need to adjust in line with changing behaviours.

“Peer recognition of the novelty and quality of the research provides me with great encouragement to continue contributing meaningfully to the academic discourse in my field.

“Having my paper accepted at this conference is a proud moment, marking a successful conclusion to my PhD studies and establishing a strong foundation for my future academic career post-graduation,” said Chenglong.

The paper is co-authored by Yongli Ren, Pablo Castells and Prof Mark Sanderson, and will be published on the conference website in the coming months and can be found on the ACM webpage.

SEE ALSO

ADM+S Research Fellow collaborates with field-leading sociologists at the University of Bristol

Ash Watson in Bristol
Dr Ash Watson, visiting fellow at University of Bristol

ADM+S Research Fellow collaborates with field-leading sociologists at the University of Bristol

Author  Natalie Campbell
Date 31 January 2024

ADM+S Research Fellow Dr Ash Watson has recently completed a two-month research visit at the ESRC Centre for Sociodigital Futures, collaborating with field-leading researchers on innovative social science methods.

Through this experience, Ash got to work alongside sociologists Prof Susan Halford and Prof Dale Southerton, and leaders in design including Prof Helen Manchester.

Together, their work considered research methods that prompt critical future thinking, looking at how thinking about the future impacts technological innovation and also shapes how we address problems like digital inclusion and exclusion.

“While I was there, I mostly focused on developing my methodological skills: how I think about method, and how we can bring speculative and creative activities into traditional qualitative research approaches.

“Doing this allows us to better engage with the future and also reimagine new ways of advancing social change and technological development,” said Ash.

During the research visit, Ash also attended the International Creative Research Methods Conference in Manchester, which brought together researchers from a wide variety of fields who use arts-based methods and creative techniques in their scholarship, from creative writers to visual artists, to musicians and performers.

“I also attended a one-day symposium at the University of Cambridge, which brought together scholars mostly from sociology, but also from geography and history to think about the place of the text in scholarship and how we can explore the blurry and more porous boundaries of how we understand fact and fiction and knowledge and experience.”

Dr Ash Watson is a cultural sociologist whose work focuses on the meaning of emerging technologies in people’s lives and how they imagine the future. Her research practice centres on storytelling and belonging, complemented by her passion for writing and editing sociological fiction for the Sociological Review and So Fi Zine.

This program was supported by ADM+S and Bristol University.

SEE ALSO

Medal of the Order of Australia awarded to ADM+S Investigator

Paul Harpur OAM
Prof Paul Harpur, OAM recipient 2024

Medal of the Order of Australia awarded to ADM+S Investigator

Author  Natalie Campbell
Date 29 January 2024

ADM+S Associate Investigator Prof Paul Harpur has been awarded a Medal of the Order of Australia (OAM) by the Governor-General for his service to people with disability.

“I am deeply honoured to announce that I have been awarded a Medal of the Order of Australia (OAM) by the Governor-General,” said Prof Harpur.

The Order of Australia recognises Australians who have demonstrated outstanding service or exceptional achievement through their hard work, service and dedication.

Prof Harpur is a leading international and comparative disability rights legal academic. Through his focus on disability inclusion, he is part of a group of world-leading scholars who, individually and collectively, advance ability equality and promote the full realisation of all human rights and fundamental freedoms for persons with disabilities.

Prof Harpur is the first totally blind legal academic to become a full professor in Queensland, at the University of Queensland (UQ). He chairs the UQ Disability Inclusion Group, which supports the university in its implementation of the UQ Disability Action Plan. He also sits on the Academic Board, the University Senate’s sub-committee focusing on inclusion, and on the Senate Committee for Equity Diversity and Inclusion.

“Things are improving. There is an increase in the empowering and resourcing of persons with disabilities to lead the initiatives which impact upon us. The NDIS board is disability led, and we’ve seen the establishment of Universities Enable, a disability led disability steering group for the university sector.

“I believe the higher education sector can help us realise a world that is fairer and more inclusive.

“The higher education sector educates, employs, and produces research that is transforming the world for the better.”

Prof Harpur’s passion and activism is evident in his many previous accolades.

Prof Harpur was appointed an International Distinguished Fellow with the Burton Blatt Institute from 2015 onwards and a 2020 academic fellow of the Harvard Law School Project on Disabilities. He is the holder of a prestigious Fulbright Future Scholarship, and the recipient of an ARC Future Fellowship.

Prof Harpur regularly appears on the news, speaking on disability law and policies. Outside the law, Prof Harpur has previously been a professional athlete with a disability, competing in the 2000 Sydney and 2004 Athens Paralympics, the 2002 Manchester and 2006 Melbourne Commonwealth Games and a range of other World Titles and international competitions.

“The doors of education are being opened wide to all Australians. With reviews to early childhood, school, and higher education just complete, reforms already started and more to come, I believe more Australians will be able, like me, to turn their dreams into a reality.”

View the full list of 2024 recipients.

SEE ALSO

Getting beyond Net Zero dashboards in the information technology sector

Circuit board with emission cloud

Getting beyond Net Zero dashboards in the information technology sector

Author  Kathy Nickels
Date 25 January 2024

Energy-intensive processes in the Information Technology (IT) sector account for somewhere between 1 and 4% of global greenhouse gas emissions and are projected to rise to at least 14% by 2040.

AI and software services to track carbon emissions are increasingly positioned as solutions to the industry’s energy consumption. But are they doing enough for sustainability beyond Net Zero goals? 

In a recent article published in Energy Research & Social Science, Dr Melissa Gregg, Senior Industry Fellow for the ARC Centre of Excellence for Automated Decision-Making & Society (ADM+S), and Prof Yolande Strengers, Associate Investigator at the Monash node of the ADM+S, provide a unique insight for energy social scientists into the logics and inner workings of large technology corporations, and the ways in which engineering-led tools are being mobilized to pursue profits in the name of carbon reduction.

Inspired by recent developments in corporate social responsibility reporting in large technology companies, they draw on first-hand observation of this practice, focusing on the carbon emissions dashboard as a pertinent example of several concerning developments with current green software approaches to achieving Net Zero targets. 

“While dashboards are of course incredibly successful as a management tool, our problem is with how they too easily limit the ambition of sustainability goals to a narrow set of things that can be counted and measured, allowing many other unsustainable practices to continue,” write Dr Gregg and Prof Strengers.

“Getting “beyond zero” must involve a frank assessment of the lifestyle, societal and capitalist assumptions that large technology companies inherit from a century of fossil fuel enterprise management. A sustainable energy transition lies not in the dashboard or other proprietary tools currently being mobilized, but in a careful consideration of how organizational cultures, lifestyles and livelihoods – enabled and sustained by Big Tech products and services – generate such high energy demand in the first place.

“In sharing these observations, we encourage more participation from energy social scientists in defining corporate sustainability objectives which can only be improved with the application of relevant research findings in the field.”

Read the full article: Getting beyond Net Zero dashboards in the information technology sector.

SEE ALSO

ADM+S research cited in Australian Government’s Interim Response to Safe and Responsible AI paper

ADM+S research cited in Australian Government’s Interim Response to Safe and Responsible AI paper

Author  Natalie Campbell
Date 19 January 2024

The Australian Government has published its interim response to the Safe and Responsible AI in Australia consultation, citing the ADM+S submission, and the Generative AI Rapid Response Report co-led by Prof Julian Thomas and Prof Jean Burgess.

The interim report, released 17 January 2024, highlights key findings from more than 500 submissions to the Safe and Responsible AI discussion paper.

The discussion paper was released in June 2023 by the Department of Industry, Science and Resources, seeking submissions and consultation in response to rapid changes in the industry.

The interim report explains, “while artificial intelligence (AI) is forecast to grow our economy, there is low public trust that AI systems are being designed, developed, deployed and used safely and responsibly.

“This acts as a handbrake on business adoption, and public acceptance [and] more needs to be done to ensure that the development and deployment of AI is safe and responsible.”

The report features a visualisation of the development lifecycle of AI systems (page 10), drawing on the submission from the ARC Centre of Excellence for Automated Decision-Making and Society, led by Prof Kim Weatherall.

The diagram provides a visual representation of the AI lifecycle and identifies harms that may occur at each stage.

AI Product Lifecycle and Associated Harms diagram
Image description: Diagram of impacts through AI lifecycle, from Interim Response report (2024).

The Interim Report also cites the Generative AI Rapid Response Report when discussing the opportunities of AI. The report, commissioned in February 2023 by Australia’s National Science and Technology Council (NSTC), outlines the ways in which AI is already benefiting society and the economy, from analysing medical images and optimising engineering designs to better forecasting and managing natural emergencies.

ADM+S’s contributions to this report seek to prompt government interventions around the risks of AI, and inform government consideration on how regulatory systems can promote responsible, ethical, and inclusive AI and automated decision-making systems for all Australians.

SEE ALSO

AI personality tests for hiring are here. Now what?

Robotic arm throws man in garbage can_web

AI personality tests for hiring are here. Now what?

Author  ADM+S Centre
Date 8 January 2024

Many employers use personality tests to determine a candidate’s cultural fit at a company. According to the New York Times, personality testing is a $2 billion industry. With the recent rise of AI tools, more companies are turning to AI-driven personality testing to make hiring decisions. But how reliable are they?

“These days we pay a lot of attention to the use of AI in hiring and employment,” explains Julia Stoyanovich, a computer science professor at New York University (NYU) and affiliate of the ARC Centre of Excellence for Automated Decision-Making and Society.

Associate Professor Julia Stoyanovich has extensively researched the role of AI tools in the hiring process. She and a team of researchers conducted an external audit of two AI software companies that claim to determine a candidate’s personality when they are being considered for a job. The tools claim to construct a person’s personality profile based on their resume, LinkedIn, and profile on X (formerly known as Twitter).

But can employers trust the process?

“What we found was that unfortunately these tools do not live up to their own expectations. The kinds of personality profiles that they construct can vary quite a bit depending on some properties of the input such as resume that shouldn’t matter at all.

“And so I think that we really need to be paying attention to the validity of these tools, to how useful they are, in addition to whether or not they are discriminatory.”

Julia said that one of the reasons employers use these tools, despite findings that they aren’t helpful, is that they promise efficiency in screening.

In November, Stoyanovich was invited to speak at one of United States Senator Chuck Schumer’s AI insight forums. In her statement she said one of the ways to make AI use in hiring more ethical is to disclose when AI is being used in the hiring process. And while New York City’s local Law 144 aims to do this, Stoyanovich says this is not enough because the law does not include any provisions to explain to job seekers why they were screened out by the tool.

“The question is whether it’s in fact helpful that employers disclose the use of AI in their screening processes to potential employees.

“What can we gain with the help of disclosure? There are lots of things that we can gain. One of them is simply that the public at large doesn’t have any information at all about what tools are being used today. 

“Additionally, for an individual, if they learn before they apply that they are going to be screened by a tool that they themselves don’t trust, then a potential job applicant can decide not to apply. They can say this particular test is going to disadvantage me because I have a particular type of a disability, or simply request that they be screened in a different way.”

New York City’s Law 144 requires a bias audit for AI tools, but some factors like age and disability aren’t accounted for.

“They only concern a very specific type of bias, that is relatively easy to comply with and relatively easy to check, and only with respect to gender, ethnicity and race or intersections of these categories.

“But the law doesn’t consider bias auditing based on age for example or on disability status. So ageism is really unfortunately very prevalent in hiring and employment.”

In 2022 the Equal Employment Opportunity Commission filed a lawsuit against a company because its AI model reportedly discriminated against female job applicants over the age of 55 and male job applicants over the age of 60.

In an ideal world AI operates without any bias but how likely are we to achieve this? Is it even possible?

“I must say that I am very skeptical about our ability to use a technical patch, a piece of technology, to address a long-standing societal problem. The reason for this is that these problems are not purely technical, they are socio-technical.

“The reason that there’s bias in the predictions that these tools are making is because of how we construct them, because of how we source the data, because of what features we chose to use in order to make these predictions. But also to a very large degree what these tools can do is limited by how biased or unbiased our world is today.”

It’s hard to make strong predictions about exactly how the intersection of AI and hiring will evolve. 

Stoyanovich says there’s more work to be done on a policy and technological level.

“So I think that we all should take it upon ourselves to demand more disclosure about the use of these tools. Folks are paying attention, folks at the federal level, folks in New York City, private companies as well as government entities. Everybody’s paying attention.

“So we really have an opportunity here today, in 2024 and in the following years to hold companies to account about the goals that they pursue in their use of AI and also how they are checking whether these AI based tools are helping them reach these goals.”

Watch the original video story AI personality tests for hiring are here. Now what? published by MarketWatch on 4 January 2024.

SEE ALSO

Hackathon project explores multimodal AI to grapple with human and machine bias

Winning Hackathon team standing in front of Microsoft sign
Left to right: Hiruni Kegalle, Rhea Erica D'Silva, Dr Lida Ghahremanlou & Awais Hameed Khan at Microsoft.

Hackathon project explores multimodal AI to grapple with human and machine bias

Author  Kathy Nickels
Date 5 January 2024

Winners of the 2023 ADM+S Hackathon have visited Microsoft and Canva offices in Sydney to further advance their research exploring human and machine bias. 

The project, named Sub-Zero: A Comparative Thematic Analysis Experiment of Robodebt Discourse Using Humans and LLMs, was originally developed to investigate human and machine bias in the context of large language models (LLMs) like GPT-4 and Llama 2 for Qualitative Data Analysis (QDA).

It was one of five projects developed over a two-day hackathon hosted by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) in August 2023.

Sub-Zero was selected as the winning project as it introduces a perspective of human-AI collaboration for qualitative research with a real potential to grapple with the complexities of bias and perception within our AI era.

It was commended by judges who said that the project is not only concerned with creating advanced qualitative research mechanisms but also about ingraining creativity and self-reflection into the process. It does not just navigate the data; it invites us to scrutinise our biases and preconceptions to pursue more nuanced research outcomes.

Dr Lida Ghahremanlou, Data Scientist Lead at Microsoft, affiliate at the ADM+S, and Sub-Zero project mentor, hosted the team at Microsoft. Here the team used insights from the Hackathon and expanded their project’s scope from LLMs to multimodal AI systems.

“We constructed a method that used image-to-text-to-image to scrutinise the multimodal reasoning of multiple commercial and open-source GenAI systems while providing insights to human researchers about our own conceptions and assumptions when we try to observe and mitigate bias,” explained Rhea Erica D’Silva, one of the team researchers from ADM+S at Monash University.

Multimodal artificial intelligence combines multiple types of data to reach more accurate conclusions and make more precise predictions about real-world problems. These systems train with and use video, audio, speech, images, text and a range of traditional numerical data sets.

Left to right: Peter Bailey (Canva), Damiano Spina, Ned Watt, Awais Hameed Khan, Rhea Erica D’Silva, Hiruni Kegalle & Lida Ghahremanlou at Canva.

Ned Watt from ADM+S at QUT said, “this approach aims to expose biases and blind spots across modalities that emerge and re-emerge as GenAI models juggle multiple types of inputs and outputs.”

Most importantly, multimodal AI means numerous data types are used in tandem to help AI establish content and better interpret context, something missing in LLMs and earlier AI.

The project was showcased to Canva’s Trust, Safety, and Responsible AI team, eliciting valuable feedback and insights.

During the visit, the team also worked with Dr Damiano Spina, Associate Investigator at the RMIT University node of the ADM+S, to explore model degradation using image-to-text-to-image. 

“Our approach aims to embed reflexivity in both human and machine bias detection and mitigation using a combination of human-in-the-loop and machine-in-the-loop to broaden, deepen, and scale multimodal bias detection,” said Ned.
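The image-to-text-to-image probe the team describes can be sketched as a round-trip loop: caption an image, regenerate an image from the caption, and measure how far the result has drifted from the original. The function names below are hypothetical stand-ins (the real method calls captioning and text-to-image models, which are stubbed here so the loop structure is clear), and the drift measure is a crude word-overlap proxy, not the team's actual metric.

```python
# Sketch of an image -> text -> image round-trip probe for multimodal bias.
# caption_image() and generate_image() are hypothetical stand-ins for calls
# to real GenAI captioning and image-generation models.

def caption_image(image_desc: str) -> str:
    # Stand-in: a captioning model would return free text for the image.
    return f"a photo of {image_desc}"

def generate_image(caption: str) -> str:
    # Stand-in: a text-to-image model would return a new image; here we
    # keep the caption's content words as a proxy for what it depicts.
    return caption.replace("a photo of ", "")

def semantic_drift(a: str, b: str) -> float:
    # Crude proxy for semantic distance: 1 - Jaccard overlap of word sets.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return 1 - len(wa & wb) / len(wa | wb)

def round_trip_drift(image_desc: str, rounds: int = 3) -> list:
    """Run image -> text -> image repeatedly, recording drift per round.

    Systematic drift (e.g. a 'nurse' regenerated as a woman every round)
    is the kind of compounding bias the probe is meant to surface.
    """
    drifts = []
    current = image_desc
    for _ in range(rounds):
        caption = caption_image(current)
        regenerated = generate_image(caption)
        drifts.append(semantic_drift(image_desc, regenerated))
        current = regenerated
    return drifts

drifts = round_trip_drift("a nurse helping an elderly patient")
```

With real models substituted in, rising drift scores across rounds would indicate the model reintroducing its own assumptions each time it re-interprets the content.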

The team will be releasing results from further investigations soon.

Project Team

Assoc. Prof Liam Magee (mentor), Dr Lida Ghahremanlou (mentor), Ned Watt, Hiruni Kegalle, Rhea D’Silva, Daniel Whelan-Shamy, and Dr Awais Hameed Khan.

Acknowledgements

The Sub-Zero project team extend their thanks to Peter Bailey (Canva), Dr Damiano Spina and mentors Dr Lida Ghahremanlou and Assoc. Prof Liam Magee.

SEE ALSO

ADM+S student awarded Best Oral Presentation at Information Access Evaluation Conference

Sachin best oral presentation
Sachin Cherumanal awarded Best Oral Presentation at NTCIR-17

ADM+S student awarded Best Oral Presentation at Information Access Evaluation Conference

Author  Natalie Campbell
Date 4 January 2024

ADM+S student Sachin Pathiyan Cherumanal from RMIT has been awarded Best Oral Presentation at the 17th Conference on Evaluation of Information Access Technologies (NTCIR-17), which took place in Japan on 12-15 December 2023.

Sachin represented a group of RMIT information retrieval researchers, including ADM+S members Kaixin Ji, Dr Danula Hettiachchi, Prof. Falk Scholer, and Dr Damiano Spina.

The ‘RMIT_IR’ team participated in the FairWeb-1 task, which required groups to investigate the relation between fairness and diversity in rankings using a systematic evaluation.

The FairWeb-1 task focused on three distinct entity types: researchers, movies, and YouTube content. Each entity type is associated with one or two attribute sets, containing either nominal or ordinal groupings designed to ensure group fairness, and a target distribution is provided for each attribute set.

Groups were asked to submit results that not only included relevant documents at the top rank but also exhibited group fairness in alignment with the attributes specified for each entity type.

The RMIT_IR report details the team’s approach, exploring the role of explicit search result diversification (SRD) and ranking fusion to generate fair rankings considering multiple fairness attributes.

Sachin explains, “the report also considers the use of a linear combination-based technique (LC) which would take into consideration the relevance while re-ranking. Researchers compared results from five submitted runs, and the retrieval baselines along each topic type separately (i.e., Researcher, Movie, YouTube).”
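The linear combination (LC) technique Sachin mentions can be illustrated with a small sketch: each document's relevance score is blended with a fairness score using a weight before re-ranking. The scores, weight, and field names below are made-up illustrative values, not the settings from the RMIT_IR runs.

```python
# Hedged sketch of linear-combination (LC) re-ranking: blend a document's
# relevance score with a fairness score, then sort by the combined score.
# All scores and the alpha weight are illustrative placeholders.

def lc_rerank(docs, alpha=0.7):
    """Re-rank docs by alpha*relevance + (1-alpha)*fairness, descending."""
    def combined(doc):
        return alpha * doc["relevance"] + (1 - alpha) * doc["fairness"]
    return sorted(docs, key=combined, reverse=True)

docs = [
    {"id": "d1", "relevance": 0.9, "fairness": 0.2},
    {"id": "d2", "relevance": 0.7, "fairness": 0.9},
    {"id": "d3", "relevance": 0.4, "fairness": 0.8},
]

ranked = lc_rerank(docs, alpha=0.5)
# At alpha=0.5 the combined scores are d2: 0.80, d3: 0.60, d1: 0.55,
# so the highly relevant but unfair d1 drops to the bottom.
```

Raising alpha toward 1.0 recovers a pure relevance ranking; lowering it lets the fairness attributes pull the ranking toward the task's target distribution.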

Situated within the ADM+S project, Quantifying and Measuring Bias and Engagement, this work contributes to Sachin’s PhD thesis on fairness-aware question answering.

Each group’s spokesperson presented a 10-minute overview of their system, for which Sachin was awarded Best Oral Presentation.

Since 1997, the NTCIR project has promoted research efforts for enhancing Information Access (IA) technologies such as Information Retrieval (IR), Text Summarization, Information Extraction (IE), and Question Answering (QA) techniques.

Dr Damiano Spina said, “by participating in NTCIR-17, ADM+S members contribute to the ongoing discussion around creating research infrastructure that allows large-scale evaluation of information access technologies.”

Read the full report.

SEE ALSO

Internship at ABC unlocks insights into news distribution on YouTube

Arjun Srinivas at ABC
Arjun Srinivas at ABC, Brisbane.

Internship at ABC unlocks insights into news distribution on YouTube

Author  Kathy Nickels
Date 2 January 2024

Arjun Srinivas, a PhD student at the QUT node of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), has successfully completed a three-month internship with the Australian Broadcasting Corporation (ABC), marking the first PhD internship of its kind for both ABC and the Queensland University of Technology (QUT).

Working closely with ABC’s YouTube news desk in Brisbane, led by Gary Kemble, Social Media Lead at ABC News, Arjun investigated how a public service media organisation navigates the recommender systems of large commercial platforms, such as YouTube.

Gary said hosting Arjun was a great experience for the ABC YouTube team. 

“We have always had theories about how YouTube and Google algorithms prioritise news content, but Arjun was able to apply his data-crunching experience and previous research to a live news environment, providing us with invaluable insights to do a better job of getting our content to new audiences,” said Gary.

During the internship Arjun was given access to a host of analytics and data assets, including ABC’s YouTube analytics suite, as well as other third-party databases such as Chartbeat and Trisolute. Arjun’s objective during the placement was to triangulate data from multiple sources and derive insights backed by his own analysis to optimise ABC’s news distribution strategy.

Arjun said “considering the placement coincided with two significant events in the news cycle, the Voice referendum, and the Israel-Hamas conflict, I was able to get valuable insights into how a Public Service Broadcaster such as the ABC reports on and strategises its news distribution on themes that can be sensitive and polarizing.”  

“The experience provided a unique institutional perspective on how the ABC uses a platform such as YouTube to disseminate its news.”

“It provided invaluable contextual information to have as I work toward the latter half of my PhD.”

The internship built on Arjun’s ongoing PhD thesis, titled YouTube’s News Conundrum: An examination of ‘authoritative sources’ in YouTube’s recommendations. The thesis explores how news consumption in Australia is mediated by the recommender systems of major digital platforms. The research also investigates how YouTube defines and promotes ‘authoritative’ publishers on the platform and the ensuing impact on the news and information environment.

Arjun’s research is based on crowd-sourced search and recommendation data from YouTube obtained through the Australian Search Experience project at the ADM+S.

SEE ALSO