30 years of the web down under: how Australians made the early internet their ow

Old photo of an old desktop computer in a library.
Blacktown City Librairies, CC BY-SA

30 years of the web down under: how Australians made the early internet their ow

Author  Kieran Hegarty
Date 22 September 2023

The internet is growing old. While the roots of the internet date back to the 1960s, the popular internet – the one that 99% of Australians now use – is a child of the 1990s.

In the space of a decade, the internet moved from a tool used by a handful of researchers to something most Australians used – to talk to friends and family, find out tomorrow’s weather, follow a game, organise a protest, or read the news.

The popular internet grows up

This year marks 30 years since the release of Mosaic, the first browser that integrated text and graphics, helping to popularise the web: the global information network we know today.

Google is now 25, Wikipedia turned 21 last year, and Facebook will soon be 20. These anniversaries were marked with events, feature articles and birthday cakes.

But a local milestone passed with little fanfare: 30 years ago, the first Australian websites started to appear.

The web made the internet intelligible to people without specialist technical knowledge. Hyperlinks made it easy to navigate from page to page and site to site, while the underlying HTML code was relatively easy for newcomers to learn.

Australia gets connected

In late 1992, the first Australian web server was installed. The Bioinformatics Hypermedia Server was set up by David Green at the Australian National University in Canberra, who launched his LIFE website that October. LIFE later claimed to be “Australia’s first information service on the World Wide Web”.

Not that many Australians would have seen it at the time. In the early 1990s, the Australian internet was a university-led research network.

The Australian Academic and Research Network (AARNet) connected to the rest of the world in 1989, through a connection between the University of Hawaii and the University of Melbourne. Within a year, most Australian universities and many research facilities were connected.

The World Wide Web was invented by English computer scientist Tim Berners-Lee and launched in 1991. At the time, it was just one of many communication protocols for creating, sharing and accessing information.

Researchers connected to AARNet were experimenting with tools like Gopher and Internet Relay Chat alongside the web.

Even as a research network, the internet was deeply social. Robert Elz, one of the computer scientists who connected Australia to the internet in 1989, became well-known for his online commentaries on cricket matches. Science fiction fans set up mailing lists.

These uses hinted at what was to come, as everyday Australians got online.

The birth of the public internet

Throughout 1994, AARNet enabled private companies to buy network capacity and connect users outside research contexts. Ownership of the Australian internet was transferred to Telstra in 1995, as private consumers and small businesses began to move online.

With the release of web browsers like Mosaic and Netscape, and the increase in dial-up connections, the number of Australian websites grew rapidly.

At the start of 1995, there were a couple of hundred. When the Australian internet went public just six months later, they numbered in the thousands. By the end of the decade there were hundreds of thousands.

Everyday Australians get connected

As everyday Australians went online, students, activists, artists and fans began to create a diverse array of sites that took advantage of the web’s possibilities.

The “cyberfeminist zine” geekgirl, created by Rosie X. Cross from her home in inner-west Sydney, combined a “Do It Yourself” punk ethos with the global distribution the web made possible. It was part of a diverse and flourishing feminist culture online.

Australia was home to the first fully online doctorate, Simon Pockley’s 1995 PhD thesis Flight of Ducks.

Art students presented poetry as animated gifs, labelling them “cyberpoetry”. Aspiring science fiction writers published multimedia stories on the web.

The Australian internet goes mainstream

Political parties, government and media also moved online.

The Age Online was the first major newspaper website in Australia. Launched in February 1995, the site beat Australia’s own national broadcaster by six months and the New York Times by a year.

Though The Age was first, ABC Online and ninemsn – linked to the Hotmail email service – were the most popular.

During the 1998 federal election, ABC Online saw over two million hits per week. Political parties, candidates and interest groups were quick to establish a web presence, kicking off the era of online political campaigning.

The web also became big business. By the end of the decade, Australia had its own internet entrepreneurs, including a future prime minister. Established media companies dominated web traffic.

Internet fever” was sweeping Australian businesses, leading to an “internet stocks frenzy”. The internet had gone mainstream and the “dot com bubble” was rapidly inflating.

Looking back on the decade the popular internet was born

The public, open, commercial internet is now a few decades old. Given current concerns about the state of the internet – from the power of large digital platforms to the proliferation of disinformation – it might be tempting to look at the 1990s as a “golden age” for the internet.

However, we must resist looking back with rose-coloured glasses. What is needed is critical scrutiny of the conditions that underpinned internet use and attention to how a diversity of people incorporated technology in their lives and helped transformed it in the process. This will help us understand how we got the internet we have and how we might achieve the internet we want.

Understanding online history can be particularly difficult because many sites have long-since disappeared. However, archiving efforts like those of the Internet Archive and the National Library of Australia make it possible to look back and see how much things have changed, what concerns are familiar, and remember the everyday people who helped transform the internet from a niche academic network to a mass medium.The Conversation

Kieran Hegarty, Research Fellow (Automated Decision-Making Systems), RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

ADM+S acknowledged for contribution to Commonwealth Governments’ discussion paper on Safe and Responsible AI

Purple image with text "How do large language models work?"
Pexels/Google DeepMind

ADM+S acknowledged for contribution to Commonwealth Governments’ discussion paper on Safe and Responsible AI

Author  Kathy Nickels
Date 22 September 2023

The rapid rise of generative AI is revolutionising the generation of content, and around the world people are looking to governments to lead the conversation on its regulation.

How should the Australian federal government take action to promote artificial intelligence and automated decision-making that is safe and responsible?

The Minister for Industry and Science Hon Ed Husic has highlighted that the government is committed to a thoughtful approach to the challenges of generative AI while ensuring they also maximise the benefits.

“While AI has been with us for a while and contains great benefits for both individuals and organisations, it’s important we get the balance right on its introduction,” said Hon Ed Husic.

In June 2023, the Australian Government called for public submissions for their discussion paper Supporting Responsible AI

The discussion paper builds on the recent Rapid Research Report on Generative AI delivered by the government’s National Science and Technology Council.

Hon Ed Husic provided some details about submissions received in a recent article published in the Australian Financial Review.

He said that more than 500 submissions were received, reflecting the interest and concern people have around regulations of AI.

“Nearly every submission agreed that getting the guardrails right was about more than just creating new laws. It also meant investing in capability building and education, creating standards, and in co-ordinating and upskilling existing regulators and policymakers,” said Hon Ed Husic.

However there was a divided response on whether Australia explicitly needed new laws to address the growth and management on artificial intelligence. 

“Most of the submissions from the technology companies said updating existing laws would be more effective than introducing new laws specifically for AI developers and users.

“They pointed out that there were many laws that already influenced AI development. But laws will need to be updated.”

Hon Ed Husic highlighted the submission made by The Centre for Automated Decision Making and Society and the existing legal frameworks identified in this report that need updating to address AI’s challenges.

“These included administrative law, copyright law, privacy, political advertising and campaign laws, and rules for financial advisers, medicine and lawyers.

“Consumer and human rights groups, on the other hand, and members of the public, supported explicit new AI laws. The need for watermarking or labelling of AI-generated material was identified by many as a new and urgent issue.

“And there was a real concern that an explosion of cheap AI content would see people spending more time battling information overload, cancelling any productivity gains.

“These are all real and serious concerns. Ones that, as a government, we will grapple with over the next while.

“Getting the balance right will be important. Important in allowing AI to enhance our economic prospects and national wellbeing and protecting Australians,” said Hon Ed Husic.

Read the full ADM+S Submission to the Safe and Responsible AI in Australia Discussion Paper

SEE ALSO

Australian Ad Observatory features on Hungry Talks

aia Guildford-Carey talking to Prof Christine Parker
Left to right: Jaia Guildford-Carey and Prof Christine Parker

Australian Ad Observatory features on Hungry Talks

Author  Kathy Nickels
Date 21 September 2023

Since 2021, the Australian Ad Observatory has collected over 300,000 unique ads. The Observatory represents the largest library of ads collected from the Australian general public. 

Researchers at the ARC Centre of Excellence for Automated Decision-Making and Society have been analysing these ads to identify problematic advertising, such as green claims and unhealthy food advertising as well as alcohol and gambling advertising.

Professor Christine Parker from the ARC Centre of Excellence for Automated Decision-Making and Society and Law Professor at the University of Melbourne spoke about the Australian Ad Observatory with host Jaia Guildford-Carey on a recent episode of Hungry Talks.

Professor Parker said that they are interested in what advertising people are seeing on Facebook because a lot of advertising is personalised and targeted. 

“The various social media platforms, what they’re doing is they’re collecting lots of data about you, and people like you, that they think might be relevant to you. And that is governing what is coming through your feed.

“In traditional legacy media, advertising appeared on billboards, magazines and TV where everybody could see the same ads.

“However when you are looking at social media we don’t really know what different people are getting and whether some people are getting lots of one type of ad and others are getting a lot of another kind of ad,” said Professor Parker.

The Australian Ad Observatory is seeking participants to donate their Facebook Ads to this valuable research. Visit the Australian Ad Observatory website to find out more about the project and how to get involved.

View the Hungry Talks Episode 7: Sustainability and Ethics (Prof Christine Parker talks to Jaia Guildford-Carey from 50:55)

SEE ALSO

Information Retrieval on Country

Indigenous Artwork depicting Information Retrieval on Country
"Information Retrieval on Country" - Treahna Hamm

Commissioned by ADM+S in July 2023. The artwork has been used for an alumni event celebrating 25+ years of collaboration between RMIT and the University of Melbourne in the research field of Information Retrieval, organised by Damiano Spina (RMIT) and Lida Rashidi (University of Melbourne). 

Artist Statement

As an artist, my work explores the profound connection between living and sharing on Aboriginal land, intertwining it with the retrieval of valuable information. Through my art, I aim to honour and celebrate the wisdom of Elders, who hold a wealth of cultural knowledge, and the sacredness of the land itself. Drawing from the rich heritage and stories of Indigenous communities, I seek to create a visual narrative that highlights the significance of this symbiotic relationship between people and place. My art becomes a vessel through which the past, present, and future are interwoven, fostering a deeper understanding of the interconnectedness between humans, the land, and the wealth of data embedded in this ancestral bond.

Incorporating a search engine into my artistic process, I embark on a unique journey of creativity that melds traditional storytelling and modern technology. Through data retrieval and analysis, I collect relevant information about the history, culture, and significance of the Aboriginal land. This data-driven approach allows me to extract meaningful patterns and insights, which serve as the foundation for my artistic expressions.

The search engine acts as a guiding force, influencing the composition, colours, and elements within my artwork. By blending the wisdom of Elders’ narratives with the data-driven revelations, I strive to create a harmonious fusion of the past and present. The algorithmic input serves as a channel through which I can pay homage to the deep-rooted traditions while interpreting them in a contemporary context.

As I navigate the artistic process, the search engine acts as both collaborator and curator, helping me select the most relevant information and translating it into visual representations. It enriches my artwork by infusing it with layers of significance, inviting viewers to engage with the cultural heritage of Aboriginal land in a novel and thought-provoking manner.

Ultimately, my art with a search algorithm seeks to bridge the gap between heritage and innovation, fostering a profound appreciation for the timeless connection between Elders, land, and the wealth of knowledge embedded within their intertwined stories.

The blue islands in the artwork are the algorithms floating above the land. The Elders/Ancestors are the symbolic faces which I hope you can see.

Biography

Dr Treahna Hamm (Firebrace) has been a practising Artist nationally and internationally and holds a Doctorate of Philosophy (School of Education) at RMIT University – graduating in 2008. Treahna’s career began at Wangaratta TAFE in 1982 before completing 5 degrees in Visual Arts, Teaching and Education.

Her artworks are composed with multi-layers of stories garnered from her experiences of living by the Murray River in Northern Victoria and southern NSW. This, along with contemporary practices including printmaking, painting, photography, public art, sculpture, possum skin cloaks, murals and highly individual fibre weaving. She works with abstract forms as well as traditional designs from her Indigenous heritage.

Treahna has exhibited in New York, South Korea, Hawaii, New Zealand, Paris, Belgium, Germany and the United States. Her vibrant works are in national and international collections.

Her dedication towards rejuvenating, revitalising and retelling oral history through her own life experiences has been a foundation to the collective experience as a whole in Victoria and southern NSW.

SEE ALSO

Research supports call for improved safety of dating apps

Dating apps on mobile phone screen

Research supports call for improved safety of dating apps

Author  Kathy Nickels
Date 19 September 2023

Online dating apps could be forced to make changes through government legislation unless they lift their standards and improve safety for users.

Communications Minister Michelle Rowland announced that popular dating companies such as Tinder, Bumble and Hinge have until June 30 to develop a voluntary code of conduct that addresses user safety concerns.

The code could include improving engagement with law enforcement, supporting at-risk users, improving safety policies and practices, and providing greater transparency about harms, she said.
But, Rowland added, if the safety standards are not sufficiently improved, the government will use regulation and legislation to force change.

The government is responding to Australian Institute of Criminology research published last year that found three-in-four users of dating apps or websites had experienced some form of sexual violence through these platforms in the five years through 2021.

“Online dating is actually the most popular way for Australians to meet new people and to form new relationships,” Rowland said.

“The government is concerned about rates of sexual harassment, abusive and threatening language, unsolicited sexual images and violence facilitated by these platforms,” she added.

Earlier this year, the federal government convened a national rountable that brought representatives from the sector face-to-face with experts, advocates and law enforcement agencies to discuss the situation playing out online.

ARC Centre of Excellence for Automated Decision-Making researcher Professor Kath Albury from Swinburne University studies behaviours on online dating and social media platforms, and said users reported a wide variety of problematic experiences.

“The harms range from receiving unwanted contact or images — unwanted texts and images that maybe are using slurs or sexually explicit when a person hasn’t consented to receiving sexually explicit communication,” Professor Albury said.

“And they range from that kind of day-to-day, the equivalent of flashing in the offline environment or on-street harassment — someone yelling out a comment to you, that’s what it feels like with that kind of contact — to, at times, racist or discriminatory language, transphobic language, stalking in some cases, and in other cases quite threatening behaviours — so moving from the dating apps on to other social platforms to stalk, or offline stalking or indeed physical harassment.”

Professor Albury said the handling of complaints was a key area where users wanted to see improvement.
“There could be clearer communication around what happens when you report an unwanted contact or a questionable or threatening contact, and what the app does with that information,” Professor Albury said.

“There could also be a clearer sense of how fast you can expect to get feedback or a very personal response from the app if you report an issue.

“One of the things that dating app users are concerned about is the sense that complaints go into the void, or there’s a response that feels automated, or not personally responsive in a time when they’re feeling quite unsafe or distressed,” Albury said.

Read more

Watch ABC interview with Professor Kath Albury

SEE ALSO

ADM+S Research Fellow invited to deliver the 2023 Hancock Lecture

2023 Hancock Lecture

ADM+S Research Fellow invited to deliver the 2023 Hancock Lecture

Author Kathy Nickels
Date 13 September 2023

ADM+S Research Fellow Dr Thao Phan from Monash University will be the featured speaker for the Australian Academy of Humanities 2023 Hancock Lecture. 

Each year, the Australian Academy of Humanities invites an outstanding scholar at the earlier stages of their careers to talk at the Hancock Lecture about their work in an accessible way for the everyday Australian.

In the talk ‘Artificial figures: gender-in-the-making in algorithmic culture’, Dr Phan will explore how, in the making of AI systems and technologies, gender too is being made.

This lecture centres on questions of power, politics, and identity in today’s algorithmic culture. It asks: how are more-than-human systems reconfiguring the terms of all-too-human categories like gender, race, and class? How does gender influence how new technologies are made intelligible, mediating the expectations of a user, consumer, or audience? 

And finally, how might these encounters with AI reveal the artifice of gender as a system that is tied to the realm of the artificial as much as it is to nature and what we call ‘the natural’?

The Hancock Lecture will be hosted 4pm, Thursday 16 November 2023 at the Kaleide RMIT Union Theatre, Melbourne. Visit the Hancock Lecture webpage to register for this event.

The Hancock Lecture is being hosted as part of the Australian Academy of the Humanities 54th Annual Academy Symposium. Visit the Symposia webpage for further information and registration for this event.

SEE ALSO

Google turns 25: The search engine revolutionised how we access information, but will it survive AI?

Text "Google"
Flickr/sergio m mahugo, Edited by The Conversation CC BY-NC-SA

Google turns 25: The search engine revolutionised how we access information, but will it survive AI?

Authors  Mark Sanderson, Julian Thomas, Kieran Hegarty & Lisa Given
Date 4 September 2023

Today marks an important milestone in the history of the internet: Google’s 25th birthday. With billions of search queries submitted each day, it’s difficult to remember how we ever lived without the search engine.

What was it about Google that led it to revolutionise information access? And will artificial intelligence (AI) make it obsolete, or enhance it?

Let’s look at how our access to information has changed through the decades – and where it might lead as advanced AI and Google Search become increasingly entwined.

Google’s homepage in 1998.
Brent Payne/Flickr, CC BY-SA

1950s: public libraries as community hubs

In the years following the second world war, it became generally accepted that a successful post-war city was one that could provide civic capabilities – and that included open access to information.

So in the 1950s information in Western countries was primarily provided by local libraries. Librarians themselves were a kind of “human search engine”. They answered phone queries from businesses and responded to letters – helping people find information quickly and accurately.

Libraries were more than just a place to borrow books. They were where parents went to look for health information, where tourists requested travel tips, and where businesses sought marketing advice.

The searching was free, but required librarians’ support, as well as a significant amount of labour and catalogue-driven processes. Questions we can now solve in minutes took hours, days or even weeks to answer.

1990s: the rise of paid search services

By the 1990s, libraries had expanded to include personal computers and online access to information services. Commercial search companies thrived as libraries could access information through expensive subscription services.

These systems were so complex that only trained specialists could search, with consumers paying for results. Dialog, developed at Lockheed Martin in the 1960s, remains one of the best examples. Today it claims to provide its customers access “to over 1.7 billion records across more than 140 databases of peer-reviewed literature”.

This photo from 1979 shows librarians at the terminals of online retrieval system Dialog.
U.S. National Archives

Another commercial search system, The Financial Times’ FT PROFILE, enabled access to articles in every UK broadsheet newspaper over a five-year period.

But searching with it wasn’t simple. Users had to remember typed commands to select a collection, using specific words to reduce the list of documents returned. Articles were ordered by date, leaving the reader to scan for the most relevant items.

FT PROFILE made valuable information rapidly accessible to people outside business circles, but at a high price. In the 1990s access cost £1.60 a minute – the equivalent of £4.65 (or A$9.00) today.

The rise of Google

Following the world wide web’s launch in 1993, the number of websites grew exponentially.

Libraries provided public web access, and services such as the State Library of Victoria’s Vicnet offered low-cost access for organisations. Librarians taught users to find information online and build websites. However, the complex search systems struggled with exploding volumes of content and high numbers of new users.

In 1994, the book Managing Gigabytes, penned by three New Zealand computer scientists, presented solutions for this problem. Since the 1950s researchers had imagined a search engine that was fast, accessible to all, and which sorted documents by relevance.

In the 1990s, a Silicon Valley startup began to apply this knowledge – Larry Page and Sergey Brin used the principles in Managing Gigabytes to design Google’s iconic architecture.

After launching on September 4 1998, the Google revolution was in motion. People loved the simplicity of the search box, as well as a novel presentation of results that summarised how the retrieved pages matched the query.

In terms of functionality, Google Search was effective for a few reasons. It used the innovative approach of delivering results by counting web links in a page (a process called PageRank). But more importantly, its algorithm was very sophisticated; it not only matched search queries with the text within a page, but also with other text linking to that page (this was called anchor text).

Google’s popularity quickly surpassed competitors such as AltaVista and Yahoo Search. With more than 85% of the market share today, it remains the most popular search engine.

As the web expanded, however, access costs were contested.

Although consumers now search Google for free, payment is required to download certain articles and books. Many consumers still rely on libraries – while libraries themselves struggle with the rising costs of purchasing material to provide to the public for free.

What will the next 25 years bring?

Google has expanded far beyond Search. Gmail, Google Drive, Google Calendar, Pixel devices and other services show Google’s reach is vast.

With the introduction of AI tools, including Google’s Bard and the recently announced Gemini (a direct competitor to ChatGPT), Google is set to revolutionise search once again.

As Google continues to roll generative AI capabilities into Search, it will become common to read a quick information summary at the top of the results page, rather than dig for information yourself. A key challenge will be ensuring people don’t become complacent to the point that they blindly trust the generated outputs.

Fact-checking against original sources will remain as important as ever. After all, we have seen generative AI tools such as ChatGPT make headlines due to “hallucinations” and misinformation.

If inaccurate or incomplete search summaries aren’t revised, or are further paraphrased and presented without source material, the misinformation problem will only get worse.

Moreover, even if AI tools revolutionise search, they may fail to revolutionise access. As the AI industry grows, we’re seeing a shift towards content only being accessible for a fee, or through paid subscriptions.

The rise of AI provides an opportunity to revisit the tensions between public access and increasingly powerful commercial entities.The Conversation

Mark Sanderson, Professor of Information Retrieval, RMIT University; Julian Thomas, Distinguished Professor of Media and Communications; Director, ARC Centre of Excellence for Automated Decision-Making and Society, RMIT University; Kieran Hegarty, Research Fellow (Automated Decision-Making Systems), RMIT University, and Lisa M. Given, Professor of Information Sciences & Director, Social Change Enabling Impact Platform, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Digital Energy Futures documentary wins Social Impact Award at SCINEMA International Film Festival

Digital Energy Futures Documentary Title Screen

Digital Energy Futures documentary wins Social Impact Award at SCINEMA International Film Festival

Author  Kathy Nickels
Date 28 August 2023

Digital Energy Futures documentary directed by ARC Centre of Excellence for Automated Decision-Making researcher Prof Sarah Pink has received the Social Impact Award from the Royal Institution of Australia SCINEMA International Film Festival.

The documentary was one of eight films selected to feature in the 2023 SCINEMA film festival, one of the largest science film festivals in the southern hemisphere. 

This film explores how people living in Australia see their future lives in a country where increasingly extreme weather, concerns about public health, growing levels of technological automation are creating uncertainty about demand for electricity in the future.

The documentary was directed by Prof Sarah Pink, and created by filmaker Jeni Lee from the ADM+S Centre at Monash University alongside researcher Dr Kari Dahlgren, from the Emerging Technologies Research Lab at Monash University.

The filmmakers follow the everyday lives of five households to ask how they are inventing their own ways to live with emerging technologies, imagining and planning for their own futures in ways that might complicate the ambitions of industry and policy makers.

SCINEMA runs from August 1 to August 31 every year. To be part of the festival and watch the films for free, register at the SCINEMA website.

SEE ALSO

More-than-Human wellbeing mini show joins policy impact exhibition

Vaughan O'Connor with artefacts from te More-than-Human mini show
Dr Vaughan Wozniak-O'Connor presenting the More-than-Human mini show at the James Martin Institute for Public Policy Summit.

More-than-Human wellbeing mini show joins policy impact exhibition

Author  Kathy Nickels
Date 9 August 2023

The More-than-Human wellbeing mini show curated by ARC Centre of Excellence for Automated Decision-Making and Society researchers, was recently featured as part of the Policy Impact Exhibition at the James Martin Institute for Public Policy Summit 2023.

The Summit brought together government policymakers, researchers, other experts, and interested members of the public, to discuss, reflect and explore ways to improve outcomes in key policy areas.

As part of the summit, the Policy Impact Exhibition showcased an exclusive insight into how leading public policy institutes design and deliver high-impact projects with far-reaching benefits for people across the country.

Curated by ADM+S Chief Investigator and leader of the Vitalities Lab at UNSW, Prof Deborah Lupton along with ADM+S Researchers from UNSW, Dr Vaughan Wozniak-O’Connor, Dr Ash Watson and Dr Megan Rose, the exhibition uses multimodal arts-based and multisensory methods – both digital and non-digital – to highlight ways of knowing and being within and beyond the world of self-tracking apps, electronic medical records, and smart devices for documenting illnesses and promoting health and wellbeing.

“This Public Policy Summit offered our exhibition team a wonderful opportunity to talk to policymakers, the public and other stakeholders about how we are using public exhibitions as a powerful way of engaging with the community” said Professor Lupton.

The main exhibition is open to the public until Friday 18 August 2023, UNSW Main Library Level 5. Admission is free and open to all ages. Further details, a link to the short film made for the exhibition and downloadable resources are available at the exhibition website: https://dlupton.com/

SEE ALSO

ADM+S Higher Degree Research Students build global connections at Oxford Internet Institute

Pictured from left: Anand Badolo, Dominique Carlon & Kunal Chand.
Pictured from left: Anand Badolo, Dominique Carlon & Kunal Chand.

ADM+S Higher Degree Research Students build global connections at Oxford Internet Institute

Authors  Anand Badola, Dominique Carlon & Kunal Chand
Date 4 August 2023

Higher Degree Research Students Anand Badola, Dominique Carlon and Kunal Chand from the ARC Centre of Excellence for Automated Decision-Making and Society at QUT, recently attended the Oxford Internet Institute Summer Doctoral Programme (OII-SDP) where they participated in 2 weeks of classes and workshops, received feedback on their research projects, and met 28 other participants from around the world.

Reflecting on their experiences, Anand, Dominique and Kunal said, “The OII-SDP was a truly brilliant experience, opening our eyes to broader academic perspectives and setting the foundations to establish long lasting connections and friendships with scholars from across the world.

“There are too many experiences to mention, however some highlights included conducting a networking analysis with Bernie Hogan, a practical ethnographic class on the streets of Oxford with Adam Badger, learning creative ways of disseminating research with Kathyrn Eccles, examining the parallels between UFOs and Bayesian statistics with Joss Wright and using museum artefacts as a form of research ideation with Gemma Newlands.

“In addition to insightful classes on AI and social theory and scrutinising what topics are excluded from academic research, we also had wonderful experiences such as punting (where we were boarded by a pirate duck), visited Bletchley Park and the Museum of computing, and stayed at Christchurch college.

“The greatest highlight by far was forming friendships and collaborations from a network of brilliant and inspiring minds from across the world.”

The students extend their thanks to the OII, particularly Gemma Newlands for hosting a wonderful programme, and to the Digital Media Research Centre at QUT and the ADM+S for supporting their participation.

SEE ALSO

ADM+S Research Fellow Yong-Bin Kang recognised at the prestigious VIC iAwards 2023

VIC iWards

ADM+S Research Fellow Yong-Bin Kang recognised at the prestigious VIC iAwards 2023

Authors Anthony McCosker & Yong-Bin Kang
Date 4 August 2023

ADM+S Research Fellow Yong-Bin Kang has achieved recognition at the prestigious VIC iAwards 2023. As an integral part of two outstanding teams at Swinburne University, Yong-Bin has garnered the VIC iAwards Winner in the ‘Government & Public Sector Solution’ category and the VIC iAwards Merit Recipient in the ‘Technology Platform Solution’ category.

The iAwards ‘unearths, recognises and rewards excellence in Australian innovation that is making a difference and has the potential to create positive change for the community – whether this is at home, in the office or on a global scale’.

The VIC iAwards Winner was earned for the innovative AI-powered 5G IoT solution, designed to transform roadside asset monitoring using SmartGarbos. By leveraging cutting-edge technologies like smart IoT devices and edge computers, this solution addresses the critical issue of roadside asset maintenance. Developed in collaboration with Swinburne University of Technology, Brimbank City Council, Optus, and Amazon Web Services (AWS), the project has brought significant advancements to the municipality of Brimbank City Council.

Building on this work, Yong-Bin led an ADM+S project with Professor Anthony McCosker, Chief Investigator from the ADM+S at Swinburne University, to develop an AI governance framework and action plan for Brimbank City Council. The final report for the project will be released soon and will help guide other Councils seeking to deploy AI technologies responsibly.

The VIC iAwards Merit Recipient was awarded to Vidverity’s state-of-the-art online teaching platform. This recognition highlights the fruitful collaboration between Swinburne researchers and software engineers in Natural Language Processing and Artificial Intelligence, and Vidversity’s exceptional expertise in the education domain. Together, they have crafted an innovative and highly effective modern learning platform.

Winners of the iAwards National finals will be announced in Adelaide at the end of August 2023.

SEE ALSO

What is ‘AI alignment’? Silicon Valley’s favourite way to think about AI safety misses the real issues

Electronics board
Laura Ockel/Unsplash

What is ‘AI alignment’? Silicon Valley’s favourite way to think about AI safety misses the real issues

Author  Aaron Snoswell
Date 12 July 2023

As increasingly capable artificial intelligence (AI) systems become widespread, the question of the risks they may pose has taken on new urgency. Governments, researchers and developers have highlighted AI safety.

The EU is moving on AI regulation, the UK is convening an AI safety summit, and Australia is seeking input on supporting safe and responsible AI.

The current wave of interest is an opportunity to address concrete AI safety issues like bias, misuse and labour exploitation. But many in Silicon Valley view safety through the speculative lens of “AI alignment”, which misses out on the very real harms current AI systems can do to society – and the pragmatic ways we can address them.

What is ‘AI alignment’?

AI alignment” is about trying to make sure the behaviour of AI systems matches what we want and what we expect. Alignment research tends to focus on hypothetical future AI systems, more advanced than today’s technology.

It’s a challenging problem because it’s hard to predict how technology will develop, and also because humans aren’t very good at knowing what we want – or agreeing about it.

Nevertheless, there is no shortage of alignment research. There are a host of technical and philosophical proposals with esoteric names such as “Cooperative Inverse Reinforcement Learning” and “Iterated Amplification”.

There are two broad schools of thought. In “top-down” alignment, designers explicitly specify the values and ethical principles for AI to follow (think Asimov’s three laws of robotics), while “bottom-up” efforts try to reverse-engineer human values from data, then build AI systems aligned with those values. There are, of course, difficulties in defining “human values”, deciding who chooses which values are important, and determining what happens when humans disagree.

OpenAI, the company behind the ChatGPT chatbot and the DALL-E image generator among other products, recently outlined its plans for “superalignment”. This plan aims to sidestep tricky questions and align a future superintelligent AI by first building a merely human-level AI to help out with alignment research.

But to do this they must first align the alignment-research AI…

Why is alignment supposed to be so important?

Advocates of the alignment approach to AI safety say failing to “solve” AI alignment could lead to huge risks, up to and including the extinction of humanity.

Belief in these risks largely springs from the idea that “Artificial General Intelligence” (AGI) – roughly speaking, an AI system that can do anything a human can – could be developed in the near future, and could then keep improving itself without human input. In this narrative, the super-intelligent AI might then annihilate the human race, either intentionally or as a side-effect of some other project.

In much the same way the mere possibility of heaven and hell was enough to convince the philosopher Blaise Pascal to believe in God, the possibility of future super-AGI is enough to convince some groups we should devote all our efforts to “solving” AI alignment.

There are many philosophical pitfalls with this kind of reasoning. It is also very difficult to make predictions about technology.

Even leaving those concerns aside, alignment (let alone “superalignment”) is a limited and inadequate way to think about safety and AI systems.

Three problems with AI alignment

First, the concept of “alignment” is not well defined. Alignment research typically aims at vague objectives like building “provably beneficial” systems, or “preventing human extinction”.

But these goals are quite narrow. A super-intelligent AI could meet them and still do immense harm.

More importantly, AI safety is about more than just machines and software. Like all technology, AI is both technical and social.

Making safe AI will involve addressing a whole range of issues including the political economy of AI development, exploitative labour practices, problems with misappropriated data, and ecological impacts. We also need to be honest about the likely uses of advanced AI (such as pervasive authoritarian surveillance and social manipulation) and who will benefit along the way (entrenched technology companies).

Finally, treating AI alignment as a technical problem puts power in the wrong place. Technologists shouldn’t be the ones deciding what risks and which values count.

The rules governing AI systems should be determined by public debate and democratic institutions.

OpenAI is making some efforts in this regard, such as consulting with users in different fields of work during the design of ChatGPT. However, we should be wary of efforts to “solve” AI safety by merely gathering feedback from a broader pool of people, without allowing space to address bigger questions.

Another problem is a lack of diversity – ideological and demographic – among alignment researchers. Many have ties to Silicon Valley groups such as effective altruists and rationalists, and there is a lack of representation from women and other marginalised people groups who have historically been the drivers of progress in understanding the harm technology can do.

If not alignment, then what?

The impacts of technology on society can’t be addressed using technology alone.

The idea of “AI alignment” positions AI companies as guardians protecting users from rogue AI, rather than the developers of AI systems that may well perpetrate harms. While safe AI is certainly a good objective, approaching this by narrowly focusing on “alignment” ignores too many pressing and potential harms.

So what is a better way to think about AI safety? As a social and technical problem to be addressed first of all by acknowledging and addressing existing harms.

This isn’t to say that alignment research won’t be useful, but the framing isn’t helpful. And hare-brained schemes like OpenAI’s “superalignment” amount to kicking the meta-ethical can one block down the road, and hoping we don’t trip over it later on.The Conversation

Dr Aaron Snoswell is a computer scientist and research fellow in AI accountability at the ADM+S Centre, based at QUT in Brisbane, Australia.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Research informs government debate on targeted advertising of harmful products

Alcohol bottle in small trolley in front of computer screen with credit card options

Research informs government debate on targeted advertising of harmful products

Author  Kathy Nickels
Date 2 June 2023

The ADM+S Centre has welcomed the call by crossbench MPs for the Federal Government to use independent evidence on targeted advertising to inform debate about pervasive marketing of harmful products as a matter of public importance.

MPs Dr Sophie Scamps, Ms Zali Steggall, Ms Allegra Spender, Mr Andrew Wilkie, Ms Kate Chaney, Ms Kylea Tink, Dr Monique Ryan, Ms Zoe Daniel, Dr Helen Haines and Ms Rebekha Sharkie on Wednesday called on the Government to use the reform of the Privacy Act to close loopholes that allow companies to “saturate broadcast and social media with harmful product marketing”.

Nicholas Carah, Associate Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) and Associate Professor at the University of Queensland, said that the crossbench MPs represented widespread community concern about the use of data to target advertising of harmful products to children and vulnerable people. 

Research from the University of Queensland and Monash University in partnership with VicHealth has shown that Facebook and Instagram collect hundreds of data points on young people aged 16 to 25 years, with 42 per cent assigned terms like ‘alcohol’, ‘alcoholic drinks’, ‘bars’, ‘bartender’ and ‘beer’ as advertising interests, including children.

The Foundation for Alcohol Research and Education, FARE, has discovered dozens of breaches of the advertising code on the Facebook pages of popular alcohol brands. FARE found content that contained images of under-25-year-olds drinking, celebrated binge drinking and implied that alcohol is connected to social success. 

Research at the ADM+S Centre is investigating targeted advertising of harmful products including alcohol, gambling and unhealthy foods through the Australian Ad Observatory project. 

In March 2023, the research from the Australian Ad Observatory found gambling ads that were illegally targeting Australians on Facebook

The Australian Ad Observatory conducts independent research into the role that algorithmically targeted advertising plays in society. 

The Australian Ad Observatory takes a citizen science approach to investigating how Facebook ads target Australian users. It relies on the general public donating data through a plugin available for desktop versions of leading web browsers. To join the project visit www.admscentre.org.au/adobservatory

Associate Professor Carah said “The Australian Ad Observatory project at the ADM+S aims to improve the observability of targeted online advertising in Australia, and forms part of broader efforts to help improve transparency and accountability in the digital platform environment.”

Recognising the importance of research to support government decisions and policy change, the ADM+S Centre has proposed the Australian Social Data Observatory (ASDO). Working with a broad consortium of colleagues, this landmark National Research Infrastructure would provide the tools and capabilities to gather and analyse online user experience data, algorithms and interactions, making social data dramatically more useable for Australian researchers—across universities, government, industry and civil society.

Daniel Angus, Chief Investigator and Chair of Infrastructure at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) and Professor at Queensland University of Technology, said that the highly ephemeral and personalised nature of the online user experience points to the need for increased investment in national-scale research support infrastructure.

Professor Angus says “To understand Australian’s everyday experience of digital platforms we need a far more comprehensive approach to data collection than what is currently available. 

Good policy development for digital platforms relies on not just knowing what is being shared online, but also knowing who is seeing what, and how these targeting and curatorial decisions are being made through the platforms’ proprietary recommendation systems.” 

SEE ALSO

Dr Ariadna Matamoros-Fernández elected member of the AoIR executive committee

Ariadna Matamoros-Fernández

Dr Ariadna Matamoros-Fernández elected member of the AoIR executive committee

Author  Kathy Nickels
Date 5 May 2023

ADM+S Associate Investigator Dr Ariadna Matamoros-Fernández from QUT has been elected to join the 2023-25 Association of Internet Researchers (AoIR) executive committee. 

The AoiR is an academic association dedicated to the advancement of the cross-disciplinary field of Internet studies. 

It is a member-based support network that promotes critical and scholarly Internet research independent from traditional disciplines and exists across academic borders.

The AoIR Committee includes: 

  • President (elected as Vice President in the 2021 Election): Nicholas John, The Hebrew University of Jerusalem, Jerusalem, Israel
  • Vice President (becomes President in two years): Sarah Roberts, UCLA, USA
  • Immediate Past President (elected as Vice President in the 2019 Election): Tama Leaver, Curtain University, Perth, WA Australia
  • Secretary: Gabriel Pereira, London School of Economics and Political Science, UK
  • Treasurer: Sam Srauy, Oakland University Michigan, USA
  • Graduate Student Representative: Tom Divon, The Hebrew University of Jerusalem, Jerusalem, Israel
  • Open Seats: 1) Sophie Bishop, Sheffield University, UK; 2) Ariadna Matamoros-Fernandez, Queensland University of Technology, Australia; 3) Job Mwaura, University of Cape Town, South Africa

The new Executive Committee will officially take office in October at the Association General Meeting. 

SEE ALSO

2023 ADM+S Symposium to highlight latest research in AI and automation in news, media and entertainment

2023 ADM+S Symposium: Automated News & Media

2023 ADM+S Symposium to highlight latest research in AI and automation in news, media and entertainment

Author  Kathy Nickels
Date 1 May 2023

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) will share their latest findings and map out future agendas for research on AI and automation in news, media and entertainment at the annual ADM+S Symposium. 

The symposium will be hosted at the University of Sydney Law School,Camperdown, NSW from 13 to 14 July with the option to join online.

The symposium will feature keynotes, panel discussions and conversations with leading international and national researchers, industry representatives, advocacy groups and policymakers.

Topics will include:

  • Generative AI in news production and online advertising
  • Automation in the Creative Arts
  • Automated fact-checking
  • + more

The symposium will feature keynote speakers from across the globe including:  

  • Professor Bronwyn Carlson, Macquarie University, Australia
  • Professor Wiebke Loosen, Hans Bredow Institut, Germany
  • Assistant Professor Nick Seaver, Tufts University, USA
  • Tarunima Prabhakar, Tattle, India

The symposium will also showcase the latest outcomes of major ADM+S projects including the Australian Ad Observatory, Mapping the Digital Gap and Australian Search Experience.

At 6:30pm on Thursday 13 July a public panel on Internet Futures will address the rapid developments in internet infrastructures and AI technologies, and discuss questions around the dynamic possibilities and uncertain pathways these developments present for internet governance, social media platforms, media industries, and digital inclusion.

Dr James Meese, co-leader of the News and Media Focus Area at the ADM+S, said “With AI tools becoming increasingly accessible to the public, and the media sector at the centre of new developments in automation, I can’t think of a better time to hold this event.”

“We look forward to sharing outcomes from our projects, learning from our invited speakers and collectively working to set an agenda for the socially responsible use of automated decision-making across our intensively mediated society,” he said.

The 2023 ADM+S Symposium: Automated News & Media provides a unique opportunity for attendees to explore the latest research, exchange ideas and knowledge, and connect with like-minded professionals.

For more information and to register, please visit the 2023 ADM+S Symposium website.

In addition to the ADM+S Symposium, there will be a number of satellite events hosted 10 to 12 July. View satellite events.

This event has now passed. Click below to catch up on the sessions.

SEE ALSO

Netflix and other streaming giants pay to get branded buttons on your remote control. Local TV services can’t afford to keep up

Netflix button on TV remote

Netflix and other streaming giants pay to get branded buttons on your remote control. Local TV services can’t afford to keep up

Author  Ramon Lobato, Alexa Scarlata & Bruno Schivinski
Date 26 April 2023

If you’ve bought a new smart TV in the past few years, you’ll likely have a remote with pre-programmed app shortcuts, such as the now ubiquitous “Netflix button”.

These branded buttons offer one-click access to select apps.

The choice and design of shortcuts vary between brands.

Samsung remotes have a monochrome design with small buttons for Netflix, Disney+, Prime Video and Samsung TV Plus. Hisense remotes are overflowing with 12 big, colourful buttons advertising everything from Stan and Kayo to NBA League Pass and Kidoodle.

The remote is now a thoroughly commercial space.

Behind these buttons there is a lucrative business model. Content providers purchase remote shortcut buttons as part of negotiated deals with manufacturers.

For streaming services, presence on the remote control provides branding opportunities and a convenient entry point into their app. For television manufacturers, it provides a new revenue stream.

But the TV user must tolerate unwanted advertising every time they pick up their remote. And smaller apps – including many Australian apps – are disadvantaged because they are typically priced out of the market.

Shortcut buttons on Samsung, LG, Sony, Hisense and TCL remotes.
Author provided

Who’s on your remote?

Our research examined remotes for 2022-model smart TVs from the five major television brands sold in Australia: Samsung, LG, Sony, Hisense and TCL.

We found all major-brand TVs sold in Australia have dedicated buttons for Netflix and Prime Video. Most also have Disney+ and YouTube buttons.

However, local services are harder to find on remotes. A few brands have Stan and Kayo buttons, but only Hisense has an ABC iview button. None have buttons for SBS On Demand, 7Plus, 9Now or 10Play.


For full data see RMIT Smart TVs and Local Content Prominence report

Remote shortcuts are part of a larger battle for brand visibility in smart TV interfaces.

Since 2019, regulators in Europe and the United Kingdom have been investigating the smart TV market. They have uncovered some questionable business arrangements between manufacturers, platforms and apps.

Following this lead, the Australian government is conducting its own investigations and developing a new framework to ensure local services can be easily found on smart TVs and streaming devices.

One proposal under consideration is a “must-carry” or “must-promote” framework that would require local apps to receive equal (or even special) treatment within the home screens of smart TVs. This option is enthusiastically supported by the broadcasters’ lobby group, Free TV Australia.

Free TV is also arguing for a mandatory “Free TV” button on all remotes that would bring the user to a landing page with all of the local free-to-air video-on-demand apps: ABC iview, SBS On Demand, 7Plus, 9Now and 10Play.

But what do we want on our remotes?

We asked more than 1,000 Australian smart TV users which four shortcut buttons they would include if they could design their own remote control. We asked
them to select options from a long list of locally available apps, or write their own choices, up to four.

The clear favourite was Netflix (selected by 75% of respondents), followed by YouTube (56%), Disney+ (33%), ABC iview (28%), Prime Video (28%) and SBS On Demand (26%).

All other services were selected by fewer than a quarter of respondents.

Buttons which received votes from more than 25% of participants. Participants could choose up to four buttons.
Source: RMIT Smart TVs and Local Content Prominence report

SBS On Demand and ABC iview are the only services in the top-ranked apps list not to routinely receive their own remote control buttons. So, based on what we found, there’s a solid policy rationale for mandating some kind of presence on our remotes for public-service broadcasters.

But it is also clear no-one wants their Netflix button messed with. So government needs to tread carefully to ensure user preferences are respected in any future regulation of smart TVs and remotes.

In our survey respondents also raised an interesting question: why can’t we choose our own remote control shortcuts?

While some manufacturers (notably LG) allow limited customisation of their remotes, the general trend in remote control design has been towards increased branding and monetisation of positioning. It is unlikely this will be reversed anytime soon.

In other words, your remote is now part of the global streaming wars – and will remain so for the foreseeable future.The Conversation

Ramon Lobato, Associate Professor, School of Media and Communication, RMIT University; Alexa Scarlata, Researcher, Media & Communications, RMIT University, and Bruno Schivinski, Senior Lecturer – Advertising, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

RentTech platforms making renting that much harder

Abstract image of man looking at house through iPhone

RentTech platforms making renting that much harder

Author  Andy Kollmorgen (CHOICE)
Date 18 April 2023

In recent years the home rental application process has increasingly been taken over by third-party rental platforms that have the potential to harm people looking for a home in a number of ways.

In effect, the private business behind these RentTech platforms are using data-crunching algorithms to decide who gets to find a place to live. Tenants are faced with the choice of giving up excessive amounts of personal information or not being able to apply.

Now CHOICE has released an in-depth report that paints a chilling picture of how much tougher the rental market has become with the introduction of RentTech platforms such as realestate.com.au’s Ignite as well as 2Apply, Snug, tApp and others.

CHOICE research uncovers serious consumer harms

“Finding a home as a renter is already an incredibly difficult, draining experience. Our research found third-party rental platforms are taking advantage of people’s basic need for a roof over their heads to collect excessive data and profit,” says CHOICE consumer data advocate Kate Bower.

“People who rent deserve a guarantee that their personal data is safe and isn’t being used to exploit or harm them. Unfortunately, our research found that renters are seldom granted this assurance.”

41% of renters were pressured to use a third-party platform by their agent or landlord.

59% of landlords who used RentTech said it was required or recommended by their agent.

25% of people who rent have paid for a tenancy check.

60% of renters were uncomfortable with the amount and type of information collected.

29% of renters have opted not to apply for a rental because they didn’t trust the RentTech platform.

21% of young renters (aged 18–34) reported a tenant score was used to assess their application.

The four main problems with RentTech

The report At what cost? The price renters pay to use RentTech highlights four consumer problems in the sector.

Lack of choice

In a national CHOICE survey, 41% of renters said they felt pressured to use a RentTech platform to apply for a place to live, an imposition that may make life easier for landlords and rental agents but that can expose renters to harm.

“In the midst of a cost-of-living and rental housing crisis, people who rent shouldn’t be footing the bill for RentTech they don’t even want” CHOICE consumer data advocate Kate Bower

Meanwhile, the RentTech businesses pass along processing costs to the users of these platforms and give themselves wide latitude on the data they collect and what they use it for – without doing nearly enough to guarantee the security of this personal information.

Read more: RentTech platforms have your data, whether you like it or not

Data insecurity

Applying for a rental can require extensive amounts of personal information such as identity documents, employer and tenancy references, and proof of income, so naturally many renters in our survey were concerned about the data privacy and security risks of RentTech platforms.

Six out of 10 people who rent said they were uncomfortable with the amount and type of private information requested in their rental application. With large scale data breaches at Optus, Medibank and Latitude Finance in the last year, along with smaller breaches at real estate agencies, LJ Hooker and Harcourts, the need for stronger consumer protections is urgent.

Added costs

As if forcing people to sign up for RentTech platforms wasn’t enough, renters can end up paying processing and administrative costs as well as stiff fees for late payments.

“Third-party rental platforms are for-profit businesses which often force or pressure tenants to pay additional fees, including fees to pay rent, penalties for failed payments, and even the costs of their own background checks,” says Bower.

“In the midst of a cost-of-living and rental housing crisis, people who rent shouldn’t be footing the bill for RentTech they don’t even want.”

Read more: Rental agents shifting costs to renters 

Invasive technologies

Third-party rental platforms provide landlords and real estate agents with tools to screen prospective tenants based on their income, employment status, lifestyle, and other criteria and renters have few legal protections from exploitative and unfair automated systems.

“Automated decision-making systems are becoming an increasingly common part of rental application systems. A sore lack of regulation in this market means these automated decision-making systems could increase barriers and discrimination for renters, and potentially exclude them from housing,” Bower says.

Read more: RentTech platforms: How your data is used against you

The case for reform

CHOICE is calling for Federal and State Governments to take the following steps to ensure renters are protected from the risks created by rental technologies:

  • Reform the Privacy Act to ensure Australia’s privacy laws are up to date and fit-for-purpose for consumers.
  • Conduct a federal inquiry into automated decision-making.
  • Legislate an economy-wide ban on unfair trading practices.
  • Modernise state and territory residential tenancies acts to tackle RentTech harms.

“As the risk of data misuse and data breaches continues to grow, so too does the risk to consumers. The government needs to act quickly and strengthen Australia’s privacy laws to ensure they are fit-for-purpose and protect consumers effectively,” says Bower.

Republished with permisssion from CHOICE. View the orginal article RentTech platforms making renting that much harder 

SEE ALSO

ChatGPT: Hype or the Next Big Thing? Everything you should know

Thumb scrolling on ChatGPT OpenAI
Shutterstock: Ascannio

ChatGPT: Hype or the Next Big Thing? Everything you should know

Author  Dr Aaron Snoswell
Date 17 April 2023

Since its launch by OpenAI in November 2022, ChatGPT has dominated the headlines. The multi-talented chatbot can generate reports, translate content into different languages and answer a wide range of questions and even write code. Now with Bing’s new chat search features, Google’s ‘Bard’ chat-search, and Microsoft integrating chat features into the office suite of software, it seems this technology has rapidly come to the fore.

But the technology behind ChatGPT is not new. Natural Language Processing (NLP) has been around as long as Artificial Intelligence, since the 1950s.

Black screen showing conversation with early chatbot
Wikipedia
Eliza is an early natural language processing computer program created from 1964 to 1966 at MIT by Joseph Weizenbaum.

The difference now is the increase in scale – the size of the models, and the amount of data used to build them – and the processes and technical methods used to build these systems.

What is ChatGPT and how does it work?
ChatGPT is an artificial intelligence dialog agent (or ‘chatbot’) developed by OpenAI and released in November 2022. ChatGPT converses in natural language, and can enable anyone to write computer code, craft poetry, or summarise long documents.

ChatGPT is built using a Large Language Model (LLM). LLM’s are essentially an advanced form of the ‘autocomplete’ technology you might see when SMS-ing or emailing someone. LLMs learn to predict the next character or word in a paragraph, based on patterns in text harvested from the internet. In the case of ChatGPT, humans also provide feedback to refine what kind of topics, language, and tone the LLM will use when responding to users.

The catch is that ChatGPT doesn’t actually know anything. The answers you get may sound plausible and even authoritative, but they might well be entirely wrong.

Dr Aaron Snoswell, research fellow in AI accountability,at  the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) says that

“You should trust the responses generated by ChatGPT about as much as you would trust a random stranger on the internet.”

OpenAI acknowledges these pitfalls, some of which are easy to spot and some more subtle. “It’s a mistake to be relying on it for anything important right now,” OpenAI Chief Executive Sam Altman tweeted. “We have lots of work to do on robustness and truthfulness.”

Here’s a look at some of the problems with ChatGPT, and what users should be aware of.

Truthfulness (or lack of)
LLMs are approximate and generative in nature. The results that LLMs provide in response to queries can be factually inaccurate, despite often sounding authoritative in nature. These systems do not calculate but are more like ‘stochastic parrots’; that is, they learn to mimic by example. These systems also frequently make things up – which can be a good (think creative writing), or a very bad (e.g. generating incorrect medical advice).

Exacerbation of Biases
LLMs are trained on text from the internet and as a result they reproduce and exacerbate existing cultural stereotypes and biases. ChatGPT has tendencies toward English language, western, american, male viewpoints with left-wing ideologies. As a result LLMs have the potential to generate toxic, hateful content, or explicit content. How best to evaluate and mitigate this issue is an ongoing research problem that we at the ADM+S (and many others) are working on.

Privacy and Data
Systems like ChatGPT cost a lot of money to build, and a lot of money to run. The value for OpenAI, Google, Meta, and Microsoft comes from collecting user data entered into LLMs. 

Remember that if you are not paying for a service, you are not the customer (in fact, you’re probably part of the product!) Technology companies may have terms of service that allow them to collect any and all information discussed with a dialog agent to further improve that service.

Equity and Accessibility
Training data for LLMs is scraped from the internet with no regard for copyright. Furthermore, systems like ChatGPT are typically built on the back of crowds of ‘data enrichment workers’ in global majority countries who work in substandard and precarious conditions. Companies will also charge higher costs for access to better LLM tools, creating a disparity of access for users with less resources.

Misuse and Malicious use
LLMs can and will be used maliciously to generate mis- and disinformation and propaganda. These systems are good at generating convincing spam, phishing, and hacking messages and have the ability to find vulnerabilities in existing open-source software and to create code for computer viruses. LLM-based tools also have unique software vulnerabilities – for instance, hackers can hijack a language model’s output through a process known as prompt injection, even using this to steal personal user data.

While the use of ChatGPT raises legitimate concerns about misinformation, plagiarism, copyright and creativity, we should also consider the value of human effort and what it means to communicate as a person. 

AI technologies such as ChatGPT are changing the fundamental nature of how we communicate and marginalising the human element out of these interactions. In addition, it is changing the nature and function of the humans that are doing the communication. To quote Joanna Bryson, Professor of Ethics and Technology, from their essay One Day, AI Will Seem as Human as Anyone. What Then?

“Living in a world with technology that mimics humans requires that we get very clear on who it is that we are as humans.” Joanna Bryson.

Dr Aaron Snoswell presented ChatGPT: Hype or the Next Big Thing? At the Hacks/Hackers Brisbane event held 22 March 2023. 

Aaron is a computer scientist and research fellow in AI accountability at the ADM+S Centre, based at QUT in Brisbane, Australia. His PhD “Modelling and explaining behaviour with Inverse Reinforcement Learning” was awarded in 2022 from The University of Queensland, and developed new theory and algorithms for Inverse Reinforcement Learning in the maximum conditional entropy and multiple intent settings.

Aaron’s ongoing research is in the development of socio-technical interventions for reducing toxicity in the foundation model machine learning paradigm, looking in particular at the ways sexism and misogyny manifest in large language models. Prior to academia, Aaron worked in industry as a cross-disciplinary mechatronic engineer in doing medical device research and development, pilot and astronaut training, robotics, and software engineering.

Watch the video ChatGPT: Hype or the Next Big Thing? 

Listen to the podcast ChatGPT: Hype or the Next Big Thing? 

SEE ALSO

What is ChatGPT, and what does it mean for clinical practice?

Person looking at laptop screen with ChatGPT on screen
Shutterstock: Mizkit

What is ChatGPT, and what does it mean for clinical practice?

Author  Centre for Online Health, UQ
Date 14 April 2023

Dr Aaron Snoswell from the Australian Research Council Centre of Excellence for Automated Decision Making and Society at QUT have collaborated with Dr Centaine Snoswell, Dr Anthony Smith, Dr Jaimon Kelly and Dr Liam Caffery from UQ Centre for Online Health and Centre for Health Services Research.

Large language models (LLMs), such as ChatGPT, are transforming the telehealth landscape and becoming an integral part of digital healthcare. Their integration into search engines, Microsoft products, and other applications will change how patients and clinicians access information. LLMs have the potential to improve telehealth services, including remote-monitoring, health record summaries, and patient resources. However, they also have limitations, such as a tendency to generate false or misleading information. Clinicians need to be aware of LLMs’ capabilities and limitations, ensuring any LLM-generated content is fact-checked for accuracy. Digital and health literacy remain crucial for both healthcare providers and patients, as LLMs become increasingly integrated into clinical practice.

Read the latest articles at the links below. If you don’t have access, please contact Dr Aaron Snoswell via email or ResearchGate.

Telehealth Article
Snoswell, Centaine L., Snoswell, Aaron J., Kelly, Jaimon T., Caffery, Liam J., and Smith, Anthony C. (2023). Artificial intelligence: augmenting telehealth with large language models. Journal of Telemedicine and Telecare. https://doi.org/10.1177/1357633×231169055

Pharmacy Article
Snoswell, Centaine L., Falconer, Nazanin, and Snoswell, Aaron (2023). Pharmacist vs Machine: Pharmacy services in an age of large language models. Research in Social and Administrative Pharmacy. https://doi.org/10.1016/j.sapharm.2023.03.006

SEE ALSO

Vote for us! ADM+S and ABC pitch proposals for SXSW Sydney line-up

Vote for Us in SXSW Sydney Session Select

Vote for us! ADM+S and ABC pitch proposals for SXSW Sydney line-up

Author  Sally Jackson (ABC) and Kathy Nickels (ADM+S)
Date 5 April 2023

The ABC and the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at QUT have teamed up to pitch two session proposals to be included in the upcoming South By Southwest Sydney Conference taking place October 15-22.

SXSW brings together inspired thinkers, creators and innovators in tech and innovation, games, music and screen from around the world to experience the latest in forward-thinking ideas within their industry and unlock the unexpected discoveries made possible when a diverse range of topics converge on stage.

To help determine the conference line-up each year, SXSW calls for submissions of session proposals on which the SXSW community then gets to vote.

This year, SXSW Sydney received more than 1400 proposals from around Australia and the world.

Voting is open until 11.59pm on Tuesday 11 April. Voters receive five votes but only one vote per proposal is permitted. Successful sessions will be included in SXSW Sydney.

ADM+S and ABC have proposed the following panel sessions.

What AI-generated news could mean for human-produced journalism

What AI-generated news could mean for human-produced journalism (Media Industry track)

Speakers: Silvia Montaña-Niño (ADM+S at QUT), Stuart Watt (ABC), Michael Collett (ABC), Gareth Seneque (ABC)

ChatGPT has shown that this is no far-flung fantasy. Algorithms have long determined which stories get recommended to people based on their interests, but now AI can create content itself. It can also interact with users and answer any questions they might have in a “human-like” manner. 

However, the way ChatGPT works, from its data collection processes to its opaque machine training models, prompts crucial questions for news organisations willing to use this technology to produce and distribute news. 

A panel of journalists, scholars and technologists will discuss the trends and challenges of AI-generated news and what it means for the future of journalism.

Read more and vote here

Islands in the streaming: Local Content Discovery in a Global Market

Islands in the Streaming: Local Content Discovery in a Global Market (Streaming Industry track)

Speakers: Ramon Lobato (ADM+S at RMIT), Kylie Pappalardo (ADM+S at QUT), Nick Hayden (ABC), Alexandra Hay (ABC)

In the age of video streaming, content discovery is now a crucial strategic space for screen industries. The battle to capture attention is fierce, with streaming platforms, device makers and technology all playing a critical role in shaping the content we watch. But what about local and public benefit storytelling — how can it cut through in a world dominated by tech-driven giants?

Read more and vote here

Stuart Watt, ABC Head of Output & Distribution, said the rapidly unfolding developments in AI presented both opportunities and challenges for journalism.

“While the labour-saving possibilities are exciting, the prospect of misinformation and the further erosion of trust in our profession is daunting,” he said.

“We need to grapple with these challenges and find ways to use this emerging technology so it enhances our journalism rather than diminishes it.”

Dr Silvia Montaña-Niño, from the ADM+S at QUT, said SXSW was a perfect public space to discuss the challenges news organisations have when using AI.

“Journalists and scholars right now have many questions about how the recent developments in generative AI will impact how they work, and what are the new responsibilities with the use of these technologies,” she said.

Media contact: Kathy Nickels | ADM+S Outreach and Engagement, News and Media Focus Area | katherine.nickels@qut.edu.au 

SEE ALSO

ADM+S Higher Degree Research students selected for the Oxford Internet Institute Summer Doctoral Programme

Pictured left to right: Kunal Kaveesh Chand, Dominiqe Carlon & Anand Badola
Left to right: Kunal Kaveesh Chand, Dominiqe Carlon & Anand Badola

ADM+S Higher Degree Research students selected for the Oxford Internet Institute Summer Doctoral Programme

Author Kathy Nickels
Date 4 April 2023

Higher Degree Research Students (HDR) Anand Badola, Dominique Carlon, and Kunal Kaveesh Chand have been selected to attend the Oxford Internet Institute annual Summer Doctoral Programme at the University of Oxford, United Kingdom. 

The annual Summer Doctoral Programme (SDP) brings together a selection of up to 30 outstanding doctoral students engaged in dissertation research relating to the Internet and other digital technologies from around the world for a fortnight of study at the world-leading University of Oxford.

SDP students come from a wide variety of disciplinary and methodological traditions; what they all share is a genuine intellectual curiosity and a willingness to consider these different perspectives.

Anand Badola, Dominique Carlon, and Kunal Kaveesh Chand, from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at QUT will travel to Oxford in July 2023 to take part in the SDP.

Anand’s research looks at the flow of discourse across social media platforms in the Indian context focusing on aspects of disinformation, polarisation and populism across platforms like Facebook, Twitter, and YouTube. 

Anand said he is thrilled to be part of this year’s prestigious Oxford Internet Institute’s Summer Doctoral Program.

“It is a great opportunity for me to explore and contribute to cutting-edge research on the intersection of technology, society, and democracy,” said Anand.

“I am excited to learn from some of the world’s leading scholars in this field. I am also looking forward to meeting fellow researchers from diverse backgrounds from across the world and hopefully collaborating with them in the future. It is very exciting, it is Oxford!”

Dominque’s research explores the life stories of Reddit bots, examining how they came to be created, how they are used, and how they evolve over time. Recognising the capacity of bots to generate expressions of humour, skill, and creativity, Dominique examines the role that Reddit’s distinctive culture plays in fostering an environment that appears to welcome bot creation and use, as well as the influence that bots have in shaping or contributing to the platform environment.

“I am excited to learn about innovative methods and interdisciplinary perspectives on emerging societal and technological issues. I also look forward to forming genuine connections with scholars from around the word and learning about diverse topics of research,” said Dominique.

Kunal’s research focuses on building information visualisation tools that enable digital media scholars to examine how visual social media platforms (like Instagram) are using automated decision making (ADM) systems to aid in their operations, such as content curation and information retrieval. 

“I am looking forward to improving my knowledge of the newest digital technologies impacting society at the world’s oldest learning institution that has a history of outstanding scholarship, while also meeting amazing scholars from around the world focusing on similar, impactful research,” said Kunal.

The SDP provides students with an academic framework to share and discuss their research and build collaborative connections across the globe.

SEE ALSO

Dr Ariadna Matamoros-Fernández awarded ZeMKI Visiting Research Fellowship at the University of Bremen

Universitat Bremen building
Shutterstock: Philip Lange

Dr Ariadna Matamoros-Fernández awarded ZeMKI Visiting Research Fellowship at the University of Bremen

Author Kathy Nickels
Date 3 April 2023

ADM+S researcher Dr Ariadna Matamoros-Fernández is one of 5 international researchers recently awarded a visiting research fellowship at ZeMKI, Centre for Media, Communication and Information Research at the University of Bremen, Germany.

Dr Ariadna Matamoros-Fernández will spend four weeks at the ZeMKI research centre working with Prof. Christian Katzenbach, Head of the Platform Governance, Media, and Technology Lab, and Prof Cornelius Puschmann, Head of the Digital Communication and Information Diversity Lab on a research project that will investigate transparency of ‘soft moderation’ techniques under the EU Digital Services Act (DSA). 

Dr Matamoros-Fernández says that platforms are increasingly using ‘soft moderation’ techniques to limit the visibility of objectionable content, and users are innovating with new collaborations to more actively participate in the assessment of content that has the potential to harm.

In the joint project at the University of Bremen, Ariadna will use Twitter Community Notes as a case study to understand how platforms can respond to the DSA obligations around transparency with the creation of innovative used-led ‘soft-moderation’ techniques. 

“Twitter Community Notes are a crowdsourced fact-checking system that allows users to add context to tweets,” said Ariadna.

“This system is observable to external researchers–the data is free access, and its algorithm is open source and publicly available on GitHub, – which makes it a good case study to assess content moderation initiatives that go beyond takedowns and user bans.” 

This research will help inform how ‘soft moderation’ works in practice, an area which is currently not well understood.

SEE ALSO

Project to counter misinformation receives Meta Foundational Integrity Research funding

Meta logo on iphone screen with Facebook logo in the background
Shutterstock: rafapress

Project to counter misinformation receives Meta Foundational Integrity Research funding

Author Kathy Nickels
Date 30 March 2023

ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) researcher Dr Silvia Montaña-Niño and her colleagues have been awarded funding from Meta’s Foundational Integrity Research for their comparative study which seeks to counter misinformation in the Southern Hemisphere.

Meta’s Foundational Integrity Research request for proposals (RFP) was launched in September 2022 and attracted 503 proposals from 349 universities and institutions around the world. 

A total of $1,000,000 USD funding was awarded to research that would enrich the understanding of challenges related to integrity issues on social media and social technology platforms.

The project Countering misinformation in the Southern Hemisphere: A comparative study to be will be led by Dr Michelle Riedlinger (QUT) with colleagues Dr Silvia Montaña-Niño (QUT), Dr Marina Joubert (Stellenbosch University), and Assoc. Prof Víctor García-Perdomo (Universidad de La Sabana) was one of 11 projects to receive the funding. 

Dr Michelle Riedlinger from the School of Communication at QUT is leading the project.

“We have an amazing team of researchers from Australia, Latin America and Africa involved in this project and we’re keen to get started,” says Dr Riedlinger.

The project will investigate what fact checkers are doing in regions outside of North America and Europe.

Dr Silvia Montaña-Niño, research fellow at the ADM+S Centre at QUT, says “We’ve done some initial work and found that fact checkers are packaging their content into reusable ‘checktainment’ explainer formats using video, memes, and infographics to engage local social media users. We’re keen to explore the regional differences a bit more.”

Through the research funding, Meta aims to support the growth of scientific knowledge and contribute to a shared understanding across the broader scientific community and technology industry on how social technology companies can better address integrity issues on their platforms. 

“We are excited to grant these awards to cultivate new knowledge on integrity and establish deeper connections with global social science researchers,” says Umer Farooq, Director of Research for Integrity at Meta.

SEE ALSO

Edward Small selected for research program at the University of Bristol

Edward Small presenting poster at the 2022 ADM+S Symposium

Edward Small selected for research program at the University of Bristol

Author Kathy Nickels
Date 27 March 2023

Edward Small, higher degree research student at the ARC Centre for Excellence for Automated Decision-Making and Society (ADM+S), RMIT University has been selected to undertake a four-month research program with Machine Learning and Computer Vision (MaVi) at the University of Bristol.

Applicants for this program are selected based on their academic excellence, previous experiences and references. 

Edward will receive supervision and support from Associate Professor Raul Santos-Rodriguez, to develop explainable Artificial Intelligence (XAI) tools in collaboration with Bristol General Hospital. 

Edward says he is incredibly excited to work with the University of Bristol and Prof. Raul Santos-Rodriguez. 

“Being a top 10 UK institution, and part of the Russel Group, Bristol has a strong track record in AI research that I hope to contribute to, and Raul is a leading researcher in human-centric machine learning and explainability,” he said

“I expect I will learn a lot, and I hope to come back to Australia to apply this new knowledge in innovative ways. I am very lucky to be a part of a centre like ADM+S, without whom an opportunity like this would be impossible to take up.”

Edward researches fairness, explainability, and transparency in automated decision-making with supervisors Prof Flora Salim, Dr Jeffrey Chan and Dr Kacper Sokol. 

His research examines the robustness and stability of current fairness strategies, and looks to resolve the mathematical conflict between group fairness and individual fairness. Edward’s work also looks at the scalability of automated explanations for machine learning models and questions whether explainable artificial intelligence induces fairness and utility or reduces it.

Edward will receive support from the ADM+S and Bristol University to undertake this research program.

SEE ALSO

Dr Kacper Sokol visits Università della Svizzera italiana to deliver new course on machine learning explainability

View of mountains in Switzerland
Supplied: Kacper Sokol

Dr Kacper Sokol visits Università della Svizzera italiana to deliver new course on machine learning explainability

Author Kathy Nickels
Date 27 March 2023

Research Fellow Dr Kacper Sokol from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), RMIT University has recently visited Università della Svizzera italiana (USI) in Lugano, Switzerland to deliver training on machine learning explainability.

The training was developed to bridge the gap between the theoretical and practical aspects of explainability and interpretability of predictive models based on artificial intelligence and machine learning algorithms, and builds upon Dr Sokol’s research in this area. 

Dr Sokol says that the course differs from others that commonly take an abstract approach. 

“It takes an adversarial perspective and breaks these techniques up into core functional blocks, studies their role and configuration, and reassembles them to create bespoke explainers with well-understood properties, thus making them suitable for the problem at hand,” he said.

The course was offered along with other training opportunities available to postgraduate students from the informatics department at USI. 

“Given its good reception and high modularity of the teaching materials, it will be adapted to support a variety of future training sessions,” said Dr Skolol.

The course resources are available online at Machine Learning Explainability: Exploring Automated Decision-Making Through Transparent Modelling and Peeking Inside Black Boxes.

This training is the most recent output stemming from Dr Sokol’s ongoing collaboration with Professor Marc Langheinrich and his Ubiquitous Computing Research Group at USI. Together they work on advancing explainability and interpretability of machine learning models. They recently presented BayCon: Model-agnostic Bayesian Counterfactual Generator at the 31st International Joint Conference on Artificial Intelligence 2022 (IJCAI-22) in Vienna, Austria.

SEE ALSO

The Australian Ad Observatory uncovering the hidden world of targeted advertising

Computer screen with "Create an Ad screen on Facebook" prompt
Shutterstock: PixieMe

The Australian Ad Observatory uncovering the hidden world of targeted advertising

Author Kathy Nickels
Date 23 March 2023

Millions of Australians are exposed to online advertising every day as they use social media and browse the internet. Advertisers on these platforms target audiences using a mix of data and profile information gathered from our activities online, but there is little publicly available knowledge about who is being targeted by which advertisers.

The Australian Ad Observatory project conducted at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is working to understand the hidden world of advertising by asking volunteers to donate their Facebook ads.

Professor Daniel Angus, one of the Chief Investigators on the project, says the problem with online advertising is that it is hidden from public view, and so it may break the rules that have been put in place to prevent consumer harm, without being noticed.

“We are seeing ads that have been able to slip through the net because humans aren’t involved in making judgements,” he says. 

“The concern there is that if these ads can slip through the net, what other forms of advertising are also making their way through that, that maybe perhaps in violation of existing codes and practices?”

Over the past year more than 2,000 volunteers have donated their ads to the Australian Ad Observatory. 

This research benefits our understanding of platform-based advertising and is enabling independent research into the role that algorithmically targeted advertising plays in society.  

Online Casinos (ABC)

The ABC recently partnered with the Australian Ad Observatory to find gambling ads that were illegally targeting Australians on Facebook. This report asks who should be responsible for monitoring illegal online advertising and whether advertising rules can be better enforced by the Australian Communications and Media Authority (ACAM).  

Read more:  Online casinos based offshore are illegally targeting Australians on Facebook. Who is responsible?

Image of Tweet by David Pocock: "Most ads are dark ads, so there is limited visibility in what is appearing in the feeds of Australians, including young Australians. This could just be the tip of the iceberg"

The issue of gambling advertising was raised in parliament this week by Senator David Pocock who asked whether the government was aware that Australians are being exposed, on their social media feeds, to illegal advertisements from online casinos?

Senator Watt, currently representing the Minister for Communications, said “Australians are  concerned about the growing proliferation of gambling advertising on online platforms. There are of course particular concerns when it comes to the risk around those advertisements being accessed by children.”

“There are additional concerns about the risk of online gambling advertisements to the adult population as well.”

“The government does recognise there is ongoing community concern about harms associated with online gambling, and that’s exactly why we have established an inquiry into online gambling and its impacts on those experiencing gambling harm.”

“Greenwashing” Advertising (CPRC)

Through the Australian Ad Observatory, the Consumer Policy Research Centre (CPRC) has undercovered online advertisements that use vague and misleading environmental and sustainability claims in their messaging to consumers.
Findings from this research will be used to inform regulators and policy makers about addressing unsubstantiated green claims.

Read more: Research investigates “greenwashing” advertising on social media

Alcohol Advertising (FARE)
The Ad Observatory project will be working with the Foundation for Alcohol Research & Education (FARE) to provide further analysis on the content of alcohol advertisements on social media.

A recent report released by FARE revealed that 39,820 distinct alcohol ads were placed on Facebook and Instagram last year, often combined with a button prompting users to “shop now”.

Through a search of Meta’s ad library, FARE found that big brands placed an average of 765 alcohol ads each week on the Meta platforms.

The report Alcohol advertising on social media: a 1-year snapshot, found that alcohol advertising on Instagram and Facebook is intrinsically linked to the online sale and delivery of alcohol directly into the home.

Meta’s ad library enabled insight into the amount and type of content being distributed by alcohol advertisers on Meta platforms, however it failed to provide information on advertising targeting, spend and reach of advertisements (except for political advertisements).

By partnering with the Australian Ad Observatory, FARE will further it’s investigation into alcohol advertising to develop a holistic understanding of alcohol marketing on these platforms, including understanding how often people are exposed to these advertisements and the ways in which people are being targeted with alcohol advertising on these platforms.

Read more: Alcohol companies ply community with 40,000 alcohol advertisements a year on Facebook and Instagram

Alongside the work with ABC, CPRC and FARE, the Australian Ad Observatory project will be using the ad collection to investigate consumer finance advertising, and advertising of unhealthy foods.

The Australian Ad Observatory has already collected over 700,000 advertisements from 2,000 volunteers, but is still looking for more people to sign up. A large pool of diverse participants of different ages, backgrounds and from different parts of Australia will help us better understand how particular groups in society are being targeted with particular kinds of ads.

To find out more and join the project visit The Australian Ad Observatory

SEE ALSO

Visual mis/disinformation in journalism and public communications article wins top paper award

Visual mis/disinformation in journalism and public communications article wins top paper award

Author Kathy Nickels
Date 20 March 2023

ADM+S researcher Prof Daniel Angus is co-author on the paper Visual mis/disinformation in journalism and public communications: Current verification practices, challenges, and future opportunities, which has been voted as top paper published in the Q1 journal, Journalism Practice, in 2022-23.

The paper led by Dr TJ Thomson at QUT’s Digital Media Research Centre and co-authored by Prof Daniel Angus, A/Prof Paula Dootson, Dr Edward Hurcombe, and Mr Adam Smith has accrued more than 11,000 views since being published, making it the 14th most-read article in the journal of all time.  

The study provides a state-of-the-art review of current journalistic image verification practices, examines a number of existing and emerging image verification technologies that could be deployed or adapted to aid in this endeavour, and identifies the strengths and limitations of the most promising extant technical approaches. 

Independent peer reviewers note this work provides “a framework for understanding the current and future considerations of visual media verification,” “provides an excellent understanding of visual disinformation” and makes “a strong contribution to the field.”

 The QUT team’s paper will compete against two other papers, the top papers published in Journalism Studies and Digital Journalism over the same timeframe, for the 2022 Bob Franklin Journal Article Award, which seeks to recognise the article that best contributes to our understanding of connections between culture and society and journalism practices, journalism studies and/or digital media/new technologies.

Links to all of the other short- and long-listed papers can be found here.

Republished with permission from QUT Digital Research Media Centre 

Read the original article QUT team wins top paper honour

SEE ALSO

Research investigates “greenwashing” advertising on social media

A washing machine with green earth landscape inside

Research investigates “greenwashing” advertising on social media

Author Kathy Nickels
Date 8 March 2023

Researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) are uncovering vague and misleading green advertising on social media, with the help of the Australian consumers who are being targeted.

So far researchers have observed that many advertisers, especially those in the clothing and footwear, personal care, and food and food packaging industries, market themselves with green claims.

Many of these claims are vague and unsubstantiated, and have the potential to mislead consumers.

Professor Christine Parker, Chief Investigator at the ADM+S Centre, says the practice of making misleading claims about a product’s environmental sustainability, known as “greenwashing”, is likely to be on the rise.

Increased consumer demand for more sustainable products, increased understanding of the need for business to take action on the climate crisis, and the need to shift to a circular economy are likely to be driving green claims.

“Some advertisers are using vague wording alongside green imagery to give an impression of environmental action – but with no clear information and substantiation of exactly what the company is doing to achieve its environmental and climate promises or how the product is contributing to a circular economy,” says Professor Parker.

In a recent audit, the Australian Competition and Consumer Commission (ACCC) found that more than half of organisations advertising online made concerning claims about their environmental or sustainability practices.

The Consumer Policy and Research Centre (CPRC) found similar results in a 24-hour sweep of online advertising conducted last year. The CPRC also found that many consumers believe some authority is checking green claims before they are made – which is not in fact the case.

“Conscientious consumers may well be targeted with a whole string of green ads that make them feel like business is doing the right thing and we are on a good environmental path”

“But this might be a completely misleading impression. Many of these claims may not be substantiated.”

In collaboration with the Consumer Policy Research Centre (CPRC), the ADM+S Centre is investigating whether Facebook users are seeing ads that are misleading, harmful or unlawful.

This research is conducted through the Centre’s Australian Ad Observatory, a project that relies on citizen scientists to share the ads that they see on Facebook.

“This approach is important because it gives us a way to see how Facebook advertising is targeted to individual users – a practice that is normally hidden from public view and regulatory scrutiny,“ says Professor Parker.

The recent ACCC report investigated green claims made in publicly visible online advertising, while research by the ADM+S Centre will help uncover advertising usually hidden from public scrutiny.

Professor Parker says “it is possible that advertisers could engage in less responsible advertising practices on social media where they are less likely to face regulatory scrutiny.”

Researchers are investigating how frequently consumers are targeted with green advertising, and how misleading these claims are. Findings from this research will be used to inform regulators and policy makers about addressing unsubstantiated green claims.

Australians are invited to join this research project by visiting The Australian Ad Observatory website.

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is funded by the Australian Government through the Australian Research Council.

View the original media release

SEE ALSO

How should we respond to ChatGPT?

Plastic figure resembling a human who sits on a table infront of a laptop in a dark room. Long shadows disseminate a gloomy mood.
Max Gruber / Better Images of AI / Clickworker 3d-printed / CC-BY 4.0

How should we respond to ChatGPT?

Author Kathy Nickels
Date 28 February 2023

ChatGPT is a controversial new language assistant powered by AI. It can write essays, do coding and even structure complex research briefs, all in a matter of seconds.

Launched late November 2022, it now has more than 100 million users according to estimates. 

This new tool, developed by US company OpenAI, is causing concern amongst schools and universities, with fears that students will use the program to write their assignments.

ChatGPT is likely to change the way that students are assessed and force us to rethink what it means to be genuinely creative. 

ADM+S researcher Dr Aaron Snoswell spoke to Athony Funnell on a recent episode of ABC Radio National Future Tense about ChatGPT.

Dr Snoswell suggests that safeguarding and responses to the technology needs to be wide ranging and needs to include Government bodies, experts in the AI industry, system users as well as media.

“Government bodies have a role to play in terms of coming up with regulations, policies, and best practices,” says Dr Snoswell.

He said that organisations and individual experts in the AI industry are key stakeholders here.

“[They] need to take the ethical dimensions and implications of their work much more seriously than it’s currently done.”

He also says that it’s important that people who are going to interact with the systems should understand how they work. 

“Teaching students about how to safely and responsibly use these tools, I think, is a really important thing as well.”

And finally, Dr Snoswell says “news and media organisations need to do their part as well by reporting on this type of technology with a large grain of salt and not catastrophising, or overhyping.”

Listen to the full discussion on ABC Radio National Future Tense Chat GPT – the hype, the limitations and the potential 

Broadcast Sunday 26 February 2023, 11:30am

SEE ALSO

Meta targets content creators with new blue tick verification bundle

Influencer creating contect on phone

Meta targets content creators with new blue tick verification bundle

Author Kathy Nickels
Date 24 February 2023

Meta, the parent company of Facebook and Instagram has announced it will be testing a new paid verification subscription for users to pay to prove they are real. 

The new offering, called Meta Verified, assigns users a blue verification badge on their profile in exchange for AUD$20 a month.

Professor of Digital Media and Associate Director of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), Jean Burgess spoke to Alex Easton on ABC Southern Queensland Radio about this latest move.

It seems from Meta’s press release that the paid verification is primarily aimed at influencers and content creators, says Prof Burgess.

The paid subscription bundles the blue tick verified badge with other premium features including increased visibility and reach, proactive account monitoring for impersonators, and access to a real person for help with common account issues.

Prof Burgess says the move is also about competing with Tik Tok as a platform for the creator economy. 

“I think this is a move to try to get the content creators that provide the value to [platforms like] Tik Tok , Instagram, Facebook and Twitter to be more invested in signing up to Meta as a ‘safer place’”. 

In comparison to Twitter’s blue tick badge, “Meta’s verification process would absolutely have more robust systems,” says Prof Burgess.

Part of this verification process includes submitting a government-issued ID that matches the name on your profile and profile photo.

“This raises other questions about how much we want to be trusting Meta with our personal information,” she says.

Listen to the full discussion on ABC Southern Queensland Radio from 2:18:00
This episode was broadcast Wed 22 Feb 2023 at 3:00pm

SEE ALSO

Prof Deborah Lupton appointed Honorary Doctor at the University of Skövde

Flickr: anna_thorn

Prof Deborah Lupton appointed Honorary Doctor at the University of Skövde

Author Kathy Nickels
Date 22 February 2023

The University of Skövde has appointed Prof Deborah Lupton, its first Honorary Doctorate in the field of Health in the Digital Society.

“It is an amazing feeling to be so honored, particularly as I already have strong connections to and collaborations with colleagues in Sweden and the other Nordic countries. I have always felt very welcome and appreciated in these countries, with lots to talk about in terms of shared interests. This Honorary Doctorate means that I will always have a special relationship with the University of Skövde,” says Professor Lupton.

Prof Lupton is Chief Investigator at the ARC Centre of Excellence for Automated-Decision Making and Society,  node leader of the University of New South Wales, leads the Health focus area and co-leads the People program.

With a background in sociology, as well as media and cultural studies,  Prof Lupton combines qualitative and innovative social research methods with sociocultural theory. Her research focuses on the use of new digital media in medicine and public health. She studies how those media are the focus of increasing interest in society, how they can have unexpected effects for patients and healthcare professionals, and how they can influence how society works with digital technologies in public health and healthcare.

The honorary doctorate nomination recognises Prof Lupton’s work in digital health as a source of inspiration for the School of Health Sciences at the University of Skövde. The nomination states that her focus on interdisciplinary perspectives has been important for the development of the field of Digital Health at the University and that her work has inspired aspects of the Master’s Program “Public Health Science: Digital Health and Communication” at the University.

“It is wonderful to see the development of the new program in digital health research at the University of Skövde and to be made aware that my research has contributed to this exciting initiative,” says Professor Lupton.

Her work and thoughts have also contributed to the University’s research in the field.

“Professor Lupton’s work has been important for the University’s development. During the autumn, the University received degree-awarding powers on a doctorate level in the field of Health in the Digital Society, which is the University’s second degree-awarding powers on a doctorate level. Researchers like Professor Lupton are a great source of inspiration for this broad field and its applications,” says Alexandra Krettek, Professor of Public Health Sciences and Dean at the University of Skövde.

SEE ALSO

Twitter data appears to support claims new algorithm inflated reach of Elon Musk’s tweets

Twitter data appears to support claims new algorithm inflated reach of Elon Musk’s tweets

Author Kathy Nickels
Date 21 February 2023

Data collected by Queensland University of Technology node of the ARC Centre of Excellence for Automated Decision-Making & Society (ADM+S) researcher via Twitter’s API appears to support media claims the reach of the tweets of the platform’s billionaire owner Elon Musk have been artificially inflated.

Last week, the tech news site Platformer reported 80 Twitter engineers had been engaged to tweak the platform’s algorithm after Musk noticed a tweet from the US president, Joe Biden, about the Super Bowl outperformed his own, despite Musk having more than three times the number of followers.

The report claimed engineers deployed a new algorithm to artificially inflate Musk’s tweets by a factor of 1,000, ensuring that more than 90% of Musk’s 128.9 million followers would see them. The change reportedly also ensured users who don’t personally follow Musk would see his tweets in their “for you” tab.

Assoc Prof Timothy Graham, Associate Investigator at the Queensland University of Technology node of the ARC Centre of Excellence for Automated Decision-Making & Society (ADM+S) said data he extracted from Twitter using its application program interface appeared to support much of this reporting.

The graphs produced by Assoc Prof Graham show that in the hours when the algorithm change was reported to have occurred, Musk’s impressions went up 737%, and his daily impressions have close to tripled.

Graham, who typically researches bot behaviour and other trends on social media, says he was able to track Musk’s tweet data via access to Twitter’s API, which he can currently access for free.

Twitter has announced it will cut off free access to this service – including for researchers. Instead it will charge a minimum US$100 a month for access.

“The Twitter API may shut down any moment – if this is the last data I ever collect it’ll totally be worth it,” Graham tweeted last week.

Read the full story published in The Guardian

SEE ALSO

Facebook and Instagram to trial paid verification in Australia as Twitter charges for two-factor SMS authentication

Meta logo on mobile phone with Facebook, Instagram and Whatsapp logos in the background

Facebook and Instagram to trial paid verification in Australia as Twitter charges for two-factor SMS authentication

Author Kathy Nickels
Date 21 February 2023

Facebook and Instagram parent company Meta is introducing a paid subscription for users to verify their accounts with a blue tick. 

Meta Platforms has announced it will be testing the monthly subscription service called Meta verified in Australia and New Zealand from this week.

The company says the service will increase the visibility of users’ posts and provide extra protection against impersonation. The move comes after Elon Musk, the owner of Twitter, implemented the premium Twitter Blue subscription back in November. 

Professor of Digital Communication and Chief Investigator at the ARC Centre of Excellence for Automated Decision-making and Society at QUT, Daniel Angus said he doubts that paid verification will make any difference to curb the spread of mis- and dis-information on the platform. 

“We’ve been tracking the spread of this dis-information for many years now, often of a very personal nature … This move will do nothing to actually curb the spread of that.” he said.

“It’s profitable for the platform to maintain pages and groups which spread disinformation. Verifying profiles and extracting more rent from users for doing so is not going to do anything to curb that spread, in fact it may make things worse.”

 

He said that the decision to introduce this subscription is extortionate as they are asking users to pay for something that should be an ordinary function of a social media service.

Separately, Twitter announced on Friday it would provide SMS-based two-factor authentication only to users who are subscribed to the US$8-a-month ($11.65) Twitter Blue service from 20 March.

Prof Angus says that the removal of the SMS-based two-factor authentication will make it far easier for accounts with weak passwords to be hacked. 

“The fact remains that you can’t extort users around basic security features. [Providing security to your users] is something that’s part and parcel of running a successful social media operation.” said Prof Angus.

“The fact that they’re asking for payment for [these features] shows that they’re out of ideas and we are very much in the late stage of these platforms losing their power.”

Prof Daniel Angus, Dr Belinda Barnet,  Senior Lecturer in Media and Communications at Swinburne University and Prof Tama Leaver, Professor of Internet Studies and Chief Investigator in the ARC Centre of Excellence for the Digital Child at Curtin University with reporter Scott Wales on ABC Radio National.

Listen to the interview on ABC News

Prof Daniel Angus with reporter Scott Wales ABC News, Melbourne.

Read the full transcript here 

SEE ALSO

3 in 4 people experience abuse on dating apps. How do we balance prevention with policing?

Girl using phone at night
Shutterstock

3 in 4 people experience abuse on dating apps. How do we balance prevention with policing?

Authors Kath Albury and Daniel Reeders
Date 30 January 2023

A 2022 survey by the Australian Institute of Criminology found three in four app users surveyed had experienced online abuse or harassment when using dating apps. This included image-based abuse and abusive and threatening messages. A further third experienced in-person or off-app abuse from people they met on apps.

These figures set the scene for a national roundtable convened on Wednesday by Communications Minister Michelle Rowland and Social Services Minister Amanda Rishworth.

Experiences of abuse on apps are strongly gendered and reflect preexisting patterns of marginalisation. Those targeted are typically women and members of LGBTIQA+ communities, while perpetrators are commonly men. People with disabilities, Aboriginal and Torres Strait Islander people, and people from migrant backgrounds report being directly targeted based on their perceived differences.

What do these patterns tell us? That abuse on apps isn’t new or specific to digital technologies. It reflects longstanding trends in offline behaviour. Perpetrators simply exploit the possibilities dating apps offer. With this in mind, how might we begin to solve the problem of abuse on dating apps?

Trying to find solutions

Survivors of app-related abuse and violence say apps have been slow to respond, and have failed to offer meaningful responses. In the past, users have reported abusive behaviours, only to be met with a chatbot. Also, blocking or reporting an abusive user doesn’t automatically reduce in-app violence. It just leaves the abuser free to abuse another person.

Wednesday’s roundtable considered how app-makers can work better with law enforcement agencies to respond to serious and persistent offenders. Although no formal outcomes have been announced, it has been suggested that app users should provide 100 points of identification to verify their profiles.

But this proposal raises privacy concerns. It would create a database of the real-world identities of people in marginalised groups, including LGBTIQA+ communities. If these data were leaked, it could cause untold harm.

Prevention is key

Moreover, even if the profile verification process was bolstered, regulators could still only respond to the most serious cases of harm, and after abuse has already occurred. That’s why prevention is vital when it comes to abuse on dating apps. And this is where research into everyday patterns and understanding of app use adds value.

Often, abuse and harassment are fuelled by stereotypical beliefs about men having a “right” to sexual attention. They also play on widely held assumptions that women, queer people and other marginalised groups do not deserve equal levels of respect and care in all their sexual encounters and relationships – from lifelong partnerships to casual hookups.

In response, app-makers have engaged in PSA-style campaigns seeking to change the culture among their users. For example, Grindr has a long-running “Kindr” campaign that targets sexual racism and fatphobic abuse among the gay, bisexual and trans folk who use the platform.

A mobile screen shows various dating app icons
Match Group is one of the largest dating app companies. It owns Tinder, Match.com, Meetic, OkCupid, Hinge and PlentyOfFish, among others.
Shutterstock

Other apps have sought to build safety for women into the app itself. For instance, on Bumble only women are allowed to initiate a chat in a bid to prevent unwanted contact by men. Tinder also recently made its “Report” button more visible, and provided users safety advice in collaboration with WESNET.

Similarly, the Alannah & Madeline Foundation’s eSafety-funded “Crushed But Okay” intervention offers young men advice about responding to online rejection without becoming abusive. This content has been viewed and shared more than one million times on TikTok and Instagram.

In our research, app users told us they want education and guidance for antisocial users – not just policing. This could be achieved by apps collaborating with community support services, and advocating for a culture that challenges prevailing gender stereotypes.

Policy levers for change

Apps are widely used because they promote opportunities for conversation, personal connection and intimacy. But they are a for-profit enterprise, produced by multinational corporations that generate income by serving advertising and monetising users’ data.

Taking swift and effective action against app-based abuse is part of their social license to operate. We should consider stiff penalties for app-makers who violate that license.

The United Kingdom is just about to pass legislation that contemplates time in prison for social media executives who knowingly expose children to harmful content. Similar penalties that make a dent in app-makers’ bottom line may present more of an incentive to act.

In the age of widespread data breaches, app users already have good reason to mistrust demands to supply their personal identifying information. They will not necessarily feel safer if they are required to provide more data.

Our research indicates users want transparent, accountable and timely responses from app-makers when they report conduct that makes them feel unsafe or unwelcome. They want more than chatbot-style responses to reports of abusive conduct. At a platform policy level, this could be addressed by hiring more local staff who offer transparent, timely responses to complaints and concerns.

And while prevention is key, policing can still be an important part of the picture, particularly when abusive behaviour occurs after users have taken their conversation off the app itself. App-makers need to be responsive to police requests for access to data when this occurs. Many apps, including Tinder, already have clear policies regarding cooperation with law enforcement agencies.The Conversation

Kath Albury, Professor of Media and Communication and Associate Investigator, ARC Centre of Excellence for Automated Decision-Making + Society, Swinburne University of Technology and Daniel Reeders, PhD Candidate, ANU School of Regulation and Global Governance (RegNet), Australian National University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Elon’s Twitter ripe for a misinformation avalanche

Twitter on screen
Image: Shutterstock

Elon’s Twitter ripe for a misinformation avalanche

Author Daniel Angus
Date 17 January 2023

Seeing might not be believing as digital technologies make the fight against misinformation even trickier for embattled social media giants, writes Daniel Angus

In a grainy video, Ukrainian President Volodymyr Zelensky appears to tell his people to lay down their arms and surrender to Russia. The video – quickly debunked by Zelensky – was a deep fake, a digital imitation generated by artificial intelligence (AI) to mimic his voice and facial expressions.

High-profile forgeries like this are just the tip of what is likely to be a far bigger iceberg. There is a digital deception arms race underway, in which AI models are being created that can effectively deceive online audiences, while others are being developed to detect the potentially misleading or deceptive content generated by these same models. With the growing concern regarding AI text plagiarism, one model, Grover, is designed to discern news texts written by a human from articles generated by AI.

As online trickery and misinformation surges, the armour that platforms built against it are being stripped away. Since Elon Musk’s takeover of Twitter, he has trashed its online safety division and, as a result, misinformation is back on the rise.

Musk, like others, looks to technological fixes to solve his problems. He’s already signalled a plan for upping use of AI for Twitter’s content moderation. But this isn’t sustainable nor scalable, and is unlikely to be the silver bullet. Microsoft researcher Tarleton Gillespie suggests: “Automated tools are best used to identify the bulk of the cases, leaving the less obvious or more controversial identifications to human reviewers.”

Some human intervention remains in the automated decision-making systems embraced by news platforms but what shows up in newsfeeds is largely driven by algorithms. Similar tools act as important moderation methods to block inappropriate or illegal content.

The key problem remains that technology ‘fixes’ aren’t perfect and mistakes have consequences. Algorithms sometimes can’t catch harmful content fast enough and can be manipulated into amplifying misinformation. Sometimes an overzealous algorithm can also take down legitimate speech.

Beyond its fallibility, there are core questions about whether these algorithms help or hurt society. The technology can better engage people by tailoring news to align with readers’ interests. But to do so, algorithms feed off a trove of personal data, often accrued without a user’s full understanding.

There’s a need to know the nuts and bolts of how an algorithm works – that is opening the ‘black box’.

But, in many cases, knowing what’s inside an algorithmic system would still leave us wanting, particularly without knowing what data and user behaviours and cultures sustain these massive systems.

One way researchers may be able to understand automated systems better is by observing them from the perspective of users, an idea put forward by scholars Bernhard Rieder, from the University of Amsterdam, and Jeanette Hofmann, from the Berlin Social Science Centre.

Australian researchers also have taken up the call, enrolling citizen scientists to donate algorithmically personalised web content and examine how algorithms shape internet searches and how they target advertising. Early results suggest the personalisation of Google Web Search is less profound than we may expect, adding more evidence to debunk the ‘filter bubble’ myth, that we exist in highly personalised content communities. Instead it may be that search personalisation is more due to how people construct their online search queries.

Last year, several AI-powered language and media generation models entered the mainstream. Trained on hundreds of millions of data points (such as images and sentences), these ‘foundational’ AI models can be adapted to specific tasks. For instance, DALL-E 2 is a tool trained on millions of labelled images, linking images to their text captions.

This model is significantly larger and more sophisticated than previous models for the purpose of automatic image labelling, but also allows adaption to tasks like automatic image caption generation and even synthesising new images from text prompts. These models have seen a wave of creative apps and uses spring up, but concerns around artist copyright and their environmental footprint remain.

The ability to create seemingly realistic images or text at scale has also prompted concern from misinformation scholars – these replications can be convincing, especially as technology advances and more data is fed into the machine. Platforms need to be intelligent and nuanced in their approach to these increasingly powerful tools if they want to avoid furthering the AI-fuelled digital deception arms race.

Daniel Angus is professor of digital communication in the School of Communication, and leader of the Computational Communication and Culture program in QUT’s Digital Media Research Centre.

Originally published under Creative Commons by 360info™.

SEE ALSO

Op-ed: Why your smart TV might not last as long as you’d hope

TV remote control pointing at Smart TV

Op-ed: Why your smart TV might not last as long as you’d hope

Authors Alexa Scarlata and Ramon Lobato
Date 11 January 2023

TVs don’t just break down anymore. New problems include apps becoming obsolete and streamers cutting off support for your operating system.

A TV used to be a long-term investment – something you bought knowing it would see you through the next 10 or even 15 years.

Before TVs were ‘smart’, their main function  was to decode signals from broadcast television and from connected devices like DVD players and game consoles. If your TV suddenly stopped working, major hardware faults would hopefully be covered under your manufacturer warranty and statutory guarantees under consumer law.

When you’re buying a new TV today you need to think not just about the quality of the hardware, but about the lifespan of its software

But smart TVs are different because of the complexity of their inbuilt software. When you’re buying a new TV today you need to think not just about the quality of the hardware, but about the lifespan of its software.

This is because your TV’s functionality will change over time. Apps and platforms may not work as well, may refuse to open – or they may disappear altogether.

In this article we explain what you should keep in mind when you buy a smart TV, and what you can do if the functionality of your TV is compromised over time.

What do you get with a smart TV?

Almost all TVs sold in Australia today are smart TVs. This means they can connect to the internet and deliver streaming content via apps.

What many people don’t realise is that when you buy a smart TV you’re locked into using its specific operating system, such as Samsung’s Tizen or LG’s WebOS – just like you’re locked into iOS if you buy an Apple phone or Android if you buy a Samsung.

Each TV operating system works differently, and puts its own spin on interface design, menus, and navigation.

You need to brace yourself for the fact that most apps will eventually stop working on your TV in the years ahead

The operating system also determines the content you can access on your smart TV because it controls the app store. So you need to check before purchase that your preferred TV can run all the apps you might need – not just the big ones like Netflix and YouTube (which are preinstalled on most smart TVs), but also local apps from ABC, SBS, 7, 9 and 10, and any specific movie, sports or gaming apps that you might like to use.

Also, while we all want a smart TV that will work consistently over time, you need to brace yourself for the fact that most apps will eventually stop working on your TV in the years ahead. Even top-of-the-line TVs that cost tens of thousands of dollars are subject to this dreaded phenomenon of ‘app obsolescence’.

The inevitable obsolescence of apps

There are several reasons why apps can become obsolescent.

First, the process of developing and maintaining an app for multiple operating systems and smart TV models is expensive and resource-intensive. Streaming services such as Netflix and iView can see what smart TVs people are using and prioritise particular brands and models accordingly.

In some cases, streaming services will not bother updating apps designed for older TVs, and will instead focus their efforts on newer TVs that are likely to run more smoothly and reach more viewers. As such, they may stop supporting older-model TV apps, or they may remove their apps from particular platforms.

Streaming services will not bother updating apps designed for older TVs, and will instead focus their efforts on newer TVs that are likely to run more smoothly and reach more viewers

This happened in 2019, when Netflix announced that its app would no longer be supported on some Samsung and Panasonic smart TVs purchased in the early 2010s. These devices had “technical limitations” that did not support Netflix’s new digital rights management protocols.

Recently SBS made the “tough call” to remove SBS On Demand from Sony Linux televisions. The broadcaster claimed that these TVs no longer had the memory or processing power to support the best experience (that is, the enhanced features or improved ad experience) of SBS On Demand.

There are other reasons why apps can disappear from a smart TV or suddenly stop working. In the US, there have been some instances of platform blocking where apps have disappeared from smart TVs because of commercial disputes between apps and smart TV platform operators.

Additionally, your choice of TV can also affect your access to future apps that haven’t been released yet. Even if you have a very new and expensive TV, don’t expect that you’ll have immediate access to the latest apps that might arrive next year or sometime in the future.

Availability of new apps can be very uneven, because apps may prioritise launching on the largest smart TV platforms

Availability of new apps can be very uneven, because apps may prioritise launching on the largest smart TV platforms, such as Samsung’s Tizen and LG’s webOS, neglecting the smaller platforms – or the app might get held up in administrative red tape.

For example, when Disney+ launched in Australia in 2019, it was immediately available on TVs made by Samsung and LG, but would not run on Hisense’s proprietary VIDAA U operating system. Hisense users had to wait until late 2021 before they could access the official Disney+ app on their sets.

Similarly, while Kayo was available on Samsung TVs soon after launch, LG owners had to wait until late 2021 to access these apps.

How can I extend the life of my smart TV?

Unfortunately, you have little control over what apps your TV supports or abandons over time, but that doesn’t mean you have no options.

First, you can opt for a TV brand with a good history of delivering software updates. Check the CHOICE Community forums, the CHOICE reliability survey and other online resources to find out more about how TV brands perform with software updates.

Second, if an app becomes glitchy or won’t open, delete and install it again, if you can. You should also perform a manual software update for your smart TV, via the Settings menu. This will clarify exactly what is currently supported by your TV’s operating system.

If this doesn’t work and it’s clear that your TV has started to lose functionality, you don’t need to buy a replacement right away. Instead, you can ‘patch’ your TV using a streaming device that extends its lifespan to get the most out of the hardware.

You can ‘patch’ your TV using a streaming device that extends its lifespan to get the most out of the hardware

For example, if you plug in an Amazon Fire TV Stick, Google TV, games console (PlayStation, Xbox) or set-top box, then you can effectively bypass the smart TV’s outdated software and should be able to run a full range of apps from your external streaming device, so long as the device software is up to date.

Another alternative is to install your favourite video apps on your phone and then cast, mirror or AirPlay to your TV – or even plug your laptop directly into the TV using an HDMI cable.

These workarounds will help you get the maximum life span from your smart TV, and recoup your initial investment over time.

Good for you, good for the environment

In summary, we recommend that all consumers spend a little time playing around with the operating system of a smart TV before buying. Review the product specifications and make sure it can support the streaming services that you already, or might want to, subscribe to.

By making informed choices, you can get the most out of your investment and reduce the many harmful effects of e-waste.

Remember, there’s no reason to throw out a perfectly good TV display screen, even when the software is buggy – upcycle instead by casting, mirroring or adding a streaming device.

This article was originally published on CHOICE. Read the original article.

SEE ALSO

Workshop to investigate public interest litigation in harmful digital marketing awarded Academy of Social Sciences funding

Workshop to investigate public interest litigation in harmful digital marketing awarded Academy of Social Sciences funding

Authors Kathy Nickels
Date 19 December 2022

ADM+S researchers Prof Christine Parker, Prof Jeannie Paterson, Prof Kimberlee Weatherall and colleague Assoc. Prof Paula O’Brien have been awarded funding from the Academy of Social Sciences in Australia (ASSA) Workshops Program to convene leading stakeholders to investigate issues of harmful online advertising and the potential for public interest litigation.

The ASSA Workshops Program for 2023, awarded over $70,000 to convenors from ten different universities to advance research and policy agendas on nationally important issues. 

The ADM+S co-hosted workshop Strategic Public Interest Litigation for Transparency and Accountability of Harmful Digital Marketing: A Researcher-Regulator-Community Dialogue seeks to address challenges of harmful digital advertising. It will bring together key social science and socio legal researchers to investigate predatory and manipulative advertising practices across a range of harmful industries such as alcohol, unhealthy food, and gambling. 

Professor Christine Parker, Chief Investigator at ADM+S, University of Melbourne says that these practices are challenging to investigate. 

“Bringing together scholars, activists and regulators working on these issues in different industries will provide the opportunity to discuss our common challenges.

“We plan to also look at the potential benefits, challenges, and pitfalls of strategic public interest litigation to address these harms.” says Professor Parker. 

The ASSA Workshops Program has been operating for over 30 years. Each year the program supports 8-10 workshops with funding up to $9,000. 

The program supports multidisciplinary workshops with the purpose of being a catalyst for innovative ideas in social science research and social policy, to build capability amongst young researchers, and to foster networks across social science disciplines and with practitioners from government, the private sector, and the community sector on issues of common concern.

The workshop Strategic Public Interest Litigation for Transparency and Accountability of Harmful Digital Marketing: A Researcher-Regulator-Community Dialogue will be co-hosted by the ARC Centre of Excellence for Automated Decision Making and Society, the Centre for AI and Digital Ethics at University of Melbourne and the Health Law and Ethics Network at Melbourne Law School, at the University of Melbourne on 25-26 September 2023. 

SEE ALSO

ADM+S publications recognised in the APO’s Top Content for 2022

ADM+S publications recognised in the APO’s Top Content for 2022

Authors Kathy Nickels
Date 16 December 2022

ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) publications have been named in the APO’s Top Content for 2022 released this week.

The APO’s Top Content for 2022 has listed A Data Capability Framework for the not-for-profit sector and Decentralising data governance: Decentralised Autonomous Organisations (DAOs) as data trusts and as most valuable players (MVPs) – a selection of the most interesting and influential content of the year. 

As part of the Top Content for 2022, the APO also named the Top Ten most clicked resources across 15 broad subject areas for the period December 2021 to November 2022.

The Manifesto for sex-positive social media was listed in both Technology and Communications subject areas and Automated decision making in transport mobilities: review of industry trends and visions for the future was listed in Technology. 

The ADM+S has contributed 17 publications, since joining the APO repository in May this year. 

ADM+S Centre Director, Distinguished Professor Julian Thomas, said “The APO collection has enabled us to communicate our key research work publicly, in detail, in a timely fashion, in a convenient digital format, and in a way which is open to everyone.

The Analysis & Policy Observatory (APO) is one of Australia’s leading open access research repositories. We share APO’s goal of supporting evidence-based policy and public debate on the critical challenges facing Australia, and we’re delighted to be working with APO to make ADM+S research more findable, more useable, and more accessible.”

Listed in APO MVPs 2022

A Data Capability Framework for the not-for-profit sector
Anthony McCosker, Frances Shaw, Xiaofang Yao, and Kath Albury.
This report provides a framework that distils the challenges and successes of organisations worked with. It represents both the factors that underpin effective data capability and the pathways to achieving it. In other words, as technologies and data science techniques continue to change, data capability is both an outcome to aspire to, and a dynamic, ongoing process of experimentation and adaption.

Decentralising data governance: Decentralised Autonomous Organisations (DAOs) as data trusts
Kelsie Nabben
This paper explores the idea that Decentralised Autonomous Organisations (DAOs) are a new type of data trust for decentralised governance of data. This publication lends itself to further scholarly research and industry practice to test DAO data trusts as a data governance model for greater individual autonomy, verifiability, and accountability.

Named in APO Top Tens 2022

Manifesto for sex-positive social media
Zahra Stardust, Emily van der Nagel, Katrin Tiidenberg, Jiz Lee, Em Coombes, and Mireille Miller-Young.
This publication sets out guiding principles that platforms, governments, policy-makers and other stakeholders should take into account in their design, moderation and regulation practices. It builds upon the generative work currently underway with the proliferation of alternative, independent collectives and cooperatives, who are designing new spaces, ethical standards and governance mechanisms for sexual content.

Automated decision making in transport mobilities: review of industry trends and visions for the future
Emma Quilty, Sarah Pink, Thao Phan, and Jeni Lee.
This report maps and analyses the social implications of the visions of our transport future. The report examines the assumptions underpinning these visions, as they are represented and projected in recent transport and mobilities stakeholder reporting.

Visit the ADM+S Collection on the APO

SEE ALSO

Arjun Srinivas to lead research for MediaFutures supported project

Arjun Srinivas to lead research for MediaFutures supported project

Authors Kathy Nickels
Date 13 December 2022

ADM+S researcher Arjun Srinivas and colleagues from Kaivalya Plays have received grant funding and support from MediaFutures to produce an interactive theatre performance highlighting the effects of hyper-partisanship and hate speech in India. The performance will particularly focus on the targeting and persecution of minority Muslim women online.

The project  ‘Mining Hate’ will include improvised scenes built from media narratives and content generated by the audience to demonstrate how malicious online actors identify, target and harass victims of online scams and hate speech. Drawing on the Brechtian principle of Verfremdungseffekt, or the “distancing effect”, the live performance seeks to engage audience members in the emotional fallout of being victims.  

Media Futures is a European funded consortium that supports artists to address challenges of disinformation and hate speech in the digital media ecosystem. The recent round of Artists for Media grants were awarded to artists with an innovative artwork concept and production process that critically and materially explores data and technology to question and comment on its impact on individuals and society.

For six months, Arjun and his colleagues will receive technical, legal and ethical support as well as data and computational resources, training and mentorship from partners of the consortium, including Leibniz University of Hannover, King’s College London, KU Leuven and Open Data Institute.

Arjun will be leading the research and data component of the project. During this time he will analyze data from the MediaCloud and Twitter among other sources, to capture media trends and to understand social media discourses on hyper-partisanship and misinformation in India. 

“I’m really stoked to work on this project as it is a seamless integration of my research, theatre practice and my professional identity as a journalist” said Arjun.

“Through this project, we would like to shed light on the consequences of online vitriol and hate speech on the intended victims, while also uncovering the means used by malicious actors to target them.” 

Mining Hate will be performed alongside other projects at the MediaFutures finalists demo day in May 2023.

SEE ALSO

The “Black box” of algorithms and automated decision-making

The “Black box” of algorithms and automated decision-making

Authors Kathy Nickels
Date 12 December 2022

ABC has published this informative interactive explainer Wrenching open the black box that examines the “black box” of algorithms that are increasingly making decisions about our online and offline lives.

Using relevant examples such as Centrelink’s Robodebt and UK’s visa-processing algorithm, this explainer illustrates how some decision-making systems can be flawed.

The article highlights research and tools in development that can help us understand— and challenge — the decisions that algorithms make about us, as individuals, while others can illuminate bias and discrimination embedded within a system.

The article features:

  • Professor Sandra Wachter, Oxford Internet Institute and a tool that generates a number of “nearby possible worlds” that helps to illustrate how different variables (e.g. postcode, gender) could generate different outcomes.
  • Professor Paul Henman, ADM+S at the University of Queensland with comment on structural biases in algorithmic systems.
  • The Algorithmic audit – a transparency tool used to verify if the algorithm meets standards of fairness.
    Professor Ed Santow, former Australian human rights commissioner explains that Australia is lagging behind other parts of the world on digital rights protections.

This is a recommended read for better understanding decision-making systems, as well as the current research and recommendations seeking to make these systems more transparent.

Visit the explainer Wrenching open the black box published on ABC news.

SEE ALSO

The eSafety Commissioner releases new position statement on recommender systems and algorithms

Cover of Position Statement: Recommnedar systems and algorithms

The eSafety Commissioner releases new position statement on recommender systems and algorithms

Authors Kathy Nickels
Date 12 December 2022

Last week the eSafety Commissioner released their new Tech Trends Position statement: Recommender systems and algorithms

The position statement takes a holistic view of recommender systems that encompasses their benefits and risks, broader uses, and complex interconnected ecosystems.

It is stated that “it is important to assess recommender systems holistically, thinking about their benefits and risks, their range of uses and how they may influence or be influenced by the wider digital environment and socio-political developments”.

This paper provides useful advice for users and guidance for industry. For example it recommends that companies take a more proactive Safety By Design approach to recommender algorithms by considering the risks they may pose at the outset and designing in appropriate guardrails.

 This could include:

  • features that allow users to curate how a recommender system applies to them individually and opt out of receiving certain content
  • enforcing content policies to reduce the pool of harmful content on a platform, which reduces its potential amplification
  • labelling content as potentially harmful or hazardous
  • introducing human “circuit breakers” to review fast-moving content before it goes viral.

It also recommends enhancing transparency through regular algorithmic audits and impact assessments, curating recommendations so they are age appropriate, and introducing prompts to encourage users to reconsider posting harmful content.

This position paper acknowledges the contribution made by experts in sharing their insights on recommender systems with eSafety including ADM+S researchers:

Read the Position statement: Recommender systems and algorithm

SEE ALSO

Young ICT Explorers competition finalists hosted at ADM+S

Left to Right: Alice Cartmill, Eleanor Angus, William Smyth and Rehan Dutta.
Left to Right: Alice Cartmill, Eleanor Angus, William Smyth and Rehan Dutta.

Young ICT Explorers competition finalists hosted at ADM+S

Author Kathy Nickels
Date 12 December 2022

East Brisbane State School students have been awarded second place in the National Young ICT Explorers (YICTE) competition 2022.

Leading up to the finals, more than 700 students submitted projects from across Australia to the competition. 

On 10 December, the ARC Centre of Excellence for Automated Decision-Making and Society at QUT hosted the East Brisbane State School Grade 6 students to pitch their project idea at the YICTE virtual finalist event. 

In their project “Runway Racket” the students used an Arduino (a single board microprocessor) with a custom microphone to measure environmental noise in conjunction with the Plane Finder website to identify planes associated with the noise. The monitors were placed in several homes along a flightpath over East Brisbane. 

Mairi McGregor, YICTE 2022 judge, praised the team for their creativity in defining the problem, and accuracy in capturing and measuring data as well as presenting the data. She said that the data and presentation was at a level that could be taken to authorities and companies but also had the potential, with expansion, for commercialisation.

The team included Eleanor Angus, Alice Cartmill, Rehan Dutta, and William Smyth.

Eleanor Angus said that it was fun to work as a team on the project.

“I really liked the community involvement. There were a number of locals and Facebook groups excited by our project” said Eleanor.

The team recently spoke to Rebecca Levingston on ABC Mornings.

Rehan Dutta told Rebecca “We [chose] this [project] because plane noise really is a big disruption in a lot of areas around Brisbane especially places under the flight path”

Now in its 13th year, the Young ICT Explorers (YICTE) is a non-profit competition supported by CSIRO Digital Careers, The Smith Family, Kinetic IT and School Bytes.  The annual competition encourages primary and high school students from years three to 12 to use their imagination and passion to create an invention that could change the world using the power of technology.

Congratulations to the Runway Racket team: Eleanor Angus, Alice Cartmill, Rehan Dutta, and William Smyth. 

You can listen to Rebecca Livingston on ABC mornings talk to the team from 1:48.

This story was updated 30/01/2023

SEE ALSO

What is happening outside of the digital town square? A glimpse into the street corners and alleyways that also make Internet social

What is happening outside of the digital town square? A glimpse into the street corners and alleyways that also make Internet social

Author Ashwin Nagappa
Date 1 December 2022

The recent rollercoaster of changes to Twitter have inevitably made it the most discussed topic on social media, in our daily conversations, in traditional press and in academia. There are obvious speculations about the future of Twitter. There is also an experience of mass grief and despair for many who benefited from it. And the question looming large is — if not Twitter, where else? This blogpost is not about quitting Twitter or finding a suitable alternative. Many handy resources[1] have already been authored in relation to these issues. Generally, there is a lot of chatter about and on the so-called digital town squares. This has turned attention toward other smaller gathering around street corners and alleyways such as Mastodon. Hence, this blogpost is a brief overview of alternative social media (ASM) platforms (Gehl, 2015) and what it could mean for the future of the Internet or social media as we know them.

The desire for alternative media is not new to social media platforms or the Internet. Before the world wide web (the web) became a commonplace, there were various initiatives across the globe to develop alternatives to dominant broadcast media systems at the time (Rennie, 2006). Community media or alternative media[2] initiatives aimed to create alternative media systems that decentralized decision-making and provided access to the production and circulation of media (Sandoval & Fuchs, 2010). The web[3] had all the capabilities of an alternative media and provided space for user-generated content (Van Dijck, 2009). However, commercial interests transformed the web into a platformized web, where digital platforms became central entities (Helmond, 2015).

Alternative social media (ASM) platforms emerged over a decade ago when commercial social media platforms and platform companies had already established dominance in the digital media ecosystem. ASM platforms aimed to create platforms without advertising revenue and algorithms for content curation or recommendation. And to shift the concentration of power from platform companies into a community of users. However, while the promise and aspirations of ASM platforms invited many users, it was a complicated task for users to operate or govern the platform; especially to understand the nuances of content licensing and the challenges that arose on a large network of users.

The earliest ASMs, such as *diaspora, Twister, and Ello, gained momentary popularity as the Facebook or Twitter killer apps (Zulli et al., 2020, p. 1189). However, they failed to scale up or find viable business or economic models. At the same time, little or no content moderation attracted hate speech or far-right actors to the space. Additionally, the technocratic characteristics of these platforms increased participation barriers (the platform design required new users to learn new technical skills) (Gehl, 2015).

Despite many challenges, ASM platforms did not disappear. Rather, decentralized platforms such as Mastodon were worked on to make them relatively user-friendly. Furthermore, several (open source) ASM projects came together over the years to develop a protocol[4] allowing users to participate across a range of decentralized platforms. This led to the birth of a “Fediverse”, a network of user-run social media platforms. While the “Fediverse has existed since 2018, the recent turn of events have drawn attention to it.

Fediverse (Senst & Kuketz, 2021)

With the development and adoption of blockchain technology across different industries, the discourse of decentralization has accelerated under the term web3. Web3 ‘suggests a progression from web2.0,…characterized by peer-to-peer transactions and an ability for users to decide who they share information with’ (Rennie et al., 2022, p. 5). Blockchain social media (BSM) platforms could be considered second generation ASM platforms. Many BSMs follow similar principles as ASM platforms to subvert platformization, ads and algorithms. However, the decentralizing characteristics of blockchain make it suitable for developing social media platform alternatives. Like ASMs, most BSM platforms insist that the community of users will govern all aspects of the platform, and no one will have a central authority, which is easier said than done.

Misuse is one of the many possible scenarios for ASM platforms. Gab is a prominent example of a Mastodon instance being run as a platform for far-right supporters. Although Gab was defederated from the “Fediverse”, it has become a part of a fringe platforms ecosystem. Similarly, DLive, a live-streaming BSM was used to broadcast Capitol hill violence on Jan 6th, 2021 (Browning & Lorenz, 2021). The DLive team had to intervene to take down the video since community members did not see the need to moderate the content.

These examples are exceptional cases that discredit ASM platforms. There are many instances of ASM platforms that provide space for marginalized communities or spaces that are not highly radicalized. For example, ASM platforms were sought after by transgender and queer users when Facebook restricted their profiles for violating the real name policy (Gehl, 2015, p. 8) and by thousands of Indians when Twitter blocked several users protesting the citizenship amendment bill in 2019 (Outlook Web Bureau, 2019; Bhargava & Nair, 2019).

ASM platforms are not magic silver bullets to the issues enveloping mainstream social media. However, they can help us understand the tensions between centralizing tendencies of digital platforms and the urge to decentralize power structures. They also expose the difference between automated or algorithmic systems of corporate social media platforms and user-driven platform governance. Finally, ASM platforms hint towards a public service internet or public interest internet as a possible future of the Internet. While digital town squares may serve corporate interests, communities also socialize on ASM platforms that can be perceived as street corners, alleyways, parks, markets, bus or train stations. These public spaces may be complicated to navigate. However, they may also bring relief from the chaos of town squares.

Notes
[1] Thinking of breaking up with Twitter? Here’s the right way to do itHow to Get Started on Mastodon
[2] There were several terms to refer to media initiatives led by non-institutional individuals or collectives. Community media was a popularly used term.
[3] The growth of technology along with the increased accessibility to devices and network.
[4]‘ActivityPub is a decentralized social networking protocol .. that provides a client to server API for creating, updating and deleting content, as well as a federated server to server API for delivering notifications and content’ (ActivityPub, n.d.)

References
Bhargava, Y., & Nair, S. K. (2019, November 8). Mastodon happening in IndiaThe Hindu.

Browning, K., & Lorenz, T. (2021, January 8). Pro-Trump Mob Livestreamed Its Rampage, and Made Money Doing It. The New York Times.

Gehl, R. W. (2015). The Case for Alternative Social Media. Social Media + Society, 1(2), 205630511560433.

Helmond, A. (2015). The Web as Platform: Data Flows in Social Media.

Senst, I., & Kuketz, M. (2021). English: The diagram shows the common Fediverse platforms with the underlying protocols. Here it is also shown in color which platforms can communicate with which and what functions are implemented. The platforms are illustrated by the predominant sense and purpose in the pattern of the Fediverse logo. File:Fediverse_small_information.png

Outlook Web Bureau. (2022, February 14). “Better, No Trolls”: Why Some Indians Are Boycotting Twitter And Switching To Mastodon.

Rennie, E. (2006). Community Media: A Global Introduction. Rowman & Littlefield Publishers.

Rennie, E., Zargham, M., Tan, J., Miller, L., Abbott, J., Nabben, K., & De Filippi, P. (2022). Toward a Participatory Digital Ethnography of Blockchain Governance. Qualitative Inquiry, 28(7), 837–847.

Sandoval, M., & Fuchs, C. (2010). Towards a critical theory of alternative media. Telematics and Informatics, 27(2), 141–150.

Van Dijck, J. (2009). Users like you? Theorizing agency in user-generated content. Media, Culture & Society, 31(1), 41–58. https://doi.org/10.1177/0163443708098245

Zulli, D., Liu, M., & Gehl, R. (2020). Rethinking the “social” in “social media”: Insights into topology, abstraction, and scale on the Mastodon social network. New Media & Society, 22(7), Article 7. https://doi.org/10.1177/1461444820912533

SEE ALSO

The Galactica AI model was trained on scientific knowledge – but it spat out alarmingly plausible nonsense

Galaxy
Tengyart / Unsplash

The Galactica AI model was trained on scientific knowledge – but it spat out alarmingly plausible nonsense

Authors Aaron Snoswell and Jean Burgess
Date 29 November 2022

Earlier this month, Meta announced new AI software called Galactica: “a large language model that can store, combine and reason about scientific knowledge”.

Launched with a public online demo, Galactica lasted only three days before going the way of other AI snafus like Microsoft’s infamous racist chatbot.

The online demo was disabled (though the code for the model is still available for anyone to use), and Meta’s outspoken chief AI scientist complained about the negative public response.

So what was Galactica all about, and what went wrong?

What’s special about Galactica?

Galactica is a language model, a type of AI trained to respond to natural language by repeatedly playing a fill-the-blank word-guessing game.

Most modern language models learn from text scraped from the internet. Galactica also used text from scientific papers uploaded to the (Meta-affiliated) website PapersWithCode. The designers highlighted specialised scientific information like citations, maths, code, chemical structures, and the working-out steps for solving scientific problems.

The preprint paper associated with the project (which is yet to undergo peer review) makes some impressive claims. Galactica apparently outperforms other models at problems like reciting famous equations (“Q: What is Albert Einstein’s famous mass-energy equivalence formula? A: E=mc²”), or predicting the products of chemical reactions (“Q: When sulfuric acid reacts with sodium chloride, what does it produce? A: NaHSO₄ + HCl”).

However, once Galactica was opened up for public experimentation, a deluge of criticism followed. Not only did Galactica reproduce many of the problems of bias and toxicity we have seen in other language models, it also specialised in producing authoritative-sounding scientific nonsense.

Authoritative, but subtly wrong bullshit generator

Galactica’s press release promoted its ability to explain technical scientific papers using general language. However, users quickly noticed that, while the explanations it generates sound authoritative, they are often subtly incorrect, biased, or just plain wrong.

We also asked Galactica to explain technical concepts from our own fields of research. We found it would use all the right buzzwords, but get the actual details wrong – for example, mixing up the details of related but different algorithms.

In practice, Galactica was enabling the generation of misinformation – and this is dangerous precisely because it deploys the tone and structure of authoritative scientific information. If a user already needs to be a subject matter expert in order to check the accuracy of Galactica’s “summaries”, then it has no use as an explanatory tool.

At best, it could provide a fancy autocomplete for people who are already fully competent in the area they’re writing about. At worst, it risks further eroding public trust in scientific research.

A galaxy of deep (science) fakes

Galactica could make it easier for bad actors to mass-produce fake, fraudulent or plagiarised scientific papers. This is to say nothing of exacerbating existing concerns about students using AI systems for plagiarism.

Fake scientific papers are nothing new. However, peer reviewers at academic journals and conferences are already time-poor, and this could make it harder than ever to weed out fake science.

Underlying bias and toxicity

Other critics reported that Galactica, like other language models trained on data from the internet, has a tendency to spit out toxic hate speech while unreflectively censoring politically inflected queries. This reflects the biases lurking in the model’s training data, and Meta’s apparent failure to apply appropriate checks around the responsible AI research.

The risks associated with large language models are well understood. Indeed, an influential paper highlighting these risks prompted Google to fire one of the paper’s authors in 2020, and eventually disband its AI ethics team altogether.

Machine-learning systems infamously exacerbate existing societal biases, and Galactica is no exception. For instance, Galactica can recommend possible citations for scientific concepts by mimicking existing citation patterns (“Q: Is there any research on the effect of climate change on the great barrier reef? A: Try the paper ‘Global warming transforms coral reef assemblages’ by Hughes, et al. in Nature 556 (2018)”).

For better or worse, citations are the currency of science – and by reproducing existing citation trends in its recommendations, Galactica risks reinforcing existing patterns of inequality and disadvantage. (Galactica’s developers acknowledge this risk in their paper.)

Citation bias is already a well-known issue in academic fields ranging from feminist scholarship to physics. However, tools like Galactica could make the problem worse unless they are used with careful guardrails in place.

A more subtle problem is that the scientific articles on which Galactica is trained are already biased towards certainty and positive results. (This leads to the so-called “replication crisis” and “p-hacking”, where scientists cherry-pick data and analysis techniques to make results appear significant.)

Galactica takes this bias towards certainty, combines it with wrong answers and delivers responses with supreme overconfidence: hardly a recipe for trustworthiness in a scientific information service.

These problems are dramatically heightened when Galactica tries to deal with contentious or harmful social issues, as the screenshot below shows.

Screenshots of papers generated by Galactica on 'The benefits of antisemitism' and 'The benefits of eating crushed glass'.
Galactica readily generates toxic and nonsensical content dressed up in the measured and authoritative language of science.
Tristan Greene / Galactica

Here we go again

Calls for AI research organisations to take the ethical dimensions of their work more seriously are now coming from key research bodies such as the National Academies of Science, Engineering and Medicine. Some AI research organisations, like OpenAI, are being more conscientious (though still imperfect).

Meta dissolved its Responsible Innovation team earlier this year. The team was tasked with addressing “potential harms to society” caused by the company’s products. They might have helped the company avoid this clumsy misstep.The Conversation

Aaron J. Snoswell, Post-doctoral Research Fellow, Computational Law & AI Accountability, Queensland University of Technology and Jean Burgess, Professor and Associate Director, ARC Centre of Excellence for Automated Decision-Making and Society, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

A timeline of Twitter changes and commentary from ADM+S researchers

Woman holding phone with Twitter on the screen.

A timeline of Twitter changes and commentary from ADM+S researchers

Author Kathy Nickels
Date 20 December 2022

Since Elon Musk purchased Twitter on 27 October he has made a slew of chaotic changes in attempts to raise revenue and to grapple with the complexity of governing a social media platform. 

Although Twitter has only a fraction the amount of users when compared to Facebook, Instagram and WhatsApp, the platform plays a significant role in society and shaping public opinion.

Professor Jean Burgess, Associate Director of the ADM+S Centre says that “Twitter’s unique role is a result of the way it combines personal media use with public debate and discussion.

But this is a fragile and volatile mix – and one that has become increasingly difficult for the platform to manage.”

In managing the platform, Musk admits that “Twitter will do lots of dumb things in coming months”. 

We’ve provided a timeline to break down some of the recent changes to Twitter with commentary and explainers from ADM+S researchers in the field.

Twitter users vote for Elon Musk to step down as head of the company

20 December 2022

Elon Musk released a poll asking “Should I step down as head of Twitter? I will abide by the results of this poll”. More than half of the 17.5 million users who responded to the poll said the billionaire shouldn’t remain at the helm. In this 4BC News Talk episode, Prof Axel Bruns says that it makes sense for a new CEO to come onboard at this time.

What comes next for Twitter and it’s community?

9 December 2022

In this article Elon Musk, Twitter’s platform culture & what comes next, Prof Jean Burgess argues that despite the chaos brought on by Elon Musk in recent months, Twitter has always been much more than a tech company. Regardless of how the story of Twitter turns out, what its user community does next will help shape the future of our media and communication environment.

COVID, vaccine misinformation ‘spiking’ on Twitter

8 December 2022

The volume of COVID misinformation significantly jumps on Twitter, while anti-vaccination networks are reforming and reorganising. Assoc. Prof Timothy Graham provides data and analysis in this article COVID, vaccine misinformation ‘spiking’ on Twitter after Elon Musk fires moderators that clearly illustrates this rise. The spike in the second half of November is partly due to the launch of anti-vax propaganda documentary, Died Suddenly as well as a change to Twitter’s COVID-19 misinformation policy on 30 Nov which states they are “no longer enforcing the COVID-19 misleading information policy”.

For years, Twitter has served a vital function as an information-sharing and verification service. That’s being very rapidly eroded.

How could alternative social media platforms change the future of social media as we know it?

1 December 2022

In this article published on Medium What is happening outside of the digital town square? A glimpse into the street corners and alleyways that also make Internet social, ADM+S PhD Candidate Ashwin Nagappa describes how different alternative social media platforms work as well as the pros and cons of these de-centralised platforms compared to centralised platforms such as Twitter.

Twitter vulnerable to widespread outages and cyber attacks

22 November 2022

After a few chaotic weeks it’s clear Elon Musk is intent on taking Twitter in a direction that’s at odds with the prevailing cultures of the diverse users who call it home. With so many experienced staff gone there are concerns the platform will be vulnerable to widespread outages and cyber attacks.

In this article Thinking of breaking up with Twitter? Here’s the right way to do it , Prof Daniel Angus and Assoc Prof Timothy Graham provide tips on moving away from Twitter or better securing your data on the platform.

Concerns over volume of conspiracy theorising on Twitter during US midterms

18 November 2022

“The drastic reductions to moderation staff and changes to platform architecture and Twitter rules and policies will mean more [misinformation and disinformation] on the site and in different ways,” Assoc. Professor Timothy Graham, who researches online bots, trolls and disinformation, told RMIT Fact Lab CheckMate in the article Misinformation analyst concerned by ‘volume of conspiracy theorising’ on Twitter during US midterms.

Could Mastadon be the new Twitter?

16 November 2022

It is unclear whether users are replacing Twitter with Mastadon or whether they are sitting across both platforms. In this article Should Elon Musk really be afraid of Mastodon?, Professor Axel Bruns talks about what it would take for users to leave Twitter and what steps Mastodon would need to take to grow it’s current user base from 2.2 million to that of Twitters 238 million users.

Blue tick removed after flood of fake accounts

10 November 2022

The launch of paid verification badges resulted in a flood of fake accounts of public figures and brands with Twitter’s blue check mark. In response the company removed the paid verification badge option. On 17 November Musk tweeted “Punting relaunch of Blue Verified to November 29th to make sure that it is rock solid”.

Introduction of payment for blue tick verification is fatally flawed

7 November 2022

Primarily to raise revenue, Musk made the decision to charge US$8 a month for accounts to obtain the blue tick verification badge. Musk argued that this would solve hate speech and fake accounts by prioritising verified accounts in search, replies and mentions. If anything, this would have the opposite effect: those with enough money would dominate the public sphere.

In this article Is Twitter’s ‘blue tick’ a status symbol or ID badge? And what will happen if any can buy one?, Assoc Professor Timothy Graham revisits the controversial history of the blue tick and how this latest change would open the floodgates to inauthentic and harmful activity on the platform.

Twitter users seek alternative platforms

29 October 2022

One day after Musk closes the deal to buy Twitter Hashtags #TwitterMigration and #TwitterExodus gained popularity.

Twitter users start seeking alternative platforms with more than 70,000 users signing up to Mastodon, a microblogging site, with functions similar to Twitter.

Dr Nataliya Ilyushia, research fellow at the ADM+S, explains Mastadon, and how you can sign up to this platform in What is Mastodon, the ‘Twitter alternative’ people are flocking to? Here’s everything you need to know.

Changes to content moderation and platform governance

28 October 2022

In the Canberra times article Musk is proposing radical changes after his $US44 billion acquisition of Twitter Dr Daminao Spina says “The decision of the new CEO to fire engineers will impact the robustness of the platform, which is arguably the only thing you cannot replicate easily on other platforms.”

Musk announced that he will forgo any significant content moderation or account reinstatement decisions until after the formation of a new committee devoted to the issues. He said that “Twitter will be forming a content moderation council with widely diverse viewpoints,” and that “No major content decisions or account reinstatements will happen before that council convenes”.

Elon Musk announces interest in purchasing Twitter

27 April 2022

When Elon Musk first announced his interest in purchasing Twitter earlier in April 2022, he promised to prioritise “free speech” and return the social media platform to “the digital town square where matters vital to the future of humanity are debated.”

In this article The ‘digital town square’? What does it mean when billionaires own the online spaces where we gather? (theconversation.com), Prof Jean Burgess explores the meaning of “free speech” and what the Australian Government has been doing to create safer digital spaces in which the fundamental rights of all users of digital services are protected. Prof Burgess points to alternatives to for-profit social media platforms, such as the non-centralised platform Mastadon, and suggests a “blue-sky” idea – a public service internet.

SEE ALSO

What would an ad-free internet look like?

Advertising images
Internet advertising(Pascale Pirate Chickan / Creative Commons / Flickr.com)

What would an ad-free internet look like?

Authors Kathy Nickels
Date 30 November 2022

In this ABC Radio National Life Matters episode, reporter Nat Tencic explores the relationship between ads, the internet and us.

Nat Tencic does some personal research on advertising on her Twitter, Instagram and Facebook feeds. She said the results “weren’t comforting”. 

In 5 minutes of scrolling on each platform Nat found that on:

  • Instagram 28% of story slides were advertisements (12 ads within 43 story slides) 
  • Facebook 31% of posts were advertisements (21 ads within 68 posts)
  • Twitter 20% ot tweets were advertisements (1 in 5 tweets were promoted)
  • Tik Tok 21% of videos were ads (3 ads and 11 regular videos)

Nat talks to Prof Julian Thomas and Dr Jathan Sadowski from the ADM+S Centre to imagine an internet with new priorities. You also hear from James Clark, Executive Director of Digital Rights Watch about a possible hack to block ads at home. 

ADM+S researcher, Dr Jathan Sadowski says that the internet has been shaped by advertising in ways that are so fundamental and so ubiquitous that he believes it’s actually easier to think about the ways the internet has not been shaped by advertising. 

“Advertising is so integral to every aspect of the internet as we experience it, as it’s built, as it’s designed, as it’s operated.” he says. 

“The reasons why websites exist and the reasons why we experience them in the way that we do often comes down to advertising in some way. Whether it’s the collection of data for advertising, or the serving of advertising.”

Listen to the full episode What would an ad-free internet look like? on ABC Radio National Life Matters.

SEE ALSO

ADM+S Dark Ads Hackathon winners share new methods for better transparency in online advertising

Dark Ads Hackathon team presenting to Hack/Hackers group

ADM+S Dark Ads Hackathon winners share new methods for better transparency in online advertising

Author Kathy Nickels
Date 28 November 2022

Hacks/Hackers recently hosted the winning team of the ARC Centre of Excellence for Automated-Decision Making and Society (ADM+S) Dark Ads Hackathon at ABC Southbank to share their idea for identifying discriminatory patterns in online advertising data.

The idea presented by the multi-disciplinary team has the potential to provide better tools for informing policy, advancing public awareness, and building advocacy for vulnerable groups who are targeted by predatory advertising.

The Hackathon team’s approach draws on data gathered from the Australian Ad Observatory dataset (500,000+ ads donated by 2000 Australian Facebook users) to examine “why am I seeing this?” (WAIST) data, alongside other demographic indicators like income, postcode, and age, to identify discriminatory patterns such as proxy and price discrimination.

Members of Hacks/Hackers and the Hackathon team discussed how the methods could be used to identify advertisement practices across a range of harmful industries such as predatory consumer financial products, alcohol, gambling, and unhealthy foods. The innovative approach and methods developed by the Hackathon team can also be applied to other contexts, for example, to trace illegal advertising practices such as the promotion of vape products to young people.

Questions from the Hacks/Hackers group sparked conversations from a journalistic point of view on sampling vulnerable communities, the value of engaging particular demographic groups, and sharing narratives of the lived experience of predatory advertisements.

Dr Kelly Lewis, research fellow at the ADM+S Centre, Monash University is one of the eight Hackathon team members. She said that presenting to members of the Hacks/Hackers community was a meaningful way for the team to engage with a range of ideas and perspectives.

“The feedback we received provides a valuable resource for us to draw on as we continue to develop our approach for greater online advertising accountability. We would like to thank Hacks/Hackers for this opportunity”, said Dr Lewis.

Hackathon Team: Dr Kelly Lewis, Grant Nicholas, Ross Pearson, Alec Sathiyamoorthy, Vikram Sondergaard, Mingqiu Wang, and Guangnan (Rio) Zhu.

Mentors: Dr Abdul Obeid and Xue Ying (Jane) Tan

Read more about the research Identifying Discriminatory Patterns in Online Advertising Data

Hacks/Hackers is a rapidly expanding international grassroots journalism organisation with thousands of members across four continents. Their mission is to create a network of journalists (“hacks”) and technologists (“hackers”) who rethink the future of news and information.

SEE ALSO

Thinking of breaking up with Twitter? Here’s the right way to do it

John G. Mabanglo/EPA

Thinking of breaking up with Twitter? Here’s the right way to do it

Authors Daniel Angus and Timothy Graham
Date 22 November 2022

After a few chaotic weeks it’s clear Elon Musk is intent on taking Twitter in a direction that’s at odds with the prevailing cultures of the diverse users who call it home.

Musk has now begun reinstating high-profile users – including Donald Trump and Kanye West – who had been removed for repeated violations of community standards.

This comes off the back of a mass exodus of Twitter staff, including thousands that Musk unceremoniously fired via email. The latest wave of resignations came after an ultimatum from Musk: employees would have to face “extremely hardcore” working conditions (to fix the mess Musk created).

All of this points to a very different experience for users, who are now decamping the platform and heading to alternatives like Mastodon.

So what threats are we likely to see now? And how does one go about leaving Twitter safely?

#TwitterShutDown

With so many experienced staff leaving, users face the very real possibility that Twitter will experience significant and widespread outages in the coming weeks.

Enterprise software experts and Twitter insiders have already been raising alarms that with the World Cup under way, the subsequent increase in traffic – and any rise in opportunistic malicious behaviour – may be enough for Twitter to grind to a halt.

Aside from the site going dark, there are also risks user data could be breached in a cyberattack while the usual defences are down. Twitter was exposed in a massive cyberattack in August this year. A hacker was able to extract the personal details, including phone numbers and email addresses, of 5.4 million users.

One would be forgiven for thinking that such scenarios are impossible. However, common lore in the technology community is that the internet is held together by chewing gum and duct tape.

The apps, platforms and systems we interact with every day, particularly those with audiences in the millions or billions, may give the impression of being highly sophisticated. But the truth is we’re often riding on the edge of chaos.

Building and maintaining large-scale social software is like building a boat, on the open water, while being attacked by sharks. Keeping such software systems afloat requires designing teams that can work together to bail enough water out, while others reinforce the hull, and some look out for incoming threats.

To stretch the boat metaphor, Musk has just fired the software developers who knew where the nails and hammers are kept, the team tasked with deploying the shark bait, and the lookouts on the masts.

Can his already stretched and imperilled workforce plug the holes fast enough to keep the ship from sinking?

We’re likely to find out in the coming weeks. If Twitter does manage to stay afloat, the credit more than likely goes to many of the now ex-staff for building a robust system that a skeleton crew can maintain.

Hate speech and misinformation are back

Despite Twitter’s claims that hate speech is being “mitigated”, our analysis suggests it’s on the rise. And we’re not the only researchers observing an uptick in hate speech.

The graph below shows the number of tweets per hour containing hate speech terms over a two-week period. Using a peer-reviewed hate speech lexicon, we tracked the volume of 15 hateful terms and observed a clear increase after Musk’s acquisition.

Volume of tweets containing hate speech terms; the trend is increasing over time.
Volume of tweets containing hate speech terms.

Misinformation is also on the rise. Following Musk’s swift changes to blue tick verification, the site tumbled into chaos with a surge of parody accounts and misleading tweets. In response, he issued yet another stream-of-consciousness policy edict to remedy the previous ones.

With reports that the entire Asia-Pacific region has only one working content moderator left, false and misleading content will likely proliferate on Twitter – especially in non-English-speaking countries, which are especially at risk of the harmful effects of unchecked mis- and disinformation.

If this all sounds like a recipe for disaster, and you want out, what should you do?

Pack your bags

First, you may want to download an archive of your Twitter activity. This can be done by clicking through to Settings > Settings and Support > Settings and Privacy > Your Account > Download an archive of your data.

It can take several days for Twitter to compile and send you this archive. And it can be up to several gigabytes, depending on your level of activity.

Lock the door

While waiting for your archive, you can begin to protect your account. If your account was public, now might be a good time to switch it to protected.

In protected mode your tweets will no longer be searchable off the platform. Only your existing followers will see them on the platform.

If you’re planning to replace Twitter with another platform, you may wish to signal this in your bio by including a notice and your new username. But before you do this, consider whether you might have problematic followers who will try to follow you across.

Check out

Once you have downloaded your Twitter archive, you can choose to selectively delete any tweets from the platform as you wish. One of our colleagues, Philip Mai, has developed a free tool to help with this step.

It’s also important to consider any direct messages (DMs) you have on the platform. These are more cumbersome and problematic to remove, but also likely to be more sensitive.

You will have to remove each DM conversation individually, by clicking to the right of the conversation thread and selecting Delete conversation. Note that this only deletes it from your side. Every other member of a DM thread can still see your historic activity.

Park your account

For many users it’s advisable to “park” their account, rather than completely deactivate it. Parking means you clean out most of your data, maintain your username, and will have to log in every few months to keep it alive on the platform. This will prevent other (perhaps malicious) users from taking your deactivated username and impersonating you.

Parking means Twitter will retain some details, including potentially sensitive data such as your phone number and other bio information you’ve stored. It also means a return to the platform isn’t out of the question, should circumstances improve.

If you do decide to deactivate, know that this doesn’t mean all your details are necessarily wiped from Twitter’s servers. In its terms of service, Twitter notes it may retain some user information after account deactivation. Also, once your account is gone, your old username is up for grabs.

Reinforce the locks

If you haven’t already, now is the time to engage two-factor authentication on your Twitter account. You can do this by clicking Settings > Security and account access > Security > Two-factor authentication. This will help protect your account from being hacked.

Additional password protection (found in the same menu above) is also a good idea, as is changing your password to something that is different to any other password you use online.

Once that’s done, all that’s left is to sit back and pour one out for the bird site.

Correction: this piece originally stated Alex Jones had been reinstated on Twitter. This was not the case, so his name has been removed.The Conversation

Daniel Angus, Professor of Digital Communication, Queensland University of Technology and Timothy Graham, Associate Professor, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

3 Big Questions from the ADM+S Dark Ads Hackathon

3 Big Questions from the ADM+S Dark Ads Hackathon

Author Lauren Hayden
Date 10 November 2022

Digital advertising is microtargeted, ephemeral, and unobservable. Ads such as those seen on social media are shown only to select users based on behavioural, demographic, and psychographic data the platform has been able to collect about them. These ads may be published for a limited time, often less than 24 hours, and they are invisible from view after they expire.

This is referred to as ‘dark advertising’. Researchers, advocates, and governments have little ability to monitor online advertising and are therefore unable to hold advertisers accountable for potentially harmful practices such as targeting underage users with ads for alcohol or gambling.

The ADM+S Centre’s Tech for Good: Dark Ads Hackathon challenged teams to create a ‘pretotype’ solution for monitoring, analysing and studying dark advertising. The three-day event, held at RMIT in Melbourne from 28 – 30 September 2022, hosted attendees from across Australia and an array of presenters representing consumer advocacy organisations, research groups and the tech industry.

Although teams were working towards solutions, the Hackathon generated big questions that are only the beginning of the conversation.

Big Question #1 – How does dark advertising affect users?

Existing research shows that digital advertising has been used to target at-risk groups with excessive messaging around harmful products such as alcohol, gambling and unhealthy foods. The ‘dark’ nature of advertising hinders the ability to monitor and report these harmful targeting patterns.

As the first panel discussion highlighted, dark advertising is more than just predatory targeting. Advertisers have the power to artificially limit consumer choice, exploit dynamic pricing for optimal potential revenues, and employ dark patterns to nudge shopping behaviours. User data is sold to third parties which allows for further microtargeting and behavioural manipulation.

Dark advertising is shorthand for a broader, automated consumer culture which affects us all.

Big Question #2 – Who is responsible for making platforms safer and more fair?

Platforms, as the facilitators of dark advertising, receive the most criticism for enabling exploitive advertising practices in their digital spaces. Dr Laura Edelson, a postdoctoral researcher with the Cybersecurity for Democracy project at New York University, reminded participants that a collaborative effort among researchers, regulators and platforms is required to affect change. Users of digital platforms also have a critical role to play in identifying and reporting dark advertising, which was the focus of several Hackathon team designs. Dark advertising relies on a lack of visibility to operate. The mobilisation of users through data donation and reporting patterns of harmful advertising can highlight the extent of dark advertising and inform the development of regulatory frameworks around digital advertising.

Big Question #3 – What tools are needed to mobilise change around dark advertising?

Several tools have already been developed to examine dark advertising more closely. The Australian Ad Observatory is one project funded through the ADM+S Centre that allows users to “donate” the advertising they see on their Facebook feeds through a browser extension.

The web browser collects all sponsored content shown on the page and indexes the ads within a larger library used for research purposes. Users are also able to review the ads collected in a private archive. Further data collection tools and analytical frameworks are in development to assist researchers and regulators in evaluating potential harm in digital advertising.

These tools are a springboard for a larger, collaborative effort to regulate dark advertising which began to emerge at the Hackathon. Teams successfully generated a diverse array of conceptual tools that focus on empowering end users, analysing advertising data, and reporting harm to consumer advocacy organisations.

Most importantly, the Hackathon opened a conversation about dark advertising that will inform future development of responsible, ethical and inclusive automated decision-making systems.

SEE ALSO

Is Twitter’s ‘blue tick’ a status symbol or ID badge? And what will happen if anyone can buy one?

Twitter bird image generated by DALL-E
DALL-E

Is Twitter’s ‘blue tick’ a status symbol or ID badge? And what will happen if anyone can buy one?

Author Timothy Graham
Date 7 November 2022

Following Elon Musk’s acquisition of Twitter on October 27, the world’s richest man proposed a range of controversial changes to the platform. With mounting evidence that he is making it up as he goes along, these proposals are tweeted out in a stream-of-consciousness manner from Musk’s Twitter account.

Primarily to raise revenue, one of the ideas was to charge US$8 a month to obtain a verified status – that is, the coveted blue tick badge next to the account handle.

Within the space of a few days, the paid verification change has already been rolled out in several countries, including Australia, under the Twitter Blue subscription service.

More than just verification

According to Twitter, the blue tick lets people know an account of interest is authentic. Currently, there are seven categories of “public interest accounts”, such as government office accounts, news organisations and journalists, and influencers.

Yet this seemingly innocuous little blue icon is far from a simple verification tool in Twitter’s fight against impersonation and fraud.

In the public view, a verified status signifies social importance. It is a coveted status symbol to which users aspire, in large part because Twitter’s approval process has made it difficult to obtain.

That’s partly because the blue tick has a controversial history. After receiving widespread condemnation for verifying white supremacists in 2017, Twitter halted its verification process for more than three years.

There’s a fundamental mismatch between what Twitter wants the blue tick to mean versus how the public perceives it, something the Twitter Safety team itself acknowledged in 2017.

But they didn’t resolve it. When Twitter resumed verifying accounts systematically in 2021, it wasn’t long until the process began to fail again, with blue ticks being handed out to bots and fake accounts.

Moreover, the public is still confused about what the blue tick signifies, and views it as a status symbol.

Lords and peasants

Musk’s stream-of-consciousness policy proposals may reflect his own preference for interacting with verified accounts. Despite his repeated claims of “power to the people” and breaking the “lords and peasants” system of verified versus non-verified accounts, I ran a data analysis of 1,493 of Musk’s tweets during 2022, and found that more than half (57%) of his interactions were with verified accounts.

Evidently, having a verified status makes one worthy of his attention. Thus, Musk himself arguably views the blue tick as a status symbol, like everyone else (except Twitter).

However, Musk’s US$8 blue tick proposal is not only misguided but, ironically, likely to produce even more inauthenticity and harm on the platform.

A fatal flaw stems from the fact that “payment verification” is not, in fact, verification.

Fact from fraud

Although Twitter’s verification system is by no means perfect and is far from transparent, it did at least aspire to the kinds of verification practices journalists and researchers use to distinguish fact from fiction, and authenticity from fraud. It takes time and effort. You can’t just buy it.

Despite its flaws, the verification process largely succeeded in rooting out a sizable chunk of illegitimate activity on the platform, and highlighted notable accounts in the public interest. In contrast, Musk’s payment verification only verifies that a person has US$8.

Payment verification can’t guarantee the system won’t be exploited for social harm. For example, we already saw that conspiracy theory influencers such as “QAnon John” are at risk of becoming legitimised through the purchase of a blue tick.

Opening the floodgates for bots

The problem is even worse at larger scales. It is hard enough to detect and prevent bot and troll networks from poisoning the information landscape with disinformation and spam.

Now, for the low cost of US$800, foreign adversaries can launch a network of 100 verified bot accounts. The more you can pay, the more legitimacy you can purchase in the public sphere.

To make matters worse, Musk publicly stated that verified accounts who pay US$8 will be granted more visibility on the platform, while non-verified accounts will be suppressed algorithmically.

He believes this will solve hate speech and fake accounts by prioritising verified accounts in search, replies and mentions. If anything, it will have the opposite effect: those with enough money will dominate the public sphere. Think Russian bots and cryptocurrency spammers.

Consider also that the ability to participate anonymously on social media has many positive advantages, including safety for marginalised and at-risk groups.

Giving users tools to manage their public and personal spheres is crucial to self-identity and online culture. Punishing people who want to remain anonymous on Twitter is not the answer.

Worse yet, connecting social media profiles to payment verification could cause real harm if a person’s account is compromised and the attacker learns their identity through their payment records.

A cascade of consequences

Musk’s ideas are already causing a cascading series of unintended consequences on the platform. Accounts with blue ticks began changing their profile handle to “Elon Musk” and profile picture to parody him. In response, Musk tweeted a new policy proposal that Twitter handles engaging in impersonation would be suspended unless they specify being a “parody”.

Users will not even receive a warning, as comedian Kathy Griffin and her 2 million followers discovered when her account was suspended for parodying Musk.

Musk’s vision for user verification does not square up with that of Twitter or the internet research community.

While the existing system is flawed, at least it was systematic, somewhat transparent, and with the trappings of accountability. It was also revisable in the face of public criticism.

On the other hand, Musk’s policy approach is tyrannical and opaque. Having abolished the board of directors, the “Chief Twit” has all the power and almost no accountability.

We are left with a harrowing vision of a fragile and flawed online public square: in a world where everyone is verified, no one is verified.The Conversation

Timothy Graham, Associate Professor, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original

SEE ALSO

ADM+S Hackathon generates new ideas for investigating “dark advertising”

ADM+S Hackathon generates new ideas for investigating “dark advertising”

Author Kathy Nickels
Date 31 October 2022

The winning idea from the Tech for Good: ADM+S Dark Ads Hackathon proposes new methods to identify online advertising practices that could involve price discrimination.

Professor Daniel Angus, Associate Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S Centre) said the winning idea presents a particularly powerful technique for uncovering forms of misleading and discriminatory advertising.

“This technique will mean that identifying some forms of dark advertising practices will no longer be like finding a needle in a haystack,” said Professor Angus.

The Tech for Good: ADM+S Dark Ads Hackathon 2.5 day event brought together over 40 participants from social science, humanities, and computer science to hack new ideas and methods for better transparency in online advertising.

“The diversity of ideas and potential for impact was extraordinary. While large technology firms continue to drag the chain on advertising accountability, it was refreshing to see our participants offer new ideas and approaches to these significant issues.” said Professor Angus.

The Hackathon was hosted by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) in collaboration with government and consumer rights organisations, who recognised an urgent need for better transparency and accountability  following recent examples of price discriminiation, scam advertising and predatory targeting in online advertising spaces.

The winning pitch, Using postcodes to identify discriminatory patterns in online advertising data,  used the existing ADM+S Australian Ad Observatory database of half a million advertisements donated by close to 2,000 participants alongside statistical data associated with postcodes to identify patterns of price discrimination based on userlocation. 

The team also suggested building a visual interface to help both researchers and consumers quickly identify discrimination and other unethical  advertising practices.

Other ideas presented at the hackathon included: 

Read more about the Hackathon and the team’s ideas on the Tech for Good: ADM+S Dark Ads Hackathon webpage.

Watch highlights from the event on YouTube

The winning team will be traveling to Brisbane in November to present their idea to the ABC’s Story Lab team a collection of journalists, developers, designers, social media and video specialists focused on data-driven, visual storytelling for Australian audiences and to Hack/Hackers a rapidly expanding international grassroots journalism organisation. 

Find out how social media advertising is targeting you and help researchers uncover harmful advertising practices, join the Australian Ad Observatory

The Tech for Good: ADM+S Dark Ads Hackathon included two public panels where researchers from the Australian Ad Observatory joined with consumer advocates and government representatives to discuss online harms and the future of advertising accountability.

Watch the Public Panel discussions on YouTube 

Panel 1: Key Issues in Online Advertising  

Panel 2: Accountability for Online Ads 

Listen to the Public Panel discussions on the ADM+S Podcast 

We thank the following judging panel for their time and feedback provided to the Hackathon teams:

  • Kate Bower – Consumer Data Advocate, CHOICE
  • Dr Aimee Brownbill – Senior Policy and Research Advisor, Foundation for Alcohol Research and Education (FARE)
  • Simon Elvery – Journalist and Developer at ABC News Story Lab, ABC
  • Samuel Kininmonth – Policy Officer, Australian Communications Consumer Action Network (ACCAN)
  • Yuan-Fang Li – Associate Professor at Faculty of IT, Monash University
  • Lucy Westerman – Commercial Determinants of Health Lead, VicHealth
  • Professor Kim Weatherall – Chief Investigator, ADM+S at The University of Sydney

The Hackathon was organised by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) in collaboration with ABC, VicHealth, Digital Rights Watch, ACCAN (The Australian Communications Consumer Action Network), CHOICE, CPRC (Consumer Policy Research Centre), and FARE (Foundation for Alcohol Research and Education).

SEE ALSO

Dominique Carlon winner of the inaugural ADM+S HDR Essay Prize

Dominique Carlon presenting

Dominique Carlon winner of the inaugural ADM+S HDR Essay Prize

Author Kathy Nickels
Date 11 October 2022

ADM+S PhD candidate Dominique Carlon (QUT) has been announced as the winner of the inaugural ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) Higher Degree Research (HDR) Student Essay Prize.

Higher Degree Research students from the ADM+S were invited to submit a 2,000 word essay to challenge existing perspectives or suggest new directions of research in automated decision-making (ADM) in the field of news and media.

Carlon’s winning essay “Bots are more than human” argues that debates about the risks and benefits of bots having human-like qualities overlooks other creative and interesting possibilities that they could offer society. 

Dr James Meese (Co-leader of the News and Media Focus area at the ADM+S Centre, RMIT) said that the  essay was genuinely innovative.

“The essay will no doubt inspire academia and industry to think more deeply about how to best deploy bot technologies in the future” said Dr Meese.

Melanie Trezise (University of Sydney)  was awarded an honorable mention for her essay “‘If it bleeds, it leads’: What is human negativity bias teaching the machine?”. The essay explored how AI systems could potentially counteract negativity bias in the news.

Other submissions looked at ADM and the curation of news on Youtube, cognitive bias, and the dangers of newsworthiness criteria in journalism.

Essay submissions were judged according to originality and innovation; argument structure; and quality of analysis by the ADM+S HDR Essay Prize judging panel: Dr Ariadna Matamoros-Fernández, Dr James Meese, Dr Kylie Pappalardo and Professor Mark Sanderson, chaired by: Sally Storey.

The winner receives $2000 (AUD) and their essay has been published on the Automated Decision-Making and Society publication on Medium.com.

Read the winning essay, Bots as more than human on the ADM+S Medium publication and the ADM+S Website.

Listen to an interview with Dominique Carlon on the ADM+S Podcast: Bots as More Than Human? 

SEE ALSO

Dark ads public panel: Issues of online advertising and accountability

Dr Aimee Brownbill (FARE), Lucy Westerman (VicHealth), Kate Bower (CHOICE) and Erin Turner (CPRC).
Dr Aimee Brownbill (FARE), Lucy Westerman (VicHealth), Kate Bower (CHOICE) and Erin Turner (CPRC).

Dark ads public panel: Issues of online advertising and accountability

Author Kathy Nickels
Date 6 October 2022

The ADM+S Dark Ads public panel brought together government representatives, consumer rights organisations and researchers from the ARC Centre of Excellence for Automated Decision-Making and Society to discuss key issues in online advertising.

Panel experts discussed concerns about unregulated online advertising practices with examples of predatory advertising, price discrimination, and scam ads and how these practices impact vulnerable consumers. 

The panelists agreed that advertising is becoming harder than ever before to hold accountable and that there is an urgent need for better online advertising transparency and accountability.

Associate Professor Nicholas Carah (ADM+S, UQ) moderated the discussion on the key issues in online advertising.

“We can’t see the ads [that are being delivered online] and this is a concern as advertising plays such a fundamental role in shaping our public life” said Associate Professor Nicholas Carah

“And for some categories we have real questions and concerns for vulnerable consumers and harmful products that we need to be able to address collectively”. 

During the discussion, questions were raised on whether the expanding hyper-personalisation of online advertising still fits within the traditional definition of advertising. 

Panelists discussed future directions for increasing transparency and accountability including policy and regulation, journalistic practices, citizen science approaches and further research.

The discussion amongst this diverse group of panelists helped to raise concerns from different perspectives and highlighted the need for a multi-disciplinary approach to tackle these issues.

Panel 1: Key Issues in Online Advertising

Associate Professor Nicholas Carah (ADM+S, Monash University) moderated the discussion with Kate Bower (CHOICE), Dr Aimee Brownbill (FARE), Erin Turner (Consumer Policy Research Centre) and Lucy Westerman (VicHealth).

Panel 2: Accountability for Online Ads 

Professor Daniel Angus (ADM+S, QUT) moderated the discussion with Simon Elvery (ABC), Samuel Kinnonmonth (ACCAN – The Australian Communications Consumer Action Network), Lizzie O’Shea (Digital Rights Watch), Xue Ying Tan (Jane) (ADM+S, QUT), and Dr Verity Trott (ADM+S, Monash University)

The Hackathon was organised by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) in collaboration with ABC, VicHealth, Digital Rights Watch, ACCAN, CHOICE, CPRC, and FARE.

View panel 1 discussion on ADM+S YouTube 

View panel 2 discussion on ADM+S YouTube 

SEE ALSO

Doomscrolling is literally bad for your health. Here are 4 tips to help you stop

Becca Tapert/Unsplash

Doomscrolling is literally bad for your health. Here are 4 tips to help you stop

Authors Kate Mannell and James Meese
Date 9 September 2022

Doomscrolling can be a normal reaction to living through uncertain times. It’s natural to want to understand dramatic events unfolding around you and to seek out information when you’re afraid. But becoming absorbed in bad news for too long can be detrimental.

A newly published study has found that people with high levels of problematic news consumption are also more likely to have worse mental and physical health. So what can you do about it?

We spoke to Australians in the state of Victoria about their lengthy lockdown experiences and found how they managed to stop doomscrolling. Here are some tips to help you do the same.

Doomscrolling – unhelpful and harmful

“Doomscrolling” describes what happens when someone continues to consume negative news and information online, including on social media. There is increasing evidence that this kind of overconsumption of bad news may have negative impacts.

Research suggests doomscrolling during crises is unhelpful and even harmful. During the early COVID-19 pandemic, consuming a lot of news made people feel overwhelmed. One study found people who consumed more news about the pandemic were also more anxious about it.

Research into earlier crises, like 9/11 and the Boston Marathon bombings, also found that sustained exposure to news about catastrophes is linked to negative mental health outcomes.

Choosing to take control

During the peak of COVID-19 spread, many found themselves doomscrolling. There was lots of bad news and, for many people, lots more spare time. Several studies, including our own, have found that limiting news exposure helped people to cope.

Melbourne, the state capital of Victoria, experienced some of the longest-running lockdowns in the world. Wanting to know how Victorians were managing their news consumption during this time, we launched a survey and held interviews with people who limited news consumption for their own wellbeing.

We found that many people increased their news consumption when the lockdowns began. However, most of our participants gradually introduced strategies to curb their doomscrolling because they realised it was making them feel anxious or angry, and distracted from daily tasks.

Our research found these news-reduction strategies were highly beneficial. People reported feeling less stressed and found it easier to connect with others. Here are some of their strategies, which you might want to try.

1. Make a set time to check news

Rather than checking news periodically across the day, set aside a specific time and consider what time of day is going to have the most positive impacts for you.

One participant would check the news while waiting for her morning cup of tea to brew, as this set a time limit on her scrolling. Other participants preferred saving their news engagement for later in the day so that they could start their morning being settled and focused.

2. Avoid having news ‘pushed’ to you

Coming across news unexpectedly can lure you into a doomscrolling spiral. Several participants managed this by avoiding having news “pushed” to them, allowing them to engage on their own terms instead. Examples included unfollowing news-related accounts on social media or turning off push notifications for news and social media apps.

3. Add ‘friction’ to break the habit

If you find yourself consuming news in a mindless or habitual way, making it slightly harder to access news can give you an opportunity to pause and think.

One participant moved all her social media and news apps into a folder which she hid on the last page of her smartphone home screen. She told us this strategy helped her significantly reduce doomscrolling. Other participants deleted browser bookmarks that provided shortcuts to news sites, deleted news and social media apps from their phones, and stopped taking their phone into their bedroom at night.

4. Talk with others in your household

If you’re trying to manage your news consumption better, tell other people in your household so they can support you. Many of our participants found it hard to limit their consumption when other household members watched, listened to, or talked about a lot of news.

In the best cases, having a discussion helped people come to common agreements, even when one person found the news comforting and another found it upsetting. One couple in our study agreed that one of them would watch the midday news while the other went for a walk, but they’d watch the evening news together.

Staying informed is still important

Crucially, none of these practices involve avoiding news entirely. Staying informed is important, especially in crisis situations where you need to know how to keep safe. Our research shows there are ways of balancing the need to stay informed with the need to protect your wellbeing.

So if your news consumption has become problematic, or you’re in a crisis situation where negative news can become overwhelming, these strategies can help you strike that balance. This is going to remain an important challenge as we continue to navigate an unstable world.The Conversation

Kate Mannell, Research Fellow in Digital Childhoods , Deakin University and James Meese, Research Fellow, Technology, Communication and Policy Lab, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

How dark is ‘dark advertising’? We audited Facebook, Google and other platforms to find out

Ashkar Dave/Unsplash

How dark is ‘dark advertising’? We audited Facebook, Google and other platforms to find out

Authors Nicholas Carah, Aimee Brownbill, Amy Shields Dobson, Brady Robards, Daniel Angus, Kiah Hawker, Lauren Hayden and Xue Ying Tan
Date 7 September 2022

Once upon a time, most advertisements were public. If we wanted to see what advertisers were doing, we could easily find it – on TV, in newspapers and magazines, and on billboards around the city.

This meant governments, civil society and citizens could keep advertisers in check, especially when they advertised products that might be harmful – such as alcohol, tobacco, gambling, pharmaceuticals, financial services or unhealthy food.

However, the rise of online ads has led to a kind of “dark advertising”. Ads are often only visible to their intended targets, they disappear moments after they have been seen, and no one except the platforms knows how, when, where or why the ads appear.

In a new study conducted for the Foundation for Alcohol Research and Education (FARE), we audited the advertising transparency of seven major digital platforms. The results were grim: none of the platforms are transparent enough for the public to understand what advertising they publish, and how it is targeted.

Why does transparency matter?

Dark ads on digital platforms shape public life. They have been used to spread political falsehoods, target racial groups, and perpetuate gender bias.

Dark advertising on digital platforms is also a problem when it comes to addictive and harmful products such as alcohol, gambling and unhealthy food.

In a recent study with VicHealth, we found age-restricted products such as alcohol and gambling were targeted to people under the age of 18 on digital platforms. At present, however, there is no way to systematically monitor what kinds of alcohol and gambling advertisements children are seeing.

Advertisements are optimised to drive engagement, such as through clicks or purchases, and target people who are the most likely to engage. For example, people identified as high-volume alcohol consumers will likely receive more alcohol ads.

This optimisation can have extreme results. A study by the Foundation for Alcohol Research and Education (FARE) and Cancer Council WA found one user received 107 advertisements for alcohol products on Facebook and Instagram in a single hour on a Friday night in April 2020.

How transparent is advertising on digital platforms?

We evaluated the transparency of advertising on major digital platforms – Facebook, Instagram, Google search, YouTube, Twitter, Snapchat and TikTok – by asking the following nine questions:

  • is there a comprehensive and permanent archive of all the ads published on the platform?
  • can the archive be accessed using an application programming interface (API)?
  • is there a public searchable dashboard that is updated in real time?
  • are ads stored in the archive permanently?
  • can we access deleted advertisements?
  • can we download the ads for analysis?
  • are we able to see what types of users the ad targeted?
  • how much did it cost to run the advertisement?
  • can we tell how many people the advertisement reached?

All platforms included in our evaluation failed to meet basic transparency criteria, meaning advertising on the platform is not observable by civil society, researchers or regulators. For the most part, advertising can only be seen by its targets.

Notably, TikTok had no transparency measures at all to allow observation of advertising on the platform.

Advertising transparency on these major digital platforms leaves a lot to be desired. From ‘Advertisements on digital platforms: How transparent and observable are they?’, Author provided

Other platforms weren’t much better, with none offering a comprehensive or permanent advertising archive. This means that once an advertising campaign has ended, there is no way to observe what ads were disseminated.

Facebook and Instagram are the only platforms to publish a list of all currently active advertisements. However, most of these ads are deleted after the campaign becomes inactive and are no longer observable.

Platforms also fail to provide contextual information for advertisements, such as advertising spend and reach, or how advertisements are being targeted.

Without this information, it is difficult to understand who is being targeted with advertising on these platforms. For example, we can’t be sure companies selling harmful and addictive products aren’t targeting children or people recovering from addiction. Platforms and advertisers ask us to simply trust them.

We did find platforms are starting to provide some information on one narrowly defined category of advertising: “issues, elections or politics”. This shows there is no technical reason for keeping information about other kinds of advertising from the public. Rather, platforms are choosing to keep it secret.

Bringing advertising back into public view

When digital advertising can be systematically monitored, it will be possible to hold digital platforms and marketers accountable for their business practices.

Our assessment of advertising transparency on digital platforms demonstrates that they are not currently observable or accountable to the public. Consumers, civil society, regulators and even advertisers all have a stake in ensuring a stronger public understanding of how the dark advertising models of digital platforms operate.

The limited steps platforms have taken to create public archives, particularly in the case of political advertising, demonstrate that change is possible. And the detailed dashboards about ad performance they offer advertisers illustrate there are no technical barriers to accountability.The Conversation

Nicholas Carah, Associate Professor in Digital Media, The University of Queensland; Aimee Brownbill, Honorary Fellow, Public Health, The University of Queensland; Amy Shields Dobson, Lecturer in Digital and Social Media, Curtin University; Brady Robards, Associate Professor in Sociology, Monash University; Daniel Angus, Professor of Digital Communication, Queensland University of Technology; Kiah Hawker, Assistant researcher, Digital Media, The University of Queensland; Lauren Hayden, PhD Candidate and Research Assistant, The University of Queensland, and Xue Ying Tan, Software Engineer, Digital Media Research Centre, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO