Project to counter misinformation receives Meta Foundational Integrity Research funding


Author Kathy Nickels
Date 30 March 2023

ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) researcher Dr Silvia Montaña-Niño and her colleagues have been awarded funding through Meta’s Foundational Integrity Research program for a comparative study that seeks to counter misinformation in the Southern Hemisphere.

Meta’s Foundational Integrity Research request for proposals (RFP) was launched in September 2022 and attracted 503 proposals from 349 universities and institutions around the world. 

A total of US$1,000,000 in funding was awarded to research that would enrich the understanding of challenges related to integrity issues on social media and social technology platforms.

The project, Countering misinformation in the Southern Hemisphere: A comparative study, led by Dr Michelle Riedlinger (QUT) with colleagues Dr Silvia Montaña-Niño (QUT), Dr Marina Joubert (Stellenbosch University) and Assoc Prof Víctor García-Perdomo (Universidad de La Sabana), was one of 11 projects to receive the funding.

Dr Michelle Riedlinger from the School of Communication at QUT is leading the project.

“We have an amazing team of researchers from Australia, Latin America and Africa involved in this project and we’re keen to get started,” says Dr Riedlinger.

The project will investigate what fact checkers are doing in regions outside of North America and Europe.

Dr Silvia Montaña-Niño, research fellow at the ADM+S Centre at QUT, says “We’ve done some initial work and found that fact checkers are packaging their content into reusable ‘checktainment’ explainer formats using video, memes, and infographics to engage local social media users. We’re keen to explore the regional differences a bit more.”

Through the research funding, Meta aims to support the growth of scientific knowledge and contribute to a shared understanding across the broader scientific community and technology industry on how social technology companies can better address integrity issues on their platforms. 

“We are excited to grant these awards to cultivate new knowledge on integrity and establish deeper connections with global social science researchers,” says Umer Farooq, Director of Research for Integrity at Meta.


Edward Small selected for research program at the University of Bristol

Author Kathy Nickels
Date 27 March 2023

Edward Small, a higher degree research student at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at RMIT University, has been selected to undertake a four-month research program with the Machine Learning and Computer Vision (MaVi) group at the University of Bristol.

Applicants for this program are selected based on their academic excellence, previous experience and references.

Edward will receive supervision and support from Associate Professor Raul Santos-Rodriguez to develop explainable artificial intelligence (XAI) tools in collaboration with Bristol General Hospital.

Edward says he is incredibly excited to work with the University of Bristol and Associate Professor Santos-Rodriguez.

“Being a top 10 UK institution, and part of the Russell Group, Bristol has a strong track record in AI research that I hope to contribute to, and Raul is a leading researcher in human-centric machine learning and explainability,” he said.

“I expect I will learn a lot, and I hope to come back to Australia to apply this new knowledge in innovative ways. I am very lucky to be a part of a centre like ADM+S, without whom an opportunity like this would be impossible to take up.”

At the ADM+S, Edward researches fairness, explainability, and transparency in automated decision-making with supervisors Prof Flora Salim, Dr Jeffrey Chan and Dr Kacper Sokol.

His research examines the robustness and stability of current fairness strategies, and looks to resolve the mathematical conflict between group fairness and individual fairness. Edward’s work also looks at the scalability of automated explanations for machine learning models and questions whether explainable artificial intelligence induces fairness and utility or reduces it.

Edward will receive support from the ADM+S and the University of Bristol to undertake this research program.

Dr Kacper Sokol visits Università della Svizzera italiana to deliver new course on machine learning explainability

Author Kathy Nickels
Date 27 March 2023

Research Fellow Dr Kacper Sokol from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), RMIT University has recently visited Università della Svizzera italiana (USI) in Lugano, Switzerland to deliver training on machine learning explainability.

The training was developed to bridge the gap between the theoretical and practical aspects of explainability and interpretability of predictive models based on artificial intelligence and machine learning algorithms, and builds upon Dr Sokol’s research in this area. 

Dr Sokol says that the course differs from others that commonly take an abstract approach. 

“It takes an adversarial perspective and breaks these techniques up into core functional blocks, studies their role and configuration, and reassembles them to create bespoke explainers with well-understood properties, thus making them suitable for the problem at hand,” he said.

The course was offered along with other training opportunities available to postgraduate students from the informatics department at USI. 

“Given its good reception and high modularity of the teaching materials, it will be adapted to support a variety of future training sessions,” said Dr Sokol.

The course resources are available online at Machine Learning Explainability: Exploring Automated Decision-Making Through Transparent Modelling and Peeking Inside Black Boxes.

This training is the most recent output stemming from Dr Sokol’s ongoing collaboration with Professor Marc Langheinrich and his Ubiquitous Computing Research Group at USI. Together they work on advancing explainability and interpretability of machine learning models. They recently presented BayCon: Model-agnostic Bayesian Counterfactual Generator at the 31st International Joint Conference on Artificial Intelligence 2022 (IJCAI-22) in Vienna, Austria.


The Australian Ad Observatory uncovering the hidden world of targeted advertising

Author Kathy Nickels
Date 23 March 2023

Millions of Australians are exposed to online advertising every day as they use social media and browse the internet. Advertisers on these platforms target audiences using a mix of data and profile information gathered from our activities online, but there is little publicly available knowledge about who is being targeted by which advertisers.

The Australian Ad Observatory project conducted at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is working to understand the hidden world of advertising by asking volunteers to donate their Facebook ads.

Professor Daniel Angus, one of the Chief Investigators on the project, says the problem with online advertising is that because it is hidden from public view, it may break the rules put in place to prevent consumer harm without being noticed.

“We are seeing ads that have been able to slip through the net because humans aren’t involved in making judgements,” he says. 

“The concern there is that if these ads can slip through the net, what other forms of advertising are also making their way through that may perhaps be in violation of existing codes and practices?”

Over the past year more than 2,000 volunteers have donated their ads to the Australian Ad Observatory. 

This research benefits our understanding of platform-based advertising and is enabling independent research into the role that algorithmically targeted advertising plays in society.  

Online Casinos (ABC)

The ABC recently partnered with the Australian Ad Observatory to find gambling ads that were illegally targeting Australians on Facebook. This report asks who should be responsible for monitoring illegal online advertising and whether advertising rules can be better enforced by the Australian Communications and Media Authority (ACMA).

Read more:  Online casinos based offshore are illegally targeting Australians on Facebook. Who is responsible?

The issue of gambling advertising was raised in parliament this week by Senator David Pocock, who asked whether the government was aware that Australians are being exposed, on their social media feeds, to illegal advertisements from online casinos.

Senator Watt, currently representing the Minister for Communications, said “Australians are concerned about the growing proliferation of gambling advertising on online platforms. There are of course particular concerns when it comes to the risk around those advertisements being accessed by children.”

“There are additional concerns about the risk of online gambling advertisements to the adult population as well.”

“The government does recognise there is ongoing community concern about harms associated with online gambling, and that’s exactly why we have established an inquiry into online gambling and its impacts on those experiencing gambling harm.”

“Greenwashing” Advertising (CPRC)

Through the Australian Ad Observatory, the Consumer Policy Research Centre (CPRC) has uncovered online advertisements that use vague and misleading environmental and sustainability claims in their messaging to consumers.
Findings from this research will be used to inform regulators and policy makers about addressing unsubstantiated green claims.

Read more: Research investigates “greenwashing” advertising on social media

Alcohol Advertising (FARE)

The Ad Observatory project will be working with the Foundation for Alcohol Research & Education (FARE) to provide further analysis of the content of alcohol advertisements on social media.

A recent report released by FARE revealed that 39,820 distinct alcohol ads were placed on Facebook and Instagram last year, often combined with a button prompting users to “shop now”.

Through a search of Meta’s ad library, FARE found that big brands placed an average of 765 alcohol ads each week on the Meta platforms.

The report, Alcohol advertising on social media: a 1-year snapshot, found that alcohol advertising on Instagram and Facebook is intrinsically linked to the online sale and delivery of alcohol directly into the home.

Meta’s ad library enabled insight into the amount and type of content being distributed by alcohol advertisers on Meta platforms; however, it failed to provide information on the targeting, spend and reach of advertisements (except for political advertisements).

By partnering with the Australian Ad Observatory, FARE will further its investigation into alcohol advertising and develop a more holistic understanding of alcohol marketing on these platforms, including how often people are exposed to these advertisements and the ways in which they are being targeted.

Read more: Alcohol companies ply community with 40,000 alcohol advertisements a year on Facebook and Instagram

Alongside the work with the ABC, CPRC and FARE, the Australian Ad Observatory project will be using the ad collection to investigate consumer finance advertising and the advertising of unhealthy foods.

The Australian Ad Observatory has already collected over 700,000 advertisements from 2,000 volunteers, but is still looking for more people to sign up. A large pool of diverse participants of different ages, backgrounds and from different parts of Australia will help us better understand how particular groups in society are being targeted with particular kinds of ads.

To find out more and join the project, visit The Australian Ad Observatory.

Visual mis/disinformation in journalism and public communications article wins top paper award

Author Kathy Nickels
Date 20 March 2023

ADM+S researcher Prof Dan Angus is co-author on the paper Visual mis/disinformation in journalism and public communications: Current verification practices, challenges, and future opportunities, which has been voted the top paper published in the Q1 journal Journalism Practice in 2022-23.

The paper, led by Dr TJ Thomson at QUT’s Digital Media Research Centre and co-authored by Prof Daniel Angus, A/Prof Paula Dootson, Dr Edward Hurcombe, and Mr Adam Smith, has accrued more than 11,000 views since being published, making it the 14th most-read article in the journal of all time.

The study provides a state-of-the-art review of current journalistic image verification practices, examines a number of existing and emerging image verification technologies that could be deployed or adapted to aid in this endeavour, and identifies the strengths and limitations of the most promising extant technical approaches. 

Independent peer reviewers note this work provides “a framework for understanding the current and future considerations of visual media verification,” “provides an excellent understanding of visual disinformation” and makes “a strong contribution to the field.”

 The QUT team’s paper will compete against two other papers, the top papers published in Journalism Studies and Digital Journalism over the same timeframe, for the 2022 Bob Franklin Journal Article Award, which seeks to recognise the article that best contributes to our understanding of connections between culture and society and journalism practices, journalism studies and/or digital media/new technologies.

Links to all of the other short- and long-listed papers can be found here.

Republished with permission from the QUT Digital Media Research Centre.

Read the original article QUT team wins top paper honour


Research investigates “greenwashing” advertising on social media

Author Kathy Nickels
Date 8 March 2023

Researchers from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) are uncovering vague and misleading green advertising on social media, with the help of the Australian consumers who are being targeted.

So far researchers have observed that many advertisers, especially those in the clothing and footwear, personal care, and food and food packaging industries, market themselves with green claims.

Many of these claims are vague and unsubstantiated, and have the potential to mislead consumers.

Professor Christine Parker, Chief Investigator at the ADM+S Centre, says the practice of making misleading claims about a product’s environmental sustainability, known as “greenwashing”, is likely to be on the rise.

Increased consumer demand for more sustainable products, increased understanding of the need for business to take action on the climate crisis, and the need to shift to a circular economy are likely to be driving green claims.

“Some advertisers are using vague wording alongside green imagery to give an impression of environmental action – but with no clear information and substantiation of exactly what the company is doing to achieve its environmental and climate promises or how the product is contributing to a circular economy,” says Professor Parker.

In a recent audit, the Australian Competition and Consumer Commission (ACCC) found that more than half of organisations advertising online made concerning claims about their environmental or sustainability practices.

The Consumer Policy Research Centre (CPRC) found similar results in a 24-hour sweep of online advertising conducted last year. The CPRC also found that many consumers believe some authority is checking green claims before they are made – which is not in fact the case.

“Conscientious consumers may well be targeted with a whole string of green ads that make them feel like business is doing the right thing and we are on a good environmental path.

“But this might be a completely misleading impression. Many of these claims may not be substantiated.”

In collaboration with the Consumer Policy Research Centre (CPRC), the ADM+S Centre is investigating whether Facebook users are seeing ads that are misleading, harmful or unlawful.

This research is conducted through the Centre’s Australian Ad Observatory, a project that relies on citizen scientists to share the ads that they see on Facebook.

“This approach is important because it gives us a way to see how Facebook advertising is targeted to individual users – a practice that is normally hidden from public view and regulatory scrutiny,” says Professor Parker.

The recent ACCC report investigated green claims made in publicly visible online advertising, while research by the ADM+S Centre will help uncover advertising usually hidden from public scrutiny.

Professor Parker says “it is possible that advertisers could engage in less responsible advertising practices on social media where they are less likely to face regulatory scrutiny.”

Researchers are investigating how frequently consumers are targeted with green advertising, and how misleading these claims are. Findings from this research will be used to inform regulators and policy makers about addressing unsubstantiated green claims.

Australians are invited to join this research project by visiting The Australian Ad Observatory website.

The ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) is funded by the Australian Government through the Australian Research Council.

View the original media release


How should we respond to ChatGPT?

Author Kathy Nickels
Date 28 February 2023

ChatGPT is a controversial new language assistant powered by AI. It can write essays, do coding and even structure complex research briefs, all in a matter of seconds.

Launched in late November 2022, it now has more than 100 million users, according to estimates.

This new tool, developed by US company OpenAI, is causing concern amongst schools and universities, with fears that students will use the program to write their assignments.

ChatGPT is likely to change the way that students are assessed and force us to rethink what it means to be genuinely creative. 

ADM+S researcher Dr Aaron Snoswell spoke to Antony Funnell on a recent episode of ABC Radio National’s Future Tense about ChatGPT.

Dr Snoswell suggests that safeguards and responses to the technology need to be wide-ranging, involving government bodies, experts in the AI industry, system users and the media.

“Government bodies have a role to play in terms of coming up with regulations, policies, and best practices,” says Dr Snoswell.

He said that organisations and individual experts in the AI industry are key stakeholders here.

“[They] need to take the ethical dimensions and implications of their work much more seriously than it’s currently done.”

He also says that it’s important that people who are going to interact with the systems should understand how they work. 

“Teaching students about how to safely and responsibly use these tools, I think, is a really important thing as well.”

And finally, Dr Snoswell says “news and media organisations need to do their part as well by reporting on this type of technology with a large grain of salt and not catastrophising, or overhyping.”

Listen to the full discussion on ABC Radio National’s Future Tense: Chat GPT – the hype, the limitations and the potential.

Broadcast Sunday 26 February 2023, 11:30am


Meta targets content creators with new blue tick verification bundle

Author Kathy Nickels
Date 24 February 2023

Meta, the parent company of Facebook and Instagram, has announced it will be testing a new paid verification subscription that lets users pay to prove they are real.

The new offering, called Meta Verified, assigns users a blue verification badge on their profile in exchange for AUD$20 a month.

Professor of Digital Media and Associate Director of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), Jean Burgess, spoke to Alex Easton on ABC Southern Queensland Radio about this latest move.

It seems from Meta’s press release that the paid verification is primarily aimed at influencers and content creators, says Prof Burgess.

The paid subscription bundles the blue tick verified badge with other premium features including increased visibility and reach, proactive account monitoring for impersonators, and access to a real person for help with common account issues.

Prof Burgess says the move is also about competing with TikTok as a platform for the creator economy.

“I think this is a move to try to get the content creators that provide the value to [platforms like] TikTok, Instagram, Facebook and Twitter to be more invested in signing up to Meta as a ‘safer place’.”

In comparison to Twitter’s blue tick badge, “Meta’s verification process would absolutely have more robust systems,” says Prof Burgess.

Part of this verification process includes submitting a government-issued ID that matches the name on your profile and profile photo.

“This raises other questions about how much we want to be trusting Meta with our personal information,” she says.

Listen to the full discussion on ABC Southern Queensland Radio from 2:18:00
This episode was broadcast Wed 22 Feb 2023 at 3:00pm


Prof Deborah Lupton appointed Honorary Doctor at the University of Skövde

Author Kathy Nickels
Date 22 February 2023

The University of Skövde has appointed Prof Deborah Lupton as its first Honorary Doctor in the field of Health in the Digital Society.

“It is an amazing feeling to be so honored, particularly as I already have strong connections to and collaborations with colleagues in Sweden and the other Nordic countries. I have always felt very welcome and appreciated in these countries, with lots to talk about in terms of shared interests. This Honorary Doctorate means that I will always have a special relationship with the University of Skövde,” says Professor Lupton.

Prof Lupton is a Chief Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society, where she leads the University of New South Wales node and the Health focus area, and co-leads the People program.

With a background in sociology, as well as media and cultural studies, Prof Lupton combines qualitative and innovative social research methods with sociocultural theory. Her research focuses on the use of new digital media in medicine and public health. She studies how those media are the focus of increasing interest in society, how they can have unexpected effects for patients and healthcare professionals, and how they can influence how society works with digital technologies in public health and healthcare.

The honorary doctorate nomination recognises Prof Lupton’s work in digital health as a source of inspiration for the School of Health Sciences at the University of Skövde. The nomination states that her focus on interdisciplinary perspectives has been important for the development of the field of Digital Health at the University and that her work has inspired aspects of the Master’s Program “Public Health Science: Digital Health and Communication” at the University.

“It is wonderful to see the development of the new program in digital health research at the University of Skövde and to be made aware that my research has contributed to this exciting initiative,” says Professor Lupton.

Her work and thoughts have also contributed to the University’s research in the field.

“Professor Lupton’s work has been important for the University’s development. During the autumn, the University received degree-awarding powers on a doctorate level in the field of Health in the Digital Society, which is the University’s second degree-awarding powers on a doctorate level. Researchers like Professor Lupton are a great source of inspiration for this broad field and its applications,” says Alexandra Krettek, Professor of Public Health Sciences and Dean at the University of Skövde.

Twitter data appears to support claims new algorithm inflated reach of Elon Musk’s tweets

Author Kathy Nickels
Date 21 February 2023

Data collected via Twitter’s API by a researcher at the Queensland University of Technology node of the ARC Centre of Excellence for Automated Decision-Making & Society (ADM+S) appears to support media claims that the reach of tweets by the platform’s billionaire owner, Elon Musk, has been artificially inflated.

Last week, the tech news site Platformer reported 80 Twitter engineers had been engaged to tweak the platform’s algorithm after Musk noticed a tweet from the US president, Joe Biden, about the Super Bowl outperformed his own, despite Musk having more than three times the number of followers.

The report claimed engineers deployed a new algorithm to artificially inflate the reach of Musk’s tweets by a factor of 1,000, ensuring that more than 90% of Musk’s 128.9 million followers would see them. The change reportedly also ensured users who don’t personally follow Musk would see his tweets in their “for you” tab.

Assoc Prof Timothy Graham, Associate Investigator at the Queensland University of Technology node of the ARC Centre of Excellence for Automated Decision-Making & Society (ADM+S), said data he extracted from Twitter using its application programming interface appeared to support much of this reporting.

The graphs produced by Assoc Prof Graham show that in the hours when the algorithm change was reported to have occurred, Musk’s impressions went up 737%, and his daily impressions close to tripled.

Graham, who typically researches bot behaviour and other trends on social media, says he was able to track Musk’s tweet data via access to Twitter’s API, which he can currently access for free.
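
The Guardian report does not describe Graham’s collection pipeline, but the kind of data involved can be illustrated with a short, hypothetical sketch against Twitter’s v2 API: it resolves a username to an ID and pulls recent tweets with their public metrics, which at the time included an impression count. The bearer token (read here from a TWITTER_BEARER_TOKEN environment variable), access tier and field availability are assumptions, and free access of this kind has since been restricted, so treat it purely as an illustration.

```python
"""Illustrative sketch only: collect recent tweets and their public metrics
(including impression counts) for one account via the Twitter v2 API.
Assumes a bearer token in TWITTER_BEARER_TOKEN and an access tier that
exposes these endpoints and fields."""
import os

import requests

API = "https://api.twitter.com/2"
HEADERS = {"Authorization": f"Bearer {os.environ['TWITTER_BEARER_TOKEN']}"}


def recent_tweet_metrics(username: str, max_results: int = 100) -> list[dict]:
    """Return the user's recent tweets with created_at and public_metrics."""
    # Resolve the @handle to a numeric user ID.
    user = requests.get(f"{API}/users/by/username/{username}", headers=HEADERS)
    user.raise_for_status()
    user_id = user.json()["data"]["id"]

    # Fetch the user's recent tweets, requesting public engagement metrics
    # (likes, retweets, replies, quotes and, where exposed, impression_count).
    resp = requests.get(
        f"{API}/users/{user_id}/tweets",
        headers=HEADERS,
        params={"max_results": max_results,
                "tweet.fields": "created_at,public_metrics"},
    )
    resp.raise_for_status()
    return resp.json().get("data", [])


if __name__ == "__main__":
    for tweet in recent_tweet_metrics("elonmusk", max_results=10):
        metrics = tweet["public_metrics"]
        print(tweet["created_at"], metrics.get("impression_count"), metrics["like_count"])
```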

Twitter has announced it will cut off free access to this service – including for researchers. Instead, it will charge a minimum of US$100 a month for access.

“The Twitter API may shut down any moment – if this is the last data I ever collect it’ll totally be worth it,” Graham tweeted last week.

Read the full story published in The Guardian

Facebook and Instagram to trial paid verification in Australia as Twitter charges for two-factor SMS authentication

Author Kathy Nickels
Date 21 February 2023

Facebook and Instagram parent company Meta is introducing a paid subscription for users to verify their accounts with a blue tick. 

Meta Platforms has announced it will be testing the monthly subscription service, called Meta Verified, in Australia and New Zealand from this week.

The company says the service will increase the visibility of users’ posts and provide extra protection against impersonation. The move comes after Elon Musk, the owner of Twitter, implemented the premium Twitter Blue subscription back in November. 

Professor of Digital Communication and Chief Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society at QUT, Daniel Angus, said he doubts that paid verification will make any difference in curbing the spread of mis- and disinformation on the platform.

“We’ve been tracking the spread of this disinformation for many years now, often of a very personal nature … This move will do nothing to actually curb the spread of that,” he said.

“It’s profitable for the platform to maintain pages and groups which spread disinformation. Verifying profiles and extracting more rent from users for doing so is not going to do anything to curb that spread, in fact it may make things worse.”


He said the decision to introduce this subscription is extortionate, as it asks users to pay for something that should be an ordinary function of a social media service.

Separately, Twitter announced on Friday it would provide SMS-based two-factor authentication only to users who are subscribed to the US$8-a-month (A$11.65) Twitter Blue service from 20 March.

Prof Angus says that the removal of the SMS-based two-factor authentication will make it far easier for accounts with weak passwords to be hacked. 

“The fact remains that you can’t extort users around basic security features. [Providing security to your users] is something that’s part and parcel of running a successful social media operation,” said Prof Angus.

“The fact that they’re asking for payment for [these features] shows that they’re out of ideas and we are very much in the late stage of these platforms losing their power.”

Prof Daniel Angus spoke with Dr Belinda Barnet, Senior Lecturer in Media and Communications at Swinburne University, Prof Tama Leaver, Professor of Internet Studies and Chief Investigator in the ARC Centre of Excellence for the Digital Child at Curtin University, and reporter Scott Wales on ABC Radio National.

Listen to the interview on ABC News

Prof Daniel Angus with reporter Scott Wales, ABC News, Melbourne.

Read the full transcript here 


3 in 4 people experience abuse on dating apps. How do we balance prevention with policing?

Authors Kath Albury and Daniel Reeders
Date 30 January 2023

A 2022 survey by the Australian Institute of Criminology found three in four app users surveyed had experienced online abuse or harassment when using dating apps. This included image-based abuse and abusive and threatening messages. A further third experienced in-person or off-app abuse from people they met on apps.

These figures set the scene for a national roundtable convened on Wednesday by Communications Minister Michelle Rowland and Social Services Minister Amanda Rishworth.

Experiences of abuse on apps are strongly gendered and reflect preexisting patterns of marginalisation. Those targeted are typically women and members of LGBTIQA+ communities, while perpetrators are commonly men. People with disabilities, Aboriginal and Torres Strait Islander people, and people from migrant backgrounds report being directly targeted based on their perceived differences.

What do these patterns tell us? That abuse on apps isn’t new or specific to digital technologies. It reflects longstanding trends in offline behaviour. Perpetrators simply exploit the possibilities dating apps offer. With this in mind, how might we begin to solve the problem of abuse on dating apps?

Trying to find solutions

Survivors of app-related abuse and violence say apps have been slow to respond, and have failed to offer meaningful responses. In the past, users have reported abusive behaviours, only to be met with a chatbot. Also, blocking or reporting an abusive user doesn’t automatically reduce in-app violence. It just leaves the abuser free to abuse another person.

Wednesday’s roundtable considered how app-makers can work better with law enforcement agencies to respond to serious and persistent offenders. Although no formal outcomes have been announced, it has been suggested that app users should provide 100 points of identification to verify their profiles.

But this proposal raises privacy concerns. It would create a database of the real-world identities of people in marginalised groups, including LGBTIQA+ communities. If these data were leaked, it could cause untold harm.

Prevention is key

Moreover, even if the profile verification process was bolstered, regulators could still only respond to the most serious cases of harm, and after abuse has already occurred. That’s why prevention is vital when it comes to abuse on dating apps. And this is where research into everyday patterns and understanding of app use adds value.

Often, abuse and harassment are fuelled by stereotypical beliefs about men having a “right” to sexual attention. They also play on widely held assumptions that women, queer people and other marginalised groups do not deserve equal levels of respect and care in all their sexual encounters and relationships – from lifelong partnerships to casual hookups.

In response, app-makers have engaged in PSA-style campaigns seeking to change the culture among their users. For example, Grindr has a long-running “Kindr” campaign that targets sexual racism and fatphobic abuse among the gay, bisexual and trans folk who use the platform.

Match Group is one of the largest dating app companies. It owns Tinder, Match.com, Meetic, OkCupid, Hinge and PlentyOfFish, among others.

Other apps have sought to build safety for women into the app itself. For instance, on Bumble only women are allowed to initiate a chat in a bid to prevent unwanted contact by men. Tinder also recently made its “Report” button more visible, and provided users safety advice in collaboration with WESNET.

Similarly, the Alannah & Madeline Foundation’s eSafety-funded “Crushed But Okay” intervention offers young men advice about responding to online rejection without becoming abusive. This content has been viewed and shared more than one million times on TikTok and Instagram.

In our research, app users told us they want education and guidance for antisocial users – not just policing. This could be achieved by apps collaborating with community support services, and advocating for a culture that challenges prevailing gender stereotypes.

Policy levers for change

Apps are widely used because they promote opportunities for conversation, personal connection and intimacy. But they are a for-profit enterprise, produced by multinational corporations that generate income by serving advertising and monetising users’ data.

Taking swift and effective action against app-based abuse is part of their social license to operate. We should consider stiff penalties for app-makers who violate that license.

The United Kingdom is just about to pass legislation that contemplates time in prison for social media executives who knowingly expose children to harmful content. Similar penalties that make a dent in app-makers’ bottom line may present more of an incentive to act.

In the age of widespread data breaches, app users already have good reason to mistrust demands to supply their personal identifying information. They will not necessarily feel safer if they are required to provide more data.

Our research indicates users want transparent, accountable and timely responses from app-makers when they report conduct that makes them feel unsafe or unwelcome. They want more than chatbot-style responses to reports of abusive conduct. At a platform policy level, this could be addressed by hiring more local staff who offer transparent, timely responses to complaints and concerns.

And while prevention is key, policing can still be an important part of the picture, particularly when abusive behaviour occurs after users have taken their conversation off the app itself. App-makers need to be responsive to police requests for access to data when this occurs. Many apps, including Tinder, already have clear policies regarding cooperation with law enforcement agencies.

Kath Albury, Professor of Media and Communication and Associate Investigator, ARC Centre of Excellence for Automated Decision-Making + Society, Swinburne University of Technology and Daniel Reeders, PhD Candidate, ANU School of Regulation and Global Governance (RegNet), Australian National University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Elon’s Twitter ripe for a misinformation avalanche

Author Daniel Angus
Date 17 January 2023

Seeing might not be believing as digital technologies make the fight against misinformation even trickier for embattled social media giants, writes Daniel Angus

In a grainy video, Ukrainian President Volodymyr Zelensky appears to tell his people to lay down their arms and surrender to Russia. The video – quickly debunked by Zelensky – was a deep fake, a digital imitation generated by artificial intelligence (AI) to mimic his voice and facial expressions.

High-profile forgeries like this are just the tip of what is likely to be a far bigger iceberg. There is a digital deception arms race underway, in which AI models are being created that can effectively deceive online audiences, while others are being developed to detect the potentially misleading or deceptive content generated by these same models. With the growing concern regarding AI text plagiarism, one model, Grover, is designed to discern news texts written by a human from articles generated by AI.

As online trickery and misinformation surge, the armour that platforms built against them is being stripped away. Since Elon Musk’s takeover of Twitter, he has trashed its online safety division and, as a result, misinformation is back on the rise.

Musk, like others, looks to technological fixes to solve his problems. He’s already signalled a plan for upping use of AI for Twitter’s content moderation. But this isn’t sustainable nor scalable, and is unlikely to be the silver bullet. Microsoft researcher Tarleton Gillespie suggests: “Automated tools are best used to identify the bulk of the cases, leaving the less obvious or more controversial identifications to human reviewers.”

Some human intervention remains in the automated decision-making systems embraced by news platforms but what shows up in newsfeeds is largely driven by algorithms. Similar tools act as important moderation methods to block inappropriate or illegal content.

The key problem remains that technology ‘fixes’ aren’t perfect and mistakes have consequences. Algorithms sometimes can’t catch harmful content fast enough and can be manipulated into amplifying misinformation. Sometimes an overzealous algorithm can also take down legitimate speech.

Beyond its fallibility, there are core questions about whether these algorithms help or hurt society. The technology can better engage people by tailoring news to align with readers’ interests. But to do so, algorithms feed off a trove of personal data, often accrued without a user’s full understanding.

There’s a need to know the nuts and bolts of how an algorithm works – that is opening the ‘black box’.

But, in many cases, knowing what’s inside an algorithmic system would still leave us wanting, particularly without knowing what data and user behaviours and cultures sustain these massive systems.

One way researchers may be able to understand automated systems better is by observing them from the perspective of users, an idea put forward by scholars Bernhard Rieder, from the University of Amsterdam, and Jeanette Hofmann, from the Berlin Social Science Centre.

Australian researchers also have taken up the call, enrolling citizen scientists to donate algorithmically personalised web content and examine how algorithms shape internet searches and how they target advertising. Early results suggest the personalisation of Google Web Search is less profound than we may expect, adding more evidence to debunk the ‘filter bubble’ myth, that we exist in highly personalised content communities. Instead it may be that search personalisation is more due to how people construct their online search queries.

Last year, several AI-powered language and media generation models entered the mainstream. Trained on hundreds of millions of data points (such as images and sentences), these ‘foundational’ AI models can be adapted to specific tasks. For instance, DALL-E 2 is a tool trained on millions of labelled images, linking images to their text captions.

This model is significantly larger and more sophisticated than previous models for the purpose of automatic image labelling, but also allows adaptation to tasks like automatic image caption generation and even synthesising new images from text prompts. These models have seen a wave of creative apps and uses spring up, but concerns around artist copyright and their environmental footprint remain.

The ability to create seemingly realistic images or text at scale has also prompted concern from misinformation scholars – these replications can be convincing, especially as technology advances and more data is fed into the machine. Platforms need to be intelligent and nuanced in their approach to these increasingly powerful tools if they want to avoid furthering the AI-fuelled digital deception arms race.

Daniel Angus is professor of digital communication in the School of Communication, and leader of the Computational Communication and Culture program in QUT’s Digital Media Research Centre.

Originally published under Creative Commons by 360info™.


Op-ed: Why your smart TV might not last as long as you’d hope

Authors Alexa Scarlata and Ramon Lobato
Date 11 January 2023

TVs don’t just break down anymore. New problems include apps becoming obsolete and streamers cutting off support for your operating system.

A TV used to be a long-term investment – something you bought knowing it would see you through the next 10 or even 15 years.

Before TVs were ‘smart’, their main function was to decode signals from broadcast television and from connected devices like DVD players and game consoles. If your TV suddenly stopped working, major hardware faults would hopefully be covered under your manufacturer warranty and statutory guarantees under consumer law.

But smart TVs are different because of the complexity of their inbuilt software. When you’re buying a new TV today you need to think not just about the quality of the hardware, but about the lifespan of its software.

This is because your TV’s functionality will change over time. Apps and platforms may not work as well, may refuse to open – or they may disappear altogether.

In this article we explain what you should keep in mind when you buy a smart TV, and what you can do if the functionality of your TV is compromised over time.

What do you get with a smart TV?

Almost all TVs sold in Australia today are smart TVs. This means they can connect to the internet and deliver streaming content via apps.

What many people don’t realise is that when you buy a smart TV you’re locked into using its specific operating system, such as Samsung’s Tizen or LG’s WebOS – just like you’re locked into iOS if you buy an Apple phone or Android if you buy a Samsung.

Each TV operating system works differently, and puts its own spin on interface design, menus, and navigation.

The operating system also determines the content you can access on your smart TV because it controls the app store. So you need to check before purchase that your preferred TV can run all the apps you might need – not just the big ones like Netflix and YouTube (which are preinstalled on most smart TVs), but also local apps from ABC, SBS, 7, 9 and 10, and any specific movie, sports or gaming apps that you might like to use.

Also, while we all want a smart TV that will work consistently over time, you need to brace yourself for the fact that most apps will eventually stop working on your TV in the years ahead. Even top-of-the-line TVs that cost tens of thousands of dollars are subject to this dreaded phenomenon of ‘app obsolescence’.

The inevitable obsolescence of apps

There are several reasons why apps can become obsolete.

First, the process of developing and maintaining an app for multiple operating systems and smart TV models is expensive and resource-intensive. Streaming services such as Netflix and iView can see what smart TVs people are using and prioritise particular brands and models accordingly.

In some cases, streaming services will not bother updating apps designed for older TVs, and will instead focus their efforts on newer TVs that are likely to run more smoothly and reach more viewers. As such, they may stop supporting older-model TV apps, or they may remove their apps from particular platforms.

This happened in 2019, when Netflix announced that its app would no longer be supported on some Samsung and Panasonic smart TVs purchased in the early 2010s. These devices had “technical limitations” that did not support Netflix’s new digital rights management protocols.

Recently SBS made the “tough call” to remove SBS On Demand from Sony Linux televisions. The broadcaster claimed that these TVs no longer had the memory or processing power to support the best experience (that is, the enhanced features or improved ad experience) of SBS On Demand.

There are other reasons why apps can disappear from a smart TV or suddenly stop working. In the US, there have been some instances of platform blocking where apps have disappeared from smart TVs because of commercial disputes between apps and smart TV platform operators.

Additionally, your choice of TV can also affect your access to future apps that haven’t been released yet. Even if you have a very new and expensive TV, don’t expect that you’ll have immediate access to the latest apps that might arrive next year or sometime in the future.

Availability of new apps can be very uneven, because apps may prioritise launching on the largest smart TV platforms, such as Samsung’s Tizen and LG’s webOS, neglecting the smaller platforms – or the app might get held up in administrative red tape.

For example, when Disney+ launched in Australia in 2019, it was immediately available on TVs made by Samsung and LG, but would not run on Hisense’s proprietary VIDAA U operating system. Hisense users had to wait until late 2021 before they could access the official Disney+ app on their sets.

Similarly, while Kayo was available on Samsung TVs soon after launch, LG owners had to wait until late 2021 to access the app.

How can I extend the life of my smart TV?

Unfortunately, you have little control over what apps your TV supports or abandons over time, but that doesn’t mean you have no options.

First, you can opt for a TV brand with a good history of delivering software updates. Check the CHOICE Community forums, the CHOICE reliability survey and other online resources to find out more about how TV brands perform with software updates.

Second, if an app becomes glitchy or won’t open, delete it and install it again if you can. You should also perform a manual software update for your smart TV via the Settings menu. This will clarify exactly what is currently supported by your TV’s operating system.

If this doesn’t work and it’s clear that your TV has started to lose functionality, you don’t need to buy a replacement right away. Instead, you can ‘patch’ your TV using a streaming device that extends its lifespan to get the most out of the hardware.

For example, if you plug in an Amazon Fire TV Stick, Google TV, games console (PlayStation, Xbox) or set-top box, then you can effectively bypass the smart TV’s outdated software and should be able to run a full range of apps from your external streaming device, so long as the device software is up to date.

Another alternative is to install your favourite video apps on your phone and then cast, mirror or AirPlay to your TV – or even plug your laptop directly into the TV using an HDMI cable.

These workarounds will help you get the maximum life span from your smart TV, and recoup your initial investment over time.

Good for you, good for the environment

In summary, we recommend that all consumers spend a little time playing around with the operating system of a smart TV before buying. Review the product specifications and make sure it can support the streaming services you already subscribe to, or might want to.

By making informed choices, you can get the most out of your investment and reduce the many harmful effects of e-waste.

Remember, there’s no reason to throw out a perfectly good TV display screen, even when the software is buggy – upcycle instead by casting, mirroring or adding a streaming device.

This article was originally published on CHOICE. Read the original article.

Workshop to investigate public interest litigation in harmful digital marketing awarded Academy of Social Sciences funding

Author Kathy Nickels
Date 19 December 2022

ADM+S researchers Prof Christine Parker, Prof Jeannie Paterson, Prof Kimberlee Weatherall and colleague Assoc. Prof Paula O’Brien have been awarded funding from the Academy of Social Sciences in Australia (ASSA) Workshops Program to convene leading stakeholders to investigate issues of harmful online advertising and the potential for public interest litigation.

The ASSA Workshops Program for 2023 awarded over $70,000 to convenors from ten different universities to advance research and policy agendas on nationally important issues.

The ADM+S co-hosted workshop, Strategic Public Interest Litigation for Transparency and Accountability of Harmful Digital Marketing: A Researcher-Regulator-Community Dialogue, seeks to address the challenges of harmful digital advertising. It will bring together key social science and socio-legal researchers to investigate predatory and manipulative advertising practices across a range of harmful industries such as alcohol, unhealthy food, and gambling.

Professor Christine Parker, Chief Investigator at ADM+S, University of Melbourne, says that these practices are challenging to investigate.

“Bringing together scholars, activists and regulators working on these issues in different industries will provide the opportunity to discuss our common challenges.

“We plan to also look at the potential benefits, challenges, and pitfalls of strategic public interest litigation to address these harms,” says Professor Parker.

The ASSA Workshops Program has been operating for over 30 years. Each year the program supports 8-10 workshops with funding up to $9,000. 

The program supports multidisciplinary workshops with the purpose of being a catalyst for innovative ideas in social science research and social policy, to build capability amongst young researchers, and to foster networks across social science disciplines and with practitioners from government, the private sector, and the community sector on issues of common concern.

The workshop Strategic Public Interest Litigation for Transparency and Accountability of Harmful Digital Marketing: A Researcher-Regulator-Community Dialogue will be co-hosted by the ARC Centre of Excellence for Automated Decision Making and Society, the Centre for AI and Digital Ethics at University of Melbourne and the Health Law and Ethics Network at Melbourne Law School, at the University of Melbourne on 25-26 September 2023. 

ADM+S publications recognised in the APO’s Top Content for 2022

Author Kathy Nickels
Date 16 December 2022

ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) publications have been named in the APO’s Top Content for 2022 released this week.

The APO’s Top Content for 2022 has listed A Data Capability Framework for the not-for-profit sector and Decentralising data governance: Decentralised Autonomous Organisations (DAOs) as data trusts among its most valuable players (MVPs) – a selection of the most interesting and influential content of the year.

As part of the Top Content for 2022, the APO also named the Top Ten most clicked resources across 15 broad subject areas for the period December 2021 to November 2022.

The Manifesto for sex-positive social media was listed in both the Technology and Communications subject areas, and Automated decision making in transport mobilities: review of industry trends and visions for the future was listed in Technology.

The ADM+S has contributed 17 publications since joining the APO repository in May this year.

ADM+S Centre Director, Distinguished Professor Julian Thomas, said “The APO collection has enabled us to communicate our key research work publicly, in detail, in a timely fashion, in a convenient digital format, and in a way which is open to everyone.

The Analysis & Policy Observatory (APO) is one of Australia’s leading open access research repositories. We share APO’s goal of supporting evidence-based policy and public debate on the critical challenges facing Australia, and we’re delighted to be working with APO to make ADM+S research more findable, more useable, and more accessible.”

Listed in APO MVPs 2022

A Data Capability Framework for the not-for-profit sector
Anthony McCosker, Frances Shaw, Xiaofang Yao, and Kath Albury.
This report provides a framework that distils the challenges and successes of the organisations the researchers worked with. It represents both the factors that underpin effective data capability and the pathways to achieving it. In other words, as technologies and data science techniques continue to change, data capability is both an outcome to aspire to, and a dynamic, ongoing process of experimentation and adaptation.

Decentralising data governance: Decentralised Autonomous Organisations (DAOs) as data trusts
Kelsie Nabben
This paper explores the idea that Decentralised Autonomous Organisations (DAOs) are a new type of data trust for decentralised governance of data. This publication lends itself to further scholarly research and industry practice to test DAO data trusts as a data governance model for greater individual autonomy, verifiability, and accountability.

Named in APO Top Tens 2022

Manifesto for sex-positive social media
Zahra Stardust, Emily van der Nagel, Katrin Tiidenberg, Jiz Lee, Em Coombes, and Mireille Miller-Young.
This publication sets out guiding principles that platforms, governments, policy-makers and other stakeholders should take into account in their design, moderation and regulation practices. It builds upon the generative work currently underway with the proliferation of alternative, independent collectives and cooperatives, who are designing new spaces, ethical standards and governance mechanisms for sexual content.

Automated decision making in transport mobilities: review of industry trends and visions for the future
Emma Quilty, Sarah Pink, Thao Phan, and Jeni Lee.
This report maps and analyses the social implications of the visions of our transport future. The report examines the assumptions underpinning these visions, as they are represented and projected in recent transport and mobilities stakeholder reporting.

Visit the ADM+S Collection on the APO


Arjun Srinivas to lead research for MediaFutures supported project

Author Kathy Nickels
Date 13 December 2022

ADM+S researcher Arjun Srinivas and colleagues from Kaivalya Plays have received grant funding and support from MediaFutures to produce an interactive theatre performance highlighting the effects of hyper-partisanship and hate speech in India. The performance will particularly focus on the targeting and persecution of minority Muslim women online.

The project ‘Mining Hate’ will include improvised scenes built from media narratives and content generated by the audience to demonstrate how malicious online actors identify, target and harass victims of online scams and hate speech. Drawing on the Brechtian principle of Verfremdungseffekt, or the “distancing effect”, the live performance seeks to engage audience members in the emotional fallout of being victims.

MediaFutures is a European-funded consortium that supports artists to address challenges of disinformation and hate speech in the digital media ecosystem. The recent round of Artists for Media grants was awarded to artists with an innovative artwork concept and production process that critically and materially explores data and technology to question and comment on its impact on individuals and society.

For six months, Arjun and his colleagues will receive technical, legal and ethical support as well as data and computational resources, training and mentorship from partners of the consortium, including Leibniz University of Hannover, King’s College London, KU Leuven and Open Data Institute.

Arjun will be leading the research and data component of the project. During this time he will analyse data from MediaCloud and Twitter, among other sources, to capture media trends and to understand social media discourses on hyper-partisanship and misinformation in India.

“I’m really stoked to work on this project as it is a seamless integration of my research, theatre practice and my professional identity as a journalist” said Arjun.

“Through this project, we would like to shed light on the consequences of online vitriol and hate speech on the intended victims, while also uncovering the means used by malicious actors to target them.” 

Mining Hate will be performed alongside other projects at the MediaFutures finalists demo day in May 2023.

SEE ALSO

The “Black box” of algorithms and automated decision-making

The “Black box” of algorithms and automated decision-making

Author Kathy Nickels
Date 12 December 2022

The ABC has published an informative interactive explainer, Wrenching open the black box, which examines the “black box” of algorithms that are increasingly making decisions about our online and offline lives.

Using relevant examples such as Centrelink’s Robodebt and the UK’s visa-processing algorithm, this explainer illustrates how some decision-making systems can be flawed.

The article highlights research and tools in development that can help us understand, and challenge, the decisions that algorithms make about us as individuals, while other tools can illuminate bias and discrimination embedded within a system.

The article features:

  • Professor Sandra Wachter, Oxford Internet Institute, and a tool that generates a number of “nearby possible worlds” to illustrate how different variables (e.g. postcode, gender) could lead to different outcomes (a minimal sketch of this idea follows the list).
  • Professor Paul Henman, ADM+S at the University of Queensland, with comment on structural biases in algorithmic systems.
  • The algorithmic audit – a transparency tool used to verify whether an algorithm meets standards of fairness.
  • Professor Ed Santow, former Australian Human Rights Commissioner, who explains that Australia is lagging behind other parts of the world on digital rights protections.
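
The “nearby possible worlds” idea can be sketched in a few lines of code. The example below is an illustration only, not the tool featured in the explainer: a hypothetical decision rule stands in for an opaque automated system, and the search simply tries single-feature changes and reports any that flip the decision.

    # A minimal sketch of "nearby possible worlds" (counterfactual explanations).
    # The decision rule below is hypothetical and stands in for an opaque
    # automated system; it is not the tool discussed in the ABC explainer.

    def predict(person):
        """Hypothetical decision rule standing in for an automated system."""
        return "approved" if person["income"] >= 55_000 or person["postcode"] != "4000" else "declined"

    def counterfactuals(applicant, candidate_values):
        """Yield single-feature changes ("nearby possible worlds") that flip the decision."""
        baseline = predict(applicant)
        for feature, values in candidate_values.items():
            for value in values:
                if value == applicant[feature]:
                    continue
                world = dict(applicant, **{feature: value})  # one nearby possible world
                outcome = predict(world)
                if outcome != baseline:
                    yield feature, value, outcome

    applicant = {"income": 50_000, "postcode": "4000", "age": 34}
    options = {"income": [50_000, 55_000, 60_000], "postcode": ["4000", "4101"], "age": [30, 40]}

    for feature, value, outcome in counterfactuals(applicant, options):
        print(f"If {feature} were {value}, the decision would be: {outcome}")

Running the sketch shows, for example, that a change of postcode alone would flip the hypothetical decision, which is exactly the kind of insight such tools aim to surface.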

This is a recommended read for better understanding decision-making systems, as well as the current research and recommendations seeking to make these systems more transparent.

Visit the explainer Wrenching open the black box published on ABC News.

SEE ALSO

The eSafety Commissioner releases new position statement on recommender systems and algorithms

Cover of Position Statement: Recommender systems and algorithms

The eSafety Commissioner releases new position statement on recommender systems and algorithms

Author Kathy Nickels
Date 12 December 2022

Last week the eSafety Commissioner released their new Tech Trends position statement: Recommender systems and algorithms.

The position statement takes a holistic view of recommender systems that encompasses their benefits and risks, broader uses, and complex interconnected ecosystems.

The statement notes that “it is important to assess recommender systems holistically, thinking about their benefits and risks, their range of uses and how they may influence or be influenced by the wider digital environment and socio-political developments”.

This paper provides useful advice for users and guidance for industry. For example, it recommends that companies take a more proactive Safety by Design approach to recommender algorithms by considering the risks they may pose at the outset and designing in appropriate guardrails.

This could include:

  • features that allow users to curate how a recommender system applies to them individually and opt out of receiving certain content
  • enforcing content policies to reduce the pool of harmful content on a platform, which reduces its potential amplification
  • labelling content as potentially harmful or hazardous
  • introducing human “circuit breakers” to review fast-moving content before it goes viral.

It also recommends enhancing transparency through regular algorithmic audits and impact assessments, curating recommendations so they are age appropriate, and introducing prompts to encourage users to reconsider posting harmful content.

The position paper acknowledges the contribution made by experts, including ADM+S researchers, in sharing their insights on recommender systems with eSafety.

Read the Position statement: Recommender systems and algorithms

SEE ALSO

Young ICT Explorers competition finalists hosted at ADM+S

Left to Right: Alice Cartmill, Eleanor Angus, William Smyth and Rehan Dutta.

Young ICT Explorers competition finalists hosted at ADM+S

Author Kathy Nickels
Date 12 December 2022

East Brisbane State School students have been awarded second place in the National Young ICT Explorers (YICTE) competition 2022.

Leading up to the finals, more than 700 students submitted projects from across Australia to the competition. 

On 10 December, the ARC Centre of Excellence for Automated Decision-Making and Society at QUT hosted the East Brisbane State School Grade 6 students to pitch their project idea at the YICTE virtual finalist event. 

In their project “Runway Racket”, the students used an Arduino (a single-board microcontroller) with a custom microphone to measure environmental noise, in conjunction with the Plane Finder website to identify the planes associated with the noise. The monitors were placed in several homes along a flight path over East Brisbane.

Mairi McGregor, YICTE 2022 judge, praised the team for their creativity in defining the problem and their accuracy in capturing, measuring and presenting the data. She said the data and presentation were at a level that could be taken to authorities and companies, and that the project also had the potential, with expansion, for commercialisation.

The team included Eleanor Angus, Alice Cartmill, Rehan Dutta, and William Smyth.

Eleanor Angus said that it was fun to work as a team on the project.

“I really liked the community involvement. There were a number of locals and Facebook groups excited by our project,” said Eleanor.

The team recently spoke to Rebecca Levingston on ABC Mornings.

Rehan Dutta told Rebecca, “We [chose] this [project] because plane noise really is a big disruption in a lot of areas around Brisbane, especially places under the flight path.”

Now in its 13th year, Young ICT Explorers (YICTE) is a non-profit competition supported by CSIRO Digital Careers, The Smith Family, Kinetic IT and School Bytes. The annual competition encourages primary and high school students from Years 3 to 12 to use their imagination and passion to create an invention that could change the world using the power of technology.

Congratulations to the Runway Racket team: Eleanor Angus, Alice Cartmill, Rehan Dutta, and William Smyth. 

You can listen to Rebecca Levingston on ABC Mornings talk to the team from 1:48.

This story was updated 30/01/2023

SEE ALSO

What is happening outside of the digital town square? A glimpse into the street corners and alleyways that also make Internet social

What is happening outside of the digital town square? A glimpse into the street corners and alleyways that also make Internet social

Author Ashwin Nagappa
Date 1 December 2022

The recent rollercoaster of changes to Twitter has inevitably made it the most discussed topic on social media, in our daily conversations, in the traditional press and in academia. There are obvious speculations about the future of Twitter. There is also an experience of mass grief and despair for many who benefited from it. And the question looming large is: if not Twitter, where else? This blogpost is not about quitting Twitter or finding a suitable alternative. Many handy resources[1] have already been authored in relation to these issues. Generally, there is a lot of chatter about and on the so-called digital town squares. This has turned attention toward other, smaller gatherings around street corners and alleyways, such as Mastodon. Hence, this blogpost is a brief overview of alternative social media (ASM) platforms (Gehl, 2015) and what they could mean for the future of the Internet or social media as we know them.

The desire for alternative media is not new to social media platforms or the Internet. Before the world wide web (the web) became commonplace, there were various initiatives across the globe to develop alternatives to the dominant broadcast media systems of the time (Rennie, 2006). Community media or alternative media[2] initiatives aimed to create media systems that decentralized decision-making and provided access to the production and circulation of media (Sandoval & Fuchs, 2010). The web[3] had all the capabilities of an alternative medium and provided space for user-generated content (Van Dijck, 2009). However, commercial interests transformed the web into a platformized web, where digital platforms became central entities (Helmond, 2015).

Alternative social media (ASM) platforms emerged over a decade ago, when commercial social media platforms and platform companies had already established dominance in the digital media ecosystem. ASM platforms aimed to build platforms without advertising revenue or algorithms for content curation and recommendation, and to shift the concentration of power from platform companies to a community of users. However, while the promise and aspirations of ASM platforms attracted many users, operating and governing the platforms proved a complicated task, especially understanding the nuances of content licensing and the challenges that arise on a large network of users.

The earliest ASMs, such as diaspora*, Twister, and Ello, gained momentary popularity as “Facebook killer” or “Twitter killer” apps (Zulli et al., 2020, p. 1189). However, they failed to scale up or find viable business or economic models. At the same time, little or no content moderation attracted hate speech and far-right actors to the space. Additionally, the technocratic characteristics of these platforms raised participation barriers, since the platform design required new users to learn new technical skills (Gehl, 2015).

Despite many challenges, ASM platforms did not disappear. Rather, decentralized platforms such as Mastodon were refined to make them relatively user-friendly. Furthermore, several (open source) ASM projects came together over the years to develop a protocol[4] allowing users to participate across a range of decentralized platforms. This led to the birth of the “Fediverse”, a network of user-run social media platforms. While the Fediverse has existed since 2018, the recent turn of events has drawn attention to it.

Fediverse (Senst & Kuketz, 2021)

With the development and adoption of blockchain technology across different industries, the discourse of decentralization has accelerated under the term web3. Web3 ‘suggests a progression from web2.0 … characterized by peer-to-peer transactions and an ability for users to decide who they share information with’ (Rennie et al., 2022, p. 5). Blockchain social media (BSM) platforms could be considered second-generation ASM platforms. Many BSMs follow similar principles to ASM platforms in subverting platformization, ads and algorithms, and the decentralizing characteristics of blockchain make it suitable for developing social media platform alternatives. Like ASMs, most BSM platforms insist that the community of users will govern all aspects of the platform and that no one will hold central authority, which is easier said than done.

Misuse is one of the many possible scenarios for ASM platforms. Gab is a prominent example of a Mastodon instance being run as a platform for far-right supporters. Although Gab was defederated from the Fediverse, it has become part of a fringe platforms ecosystem. Similarly, DLive, a live-streaming BSM, was used to broadcast the Capitol Hill violence on January 6, 2021 (Browning & Lorenz, 2021). The DLive team had to intervene to take down the video, since community members did not see the need to moderate the content.

These examples are exceptional cases that risk discrediting ASM platforms. There are, however, many instances of ASM platforms providing space for marginalized communities, or spaces that are not highly radicalized. For example, ASM platforms were sought out by transgender and queer users when Facebook restricted their profiles for violating its real-name policy (Gehl, 2015, p. 8), and by thousands of Indians when Twitter blocked several users protesting the citizenship amendment bill in 2019 (Outlook Web Bureau, 2019; Bhargava & Nair, 2019).

ASM platforms are not silver bullets for the issues enveloping mainstream social media. However, they can help us understand the tensions between the centralizing tendencies of digital platforms and the urge to decentralize power structures. They also expose the difference between the automated or algorithmic systems of corporate social media platforms and user-driven platform governance. Finally, ASM platforms hint towards a public service internet or public interest internet as a possible future of the Internet. While digital town squares may serve corporate interests, communities also socialize on ASM platforms that can be perceived as street corners, alleyways, parks, markets, and bus or train stations. These public spaces may be complicated to navigate. However, they may also bring relief from the chaos of town squares.

Notes
[1] Thinking of breaking up with Twitter? Here’s the right way to do it; How to Get Started on Mastodon
[2] There were several terms to refer to media initiatives led by non-institutional individuals or collectives. Community media was a popularly used term.
[3] The growth of web technologies, along with increased access to devices and networks.
[4] ‘ActivityPub is a decentralized social networking protocol … that provides a client-to-server API for creating, updating and deleting content, as well as a federated server-to-server API for delivering notifications and content’ (ActivityPub, n.d.).
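
To make that note concrete, here is a minimal sketch of the client-to-server side of the protocol: a “Create” activity wrapping a “Note”, of the kind an ActivityPub client posts to an actor’s outbox. The actor and outbox URLs are placeholders, and a real server would also require authentication.

    # A minimal sketch of an ActivityPub client-to-server request: a "Create"
    # activity wrapping a "Note", posted to an actor's outbox. The actor and
    # outbox URLs are placeholders; real servers also require authentication.
    import json
    import urllib.request

    ACTOR = "https://social.example/users/alice"          # placeholder actor IRI
    OUTBOX = "https://social.example/users/alice/outbox"  # placeholder outbox endpoint

    activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": ACTOR,
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
        "object": {
            "type": "Note",
            "attributedTo": ACTOR,
            "content": "Hello from outside the digital town square!",
        },
    }

    request = urllib.request.Request(
        OUTBOX,
        data=json.dumps(activity).encode("utf-8"),
        headers={"Content-Type": 'application/ld+json; profile="https://www.w3.org/ns/activitystreams"'},
        method="POST",
    )
    # urllib.request.urlopen(request)  # only meaningful against a real, authenticated server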

References
Bhargava, Y., & Nair, S. K. (2019, November 8). Mastodon happening in India. The Hindu.

Browning, K., & Lorenz, T. (2021, January 8). Pro-Trump Mob Livestreamed Its Rampage, and Made Money Doing It. The New York Times.

Gehl, R. W. (2015). The Case for Alternative Social Media. Social Media + Society, 1(2), 205630511560433.

Helmond, A. (2015). The Web as Platform: Data Flows in Social Media.

Senst, I., & Kuketz, M. (2021). English: The diagram shows the common Fediverse platforms with the underlying protocols. Here it is also shown in color which platforms can communicate with which and what functions are implemented. The platforms are illustrated by the predominant sense and purpose in the pattern of the Fediverse logo. File:Fediverse_small_information.png

Outlook Web Bureau. (2022, February 14). “Better, No Trolls”: Why Some Indians Are Boycotting Twitter And Switching To Mastodon.

Rennie, E. (2006). Community Media: A Global Introduction. Rowman & Littlefield Publishers.

Rennie, E., Zargham, M., Tan, J., Miller, L., Abbott, J., Nabben, K., & De Filippi, P. (2022). Toward a Participatory Digital Ethnography of Blockchain Governance. Qualitative Inquiry, 28(7), 837–847.

Sandoval, M., & Fuchs, C. (2010). Towards a critical theory of alternative media. Telematics and Informatics, 27(2), 141–150.

Van Dijck, J. (2009). Users like you? Theorizing agency in user-generated content. Media, Culture & Society, 31(1), 41–58. https://doi.org/10.1177/0163443708098245

Zulli, D., Liu, M., & Gehl, R. (2020). Rethinking the “social” in “social media”: Insights into topology, abstraction, and scale on the Mastodon social network. New Media & Society, 22(7), Article 7. https://doi.org/10.1177/1461444820912533

SEE ALSO

The Galactica AI model was trained on scientific knowledge – but it spat out alarmingly plausible nonsense

Galaxy
Tengyart / Unsplash

The Galactica AI model was trained on scientific knowledge – but it spat out alarmingly plausible nonsense

Authors Aaron Snoswell and Jean Burgess
Date 29 November 2022

Earlier this month, Meta announced new AI software called Galactica: “a large language model that can store, combine and reason about scientific knowledge”.

Launched with a public online demo, Galactica lasted only three days before going the way of other AI snafus like Microsoft’s infamous racist chatbot.

The online demo was disabled (though the code for the model is still available for anyone to use), and Meta’s outspoken chief AI scientist complained about the negative public response.

So what was Galactica all about, and what went wrong?

What’s special about Galactica?

Galactica is a language model, a type of AI trained to respond to natural language by repeatedly playing a fill-the-blank word-guessing game.

Most modern language models learn from text scraped from the internet. Galactica also used text from scientific papers uploaded to the (Meta-affiliated) website PapersWithCode. The designers highlighted specialised scientific information like citations, maths, code, chemical structures, and the working-out steps for solving scientific problems.

The preprint paper associated with the project (which is yet to undergo peer review) makes some impressive claims. Galactica apparently outperforms other models at problems like reciting famous equations (“Q: What is Albert Einstein’s famous mass-energy equivalence formula? A: E=mc²”), or predicting the products of chemical reactions (“Q: When sulfuric acid reacts with sodium chloride, what does it produce? A: NaHSO₄ + HCl”).
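
For readers curious how such a model is queried in practice, here is a minimal sketch using the Hugging Face transformers library. The checkpoint name is an assumption based on Meta’s public release of several Galactica sizes; any small causal language model could be substituted, and the output deserves exactly the scepticism discussed below.

    # A minimal sketch of prompting a causal language model such as Galactica with
    # the Hugging Face transformers library. The checkpoint name is an assumption
    # (Meta released several sizes); substitute any causal LM if it is unavailable.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "facebook/galactica-125m"  # assumed checkpoint name
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = "Question: What is Albert Einstein's famous mass-energy equivalence formula?\n\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Greedy decoding of up to 20 new tokens; the model simply continues the text,
    # which is why fluent-sounding output is no guarantee of correctness.
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))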

However, once Galactica was opened up for public experimentation, a deluge of criticism followed. Not only did Galactica reproduce many of the problems of bias and toxicity we have seen in other language models, it also specialised in producing authoritative-sounding scientific nonsense.

Authoritative, but subtly wrong bullshit generator

Galactica’s press release promoted its ability to explain technical scientific papers using general language. However, users quickly noticed that, while the explanations it generates sound authoritative, they are often subtly incorrect, biased, or just plain wrong.

We also asked Galactica to explain technical concepts from our own fields of research. We found it would use all the right buzzwords, but get the actual details wrong – for example, mixing up the details of related but different algorithms.

In practice, Galactica was enabling the generation of misinformation – and this is dangerous precisely because it deploys the tone and structure of authoritative scientific information. If a user already needs to be a subject matter expert in order to check the accuracy of Galactica’s “summaries”, then it has no use as an explanatory tool.

At best, it could provide a fancy autocomplete for people who are already fully competent in the area they’re writing about. At worst, it risks further eroding public trust in scientific research.

A galaxy of deep (science) fakes

Galactica could make it easier for bad actors to mass-produce fake, fraudulent or plagiarised scientific papers. This is to say nothing of exacerbating existing concerns about students using AI systems for plagiarism.

Fake scientific papers are nothing new. However, peer reviewers at academic journals and conferences are already time-poor, and this could make it harder than ever to weed out fake science.

Underlying bias and toxicity

Other critics reported that Galactica, like other language models trained on data from the internet, has a tendency to spit out toxic hate speech while unreflectively censoring politically inflected queries. This reflects the biases lurking in the model’s training data, and Meta’s apparent failure to apply appropriate checks around responsible AI research.

The risks associated with large language models are well understood. Indeed, an influential paper highlighting these risks prompted Google to fire one of the paper’s authors in 2020, and eventually disband its AI ethics team altogether.

Machine-learning systems infamously exacerbate existing societal biases, and Galactica is no exception. For instance, Galactica can recommend possible citations for scientific concepts by mimicking existing citation patterns (“Q: Is there any research on the effect of climate change on the great barrier reef? A: Try the paper ‘Global warming transforms coral reef assemblages’ by Hughes, et al. in Nature 556 (2018)”).

For better or worse, citations are the currency of science – and by reproducing existing citation trends in its recommendations, Galactica risks reinforcing existing patterns of inequality and disadvantage. (Galactica’s developers acknowledge this risk in their paper.)

Citation bias is already a well-known issue in academic fields ranging from feminist scholarship to physics. However, tools like Galactica could make the problem worse unless they are used with careful guardrails in place.

A more subtle problem is that the scientific articles on which Galactica is trained are already biased towards certainty and positive results. (This leads to the so-called “replication crisis” and “p-hacking”, where scientists cherry-pick data and analysis techniques to make results appear significant.)

Galactica takes this bias towards certainty, combines it with wrong answers and delivers responses with supreme overconfidence: hardly a recipe for trustworthiness in a scientific information service.

These problems are dramatically heightened when Galactica tries to deal with contentious or harmful social issues, as the screenshot below shows.

Screenshots of papers generated by Galactica on 'The benefits of antisemitism' and 'The benefits of eating crushed glass'.
Galactica readily generates toxic and nonsensical content dressed up in the measured and authoritative language of science.
Tristan Greene / Galactica

Here we go again

Calls for AI research organisations to take the ethical dimensions of their work more seriously are now coming from key research bodies such as the National Academies of Science, Engineering and Medicine. Some AI research organisations, like OpenAI, are being more conscientious (though still imperfect).

Meta dissolved its Responsible Innovation team earlier this year. The team was tasked with addressing “potential harms to society” caused by the company’s products. They might have helped the company avoid this clumsy misstep.

Aaron J. Snoswell, Post-doctoral Research Fellow, Computational Law & AI Accountability, Queensland University of Technology and Jean Burgess, Professor and Associate Director, ARC Centre of Excellence for Automated Decision-Making and Society, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

A timeline of Twitter changes and commentary from ADM+S researchers

Woman holding phone with Twitter on the screen.

A timeline of Twitter changes and commentary from ADM+S researchers

Author Kathy Nickels
Date 20 December 2022

Since Elon Musk purchased Twitter on 27 October, he has made a slew of chaotic changes in attempts to raise revenue and to grapple with the complexity of governing a social media platform.

Although Twitter has only a fraction of the users of Facebook, Instagram and WhatsApp, the platform plays a significant role in society and in shaping public opinion.

Professor Jean Burgess, Associate Director of the ADM+S Centre says that “Twitter’s unique role is a result of the way it combines personal media use with public debate and discussion.

“But this is a fragile and volatile mix – and one that has become increasingly difficult for the platform to manage.”

In managing the platform, Musk admits that “Twitter will do lots of dumb things in coming months”. 

We’ve provided a timeline to break down some of the recent changes to Twitter with commentary and explainers from ADM+S researchers in the field.

Twitter users vote for Elon Musk to step down as head of the company

20 December 2022

Elon Musk released a poll asking “Should I step down as head of Twitter? I will abide by the results of this poll”. More than half of the 17.5 million users who responded to the poll said the billionaire shouldn’t remain at the helm. In this 4BC News Talk episode, Prof Axel Bruns says that it makes sense for a new CEO to come onboard at this time.

What comes next for Twitter and its community?

9 December 2022

In this article Elon Musk, Twitter’s platform culture & what comes next, Prof Jean Burgess argues that despite the chaos brought on by Elon Musk in recent months, Twitter has always been much more than a tech company. Regardless of how the story of Twitter turns out, what its user community does next will help shape the future of our media and communication environment.

COVID, vaccine misinformation ‘spiking’ on Twitter

8 December 2022

The volume of COVID misinformation has jumped significantly on Twitter, while anti-vaccination networks are reforming and reorganising. Assoc. Prof Timothy Graham provides data and analysis in this article COVID, vaccine misinformation ‘spiking’ on Twitter after Elon Musk fires moderators that clearly illustrates this rise. The spike in the second half of November is partly due to the launch of the anti-vax propaganda documentary Died Suddenly, as well as a change to Twitter’s COVID-19 misinformation policy on 30 November, which states that Twitter is “no longer enforcing the COVID-19 misleading information policy”.

For years, Twitter has served a vital function as an information-sharing and verification service. That’s being very rapidly eroded.

How could alternative social media platforms change the future of social media as we know it?

1 December 2022

In this article published on Medium What is happening outside of the digital town square? A glimpse into the street corners and alleyways that also make Internet social, ADM+S PhD Candidate Ashwin Nagappa describes how different alternative social media platforms work as well as the pros and cons of these de-centralised platforms compared to centralised platforms such as Twitter.

Twitter vulnerable to widespread outages and cyber attacks

22 November 2022

After a few chaotic weeks it’s clear Elon Musk is intent on taking Twitter in a direction that’s at odds with the prevailing cultures of the diverse users who call it home. With so many experienced staff gone there are concerns the platform will be vulnerable to widespread outages and cyber attacks.

In this article Thinking of breaking up with Twitter? Here’s the right way to do it, Prof Daniel Angus and Assoc Prof Timothy Graham provide tips on moving away from Twitter or better securing your data on the platform.

Concerns over volume of conspiracy theorising on Twitter during US midterms

18 November 2022

“The drastic reductions to moderation staff and changes to platform architecture and Twitter rules and policies will mean more [misinformation and disinformation] on the site and in different ways,” Assoc. Professor Timothy Graham, who researches online bots, trolls and disinformation, told RMIT Fact Lab CheckMate in the article Misinformation analyst concerned by ‘volume of conspiracy theorising’ on Twitter during US midterms.

Could Mastodon be the new Twitter?

16 November 2022

It is unclear whether users are replacing Twitter with Mastodon or whether they are sitting across both platforms. In this article Should Elon Musk really be afraid of Mastodon?, Professor Axel Bruns talks about what it would take for users to leave Twitter and what steps Mastodon would need to take to grow its current user base from 2.2 million to Twitter’s 238 million users.

Blue tick removed after flood of fake accounts

10 November 2022

The launch of paid verification badges resulted in a flood of fake accounts of public figures and brands with Twitter’s blue check mark. In response, the company removed the paid verification badge option. On 17 November Musk tweeted “Punting relaunch of Blue Verified to November 29th to make sure that it is rock solid”.

Introduction of payment for blue tick verification is fatally flawed

7 November 2022

Primarily to raise revenue, Musk made the decision to charge US$8 a month for accounts to obtain the blue tick verification badge. Musk argued that this would solve hate speech and fake accounts by prioritising verified accounts in search, replies and mentions. If anything, this would have the opposite effect: those with enough money would dominate the public sphere.

In this article Is Twitter’s ‘blue tick’ a status symbol or ID badge? And what will happen if anyone can buy one?, Assoc Professor Timothy Graham revisits the controversial history of the blue tick and how this latest change would open the floodgates to inauthentic and harmful activity on the platform.

Twitter users seek alternative platforms

29 October 2022

One day after Musk closed the deal to buy Twitter, the hashtags #TwitterMigration and #TwitterExodus gained popularity.

Twitter users started seeking alternative platforms, with more than 70,000 signing up to Mastodon, a microblogging site with functions similar to Twitter.

Dr Nataliya Ilyushina, research fellow at the ADM+S, explains Mastodon and how you can sign up to the platform in What is Mastodon, the ‘Twitter alternative’ people are flocking to? Here’s everything you need to know.

Changes to content moderation and platform governance

28 October 2022

In the Canberra Times article Musk is proposing radical changes after his $US44 billion acquisition of Twitter, Dr Damiano Spina says “The decision of the new CEO to fire engineers will impact the robustness of the platform, which is arguably the only thing you cannot replicate easily on other platforms.”

Musk announced that he will forgo any significant content moderation or account reinstatement decisions until after the formation of a new committee devoted to the issues. He said that “Twitter will be forming a content moderation council with widely diverse viewpoints,” and that “No major content decisions or account reinstatements will happen before that council convenes”.

Elon Musk announces interest in purchasing Twitter

27 April 2022

When Elon Musk first announced his interest in purchasing Twitter in April 2022, he promised to prioritise “free speech” and return the social media platform to “the digital town square where matters vital to the future of humanity are debated.”

In this article The ‘digital town square’? What does it mean when billionaires own the online spaces where we gather? (theconversation.com), Prof Jean Burgess explores the meaning of “free speech” and what the Australian Government has been doing to create safer digital spaces in which the fundamental rights of all users of digital services are protected. Prof Burgess points to alternatives to for-profit social media platforms, such as the non-centralised platform Mastodon, and suggests a “blue-sky” idea – a public service internet.

SEE ALSO

What would an ad-free internet look like?

Advertising images
Internet advertising (Pascale Pirate Chickan / Creative Commons / Flickr.com)

What would an ad-free internet look like?

Author Kathy Nickels
Date 30 November 2022

In this ABC Radio National Life Matters episode, reporter Nat Tencic explores the relationship between ads, the internet and us.

Nat Tencic does some personal research on advertising on her Twitter, Instagram and Facebook feeds. She said the results “weren’t comforting”. 

In 5 minutes of scrolling on each platform Nat found that on:

  • Instagram 28% of story slides were advertisements (12 ads within 43 story slides) 
  • Facebook 31% of posts were advertisements (21 ads within 68 posts)
  • Twitter 20% of tweets were advertisements (1 in 5 tweets were promoted)
  • TikTok 21% of videos were ads (3 ads and 11 regular videos)

Nat talks to Prof Julian Thomas and Dr Jathan Sadowski from the ADM+S Centre to imagine an internet with new priorities. You also hear from James Clark, Executive Director of Digital Rights Watch, about a possible hack to block ads at home.

ADM+S researcher, Dr Jathan Sadowski says that the internet has been shaped by advertising in ways that are so fundamental and so ubiquitous that he believes it’s actually easier to think about the ways the internet has not been shaped by advertising. 

“Advertising is so integral to every aspect of the internet as we experience it, as it’s built, as it’s designed, as it’s operated,” he says.

“The reasons why websites exist and the reasons why we experience them in the way that we do often comes down to advertising in some way. Whether it’s the collection of data for advertising, or the serving of advertising.”

Listen to the full episode What would an ad-free internet look like? on ABC Radio National Life Matters.

SEE ALSO

ADM+S Dark Ads Hackathon winners share new methods for better transparency in online advertising

Dark Ads Hackathon team presenting to Hacks/Hackers group

ADM+S Dark Ads Hackathon winners share new methods for better transparency in online advertising

Author Kathy Nickels
Date 28 November 2022

Hacks/Hackers recently hosted the winning team of the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) Dark Ads Hackathon at ABC Southbank to share their idea for identifying discriminatory patterns in online advertising data.

The idea presented by the multi-disciplinary team has the potential to provide better tools for informing policy, advancing public awareness, and building advocacy for vulnerable groups who are targeted by predatory advertising.

The Hackathon team’s approach draws on data gathered from the Australian Ad Observatory dataset (500,000+ ads donated by 2000 Australian Facebook users) to examine “why am I seeing this?” (WAIST) data, alongside other demographic indicators like income, postcode, and age, to identify discriminatory patterns such as proxy and price discrimination.
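
A simplified sketch of that kind of analysis is shown below. It assumes a flat export of observed ads and a separate table of postcode-level statistics; the file and column names are illustrative placeholders, not the Ad Observatory’s actual schema, and a large price spread is a flag for closer review rather than proof of discrimination.

    # A simplified sketch of joining observed ads with postcode-level statistics to
    # surface possible price discrimination. File and column names are placeholder
    # assumptions, not the Australian Ad Observatory's actual schema.
    import pandas as pd

    ads = pd.read_csv("observed_ads.csv")      # assumed columns: product_id, price, postcode
    stats = pd.read_csv("postcode_stats.csv")  # assumed columns: postcode, median_income

    merged = ads.merge(stats, on="postcode", how="left")
    merged["income_band"] = pd.qcut(merged["median_income"], q=4,
                                    labels=["Q1", "Q2", "Q3", "Q4"]).astype(str)

    # Mean advertised price per product across income quartiles; a large spread for
    # the same product flags it for closer review.
    price_by_band = merged.pivot_table(index="product_id", columns="income_band",
                                       values="price", aggfunc="mean")
    price_by_band["spread"] = price_by_band.max(axis=1) - price_by_band.min(axis=1)
    print(price_by_band.sort_values("spread", ascending=False).head(10))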

Members of Hacks/Hackers and the Hackathon team discussed how the methods could be used to identify advertisement practices across a range of harmful industries such as predatory consumer financial products, alcohol, gambling, and unhealthy foods. The innovative approach and methods developed by the Hackathon team can also be applied to other contexts, for example, to trace illegal advertising practices such as the promotion of vape products to young people.

Questions from the Hacks/Hackers group sparked conversations from a journalistic point of view on sampling vulnerable communities, the value of engaging particular demographic groups, and sharing narratives of the lived experience of predatory advertisements.

Dr Kelly Lewis, research fellow at the ADM+S Centre, Monash University is one of the eight Hackathon team members. She said that presenting to members of the Hacks/Hackers community was a meaningful way for the team to engage with a range of ideas and perspectives.

“The feedback we received provides a valuable resource for us to draw on as we continue to develop our approach for greater online advertising accountability. We would like to thank Hacks/Hackers for this opportunity”, said Dr Lewis.

Hackathon Team: Dr Kelly Lewis, Grant Nicholas, Ross Pearson, Alec Sathiyamoorthy, Vikram Sondergaard, Mingqiu Wang, and Guangnan (Rio) Zhu.

Mentors: Dr Abdul Obeid and Xue Ying (Jane) Tan

Read more about the research Identifying Discriminatory Patterns in Online Advertising Data

Hacks/Hackers is a rapidly expanding international grassroots journalism organisation with thousands of members across four continents. Their mission is to create a network of journalists (“hacks”) and technologists (“hackers”) who rethink the future of news and information.

SEE ALSO

Thinking of breaking up with Twitter? Here’s the right way to do it

John G. Mabanglo/EPA

Thinking of breaking up with Twitter? Here’s the right way to do it

Authors Daniel Angus and Timothy Graham
Date 22 November 2022

After a few chaotic weeks it’s clear Elon Musk is intent on taking Twitter in a direction that’s at odds with the prevailing cultures of the diverse users who call it home.

Musk has now begun reinstating high-profile users – including Donald Trump and Kanye West – who had been removed for repeated violations of community standards.

This comes off the back of a mass exodus of Twitter staff, including thousands that Musk unceremoniously fired via email. The latest wave of resignations came after an ultimatum from Musk: employees would have to face “extremely hardcore” working conditions (to fix the mess Musk created).

All of this points to a very different experience for users, who are now decamping the platform and heading to alternatives like Mastodon.

So what threats are we likely to see now? And how does one go about leaving Twitter safely?

#TwitterShutDown

With so many experienced staff leaving, users face the very real possibility that Twitter will experience significant and widespread outages in the coming weeks.

Enterprise software experts and Twitter insiders have already been raising alarms that with the World Cup under way, the subsequent increase in traffic – and any rise in opportunistic malicious behaviour – may be enough for Twitter to grind to a halt.

Aside from the site going dark, there are also risks user data could be breached in a cyberattack while the usual defences are down. Twitter was exposed in a massive cyberattack in August this year. A hacker was able to extract the personal details, including phone numbers and email addresses, of 5.4 million users.

One would be forgiven for thinking that such scenarios are impossible. However, common lore in the technology community is that the internet is held together by chewing gum and duct tape.

The apps, platforms and systems we interact with every day, particularly those with audiences in the millions or billions, may give the impression of being highly sophisticated. But the truth is we’re often riding on the edge of chaos.

Building and maintaining large-scale social software is like building a boat, on the open water, while being attacked by sharks. Keeping such software systems afloat requires designing teams that can work together to bail enough water out, while others reinforce the hull, and some look out for incoming threats.

To stretch the boat metaphor, Musk has just fired the software developers who knew where the nails and hammers are kept, the team tasked with deploying the shark bait, and the lookouts on the masts.

Can his already stretched and imperilled workforce plug the holes fast enough to keep the ship from sinking?

We’re likely to find out in the coming weeks. If Twitter does manage to stay afloat, the credit more than likely goes to many of the now ex-staff for building a robust system that a skeleton crew can maintain.

Hate speech and misinformation are back

Despite Twitter’s claims that hate speech is being “mitigated”, our analysis suggests it’s on the rise. And we’re not the only researchers observing an uptick in hate speech.

The graph below shows the number of tweets per hour containing hate speech terms over a two-week period. Using a peer-reviewed hate speech lexicon, we tracked the volume of 15 hateful terms and observed a clear increase after Musk’s acquisition.

Figure: Volume of tweets per hour containing hate speech terms, trending upwards over time.
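
The counting behind a chart like this can be sketched in a few lines. The example below is a rough illustration only: the lexicon terms and the column layout of the tweet export are placeholder assumptions, not the peer-reviewed lexicon or dataset used for the analysis above.

    # A rough sketch of counting tweets per hour that match a term lexicon, using
    # pandas. The lexicon entries and the CSV columns ("created_at", "text") are
    # placeholder assumptions, not the peer-reviewed lexicon used in this article.
    import re
    import pandas as pd

    LEXICON = ["term_a", "term_b", "term_c"]  # stand-ins for the 15 tracked terms
    pattern = re.compile("|".join(re.escape(term) for term in LEXICON), re.IGNORECASE)

    tweets = pd.read_csv("tweets.csv", parse_dates=["created_at"])
    matches = tweets[tweets["text"].str.contains(pattern, na=False)]

    # Tweets per hour containing at least one lexicon term.
    hourly_volume = matches.set_index("created_at").resample("1H").size()
    print(hourly_volume)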

Misinformation is also on the rise. Following Musk’s swift changes to blue tick verification, the site tumbled into chaos with a surge of parody accounts and misleading tweets. In response, he issued yet another stream-of-consciousness policy edict to remedy the previous ones.

With reports that the entire Asia-Pacific region has only one working content moderator left, false and misleading content will likely proliferate on Twitter – especially in non-English-speaking countries, which are especially at risk of the harmful effects of unchecked mis- and disinformation.

If this all sounds like a recipe for disaster, and you want out, what should you do?

Pack your bags

First, you may want to download an archive of your Twitter activity. This can be done by clicking through to Settings > Settings and Support > Settings and Privacy > Your Account > Download an archive of your data.

It can take several days for Twitter to compile and send you this archive. And it can be up to several gigabytes, depending on your level of activity.

Lock the door

While waiting for your archive, you can begin to protect your account. If your account was public, now might be a good time to switch it to protected.

In protected mode your tweets will no longer be searchable off the platform. Only your existing followers will see them on the platform.

If you’re planning to replace Twitter with another platform, you may wish to signal this in your bio by including a notice and your new username. But before you do this, consider whether you might have problematic followers who will try to follow you across.

Check out

Once you have downloaded your Twitter archive, you can choose to selectively delete any tweets from the platform as you wish. One of our colleagues, Philip Mai, has developed a free tool to help with this step.

It’s also important to consider any direct messages (DMs) you have on the platform. These are more cumbersome and problematic to remove, but also likely to be more sensitive.

You will have to remove each DM conversation individually, by clicking to the right of the conversation thread and selecting Delete conversation. Note that this only deletes it from your side. Every other member of a DM thread can still see your historic activity.

Park your account

For many users it’s advisable to “park” their account, rather than completely deactivate it. Parking means you clean out most of your data, maintain your username, and will have to log in every few months to keep it alive on the platform. This will prevent other (perhaps malicious) users from taking your deactivated username and impersonating you.

Parking means Twitter will retain some details, including potentially sensitive data such as your phone number and other bio information you’ve stored. It also means a return to the platform isn’t out of the question, should circumstances improve.

If you do decide to deactivate, know that this doesn’t mean all your details are necessarily wiped from Twitter’s servers. In its terms of service, Twitter notes it may retain some user information after account deactivation. Also, once your account is gone, your old username is up for grabs.

Reinforce the locks

If you haven’t already, now is the time to engage two-factor authentication on your Twitter account. You can do this by clicking Settings > Security and account access > Security > Two-factor authentication. This will help protect your account from being hacked.

Additional password protection (found in the same menu above) is also a good idea, as is changing your password to something that is different to any other password you use online.

Once that’s done, all that’s left is to sit back and pour one out for the bird site.

Correction: this piece originally stated Alex Jones had been reinstated on Twitter. This was not the case, so his name has been removed.

Daniel Angus, Professor of Digital Communication, Queensland University of Technology and Timothy Graham, Associate Professor, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

3 Big Questions from the ADM+S Dark Ads Hackathon

3 Big Questions from the ADM+S Dark Ads Hackathon

Author Lauren Hayden
Date 10 November 2022

Digital advertising is microtargeted, ephemeral, and unobservable. Ads such as those seen on social media are shown only to select users, based on behavioural, demographic, and psychographic data the platform has been able to collect about them. These ads may be published for a limited time, often less than 24 hours, and they are hidden from view after they expire.

This is referred to as ‘dark advertising’. Researchers, advocates, and governments have little ability to monitor online advertising and are therefore unable to hold advertisers accountable for potentially harmful practices such as targeting underage users with ads for alcohol or gambling.

The ADM+S Centre’s Tech for Good: Dark Ads Hackathon challenged teams to create a ‘pretotype’ solution for monitoring, analysing and studying dark advertising. The three-day event, held at RMIT in Melbourne from 28 to 30 September 2022, hosted attendees from across Australia and an array of presenters representing consumer advocacy organisations, research groups and the tech industry.

Although teams were working towards solutions, the Hackathon generated big questions that are only the beginning of the conversation.

Big Question #1 – How does dark advertising affect users?

Existing research shows that digital advertising has been used to target at-risk groups with excessive messaging around harmful products such as alcohol, gambling and unhealthy foods. The ‘dark’ nature of advertising hinders the ability to monitor and report these harmful targeting patterns.

As the first panel discussion highlighted, dark advertising is more than just predatory targeting. Advertisers have the power to artificially limit consumer choice, exploit dynamic pricing for optimal potential revenue, and employ dark patterns to nudge shopping behaviours. User data is sold to third parties, which allows for further microtargeting and behavioural manipulation.

Dark advertising is shorthand for a broader, automated consumer culture which affects us all.

Big Question #2 – Who is responsible for making platforms safer and more fair?

Platforms, as the facilitators of dark advertising, receive the most criticism for enabling exploitative advertising practices in their digital spaces. Dr Laura Edelson, a postdoctoral researcher with the Cybersecurity for Democracy project at New York University, reminded participants that a collaborative effort among researchers, regulators and platforms is required to effect change. Users of digital platforms also have a critical role to play in identifying and reporting dark advertising, which was the focus of several Hackathon team designs. Dark advertising relies on a lack of visibility to operate. The mobilisation of users through data donation and reporting patterns of harmful advertising can highlight the extent of dark advertising and inform the development of regulatory frameworks around digital advertising.

Big Question #3 – What tools are needed to mobilise change around dark advertising?

Several tools have already been developed to examine dark advertising more closely. The Australian Ad Observatory is one project funded through the ADM+S Centre that allows users to “donate” the advertising they see on their Facebook feeds through a browser extension.

The browser extension collects all sponsored content shown on the page and indexes the ads within a larger library used for research purposes. Users are also able to review the ads collected in a private archive. Further data collection tools and analytical frameworks are in development to assist researchers and regulators in evaluating potential harm in digital advertising.

These tools are a springboard for a larger, collaborative effort to regulate dark advertising which began to emerge at the Hackathon. Teams successfully generated a diverse array of conceptual tools that focus on empowering end users, analysing advertising data, and reporting harm to consumer advocacy organisations.

Most importantly, the Hackathon opened a conversation about dark advertising that will inform future development of responsible, ethical and inclusive automated decision-making systems.

SEE ALSO

Is Twitter’s ‘blue tick’ a status symbol or ID badge? And what will happen if anyone can buy one?

Twitter bird image generated by DALL-E
DALL-E

Is Twitter’s ‘blue tick’ a status symbol or ID badge? And what will happen if anyone can buy one?

Author Timothy Graham
Date 7 November 2022

Following Elon Musk’s acquisition of Twitter on October 27, the world’s richest man proposed a range of controversial changes to the platform. With mounting evidence that he is making it up as he goes along, these proposals are tweeted out in a stream-of-consciousness manner from Musk’s Twitter account.

Primarily to raise revenue, one of the ideas was to charge US$8 a month to obtain a verified status – that is, the coveted blue tick badge next to the account handle.

Within the space of a few days, the paid verification change has already been rolled out in several countries, including Australia, under the Twitter Blue subscription service.

More than just verification

According to Twitter, the blue tick lets people know an account of interest is authentic. Currently, there are seven categories of “public interest accounts”, such as government office accounts, news organisations and journalists, and influencers.

Yet this seemingly innocuous little blue icon is far from a simple verification tool in Twitter’s fight against impersonation and fraud.

In the public view, a verified status signifies social importance. It is a coveted status symbol to which users aspire, in large part because Twitter’s approval process has made it difficult to obtain.

That’s partly because the blue tick has a controversial history. After receiving widespread condemnation for verifying white supremacists in 2017, Twitter halted its verification process for more than three years.

There’s a fundamental mismatch between what Twitter wants the blue tick to mean versus how the public perceives it, something the Twitter Safety team itself acknowledged in 2017.

But they didn’t resolve it. When Twitter resumed verifying accounts systematically in 2021, it wasn’t long until the process began to fail again, with blue ticks being handed out to bots and fake accounts.

Moreover, the public is still confused about what the blue tick signifies, and views it as a status symbol.

Lords and peasants

Musk’s stream-of-consciousness policy proposals may reflect his own preference for interacting with verified accounts. Despite his repeated claims of “power to the people” and breaking the “lords and peasants” system of verified versus non-verified accounts, I ran a data analysis of 1,493 of Musk’s tweets during 2022, and found that more than half (57%) of his interactions were with verified accounts.

Evidently, having a verified status makes one worthy of his attention. Thus, Musk himself arguably views the blue tick as a status symbol, like everyone else (except Twitter).
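
A rough sketch of how such a tally might be reproduced from an export of interaction records is shown below; the file and column names are placeholder assumptions, not the dataset behind the figure quoted above.

    # A rough sketch of tallying the share of a user's replies, mentions, quotes
    # and retweets that involve verified accounts. The file and column names
    # ("interaction_type", "target_verified") are placeholder assumptions, not
    # the dataset behind the analysis in this article.
    import pandas as pd

    records = pd.read_csv("musk_tweets_2022.csv")
    interactions = records[records["interaction_type"].isin(["reply", "mention", "quote", "retweet"])]

    share_verified = interactions["target_verified"].mean()  # fraction of True values
    print(f"{len(interactions)} interactions, {share_verified:.0%} with verified accounts")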

However, Musk’s US$8 blue tick proposal is not only misguided but, ironically, likely to produce even more inauthenticity and harm on the platform.

A fatal flaw stems from the fact that “payment verification” is not, in fact, verification.

Fact from fraud

Although Twitter’s verification system is by no means perfect and is far from transparent, it did at least aspire to the kinds of verification practices journalists and researchers use to distinguish fact from fiction, and authenticity from fraud. It takes time and effort. You can’t just buy it.

Despite its flaws, the verification process largely succeeded in rooting out a sizable chunk of illegitimate activity on the platform, and highlighted notable accounts in the public interest. In contrast, Musk’s payment verification only verifies that a person has US$8.

Payment verification can’t guarantee the system won’t be exploited for social harm. For example, we already saw that conspiracy theory influencers such as “QAnon John” are at risk of becoming legitimised through the purchase of a blue tick.

Opening the floodgates for bots

The problem is even worse at larger scales. It is hard enough to detect and prevent bot and troll networks from poisoning the information landscape with disinformation and spam.

Now, for the low cost of US$800, foreign adversaries can launch a network of 100 verified bot accounts. The more you can pay, the more legitimacy you can purchase in the public sphere.

To make matters worse, Musk publicly stated that verified accounts who pay US$8 will be granted more visibility on the platform, while non-verified accounts will be suppressed algorithmically.

He believes this will solve hate speech and fake accounts by prioritising verified accounts in search, replies and mentions. If anything, it will have the opposite effect: those with enough money will dominate the public sphere. Think Russian bots and cryptocurrency spammers.

Consider also that the ability to participate anonymously on social media has many positive advantages, including safety for marginalised and at-risk groups.

Giving users tools to manage their public and personal spheres is crucial to self-identity and online culture. Punishing people who want to remain anonymous on Twitter is not the answer.

Worse yet, connecting social media profiles to payment verification could cause real harm if a person’s account is compromised and the attacker learns their identity through their payment records.

A cascade of consequences

Musk’s ideas are already causing a cascading series of unintended consequences on the platform. Accounts with blue ticks began changing their profile handle to “Elon Musk” and profile picture to parody him. In response, Musk tweeted a new policy proposal that Twitter handles engaging in impersonation would be suspended unless they specify being a “parody”.

Users will not even receive a warning, as comedian Kathy Griffin and her 2 million followers discovered when her account was suspended for parodying Musk.

Musk’s vision for user verification does not square up with that of Twitter or the internet research community.

While the existing system is flawed, at least it was systematic, somewhat transparent, and with the trappings of accountability. It was also revisable in the face of public criticism.

On the other hand, Musk’s policy approach is tyrannical and opaque. Having abolished the board of directors, the “Chief Twit” has all the power and almost no accountability.

We are left with a harrowing vision of a fragile and flawed online public square: in a world where everyone is verified, no one is verified.

Timothy Graham, Associate Professor, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original

SEE ALSO

ADM+S Hackathon generates new ideas for investigating “dark advertising”

(From left) Team members Grant Nicholas, Kelly Lewis, Alec Sathiyamoorthy, Mingqiu Wang, Vikram Sondergaard, Ross Pearson and Guangnan (Rio) Zhu with team mentors (front from left) Xue Ying (Jane) Tan and Dr Abdul Obeid.

ADM+S Hackathon generates new ideas for investigating “dark advertising”

Author Kathy Nickels
Date 31 October 2022

The winning idea from the Tech for Good: ADM+S Dark Ads Hackathon proposes new methods to identify online advertising practices that could involve price discrimination.

Professor Daniel Angus, Associate Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S Centre) said the winning idea presents a particularly powerful technique for uncovering forms of misleading and discriminatory advertising.

“This technique will mean that identifying some forms of dark advertising practices will no longer be like finding a needle in a haystack,” said Professor Angus.

The 2.5-day Tech for Good: ADM+S Dark Ads Hackathon brought together over 40 participants from social science, humanities, and computer science to hack new ideas and methods for better transparency in online advertising.

“The diversity of ideas and potential for impact was extraordinary. While large technology firms continue to drag the chain on advertising accountability, it was refreshing to see our participants offer new ideas and approaches to these significant issues,” said Professor Angus.

The Hackathon was hosted by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) in collaboration with government and consumer rights organisations, who recognised an urgent need for better transparency and accountability following recent examples of price discrimination, scam advertising and predatory targeting in online advertising spaces.

The winning pitch, Using postcodes to identify discriminatory patterns in online advertising data, used the existing ADM+S Australian Ad Observatory database of half a million advertisements donated by close to 2,000 participants alongside statistical data associated with postcodes to identify patterns of price discrimination based on user location.

The team also suggested building a visual interface to help both researchers and consumers quickly identify discrimination and other unethical advertising practices.
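The postcode approach lends itself to a simple join-and-compare analysis. The sketch below is illustrative only: the Ad Observatory’s actual schema and methods are not described here, so the file names, column names (including a SEIFA-style socioeconomic score) and the flagging threshold are assumptions for demonstration, not the team’s pipeline.

```python
# Illustrative sketch only -- file names, columns and thresholds are assumed,
# not taken from the Australian Ad Observatory's actual pipeline.
import pandas as pd

ads = pd.read_csv("ads.csv")              # hypothetical donated ad observations:
                                          # ad_id, advertiser, product, price, postcode
postcodes = pd.read_csv("postcodes.csv")  # hypothetical postcode statistics:
                                          # postcode, seifa_score (socioeconomic index)

# Join each observed ad to the socioeconomic profile of the viewer's postcode
merged = ads.merge(postcodes, on="postcode", how="inner")

# Split postcodes into lower and higher socioeconomic bands
merged["band"] = pd.qcut(merged["seifa_score"], q=2, labels=["lower", "higher"])

# Compare median advertised prices for the same advertiser/product across bands
price_gap = (
    merged.groupby(["advertiser", "product", "band"], observed=True)["price"]
    .median()
    .unstack("band")
)
price_gap["relative_gap"] = (price_gap["lower"] - price_gap["higher"]) / price_gap["higher"]

# Flag pairs where the same product is priced noticeably higher in lower
# socioeconomic areas -- candidates for manual review, not proof of discrimination.
flagged = price_gap[price_gap["relative_gap"] > 0.05].sort_values("relative_gap", ascending=False)
print(flagged)
```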

Read more about the Hackathon, the winning idea, and the other ideas presented by teams on the Tech for Good: ADM+S Dark Ads Hackathon webpage.

Watch highlights from the event on YouTube

The winning team will be travelling to Brisbane in November to present their idea to the ABC’s Story Lab team, a collection of journalists, developers, designers, social media and video specialists focused on data-driven, visual storytelling for Australian audiences, and to Hacks/Hackers, a rapidly expanding international grassroots journalism organisation.

Find out how social media advertising is targeting you and help researchers uncover harmful advertising practices by joining the Australian Ad Observatory.

The Tech for Good: ADM+S Dark Ads Hackathon included two public panels where researchers from the Australian Ad Observatory joined with consumer advocates and government representatives to discuss online harms and the future of advertising accountability.

Watch the Public Panel discussions on YouTube 

Panel 1: Key Issues in Online Advertising  

Panel 2: Accountability for Online Ads 

Listen to the Public Panel discussions on the ADM+S Podcast 

We thank the following judging panel for their time and feedback provided to the Hackathon teams:

  • Kate Bower – Consumer Data Advocate, CHOICE
  • Dr Aimee Brownbill – Senior Policy and Research Advisor, Foundation for Alcohol Research and Education (FARE)
  • Simon Elvery – Journalist and Developer at ABC News Story Lab, ABC
  • Samuel Kininmonth – Policy Officer, Australian Communications Consumer Action Network (ACCAN)
  • Yuan-Fang Li – Associate Professor at Faculty of IT, Monash University
  • Lucy Westerman – Commercial Determinants of Health Lead, VicHealth
  • Professor Kim Weatherall – Chief Investigator, ADM+S at The University of Sydney

The Hackathon was organised by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) in collaboration with ABC, VicHealth, Digital Rights Watch, ACCAN (The Australian Communications Consumer Action Network), CHOICE, CPRC (Consumer Policy Research Centre), and FARE (Foundation for Alcohol Research and Education).

SEE ALSO

Dominique Carlon winner of the inaugural ADM+S HDR Essay Prize

Dominique Carlon presenting

Dominique Carlon winner of the inaugural ADM+S HDR Essay Prize

Author Kathy Nickels
Date 11 October 2022

ADM+S PhD candidate Dominique Carlon (QUT) has been announced as the winner of the inaugural ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) Higher Degree Research (HDR) Student Essay Prize.

Higher Degree Research students from the ADM+S were invited to submit a 2,000 word essay to challenge existing perspectives or suggest new directions of research in automated decision-making (ADM) in the field of news and media.

Carlon’s winning essay “Bots as more than human” argues that debates about the risks and benefits of bots having human-like qualities overlook other creative and interesting possibilities that they could offer society.

Dr James Meese (Co-leader of the News and Media Focus area at the ADM+S Centre, RMIT) said that the essay was genuinely innovative.

“The essay will no doubt inspire academia and industry to think more deeply about how to best deploy bot technologies in the future” said Dr Meese.

Melanie Trezise (University of Sydney) was awarded an honourable mention for her essay “‘If it bleeds, it leads’: What is human negativity bias teaching the machine?”. The essay explored how AI systems could potentially counteract negativity bias in the news.

Other submissions looked at ADM and the curation of news on YouTube, cognitive bias, and the dangers of newsworthiness criteria in journalism.

Essay submissions were judged according to originality and innovation; argument structure; and quality of analysis by the ADM+S HDR Essay Prize judging panel: Dr Ariadna Matamoros-Fernández, Dr James Meese, Dr Kylie Pappalardo and Professor Mark Sanderson, chaired by Sally Storey.

The winner receives $2000 (AUD) and their essay has been published on the Automated Decision-Making and Society publication on Medium.com.

Read the winning essay, Bots as more than human on the ADM+S Medium publication and the ADM+S Website.

Listen to an interview with Dominique Carlon on the ADM+S Podcast: Bots as More Than Human? 

SEE ALSO

Dark ads public panel: Issues of online advertising and accountability

Dr Aimee Brownbill (FARE), Lucy Westerman (VicHealth), Kate Bower (CHOICE) and Erin Turner (CPRC).

Dark ads public panel: Issues of online advertising and accountability

Author Kathy Nickels
Date 6 October 2022

The ADM+S Dark Ads public panel brought together government representatives, consumer rights organisations and researchers from the ARC Centre of Excellence for Automated Decision-Making and Society to discuss key issues in online advertising.

Panel experts discussed concerns about unregulated online advertising practices with examples of predatory advertising, price discrimination, and scam ads and how these practices impact vulnerable consumers. 

The panelists agreed that advertising is becoming harder than ever before to hold accountable and that there is an urgent need for better online advertising transparency and accountability.

Associate Professor Nicholas Carah (ADM+S, UQ) moderated the discussion on the key issues in online advertising.

“We can’t see the ads [that are being delivered online] and this is a concern as advertising plays such a fundamental role in shaping our public life,” said Associate Professor Nicholas Carah.

“And for some categories we have real questions and concerns for vulnerable consumers and harmful products that we need to be able to address collectively.”

During the discussion, questions were raised on whether the expanding hyper-personalisation of online advertising still fits within the traditional definition of advertising. 

Panelists discussed future directions for increasing transparency and accountability including policy and regulation, journalistic practices, citizen science approaches and further research.

The discussion amongst this diverse group of panelists helped to raise concerns from different perspectives and highlighted the need for a multi-disciplinary approach to tackle these issues.

Panel 1: Key Issues in Online Advertising

Associate Professor Nicholas Carah (ADM+S, UQ) moderated the discussion with Kate Bower (CHOICE), Dr Aimee Brownbill (FARE), Erin Turner (Consumer Policy Research Centre) and Lucy Westerman (VicHealth).

Panel 2: Accountability for Online Ads 

Professor Daniel Angus (ADM+S, QUT) moderated the discussion with Simon Elvery (ABC), Samuel Kininmonth (ACCAN – The Australian Communications Consumer Action Network), Lizzie O’Shea (Digital Rights Watch), Xue Ying (Jane) Tan (ADM+S, QUT), and Dr Verity Trott (ADM+S, Monash University).

The Hackathon was organised by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) in collaboration with ABC, VicHealth, Digital Rights Watch, ACCAN, CHOICE, CPRC, and FARE.

View panel 1 discussion on ADM+S YouTube 

View panel 2 discussion on ADM+S YouTube 

SEE ALSO

Doomscrolling is literally bad for your health. Here are 4 tips to help you stop

Becca Tapert/Unsplash

Doomscrolling is literally bad for your health. Here are 4 tips to help you stop

Authors Kate Mannell and James Meese
Date 9 September 2022

Doomscrolling can be a normal reaction to living through uncertain times. It’s natural to want to understand dramatic events unfolding around you and to seek out information when you’re afraid. But becoming absorbed in bad news for too long can be detrimental.

A newly published study has found that people with high levels of problematic news consumption are also more likely to have worse mental and physical health. So what can you do about it?

We spoke to Australians in the state of Victoria about their lengthy lockdown experiences and learned how they managed to stop doomscrolling. Here are some tips to help you do the same.

Doomscrolling – unhelpful and harmful

“Doomscrolling” describes what happens when someone continues to consume negative news and information online, including on social media. There is increasing evidence that this kind of overconsumption of bad news may have negative impacts.

Research suggests doomscrolling during crises is unhelpful and even harmful. During the early COVID-19 pandemic, consuming a lot of news made people feel overwhelmed. One study found people who consumed more news about the pandemic were also more anxious about it.

Research into earlier crises, like 9/11 and the Boston Marathon bombings, also found that sustained exposure to news about catastrophes is linked to negative mental health outcomes.

Choosing to take control

During the peak of COVID-19 spread, many found themselves doomscrolling. There was lots of bad news and, for many people, lots more spare time. Several studies, including our own, have found that limiting news exposure helped people to cope.

Melbourne, the state capital of Victoria, experienced some of the longest-running lockdowns in the world. Wanting to know how Victorians were managing their news consumption during this time, we launched a survey and held interviews with people who limited news consumption for their own wellbeing.

We found that many people increased their news consumption when the lockdowns began. However, most of our participants gradually introduced strategies to curb their doomscrolling because they realised it was making them feel anxious or angry, and distracted from daily tasks.

Our research found these news-reduction strategies were highly beneficial. People reported feeling less stressed and found it easier to connect with others. Here are some of their strategies, which you might want to try.

1. Make a set time to check news

Rather than checking news periodically across the day, set aside a specific time and consider what time of day is going to have the most positive impacts for you.

One participant would check the news while waiting for her morning cup of tea to brew, as this set a time limit on her scrolling. Other participants preferred saving their news engagement for later in the day so that they could start their morning being settled and focused.

2. Avoid having news ‘pushed’ to you

Coming across news unexpectedly can lure you into a doomscrolling spiral. Several participants managed this by avoiding having news “pushed” to them, allowing them to engage on their own terms instead. Examples included unfollowing news-related accounts on social media or turning off push notifications for news and social media apps.

3. Add ‘friction’ to break the habit

If you find yourself consuming news in a mindless or habitual way, making it slightly harder to access news can give you an opportunity to pause and think.

One participant moved all her social media and news apps into a folder which she hid on the last page of her smartphone home screen. She told us this strategy helped her significantly reduce doomscrolling. Other participants deleted browser bookmarks that provided shortcuts to news sites, deleted news and social media apps from their phones, and stopped taking their phone into their bedroom at night.

4. Talk with others in your household

If you’re trying to manage your news consumption better, tell other people in your household so they can support you. Many of our participants found it hard to limit their consumption when other household members watched, listened to, or talked about a lot of news.

In the best cases, having a discussion helped people come to common agreements, even when one person found the news comforting and another found it upsetting. One couple in our study agreed that one of them would watch the midday news while the other went for a walk, but they’d watch the evening news together.

Staying informed is still important

Crucially, none of these practices involve avoiding news entirely. Staying informed is important, especially in crisis situations where you need to know how to keep safe. Our research shows there are ways of balancing the need to stay informed with the need to protect your wellbeing.

So if your news consumption has become problematic, or you’re in a crisis situation where negative news can become overwhelming, these strategies can help you strike that balance. This is going to remain an important challenge as we continue to navigate an unstable world.

Kate Mannell, Research Fellow in Digital Childhoods, Deakin University and James Meese, Research Fellow, Technology, Communication and Policy Lab, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

How dark is ‘dark advertising’? We audited Facebook, Google and other platforms to find out

Ashkar Dave/Unsplash

How dark is ‘dark advertising’? We audited Facebook, Google and other platforms to find out

Authors Nicholas Carah, Aimee Brownbill, Amy Shields Dobson, Brady Robards, Daniel Angus, Kiah Hawker, Lauren Hayden and Xue Ying Tan
Date 7 September 2022

Once upon a time, most advertisements were public. If we wanted to see what advertisers were doing, we could easily find it – on TV, in newspapers and magazines, and on billboards around the city.

This meant governments, civil society and citizens could keep advertisers in check, especially when they advertised products that might be harmful – such as alcohol, tobacco, gambling, pharmaceuticals, financial services or unhealthy food.

However, the rise of online ads has led to a kind of “dark advertising”. Ads are often only visible to their intended targets, they disappear moments after they have been seen, and no one except the platforms knows how, when, where or why the ads appear.

In a new study conducted for the Foundation for Alcohol Research and Education (FARE), we audited the advertising transparency of seven major digital platforms. The results were grim: none of the platforms are transparent enough for the public to understand what advertising they publish, and how it is targeted.

Why does transparency matter?

Dark ads on digital platforms shape public life. They have been used to spread political falsehoods, target racial groups, and perpetuate gender bias.

Dark advertising on digital platforms is also a problem when it comes to addictive and harmful products such as alcohol, gambling and unhealthy food.

In a recent study with VicHealth, we found age-restricted products such as alcohol and gambling were targeted to people under the age of 18 on digital platforms. At present, however, there is no way to systematically monitor what kinds of alcohol and gambling advertisements children are seeing.

Advertisements are optimised to drive engagement, such as through clicks or purchases, and target people who are the most likely to engage. For example, people identified as high-volume alcohol consumers will likely receive more alcohol ads.

This optimisation can have extreme results. A study by the Foundation for Alcohol Research and Education (FARE) and Cancer Council WA found one user received 107 advertisements for alcohol products on Facebook and Instagram in a single hour on a Friday night in April 2020.

How transparent is advertising on digital platforms?

We evaluated the transparency of advertising on major digital platforms – Facebook, Instagram, Google search, YouTube, Twitter, Snapchat and TikTok – by asking the following nine questions:

  • is there a comprehensive and permanent archive of all the ads published on the platform?
  • can the archive be accessed using an application programming interface (API)?
  • is there a public searchable dashboard that is updated in real time?
  • are ads stored in the archive permanently?
  • can we access deleted advertisements?
  • can we download the ads for analysis?
  • are we able to see what types of users the ad targeted?
  • how much did it cost to run the advertisement?
  • can we tell how many people the advertisement reached?
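As a rough illustration of how an audit like this can be recorded and scored consistently, the minimal sketch below encodes the nine questions as a checklist and tallies how many criteria a platform meets. The criterion names and the placeholder answers are assumptions for demonstration; they are not the study’s data or its scoring method.

```python
# Minimal sketch of recording and scoring a transparency audit.
# Criterion names and the example answers are placeholders, not study findings.
CRITERIA = [
    "comprehensive_permanent_archive",
    "api_access",
    "public_realtime_dashboard",
    "ads_stored_permanently",
    "deleted_ads_accessible",
    "ads_downloadable",
    "targeting_visible",
    "spend_visible",
    "reach_visible",
]

def transparency_score(answers):
    """Return how many criteria a platform meets and which ones it fails."""
    met = sum(bool(answers.get(c, False)) for c in CRITERIA)
    failed = [c for c in CRITERIA if not answers.get(c, False)]
    return met, failed

# Placeholder assessment for a hypothetical platform
example_platform = {c: False for c in CRITERIA}
example_platform["api_access"] = True

score, gaps = transparency_score(example_platform)
print(f"Meets {score}/{len(CRITERIA)} criteria; missing: {gaps}")
```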

All platforms included in our evaluation failed to meet basic transparency criteria, meaning advertising on the platform is not observable by civil society, researchers or regulators. For the most part, advertising can only be seen by its targets.

Notably, TikTok had no transparency measures at all to allow observation of advertising on the platform.

Advertising transparency on these major digital platforms leaves a lot to be desired. From ‘Advertisements on digital platforms: How transparent and observable are they?’, Author provided

Other platforms weren’t much better, with none offering a comprehensive or permanent advertising archive. This means that once an advertising campaign has ended, there is no way to observe what ads were disseminated.

Facebook and Instagram are the only platforms to publish a list of all currently active advertisements. However, most of these ads are deleted after the campaign becomes inactive and are no longer observable.

Platforms also fail to provide contextual information for advertisements, such as advertising spend and reach, or how advertisements are being targeted.

Without this information, it is difficult to understand who is being targeted with advertising on these platforms. For example, we can’t be sure companies selling harmful and addictive products aren’t targeting children or people recovering from addiction. Platforms and advertisers ask us to simply trust them.

We did find platforms are starting to provide some information on one narrowly defined category of advertising: “issues, elections or politics”. This shows there is no technical reason for keeping information about other kinds of advertising from the public. Rather, platforms are choosing to keep it secret.

Bringing advertising back into public view

When digital advertising can be systematically monitored, it will be possible to hold digital platforms and marketers accountable for their business practices.

Our assessment of advertising transparency on digital platforms demonstrates that they are not currently observable or accountable to the public. Consumers, civil society, regulators and even advertisers all have a stake in ensuring a stronger public understanding of how the dark advertising models of digital platforms operate.

The limited steps platforms have taken to create public archives, particularly in the case of political advertising, demonstrate that change is possible. And the detailed dashboards about ad performance they offer advertisers illustrate there are no technical barriers to accountability.

Nicholas Carah, Associate Professor in Digital Media, The University of Queensland; Aimee Brownbill, Honorary Fellow, Public Health, The University of Queensland; Amy Shields Dobson, Lecturer in Digital and Social Media, Curtin University; Brady Robards, Associate Professor in Sociology, Monash University; Daniel Angus, Professor of Digital Communication, Queensland University of Technology; Kiah Hawker, Assistant researcher, Digital Media, The University of Queensland; Lauren Hayden, PhD Candidate and Research Assistant, The University of Queensland, and Xue Ying Tan, Software Engineer, Digital Media Research Centre, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

How can social media platforms be more sex positive?

How can social media platforms be more sex positive?

Author Kathy Nickels and Zahra Stardust
Date 31 August 2022

Social media platforms have the capacity to shape broader attitudes towards sex and nudity through decisions about what can and can’t be posted online. And yet, it will be of no surprise that the community standards and content moderation practices of dominant social media platforms are not very sex positive.

Platforms currently make private, arbitrary and unaccountable decisions about the kinds of sex and sexualities that are visible in online spaces. These decisions have seen plus-sized profiles, top surgery fundraisers and chestfeeding people flagged for “excessive nudity” and “sexual solicitation”.

Platforms have pre-emptively shut down spaces that have been safe havens for systemically marginalised communities and actively shadowbanned, demoted, de-monetised, suspended and deplatformed groups as diverse as sex workers, people of colour, LGBTQIA+ folk, disabled people, fat activists, women and sex educators.

Current trends in regulation create a hostile environment for those for whom sex is an active, visible part of life, especially when state legislation incentivises platforms to remove all sexual content.

Seeking positive change to social media policies, ADM+S researcher Dr Zahra Stardust and fellow academic and community experts brought together the voices of stakeholders and advocates to develop the Manifesto for Sex Positive Social Media.

The Manifesto originated from a community lab on Alternative Frameworks for Sexual Content Moderation hosted at the 2021 RightCon Summit on Human Rights in the Digital Age where the group considered how social media platforms could better understand sexual content, communication, expression, and representation.

Dr Stardust is a socio-legal scholar working at the intersections of sexuality, technology, law and social justice.

Dr Stardust says “We believe platforms can learn from a long history of sex-positive thinking. Taking a sex positive approach to social media will require structural changes to the current assemblage of power, labour and value, including contesting existing practices of surveillance capitalism, online gentrification and sexual stigmatisation.

It will also involve building up the generative work currently underway among independent collectives and cooperatives, who are designing new spaces, ethical standards and governance mechanisms for sexual content.”

The Manifesto sets out guiding principles that platforms, governments, policy-makers and other stakeholders should take into account in their design, moderation and regulation practices.

These principles are:

  • Destigmatise sex
  • Integrate sexual cultures into social media
  • Value the labour of sexual content creators
  • Build safer spaces
  • Cultivate consent
  • Be accountable
  • Dismantle structural oppressions

Authors of the Manifesto: Dr Zahra Stardust (ADM+S, QUT, Australia), Dr Emily van der Nagel (Monash University, Australia), Professor Katrin Tiidenberg (Tallinn University, Estonia), Jiz Lee (Pink & White Productions), Em Coombes (University of Nevada, Las Vegas), and Associate Professor Mireille Miller-Young (University of California Santa Barbara).

Artwork and illustrations by Jacq Moon

Read the full Manifesto for Sex Positive Social Media on the APO website

Show your support and sign the Manifesto at Change.org

Visit the Manifesto for Sex Positive Social Media website

SEE ALSO

Future Automated Mobilities: Towards hope, justice, and care

Woman holding phone to ear with text: Siri, where am I

Future Automated Mobilities: Towards hope, justice, and care

Author Kathy Nickels
Date 22 August 2022

Transport systems and vehicles are rapidly becoming automated, often in ways invisible to us. Cars have increasingly automated features like auto brake, cruise control and auto parking. Ride sharing, and car or bike sharing services which are managed through automated digital platforms are increasingly popular. We plan and navigate our routes using data driven recommendations delivered by smartphone apps or in-car systems.

It is predicted that in the near future fully self-driving cars will be on the roads, flying cars and drones will be in the skies, and that we will be using automated mobility systems (such as Mobility as a Service) that will create seamless travel experiences for us by connecting different modes of transport in one journey.

But how are the automated technologies behind these systems being developed? How are the future people who will be using these systems being envisioned? And what kinds of participation are needed to ensure trusted and safe futures for all?

Join discussions on these questions and more at the Future Automated Mobilities: Towards Hope, Justice, and Care symposium, hosted by the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) on 20–21 October 2022.

Attendees can join in-person at RMIT University in Melbourne, Australia or online. All activities are free, to secure your spot visit the event registration page.

Key issues to be discussed at this event:

  • Why do we need to be careful? Where are the limitations, flaws and inequities, in dominant anticipatory visions and narratives? What could possibly go wrong if we place ADM at the centre of a predictive, automated and data-driven mobilities future?
  • How might we anticipate trusted mobilities futures based in principles of care? And what could go right if we do? What might more realistic, ethical and responsible automated mobilities futures look like? What does safety actually entail when we explore it from the perspective of the diverse groups of people who might encounter ADM in transport mobilities?
  • How can attending to these questions help move us forward into more hopeful mobility futures? What are the shared values that might make this possible? Where and with whom should we be imagining, designing and testing our future automated mobilities? And what should be our first steps to achieve this collectively and collaboratively?

Professor Sarah Pink, lead of the Transport and Mobilities focus area at the ADM+S Centre and Director of the Emerging Technologies Research Lab at Monash University, said the symposium is a vital opportunity to bring together leading stakeholders in automated decision-making for our mobility futures around a new agenda that centres the key values of hope, justice and care.

“To achieve safe and trusted automated mobility futures we need to shift the agenda. This means academics from the social sciences and STEM working together with industry, government and not-for-profit stakeholders to envisage futures together. It means putting principles of care and ethics at the core of our work, and it means engaging with diverse people and their possible futures in everyday worlds” said Professor Pink.

Over two days, representatives from industry, government, community organisations and other stakeholders will envision the future of transport mobilities in Australia through panel discussions, research workshops, and short film screenings. Themes will include designing mobilities of care; doing good with mobility data; disability and automated mobilities; and interdisciplinary, interspecies, and multi-stakeholder mobilities.

The symposium includes speakers from CISCO (multinational technology conglomerate corporation), Drive Sweden (driving the development of digitized, connected and shared mobility solutions for a sustainable transport system), Humanitech (harnessing the power of technology for good by putting humanity first), iMove (a national centre for transport and mobility research and development), MACA (a registered charity dedicated to advancing the rights of children with disabilities and medical conditions to safe and accessible transport), National Transport Commission, QLD Department of Transport, She’s a Crowd (countering gender-based violence through data collection activism), Volvo Cars, as well as Deakin University, Halmstad University, Monash University, Swinburne University, University of Cambridge, University of Melbourne, University of NSW, University of Sydney, University of Warwick, and University of Western Australia.

View further information and register for the event by visiting: admscentre.org.au/fam2022

#FutureMobilities 

SEE ALSO

ADM+S members elected to Queensland Academy of Arts and Sciences

ADM+S members elected to Queensland Academy of Arts and Sciences

Author Kathy Nickels
Date 17 August 2022

The Queensland Academy of Arts and Sciences (QAAS) council has elected ADM+S researchers Professor Daniel Angus, Professor Axel Bruns and ADM+S International Advisory Board Member Distinguished Professor Stuart Cunningham AM to join the ranks of the academy.

Invited membership to the QAAS recognises significant eminence and contributions to the arts and sciences as a practitioner.

Professor Daniel Angus says that the QAAS is doing immensely important work in bringing together thought leaders from a range of disciplines.

“Having straddled the social and computing sciences for most of my career I know how important it is to have mentors who can support inquiry from a range of theoretical and methodological backgrounds.

I am honoured to receive this recognition and look forward to continuing to grow the reputation and recognition of the arts and sciences in Queensland,” said Professor Angus.

Professor Daniel Angus is Professor of Digital Communication in the Digital Media Research Centre (DMRC) at QUT and an Associate Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S).

Daniel’s research focuses on the development and application of visual computational analysis methods in communication and media studies, with a specific focus on conversation and social media data.

Daniel has been involved in computer science research for 20 years and he contributes regularly to media and industry on the impact of technology on society.

Professor Axel Bruns is an Australian Research Council Laureate Fellow and a Professor at the DMRC at QUT, and a Chief Investigator at the ADM+S.

Axel’s current work focusses on the study of user participation in social media spaces, and its implications for our understanding of the contemporary public sphere, drawing especially on innovative new methods for analysing ‘big social data’.

Distinguished Professor Stuart Cunningham AM is a Distinguished Professor of Media and Communications at QUT. He has published extensively on topics such as emerging digital industries, the creative industries and national innovation policy, and Australian screen culture and industry and directed the first ARC Centre of Excellence based outside the sciences.

Stuart is a fellow of the UK-based Academy of Social Sciences and the International Communication Association, and an inaugural fellow in Cultural and Communication Studies, Australian Academy of the Humanities. Stuart was invested as a Member of the Order of Australia in 2015 and has held numerous key advocacy, advice and governance roles in his career.

Professor Daniel Angus, Professor Axel Bruns and Distinguished Professor Stuart Cunningham AM join fellow ADM+S researcher Professor Jean Burgess who was elected to the QAAS last year.

The Academy is Queensland’s peak body for scholars in arts and sciences. It exists to stimulate activity in those areas that lie on the intersection between disciplines, and to provide independent scholarship and advice for social and public policy.

SEE ALSO

Public hackathon to reimagine online advertising accountability

Young woman using touch screen on the street

Public hackathon to reimagine online advertising accountability

Author Kathy Nickels
Date 8 August 2022

The Tech for Good: ADM+S Dark Ads Hackathon, a public event to be held in Melbourne from 28–30 September, provides an opportunity for participants to work with consumer rights organisations and academic experts to hack new ideas and concepts for ensuring better accountability for online advertising.

Participants will create conceptual designs and pitches that will contribute to new research into the ways that online advertising practices can be more transparent and social media companies accountable.

The event is being led by researchers from the Australian Research Council’s Centre of Excellence for Automated Decision-Making and Society (ADM+S) with support from Victorian Health Promotion Foundation, ABC, the consumer advocacy group CHOICE, the Consumer Policy Research Centre (CPRC) and the Foundation for Alcohol Research and Education (FARE).

Professor Mark Andrejevic, researcher from the ADM+S and organiser of the hackathon, says that the problem of false, discriminatory, and predatory advertising online is a serious one that requires innovative strategies for providing transparency and accountability.

“The aim of the Dark Ads Hackathon is to involve participants not just from coding and tech backgrounds but also from social sciences and humanities.

“We need a multi-disciplinary approach to develop some measure of accountability for online targeted advertising,” said Professor Andrejevic.

A panel of consumer rights and industry experts will inspire participants to ideate new approaches to better online advertising. Speakers include:

  • Kate Bower, Consumer Data Advocate, CHOICE
  • Dr Aimee Brownbill, Senior Policy and Research Advisor, Foundation for Alcohol Research and Education (FARE)
  • Dr Laura Edelson, Postdoctoral Researcher, New York University
  • Simon Elvery, Journalist and Developer at ABC News Story Lab, ABC
  • Samuel Kininmonth, Policy Officer, Australian Communications Consumer Action Network (ACCAN)
  • Lilly Ryan, Lead Security Specialist, Thoughtworks
  • Erin Turner, Chief Executive Officer, Consumer Policy Research Centre (CPRC)
  • Lucy Westerman, Commercial Determinants of Health Lead, VicHealth

Dr Laura Edelson, postdoctoral researcher with Cybersecurity for Democracy at New York University, will be speaking at the event.

Dr Edelson says “One thing we know – there is a lot of problems, we know that there are consumer scams, we know there is discriminatory delivery and targeting, there’s misinformation, there’s wilful disinformation.”

“The most important step we can take right now is more transparency so that more researchers can actually work on this problem.”

Participants will be guided by industry and academic mentors through ideation and design methods and will be provided with the resources, data and tools to work with.

Expressions of interest are open until 31 August for people interested in ethics by design, privacy and public accountability for commercial institutions.

For further information and registration visit the Tech for Good, ADM+S Dark Ads Hackathon webpage.

SEE ALSO

ADM+S and Telstra’s 2022-2025 Reconciliation Action Plan

ADM+S and Telstra’s 2022-2025 Reconciliation Action Plan

Author Natalie Campbell
Date 29 July 2022

On 28 July Telstra launched its Reconciliation Action Plan for 2022-2025, committing significant investments in remote infrastructure and digital inclusion programs, including the ARC Centre of Excellence for Automated Decision-Making and Society’s (ADM+S) Mapping the Digital Gap project which aims to improve digital inclusion outcomes and access to services in remote Aboriginal and Torres Strait Islander communities.

The Reconciliation Action Plan endeavours to create an inclusive Australia where Aboriginal and Torres Strait Islander peoples are connected digitally and empowered to thrive in those contextual environments. Telstra’s partnership with ADM+S is one step toward making this happen.

Led by Dr Daniel Featherstone, ADM+S researchers partnered with 12 remote First Nations communities across South Australia, Queensland, Western Australia, New South Wales and the Northern Territory.

Working with community co-researchers, the Mapping the Digital Gap project will be interviewing hundreds of residents and organisations to collect data on the accessibility and use of communications and media services in these areas.

The project data will be used to inform government and industry policy decisions and to identify ways of improving access to, and the usability of, digital technology for these communities. It will also provide data about the heightened impacts of COVID-19 on digital inclusion in these communities, which are more pronounced in remote areas where accessibility is already lacking.

This project will also contribute to Telstra’s involvement in Target 17 of the Australian Government’s Closing the Gap strategy, which states that by 2026, Australia’s First Peoples will have equal levels of digital inclusion.

Read the full report Telstra’s 2022-2025 Reconciliation Action Plan

SEE ALSO

Dr Anjalee de Silva awarded a Women’s Leadership Institute Australia Fellowship

Dr Anjalee de Silva awarded a Women’s Leadership Institute Australia Fellowship

Author Kathy Nickels
Date 26 July 2022

ADM+S researcher Dr Anjalee de Silva has been awarded a Women’s Leadership Institute Australia Fellowship for her work on creating tangible change for gender equality in Australia.

Women’s Leadership Institute Australia Fellowships are awarded to those who are leaders in their respective fields, women who have innovative approaches and the courage, conviction and capacity to create real change.

Dr Anjalee de Silva’s work at the ADM+S Centre and the University of Melbourne, Melbourne Law School examines vilification or ‘hate speech’ directed at and about women, as well as the role of law in deterring, regulating, and mitigating the harms of such speech.

Anjalee says she is honoured to have the support of WLIA to continue her work in this area.

“Hate speech against women silences women by preventing them from speaking, marginalising and devaluing their speech, and building structural constraints impeding their speech. If democratic legitimacy rests on equality of opportunity to participate in democratic processes, hate speech against women represents a crisis of democracy itself. The law plays a crucial role in responding to and mitigating some of these harms.” says Anjalee.

Read the full announcement from Women’s Leadership Institute Australia

SEE ALSO

‘A weird dinging sound that everyone dreads’: what rapid deliveries mean for supermarket workers

Man walking out of woolworths with Ubereats bag

‘A weird dinging sound that everyone dreads’: what rapid deliveries mean for supermarket workers

Author Lauren Kate Kelly
Date 18 July 2022

Online grocery shopping has boomed since the pandemic began in 2020, with Woolworths and Coles steadily expanding their home-delivery offerings. Rapid delivery is the latest frontier.

Woolworths and Coles Express have been offering on-demand deliveries through UberEats and DoorDash since last year. Woolworths recently launched the Metro60 app, which promises home delivery within an hour in select suburbs.

These arrangements have received little fanfare, yet they signal a significant shift for supermarket workers.

As part of ongoing research, I study how the gig economy is transforming conditions of work within traditional employment. To find out how interacting with delivery platforms affects supermarket employees, I interviewed 16 experienced “personal shoppers” at Woolworths and Coles who fill delivery orders from supermarket shelves.

The labour of on-demand grocery

In supermarkets that offer on-demand home delivery, the work of the personal shopper takes on a faster pace. For Woolworths employees, for instance, an UberEats order can drop in at any time, setting off an alarm until the order is accepted and picking begins. As one personal shopper explains:

We get this weird dinging sound that everyone dreads. You have to pick that order within the half hour or within the hour … it can drop in at any time. So if you’re sitting there having lunch for an hour, you still have to go do it because you’ve got that KPI to hit.

All the (scanner) guns in the store drop that sound. So it reverberates through the store. The customers can’t hear it because they don’t know what it is. But all of us know what it is.

Serving up urgent orders to couriers from gig economy platforms like DoorDash and UberEats has a significant impact on supermarket workers. DoorDash

The on-demand orders must be prioritised alongside existing orders, requiring the personal shopper to juggle competing time crunches simultaneously.

It’s urgent, and they just pop out of nowhere. So you don’t really know when they’re coming until they’re there. It’s super stressful. I dislike them immensely.

Enter the gig worker

Once the order is picked from the supermarket aisles, the employee hands it over to a gig worker for home delivery. Supermarket staff say their interactions are brief and often impersonal.

It’s a complete mess. You have no idea who’s coming to pick up these things. And it’s just people showing up with their headphones in showing you that they’ve got this order on their phone. There’s no real rhyme or reason to any of it.

For supermarket workers, gig workers are neither colleagues nor customers, yet they play an essential role in home delivery and customer service.

When things go awry, however – such as a missing bag or broken eggs – it’s the supermarket staff who field those complaints. Similarly, when personal shoppers run behind schedule it has punitive flow-on effects for gig workers.

The on-demand model may, by design or otherwise, pit two groups of workers against each other, fostering frustrations at both ends.

Most of the time they’re pretty good. They deal with it. It’s just those bad times where we might be behind and then they don’t deal with it very well.

A new labour regime

At first glance the partnerships between supermarkets and gig economy platforms look like the supermarket is outsourcing the work of delivery.

But this is a simplification: in fact, the traditional companies are bringing the precarious and on-demand labour of the gig workers inside their own firm, and making it legitimate through formal partnerships.

The ‘dedicated team’ behind Woolworths’ Metro60 app includes traditionally employed staff and gig workers. Woolworths

How do supermarket employees view on-demand grocery?

Most personal shoppers I spoke with are ambivalent or wary of the expanding on-demand services.

The people that I work with either love it or hate it. They like it because it’s different, you never get bored, and you’ve always got something to do. But that’s why other people hate it. Because you don’t get a chance to just stand for a second, you always have to be doing something.

Some enjoy the fast pace and express satisfaction in meeting targets and making the customer happy.

We’ve all gotten to the point now where we’re attuned, we hear the chime, we know what actions we need to take. So it almost happens autonomously. And before you know it, here comes another one and you just keep going.

Others expressed concerns about burnout, unpredictable workloads and an increasing pace of work.

It’s obviously a very high-demand, high-speed job. That’s probably the biggest frustration. We also have pick rates, essentially like Amazon, where we get told this is how many items we should average an hour … and a lot of the time people can’t meet the average.

Staff who have been in the role more than a decade have seen the pace of work speed up significantly during their tenure, and are more critical.

You’re not a person when you walk in the door, you’re a machine.

Some expressed broader concerns about the possibility of their role being taken over entirely by the gig economy. In the words of one shopper:

I was a little dismayed when the whole DoorDashing started because it’s like, oh no, the gig economy is getting closer and closer. Gig stuff always … makes me uncomfortable … It’s all this whole long-term ploy to destroy some existing industry or place, or eliminate worker protections.

Another expressed a similar sentiment:

My biggest worry is that they start outsourcing the actual shopping procedure. I think that would be the next logical step similar to what America has with Instacart.

Supermarket jobs of the future

All the personal shoppers I spoke with shared a pride in their work and their deep knowledge of the supermarket and its local community. How the role continues to evolve through partnerships with the gig economy is not inevitable but a matter of choice.

Lauren Kate Kelly, PhD Candidate, ARC Centre of Excellence for Automated Decision-Making and Society, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

The Australian Search Experience to analyse over 350 million search results

Person looking at computer screens with data

The Australian Search Experience to analyse over 350 million search results

Author Kathy Nickels
Date 15 July 2022

Researchers investigating whether people’s search results vary depending on who they are will analyse over 350 million search results to uncover the impact of the recommendation algorithms that produce Google and YouTube search results.

The Australian Search Experience invited Australians to install a browser plugin that automatically queried Google Search, Google News, Google Video, and YouTube several times a day on key topics. 

Over the past 12 months, the plugin searched on our participants’ Chrome, Edge, and Firefox browsers on 48 topics, from “who should I vote for in the federal election” to “COVID vaccines” and “Tokyo Olympics”.

These data donations from the more than 1,000 citizen scientists who had installed the plugin were crucial to developing a broader perspective on the search results that ordinary Australians encounter as they use these search engines.

Chief Investigator Professor Axel Bruns, an internationally renowned Internet researcher in QUT’s Digital Media Research Centre, said the project explores whether search engines have the potential to create ‘filter bubbles’ or to promote misinformation and disinformation.

“These results will be analysed to understand the personalisation of search results for critical news and information, across key platforms including Google and YouTube, based on the profiles that these platforms establish for their different users.

“We are also interested in seeing how information changes over time: as news breaks or the available information evolves, how long does it take for this to be reflected in the search results? And is this driven solely by recommendation algorithms, or is there evidence of manual curation too?”

To date, the analysis has indicated that there is only very limited, largely benign personalisation in the results produced by Google Search: some results may vary depending on the state where Australian users are located, but there is no evidence of widely diverging search results based on personal identity or ideology – commonly described as ‘filter bubbles’.
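One simple way to quantify this kind of personalisation is to compare the result lists that different participants received for the same query at roughly the same time: near-identical lists suggest little personalisation, while diverging lists suggest more. The sketch below uses a plain set-overlap (Jaccard) measure over invented data; the data structure, field names and example URLs are assumptions for illustration, not the project’s actual pipeline or findings.

```python
# Illustrative sketch: measure how similar participants' result lists are per query.
# The data below is invented for demonstration; it is not Australian Search Experience data.
from itertools import combinations

def jaccard(a, b):
    """Overlap between two result lists, ignoring rank order."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if (sa or sb) else 1.0

# results[query] -> {participant_id: [result URLs in rank order]}
results = {
    "covid vaccines": {
        "p1": ["health.gov.au/covid", "who.int/vaccines", "abc.net.au/news"],
        "p2": ["health.gov.au/covid", "abc.net.au/news", "who.int/vaccines"],
        "p3": ["health.gov.au/covid", "who.int/vaccines", "example.com/blog"],
    },
}

for query, by_participant in results.items():
    pairs = list(combinations(by_participant, 2))
    if not pairs:
        continue  # need at least two participants to compare
    sims = [jaccard(by_participant[a], by_participant[b]) for a, b in pairs]
    # Values near 1.0 mean near-identical results (little personalisation);
    # lower values mean results diverge more between participants.
    print(f"{query}: mean pairwise overlap = {sum(sims) / len(sims):.2f}")
```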

The project has attracted international attention from the University of Twente in the Netherlands, where researchers are planning to launch a Dutch equivalent to the Australian Search Experience project in the coming months. This will produce valuable comparative data.

Professor Bruns will be presenting the Australian Search Experience project: Background paper alongside national and international research on responsible, ethical and inclusive automated services, at the 2022 ADM+S Symposium next week.

The project is a partnership between researchers from Australian universities within the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) and the international research and advocacy organisation AlgorithmWatch, which developed an earlier version of the Search Experience plugin for a pilot study in Germany in 2017.

SEE ALSO

Surveillance does not equal safety: Police, data and consent on dating apps

Person using online dating app.

Surveillance does not equal safety: Police, data and consent on dating apps

Author Kathy Nickels
Date 12 July 2022

Dating apps are continually under pressure from civil society, media and governments to address safety concerns resulting from the use of these platforms. In response to these pressures, dating apps have added a number of so-called safety features. But how safe do these features actually make their users?

In the recent article Surveillance does not equal safety: Police, data and consent on dating apps, published in Crime, Media, Culture: An International Journal, authors Dr Zahra Stardust, Dr Rosalie Gillett and Prof Kath Albury draw on empirical accounts of app use – and popular media reporting – to problematise commonsense assumptions about dating apps, safety, technology, policing and surveillance.

The authors use a critical criminological perspective as a lens to think about accountability by transforming the systems under which consent is navigated, alongside a public health approach that demonstrates the value of adopting a nuanced and contextual approach to gender and sexual diversity.

The article raises concerns that some of these safety features actually increase surveillance, as users’ data is shared with external ‘consent apps’ and law enforcement agencies. This shared data has the potential to make users less safe – and this is particularly the case for app users who are marginalised or stigmatised on the basis of their race, sexuality, gender, health status, employment or disability.

Instead of the impetus to ‘datafy’ consent by documenting evidence of sexual transactions, or to monitor users by sharing data with police, the authors argue that a more effective approach to safety must extend the notion of ‘consent culture’ to encompass a consent-based approach to collecting, storing, and sharing user data – including seeking consent from users about how and whether their data is sold, monetised or shared with third parties or law enforcement.

“If dating apps are committed to advancing consent culture, and not simply to quick reputational fixes, they could actively build in avenues for users to expressly consent to (and withdraw from) specific uses of their data. This includes refusing intimate data from being sold, monetised or shared with law enforcement.

Datafying consent will not protect dating app users. However, understandings of sexual consent as a dynamic, interactive and communicative practice can help shape dating apps’ policies towards safety and data privacy.” 1

1. Stardust Z, Gillett R, Albury K. Surveillance does not equal safety: Police, data and consent on dating apps. Crime, Media, Culture. July 2022. doi:10.1177/17416590221111827

Read the full article Surveillance does not equal safety: Police, data and consent on dating apps

SEE ALSO

Research report: Public Interest Sex Tech Hackathon

Cover for Public Interest Sex Tech Hackathon report

Research report: Public Interest Sex Tech Hackathon

Author Kathy Nickels
Date 11 July 2022

Is it possible to design and govern ethical sex tech at scale? What might intersectional, public interest sex tech look like?

Dr Zahra Stardust, Dr Jenny Kennedy and Prof Kath Albury have released a new report, ‘Public Interest Sex Tech Hackathon: Speculative futures and participatory design’, which captures the research findings, provocations, challenges and insights from the ADM+S Public Interest Sex Tech Hackathon held earlier this year.

“The report makes a unique contribution to public conversations on public interest technologies, ethical data governance and design justice,” said Dr Stardust.

The Public Interest Sex Tech Hackathon brought together designers, technologists and communities to workshop ideas of how sexual technologies can be designed and governed in ways that prioritise public interest benefit.

Participants explored the ethical potential of sex tech for safety, pleasure and health and worked with industry mentors including Samantha Floreani from Digital Rights Watch and Eliza Sorensen from Assembly Four, a collective of sex workers and technologists.

Participants were invited to create open-source designs and pitches that would contribute to new research into the ways that ‘big data’ can be used for sexual and reproductive health, wellbeing, rights and justice.

The winning pitch from Organic Matters Group (OMG) presented a research and manufacturing centre investigating organic materials such as algae and mycelial networks in sex tech products. The model is based on the mantra ‘reduce, reuse, recycle’ and aims to influence industry practices through partnerships with local Indigenous communities to promote environmental sustainability.

Other pitches included:

  • A metaservice that empowers users to navigate how much of their sexual identity they share in any given situation, application or location;
  • A digital co-design platform that puts sex tech businesses in touch with marginalised communities to assess the social impact and accessibility of their product development;
  • A virtual community space grounded in the social model of disability;
  • A protocol for people to identify their preferred mode of communication to facilitate matches based on shared preferences.

The event was organised by ADM+S researchers Dr Zahra Stardust, Dr Jenny Kennedy and Prof Kath Albury in collaboration with global software developer Thoughtworks, and SexTech School, an online training academy for sex tech entrepreneurs.

“The hackathon highlighted the potential uses of technology for sexual health, rights, justice and equity as well as the regulatory, political and surveillance cultures that need to change to make public interest sex tech possible,” said Dr Stardust.

Read the research report on the APO.

SEE ALSO

Facial recognition is on the rise – but the law is lagging a long way behind

People in public place with facial recognition

Facial recognition is on the rise – but the law is lagging a long way behind

Author Mark Andrejevic and Gavin Smith
Date 27 June 2022

Private companies and public authorities are quietly using facial recognition systems around Australia.

Despite the growing use of this controversial technology, there is little in the way of specific regulations and guidelines to govern its use.

Spying on shoppers

We were reminded of this fact recently when consumer advocates at CHOICE revealed that major retailers in Australia are using the technology to identify people claimed to be thieves and troublemakers.

There is no dispute about the goal of reducing harm and theft. But there is also little transparency about how this technology is being used.

CHOICE found that most people have no idea their faces are being scanned and matched to stored images in a database. Nor do they know how these databases are created, how accurate they are, and how secure the data they collect is.

As CHOICE discovered, the notification to customers is inadequate. It comes in the form of small, hard-to-notice signs in some cases. In others, the use of the technology is announced in online notices rarely read by customers.

The companies clearly don’t want to draw attention to their use of the technology or to account for how it is being deployed.

Police are eager

Something similar is happening with the use of the technology by Australian police. Police in New South Wales, for example, have embarked on a “low-volume” trial of a nationwide face-recognition database. This trial took place despite the fact that the enabling legislation for the national database has not yet been passed.

In South Australia, controversy over Adelaide’s plans to upgrade its CCTV system with face-recognition capability led the city council to vote not to purchase the necessary software. The council has also asked South Australia Police not to use face-recognition technology until legislation is in place to govern its use.

However, SA Police have indicated an interest in using the technology.

In a public statement, the police described the technology as a potentially useful tool for criminal investigations. The statement also noted:

There is no legislative restriction on the use of facial recognition technology in South Australia for investigations.

A controversial tool

Adelaide City Council’s call for regulation is a necessary response to the expanding use of automated facial recognition.

This is a powerful technology that promises to fundamentally change our experience of privacy and anonymity. There is already a large gap between the amount of personal information collected about us every day and our own knowledge of how this information is being used, and facial recognition will only make the gap bigger.

Recent events suggest a reluctance on the part of retail outlets and public authorities alike to publicise their use of the technology.

Although it is seen as a potentially useful tool, it can be a controversial one. A world in which remote cameras can identify and track people as they move through public space seems alarmingly Orwellian.

The technology has also been criticised for being invasive and, in some cases, biased and inaccurate. In the US, for example, people have already been wrongly arrested based on matches made by face-recognition systems.

Public pushback

There has also been widespread public opposition to the use of the technology in some cities and states in the US, which have gone so far as to impose bans on its use.

Surveys show the Australian public have concerns about the invasiveness of the technology, but that there is also support for its potential use to increase public safety and security.

Facial-recognition technology isn’t going away. It’s likely to become less expensive and more accurate and powerful in the near future. Instead of implementing it piecemeal, under the radar, we need to directly confront both the potential harms and benefits of the technology, and to provide clear rules for its use.

What would regulations look like?

Last year, then human rights commissioner Ed Santow called for a partial ban on the use of facial-recognition technology. He is now developing model legislation for how it might be regulated in Australia.

Any regulation of the technology will need to consider both the potential benefits of its use and the risks to privacy rights and civic life.

It will also need to consider enforceable standards for its proper use. These could include the right to correct inaccurate information, the need to provide human confirmation for automated forms of identification, and the setting of minimum standards of accuracy.

They could also entail improving public consultation and consent around the use of the technology, and a requirement for the performance of systems to be accountable to an independent authority and to those researching the technology.

As the reach of facial recognition expands, we need more public and parliamentary debate to develop appropriate regulations for governing its use.

Mark Andrejevic, Professor, School of Media, Film, and Journalism, Monash University, and Gavin JD Smith, Associate Professor in Sociology, Australian National University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Op-ed: Clearview AI facial recognition case highlights need for clarity on law

biometric verification and face detection on lady on mobile phone

Op-ed: Clearview AI facial recognition case highlights need for clarity on law

Authors Megan Richardson, Jake Goldenfein and Mark Andrejevic 
Date 22 June 2022

In January 2020, the New York Times published an exposé on Clearview AI, a facial recognition company that scrapes images from across the web to produce a searchable database of biometric templates.

A user uploads a ‘probe image’ of any person, and the tool retrieves additional images of that person from around the web by comparing the biometric template of the probe with the database.
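The exposé and the subsequent determinations don't spell out Clearview's internals, but the basic mechanics of this kind of template search can be sketched in a few lines. In the hypothetical sketch below, the face-embedding step is assumed to exist already (templates are just numeric vectors), and the similarity threshold and URLs are illustrative rather than Clearview's actual parameters.

```python
# Illustrative sketch only: compares a probe template against a scraped database
# of templates and returns the source URLs of likely matches.
# Assumes an upstream face-embedding model has already turned each image into a
# fixed-length vector ("biometric template"); all values below are made up.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two templates; closer to 1.0 means more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_database(probe: np.ndarray,
                    database: dict[str, np.ndarray],
                    threshold: float = 0.8) -> list[str]:
    """Return the URLs whose stored templates resemble the probe template."""
    return [url for url, template in database.items()
            if cosine_similarity(probe, template) >= threshold]

# Toy example with three-dimensional "templates" standing in for real embeddings.
database = {
    "https://example.com/photo1.jpg": np.array([0.9, 0.1, 0.3]),
    "https://example.com/photo2.jpg": np.array([0.1, 0.8, 0.4]),
}
probe = np.array([0.88, 0.12, 0.28])
print(search_database(probe, database))  # -> ['https://example.com/photo1.jpg']
```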

It soon emerged that law enforcement agencies around the world, including Australian law enforcement, were using Clearview AI, often without oversight or accountability. A global slew of litigation has followed, challenging the company’s aggregation of images, creation of biometric templates and biometric identification services.

Findings against the company mean that Clearview AI no longer offers its services in certain jurisdictions, including the UK, Australia and some US states.

But this has hardly stopped the company. Clearview is expanding beyond law enforcement into new markets with products such as Clearview Consent, offering facial recognition to commercial entities that require user verification (like finance, banking, airlines and other digital services).

Other companies are selling similar facial recognition products to the general public. It’s clear the private sector has an appetite for these technical capacities: CHOICE has exposed how numerous Australian retailers build and deploy facial recognition for various purposes, including security and loss prevention.

Facial recognition and data privacy laws

The Office of the Australian Information Commissioner (OAIC) investigated Clearview AI in partnership with the UK Information Commissioner’s Office, and determined in November last year that the company had breached several of the Australian Privacy Principles (APPs) in the Privacy Act 1988. The Clearview AI determination is the most extensive consideration of facial recognition by the OAIC to date.

In Australia, the deployment of facial recognition is primarily governed by data privacy (or data protection) legislation that, while designed for different technologies, has been somewhat successful in constraining the use of facial recognition tools by private companies and Australian law enforcement.

“The OAIC investigated Clearview … and determined that the company breached several of the Australian Privacy Principles”

But a closer look shows an awkward relationship between facial recognition and existing privacy law.

While the Australian Information Commissioner should be applauded for interpreting Australian law in ways that framed the use of facial recognition by Australian companies as privacy violations, it is unclear whether the reasoning would be applicable in other facial recognition applications, or would survive further examination by the courts if appealed.

In fact, the specifics of how facial recognition works, and is being used, challenge some of the basic ideas and functions of data privacy law.

Commissioner’s finding reveals complex issues

A few aspects of the determination highlight the complexities in the relationship between data privacy law and facial recognition.

For Clearview AI to be subject to the APPs, it has to process “personal information”. This means information about an identified or reasonably identifiable individual.

Clearview argued the images it collects by scraping the web are not identified, and that the biometric information it created from those images was not for the sake of identification, but rather to distinguish the people in photos from each other. It acknowledged that providing URLs alongside images may help with identification, but not always.

The Commissioner, however, found that because those images contained faces they were ‘about’ an individual. They were also reasonably identifiable because biometric identification is the fundamental service that Clearview provides. The biometric information (the templates) that Clearview extracted from those images was deemed personal information for similar reasons.

“Collecting sensitive information without consent is permissible in cases of threat to life or public safety – those exceptions did not apply in this case”

After finding that Clearview was processing personal information, the Commissioner evaluated whether Clearview was also processing “sensitive information”, which is subject to stricter collection and processing requirements. That category includes “biometric information that is to be used for the purpose of automated biometric verification or biometric identification; or biometric templates”.

The latter is generally understood to be the data collected for the sake of enrolment in a biometric identity system. Collection and processing of sensitive information is typically prohibited without consent under APP 3.3, subject to very narrow exceptions in APP 3.4.

While the Commissioner acknowledged that collecting sensitive information without consent is permissible in cases of threat to life or public safety, those APP 3.4 exceptions did not apply in this case.

Covert image collection and the risk of harm  

The Commissioner was particularly concerned with this type of “covert collection” of images because it carried significant risks of harm. For instance, it created risks of misidentification by law enforcement and of people being identified for purposes other than law enforcement, and it created the perception for individuals that they are under constant surveillance.

Such collection was not lawful and “fair” (as required by APP 3.5) because:

  • the individuals whose personal and sensitive information was being collected by Clearview would not have been aware or had any reasonable expectation that their images would be scraped and held in that database
  • the information was sensitive
  • only a very small fraction of individuals included in the database would ever have any interaction with law enforcement
  • although it did have some character of public purpose – being a service used by law enforcement agencies – ultimately the collection was for commercial purposes.

Is a facial image ‘personal information’?

The Clearview AI case is a fairly egregious example of data processing in breach of the law. But the legal status of other facial recognition tools and applications is less clear.

For instance, it is not always clear that facial images collected will constitute personal information. A debate continues in other jurisdictions about the status of facial images and the scope of data protection law. The European Data Protection Board, for instance, was silent on this point in its 2019 advice on video information processing, and there have been conflicting interpretations in the literature.

“A debate continues in other jurisdictions about the status of facial images and the scope of data protection law”

Although there does appear to be some consensus that images from photography or video that are recorded and retained in material form should be considered personal information, this still requires that the person in an image be reasonably identifiable.

That was not controversial in the Clearview case because images were collected for the sake of building a biometric identification system, and its database of images and templates was retained indefinitely.

But some facial recognition technologies will anonymise or even delete images once biometric vectors are extracted, meaning there may no longer be a link between the biometric information and an image, making future identification more difficult.

The duration that material is retained also seems critical to any capacity for future identification.

The ‘landmark’ loophole

Further, not all biometric systems perform ‘identification’. Some perform other tasks such as demographic profiling, emotion detection, or categorisation.

Biometric information extracted from images may be insufficient for future identification because instead of creating a unique face template for biometric enrolment, the process may simply extract ‘landmarks’ for the sake of assessing some particular characteristic of the person in the image.

If biometric vectors are just landmarks for profiling, and no image is retained, identification may be impossible. The European Data Protection Board has suggested such information, i.e. ‘landmarks’, would not be subject to the more stringent protections in the European General Data Protection Regulation for sensitive information.

This raises the question of whether images collected for a system that does not lend itself to identification would constitute personal information (let alone sensitive information) under the Privacy Act.

Nevertheless, these technologies are often used for different types of ‘facial analysis’ and profiling, for instance, inferring a subject’s age, gender, race, emotion, sexuality or anything else.

Singled out at 7-Eleven

Before the Clearview AI determination, the OAIC considered 7-Eleven’s use of facial recognition that touched on these questions more closely.

7-Eleven was collecting facial images and biometric information as part of a customer survey system. The biometric system’s primary purpose was to infer the age and gender of survey participants. However, the system was also capable of recognising whether the same person had made two survey entries within a 20-hour period, for the sake of survey quality control.

Here the Commissioner deemed the collection a breach of the APPs because there was not sufficient notice (APP 5), and collection was not reasonably necessary for the purpose of flagging potentially false survey results (APP 3.3).

7-Eleven argued that the images collected were not personal information because they could not be linked back to a particular individual, and therefore were not reasonably identifiable.

The Commissioner found that the images were personal information, however, because the biometric vectors still enabled matching of survey participants (for the sake of identifying multiple survey entries), and therefore that the collection was for the purpose of biometric identification.

This reasoning drew on the idea of ‘singling out’, which is when, within a group of persons, an individual can be distinguished from all other members of the group.
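To make the ‘singling out’ idea concrete, a minimal, purely hypothetical sketch of a 7-Eleven-style duplicate check is shown below. It reuses the same template-comparison idea as the earlier retrieval sketch, but only to flag a repeat entry within a 20-hour window; no name or civil identity is ever attached to either entry, and the threshold is an assumption.

```python
# Hypothetical sketch of singling out without identification: the system can tell
# that two survey entries appear to come from the same face, but never learns who
# that person is. Entries are dicts with a "time" and a "template" vector.
import numpy as np
from datetime import timedelta

def same_face(a: np.ndarray, b: np.ndarray, threshold: float = 0.8) -> bool:
    """Compare two biometric templates by cosine similarity."""
    sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return sim >= threshold

def is_repeat_entry(new_entry: dict, previous_entries: list[dict],
                    window: timedelta = timedelta(hours=20)) -> bool:
    """Flag a survey entry if a matching template was seen within the window."""
    return any(
        abs(new_entry["time"] - entry["time"]) <= window
        and same_face(new_entry["template"], entry["template"])
        for entry in previous_entries
    )
```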

Compared to the Clearview example, the finding here that images and templates were both personal information and sensitive information is somewhat less robust.

In this case, the Commissioner also held that the facial images were themselves biometric information, in that they were used in a biometric identification system.

“Compared to the Clearview example, the finding here that images and templates were both personal information and sensitive information is somewhat less robust”

There has been substantial critique of approaches to identifiability premised on ‘singling out’ because the capacity to single out says nothing really about the identifiability of the individual. Even the assignment of a unique identifier within the system is not enough to ‘identify’ a person – there would still need to be a way to connect that identifier with an actual person or civil identity.

Finding that information is personal because of its capacity to single out, but without necessarily leading to identification, may be a desirable interpretation of the scope of data protection, but it is not settled as a matter of law.

Singling out remains important because it still enables decisions that affect the opportunities or life chances of people – even without knowing who they are. But whether singling out, alone, will bring data processing within the scope of data protection law is yet to be unequivocally endorsed by the courts.

Considering the Australian Federal Court’s somewhat parsimonious approach to the definition of personal information in the 2017 Telstra case, this finding might not hold up to juridical examination.

The TikTok settlement

The 2021 TikTok settlement under the US state of Illinois’ BIPA (Biometric Information Privacy Act) looked at similar issues. There, TikTok claimed the facial landmark data it collected and the demographic data it generated, used both for facial filters and stickers as well as targeted advertising, was anonymous and incapable of identifying individuals. But the matter was settled and the significance of anonymity was not further clarified.

Identifying and non-identifying biometric processes

BIPA has no threshold requirement for personal information, and is explicitly uninterested in governing the images from which biometric information is drawn.

To that end, it may be that BIPA has a more catch-all approach to biometric information irrespective of application. This differs from the Australian and European approaches under general data protection law, which clearly distinguish identifying and non-identifying biometric processes.

Lack of clarity on facial recognition law

As facial recognition applications further proliferate in Australia, we need more clarity on what applications are considered breaches of the law, and in what circumstances.

In the retail context, facial recognition used for security purposes is likely to be considered biometric identification because the purpose is to connect an image in a database of a known security threat to an actual person in a store.

But systems used exclusively for customer profiling may not satisfy those thresholds, while still enabling differentiated and potentially discriminatory treatment.

“As facial recognition applications further proliferate in Australia, we need more clarity on what applications are considered breaches of the law, and in what circumstances”

Where identification is clearly happening, the forms of notice and consent that will satisfy the APPs also require further clarification. But ultimately, any such clarifications should occur after a debate as to whether these tools, be they for profiling or identification, by the private sector or government, are desirable or permissible at all.

These OAIC determinations expose a tension between using existing legal powers to regulate high-risk technologies, and advocating for more ideal law reform, as has occurred in the Privacy Act Review (currently ongoing).

One reform that might be considered is whether there should be a more specific approach to biometric information.

From our perspective, a regime focused on facial recognition or related biometrics may be more restrictive, and address some of the technology’s specificities without getting caught up in whether images or templates are personal information.

And more precise rules for facial recognition could be part of a shift towards more appropriate regulation of the broader data economy.

SEE ALSO

Insurance firms can skim your online data to price your insurance — and there’s little in the law to stop this

Market Analyze with Digital Monitor focus on tip of finger.

Insurance firms can skim your online data to price your insurance — and there’s little in the law to stop this

Authors Zofia Bednarz, Kayleen Manwaring and Kimberlee Weatherall
Date 20 June 2022

What if your insurer was tracking your online data to price your car insurance? Seems far-fetched, right?

Yet there is predictive value in the digital traces we leave online. And insurers may use data collection and analytics tools to find our data and use it to price insurance services.

For instance, some studies have found a correlation between whether an individual uses an Apple or Android phone and their likelihood of exhibiting certain personality traits.

In one example, US insurance broker Jerry analysed the driving behaviour of some 20,000 people to conclude Android users are safer drivers than iPhone users. What’s stopping insurers from referring to such reports to price their insurance?

Our latest research shows Australian consumers have no real control over how data about them, and posted by them, might be collected and used by insurers.

Looking at several examples from customer loyalty schemes and social media, we found insurers can access vast amounts of consumer data under Australia’s weak privacy laws.

A person's hands are visible holding an Apple phone on the left (screen facing forward), and a generic Android on the right.
How would you feel if a detail as mundane as the brand of your phone was used to price your car insurance? Shutterstock

 

Your data is already out there

Insurers are already using big data to price consumer insurance through personalised pricing, according to evidence gathered by industry regulators in the United Kingdom, European Union and United States.

Consumers often “agree” to all kinds of data collection and privacy policies, such as those used in loyalty schemes (who doesn’t like freebies?) and by social media companies. But they have no control over how their data are used once handed over.

There are far-reaching inferences that can be drawn from data collected through loyalty programs and social media platforms – and these may be uncomfortable, or even highly sensitive.

Researchers using data analytics and machine learning have claimed to build models that can guess a person’s sexual orientation from pictures of their face, or their suicidal tendencies from posts on Twitter.

Think about all the details revealed from a grocery shopping history alone: diet, household size, addictions, health conditions and social background, among others. In the case of social media, a user’s posts, pictures, likes, and links to various groups can be used to draw a precise picture of that individual.

What’s more, Australia has a Consumer Data Right, which already requires banks to share consumers’ banking data (at the consumer’s request) with another bank or app, such as to access a new service or offer.

The regime is actively being expanded to other parts of the economy including the energy sector, with the idea being competitors could use information on energy usage to make competitive offers.

The Consumer Data Right is advertised as empowering for consumers – enabling access to new services and offers, and providing people with choice, convenience and control over their data.

In practice, however, it means insurance firms accredited under the program can require you to share your banking data in exchange for insurance services.

The previous Coalition government also proposed “open finance”, which would expand the Consumer Data Right to include access to your insurance and superannuation data. This hasn’t happened yet, but it’s likely the new Albanese government will look into it.

Why more data in insurers’ hands may be bad news

There are plenty of reasons to be concerned about insurers collecting and using increasingly detailed data about people for insurance pricing and claims management.

For one, large-scale data collection provides incentives for cyber attacks. Even if data is held in anonymised form, it can be re-identified with the right tools.

Also, insurers may be able to infer (or at least think they can infer) facts about an individual which they want to keep private, such as their sexual orientation, pregnancy status or religious beliefs.

There’s plenty of evidence the outputs of artificial intelligence tools employed in mass data analytics can be inaccurate and discriminatory. Insurers’ decisions may then be based on misleading or untrue data. And these tools are so complex it’s often difficult to work out if, or where, errors or bias are present.

A magnifying glass hovers over a Facebook post's likes

Each day, people post personal information online. And much of it can be easily accessed by others. Shutterstock

Although insurers are meant to pool risk and compensate the unlucky, some might use data to only offer affordable insurance to very low-risk people. Vulnerable consumers may face exclusion.

A more widespread use of data, especially via the Consumer Data Right, will especially disadvantage those who are unable or unwilling to share data with insurers. These people may be low risk, but if they can’t or won’t prove this, they’ll have to pay more than a fair price for their insurance cover.

They may even pay more than what they would have in a pre-Consumer Data Right world. So insurance may move further from a fair price when more personal data are available to insurance firms.

We need immediate action

Our previous research demonstrated that apart from anti-discrimination laws, there are inadequate constraints on how insurers are allowed to use consumers’ data, such as those taken from online sources.

The more insurers base their assessments on data a consumer didn’t directly provide, the harder it will be for that person to understand how their “riskiness” is being assessed. If an insurer requests your transaction history from the last five years, would you know what they are looking for? Such problems will be exacerbated by the expansion of the Consumer Data Right.

Interestingly, insurance firms themselves might not know how collected data translates into risk for a specific consumer. If their approach is to simply feed data into a complex and opaque artificial intelligence system, all they’ll know is they’re getting a supposedly “better” risk assessment with more data.

Recent reports of retailers collecting shopper data for facial recognition have highlighted how important it is for the Albanese government to urgently reform our privacy laws, and take a close look at other data laws, including proposals to expand the Consumer Data Right.

Zofia Bednarz, Lecturer in Commercial Law, University of Sydney; Kayleen Manwaring, Senior Research Fellow, UNSW Allens Hub for Technology, Law & Innovation and Senior Lecturer, School of Private & Commercial Law, UNSW Law & Justice, UNSW Sydney, and Kimberlee Weatherall, Professor of Law, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

Kmart, Bunnings and The Good Guys using facial recognition technology in stores

Facial recognition on TV screen

Kmart, Bunnings and The Good Guys using facial recognition technology in stores

Author Jarni Blakkarly
Date 15 June 2022

Major Australian retailers Kmart, Bunnings and The Good Guys are using facial recognition technology in stores, raising concerns among privacy experts.

The use of this developing technology, which captures and stores unique biometric information such as facial features (known as a ‘faceprint’), would come as news to most customers.

In a recent inquiry, CHOICE asked 25 leading Australian retailers whether they use facial recognition technology, and analysed their privacy policies. Based on the policies and the responses they received, Kmart, Bunnings and The Good Guys appear to be the only three that are capturing the biometric data of their customers.

Privacy policies not easy to find

“Most of these privacy policies you have to search for online, and they’re often not easy to find,” says CHOICE consumer data advocate Kate Bower. “But because we’re talking about in-person retail shops, it’s likely that no one is reading a privacy policy before they go into a store.”

CHOICE staff members also visited some of these stores in person as part of the investigation.

Bower says the Kmart and Bunnings stores they visited had physical signs at the store entrances informing customers about the use of the technology, but the signs were small, inconspicuous and would have been missed by most shoppers.

The collection of biometric data in such a manner may be in breach of the Privacy Act.

Facial recognition on the rise

Mark Andrejevic, professor of media studies at Monash University and a member of the ARC Centre of Excellence for Automated Decision-Making and Society, tells CHOICE that the use of facial recognition by retailers is in its early stages in Australia. But he predicts it will increase as the technology becomes cheaper and more effective.

We don’t have a clear set of regulations or guidelines on the appropriate use of the technology

“The first concern is notice and consent, it’s not in highly visible forms of public notification that would invite people to understand what’s taking place,” says Andrejevic.

“I think the other set of concerns is we don’t have a clear set of regulations or guidelines on the appropriate use of the technology. That leaves it pretty wide open. Stores may be using it for the purposes of security now, but down the road, they may also include terms of use that would say that they can use it for marketing purposes.”

‘Great concern’

Edward Santow is a professor at the University of Technology Sydney who focuses on the responsible use of technology. As a former Australian Human Rights Commissioner, he also led work on artificial intelligence. Santow says facial recognition technology raises serious questions for our society.

“Even if that technology was perfectly accurate, and it’s not, but even if it were, it also takes us into the realm of mass surveillance,” he says. “And I think there will be great concern in the Australian community about walking down that path.”

Breach of the Privacy Act?

CHOICE’s Kate Bower says the Privacy Act considers biometric information such as unique faceprints sensitive data, and that a higher standard is applied to it than to other types of personal information.

“It requires that your collection of that information has to be suitable for the business purpose that you’re collecting it for, and that it can’t be disproportionate to the harms involved,” she says.

“We believe that these retail businesses are disproportionate in their over collection of this information, which means that they may be in breach of the Privacy Act”
Kate Bower, CHOICE consumer data advocate

“We also believe that these retail businesses are disproportionate in their over collection of this information, which means that they may be in breach of the Privacy Act. We intend to refer them to the Information Commissioner on that basis.”

Bower adds that, irrespective of whether the retailers are in breach of the Act or not, clearer and stronger regulations are needed around customer consent and how retailers obtain and use facial recognition data.

Opportunity to strengthen protection

The Attorney General is currently carrying out a five-year review of the Privacy Act. Bower says it’s an opportunity to strengthen measures around the capture and use of consumer data, including biometric data.

Professor Santow agrees that more work needs to be done. “Certainly in Europe, there are stronger privacy protections, and there are proposals in place to go further,” he says.

Andrejevic says he’s concerned that the public remains largely unaware of what’s going on regarding the capture and use of their personal data. “When I look at the Australian context, I see the creeping use of the technology without widespread public discussion,” he says.

This story was originally published by CHOICE. Read the full story Kmart, Bunnings and The Good Guys using facial recognition technology in stores

SEE ALSO

When self-driving cars crash, who’s responsible? Courts and insurers need to know what’s inside the ‘black box’

Self-driving care approaching pedestrian crossing
Getty Images/Gremlin

When self-driving cars crash, who’s responsible? Courts and insurers need to know what’s inside the ‘black box’

Authors Aaron Snoswell, Henry Fraser and Rhyle Simcock
Date 25 May 2022

The first serious accident involving a self-driving car in Australia occurred in March this year. A pedestrian suffered life-threatening injuries when hit by a Tesla Model 3 in “autopilot” mode.

In the US, the highway safety regulator is investigating a series of accidents where Teslas on autopilot crashed into first-responder vehicles with flashing lights during traffic stops.

A highway car crash at night with emergency lights flashing
A Tesla Model 3 collides with a stationary emergency responder vehicle in the US. NBC / YouTube

 

The decision-making processes of “self-driving” cars are often opaque and unpredictable (even to their manufacturers), so it can be hard to determine who should be held accountable for incidents such as these. However, the growing field of “explainable AI” may help provide some answers.

Who is responsible when self-driving cars crash?

While self-driving cars are new, they are still machines made and sold by manufacturers. When they cause harm, we should ask whether the manufacturer (or software developer) has met their safety responsibilities.

Modern negligence law comes from the famous case of Donoghue v Stevenson, where a woman discovered a decomposing snail in her bottle of ginger beer. The manufacturer was found negligent, not because he was expected to directly predict or control the behaviour of snails, but because his bottling process was unsafe.

By this logic, manufacturers and developers of AI-based systems like self-driving cars may not be able to foresee and control everything the “autonomous” system does, but they can take measures to reduce risks. If their risk management, testing, audits and monitoring practices are not good enough, they should be held accountable.

How much risk management is enough?

The difficult question will be “How much care and how much risk management is enough?” In complex software, it is impossible to test for every bug in advance. How will developers and manufacturers know when to stop?

Fortunately, courts, regulators and technical standards bodies have experience in setting standards of care and responsibility for risky but useful activities.

Standards could be very exacting, like the European Union’s draft AI regulation, which requires risks to be reduced “as far as possible” without regard to cost. Or they may be more like Australian negligence law, which permits less stringent management for less likely or less severe risks, or where risk management would reduce the overall benefit of the risky activity.

Legal cases will be complicated by AI opacity

Once we have a clear standard for risks, we need a way to enforce it. One approach could be to give a regulator powers to impose penalties (as the ACCC does in competition cases, for example).

Individuals harmed by AI systems must also be able to sue. In cases involving self-driving cars, lawsuits against manufacturers will be particularly important.

However, for such lawsuits to be effective, courts will need to understand in detail the processes and technical parameters of the AI systems.

Manufacturers often prefer not to reveal such details for commercial reasons. But courts already have procedures to balance commercial interests with an appropriate amount of disclosure to facilitate litigation.

A greater challenge may arise when AI systems themselves are opaque “black boxes”. For example, Tesla’s autopilot functionality relies on “deep neural networks”, a popular type of AI system in which even the developers can never be entirely sure how or why it arrives at a given outcome.

‘Explainable AI’ to the rescue?

Opening the black box of modern AI systems is the focus of a new wave of computer science and humanities scholars: the so-called “explainable AI” movement.

The goal is to help developers and end users understand how AI systems make decisions, either by changing how the systems are built or by generating explanations after the fact.

In a classic example, an AI system mistakenly classifies a picture of a husky as a wolf. An “explainable AI” method reveals the system focused on snow in the background of the image, rather than the animal in the foreground.

(Left) An image of a husky in front of a snowy background. (Right) An 'explainable AI' method shows which parts of the image the AI system focused on when classifying the image as a wolf.
Explainable AI in action: an AI system incorrectly classifies the husky on the left as a ‘wolf’, and at right we see this is because the system was focusing on the snow in the background of the image. Ribeiro, Singh & Guestrin
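The article doesn’t say which explanation technique produced this figure beyond crediting Ribeiro, Singh and Guestrin (the authors of LIME), but the general post-hoc recipe can be sketched roughly as follows: switch parts of the image on and off, watch how the classifier’s output changes, and fit a simple linear model to those changes. The segmentation and classifier_fn below are hypothetical stand-ins, not any production system’s code.

```python
# Rough LIME-style sketch: estimate how much each image segment (e.g. superpixel)
# pushes a black-box classifier towards the "wolf" label.
import numpy as np
from sklearn.linear_model import Ridge

def explain(n_segments, classifier_fn, n_samples=500, seed=0):
    """classifier_fn takes an array of on/off segment masks and returns P(wolf)
    for each masked version of the image. Returns one weight per segment; a large
    positive weight means that segment drives the prediction."""
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(n_samples, n_segments))  # random perturbations
    predictions = classifier_fn(masks)                        # black-box queries
    surrogate = Ridge(alpha=1.0).fit(masks, predictions)      # interpretable model
    return surrogate.coef_

# If the largest weight belongs to the snowy-background segment, the explanation
# is telling us the model relied on the snow rather than on the animal.
```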

 

How this might be used in a lawsuit will depend on various factors, including the specific AI technology and the harm caused. A key concern will be how much access the injured party is given to the AI system.

The Trivago case

Our new research analysing an important recent Australian court case provides an encouraging glimpse of what this could look like.

In April 2022, the Federal Court penalised global hotel booking company Trivago $44.7 million for misleading customers about hotel room rates on its website and in TV advertising, after a case brought by competition watchdog the ACCC. A critical question was how Trivago’s complex ranking algorithm chose the top ranked offer for hotel rooms.

The Federal Court set up rules for evidence discovery with safeguards to protect Trivago’s intellectual property, and both the ACCC and Trivago called expert witnesses to provide evidence explaining how Trivago’s AI system worked.

Even without full access to Trivago’s system, the ACCC’s expert witness was able to produce compelling evidence that the system’s behaviour was not consistent with Trivago’s claim of giving customers the “best price”.

This shows how technical experts and lawyers together can overcome AI opacity in court cases. However, the process requires close collaboration and deep technical expertise, and will likely be expensive.

Regulators can take steps now to streamline things in the future, such as requiring AI companies to adequately document their systems.

The road ahead

Vehicles with various degrees of automation are becoming more common, and fully autonomous taxis and buses are being tested both in Australia and overseas.

Keeping our roads as safe as possible will require close collaboration between AI and legal experts, and regulators, manufacturers, insurers, and users will all have roles to play.

Aaron J. Snoswell, Post-doctoral Research Fellow, Computational Law & AI Accountability, Queensland University of Technology; Henry Fraser, Research Fellow in Law, Accountability and Data Science, Queensland University of Technology, and Rhyle Simcock, PhD Candidate, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

The Australian Ad Observatory: Update on election advertising on Facebook

Ad tagged Clive Palmer billboard
Credit: Dan Angus

The Australian Ad Observatory: Update on election advertising on Facebook

Authors Dan Angus and Axel Bruns
Date 20 May 2022

As the church and school hall floors are swept, the corflutes and bunting made ready, and several tonnes of democracy sausages ordered for the main event this Saturday, ADM+S researchers Professor Daniel Angus and Professor Axel Bruns provide an update on election-related advertising on Facebook through the Australian Ad Observatory project.

We have been experimenting this election with our new Australian Ad Observatory, developed through the ARC Centre of Excellence for Automated Decision-Making and Society.  With support from our partners at the ABC we have enlisted a group of citizen scientists who have kindly downloaded a plugin that allows them to anonymously donate ads they encounter while browsing Facebook. We have been keeping careful watch to identify any political or issue advertising that isn’t properly authorised and that may be trying to sneak through undetected.

Thankfully we have not located any significant or widespread ‘dark ad’ campaigns throughout this election. With the caveat that we can only examine ads from the 1,700 citizen scientists who have installed the ad plugin, and that we can’t see mobile-only ads, it seems the Australian political ad environment on Facebook is mostly running as designed.

This is not to say that there aren’t significant issues regarding false and misleading claims being made in advertising, or that the transparency provided by the platforms is adequate (far from it), but at least for now the political ads we have seen can all be traced back to an authorised source.

A significant part of the work in the Ad Observatory has been to develop machine vision techniques to detect political logos and other signifiers that may help us locate unauthorised ads. It was great therefore to see that colleagues at The Guardian have also been experimenting with the use of machine vision to detect political messaging techniques, such as the use of novelty cheques, cute furry animals, and hi-vis workwear. With the continued fragmentation of our media landscape, these new techniques all play an important role in helping us understand the pulse of the political campaign and hold our politicians to account.
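The Ad Observatory’s actual detection pipeline isn’t described in this update, so the sketch below only illustrates the general idea with classic template matching; a deployed system would likely need trained detectors to handle logos at different scales, crops and styles. The file names are hypothetical.

```python
# Minimal sketch: check whether a known party logo appears inside a donated ad
# image using OpenCV template matching (works best when the logo appears at a
# similar scale to the stored template).
import cv2

def contains_logo(ad_image_path: str, logo_path: str, threshold: float = 0.85) -> bool:
    """Return True if the logo matches somewhere in the ad image above the threshold."""
    ad = cv2.imread(ad_image_path, cv2.IMREAD_GRAYSCALE)
    logo = cv2.imread(logo_path, cv2.IMREAD_GRAYSCALE)
    scores = cv2.matchTemplate(ad, logo, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, _ = cv2.minMaxLoc(scores)
    return best_score >= threshold

# Hypothetical usage:
# contains_logo("donated_ad.png", "party_logo.png")
```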

For a further look at campaigning on social media during the election including advertising spend and social media engagement of candidates on Facebook and Twitter visit the Digital Media Research Centre, QUT 2022 Australian Federal Election: Update 5.

SEE ALSO

New book: Everyday Data Cultures

New book: Everyday Data Cultures

Author Kathy Nickels
Date 19 May 2022

ADM+S researchers Jean Burgess, Kath Albury, Anthony McCosker and Rowan Wilken have brought together their knowledge and expertise in digital media and communication studies in this new book Everyday Data Cultures. This book establishes a new theoretical framework for understanding everyday experiences of data and automation, and offers guidance on the ethical responsibilities we share as we learn to live together with data-driven machines.

Everyday Data Cultures shows how ordinary people are negotiating the datafication of society, from gig worker activism to wellness tracking with sex toys and TikTokers’ manipulation of the Algorithm.

‘There is no better or more comprehensive look at what datafication means and at its consequences than Everyday Data Cultures. The book’s masterful critical analysis provides not only an understanding of datafication but alternatives to the commercialization of data and options to reclaim it as a public good.’
Steve Jones, University of Illinois Chicago

SEE ALSO

Making sense of deepfakes

3D dissolving human head made with cube shaped particles.

Making sense of deepfakes

Author Kathy Nickels
Date 11 May 2022

AI-generated deepfakes are becoming more common and harder to spot. They have the potential to create convincing footage of any person doing anything, anywhere.

Deepfakes are a type of synthetic media created by replacing faces and voices in digital videos. You may have seen the very convincing Tom Cruise deepfakes that went viral on TikTok in 2021.

These videos are made possible by developments in deep learning systems and the availability of extensive video datasets used to “train” generative learning models and to produce synthesized outputs.

Deepfakes bring new types of informational harm and possibilities for image-based abuse, especially given their historical origins in porn production cultures.

The question of how to detect, ban or regulate, or educate to mitigate the harms of deepfakes needs to address the multiple dimensions of AI and data literacies, and the contexts of their development and deployment.

In this article Making sense of deepfakes: Socializing AI and building data literacy on GitHub and YouTube, ADM+S researcher Professor Anthony McCosker focuses on educational and social learning responses, asking what kind of AI and data literacy might make a difference in addressing deepfake harms.

SEE ALSO

New book: Everyday Automation: Experiencing and Anticipating Emerging Technologies

New book: Everyday Automation: Experiencing and Anticipating Emerging Technologies

Author Kathy Nickels
Date 10 May 2022

In this new open access book, Everyday Automation: Experiencing and Anticipating Emerging Technologies, ADM+S researchers Sarah Pink and Deborah Lupton, together with colleagues Martin Berg and Minna Ruckenstein, bring together research developed across anthropology, sociology, media and communication studies and ethnology, showing how, by rehumanising automation, we can gain deeper understandings of its societal impacts.

“ADM and AI need to be treated as complex sociotechnical systems that develop over time and need ongoing stabilisation, repair and care of human-algorithm relations within the mundane everyday worlds of all the humans who are co-implicated with them.”

From Everyday Automation: Experiencing and Anticipating Emerging Technologies.

As its title suggests, this book brings the experiences of automation in everyday life into focus. The book provides compelling research in three parts that explore: Challenging dominant narratives of automation; Embedding automated systems in everyday life; and Experimenting with automation in society.

Download the open access version of Everyday Automation: Experiencing and Anticipating Emerging Technologies

SEE ALSO

Facebook, YouTube, games & Grindr: what we know about online ads in the federal election

Screenshot of United Australia Party advertising on YouTube

Facebook, YouTube, games & Grindr: what we know about online ads in the federal election

Authors Dan Angus, Axel Bruns and Ehsan Dehghan
Date 3 May 2022

We’re halfway through the federal election campaign, and by now you’ve probably seen a significant amount of political advertising – much of it online.

Online political advertising is more pervasive than its analogue predecessors: it can cost less (per individual ad), be deployed more rapidly, and can be micro-targeted towards specific audiences. The targeting can be aimed at protected social categories such as race and gender, and has been used to amplify mis- and disinformation.

Unlike billboards or TV ads which can be widely seen, targeted ads may be invisible beyond their intended audience. Researchers like us try to get a handle on online advertising through projects like the Australian Ad Observatory.

Here’s what we know so far about the state of online advertising in the federal election.

Dashboards

The big dogs in the online advertising world are Meta and Google. Meta allows advertising across its products including Facebook, Messenger, Instagram, and WhatsApp. Google allows ads in its search products, YouTube, and on Android.

Both companies have ad transparency dashboards. (Here is Meta’s, and here is Google’s.)

The dashboards detail the ads that are running, who is running them, and some basic aggregate data on targeting by geography, gender, and age.

For example, we can see the Australian Electoral Commission has spent around $383,000 on Facebook advertising in the past month, largely promoting election-related facts and voter registration information.

Meta’s ad transparency dashboard shows information about ad spending and targeting. Meta Ad Library

 

However, these tools are quite basic, and don’t offer all the information on how ads are targeted, nor insights into trends and patterns. To fill the void, researchers and journalists have been building their own.

For Facebook, we have partnered with colleagues at Ryerson University in Canada to extend their PoliDashboard to Australia. Colleagues at The University of Queensland have also released an excellent Facebook ad spend tracker.

For Google, The Guardian Australia has released data visualisations extending Google’s transparency dashboard with more useful data aggregation and geo-visual elements.

So what have we seen so far in this campaign?

Facebook focus

Analysing spending data from April 1 2022 onward, we can already see differences in how particular parties are strategically purchasing ads on platforms like Facebook.

In the lower house, the traditional Liberal stronghold of Kooyong has been an early standout. Polling suggests the incumbent, Treasurer Josh Frydenberg, is under threat from “teal” independent Dr Monique Ryan.

Since the start of April Frydenberg has tipped around $80,000 into Facebook advertising in his own seat, while Ryan has spent around $41,000. Altogether, more than twice as much has been spent in Kooyong as in the next highest spending seats of Maribyrnong, North Sydney, and Wentworth.

In the senate, Queensland has so far attracted the most spending at around $110,000. Here, a rogues’ gallery of former LNP and other far-right candidates are scrambling to win one of the two seats that are likely to be up for grabs.

Over the past two years, Google has also introduced more advertising products and services in Android mobile games.

Ads in this gaming ecosystem are very difficult to track. The Google transparency dashboard doesn’t include data that enables us to determine which ads appear in this specific ecosystem. However, users of our Ad Observatory have sent us photos and screenshots of UAP ads appearing within their mobile games. It seems some of the UAP’s reported $15 million ad spend is finding its way into mobile in-game advertising.

Perhaps the most creative use of online advertising so far has come from Stephen Bates, the Greens candidate for Brisbane. Bates took out a series of ads on Grindr, a social networking and dating app for gay, bi, trans, and queer people, and men who have sex with men.

Bates’ tongue-in-cheek ads have slogans like “Spice up Canberra with a third” and “The best parliaments are hung”. Unlike other sometimes cringe-worthy attempts by politicians to appeal to specific communities, Bates has leaned into his own identity as an openly gay candidate in selecting this platform and developing ads using Grindr’s platform-specific vernacular.

Greens party political advertisement. Stephen Bates, Greens candidate for Brisbane, is standing next to slogan that says 'The best parliaments are hung'.
In Brisbane, Greens candidate Stephen Bates has taken out cheeky ads on Grindr. Screenshot, Author provided

 

What about Twitter, TikTok and the other big platforms?

Meanwhile, if you spend your time on Twitter or TikTok you won’t see any political advertising during this campaign.

Why not? It’s complicated, but it mainly goes back to the Cambridge Analytica scandal of the mid 2010s, in which personal profile data of more than 50 million Facebook users was siphoned from the platform and used to build a profiling tool that was then leveraged for political gain.

Mass-scale online ad microtargeting using this tool may have played a significant role in the success of Donald Trump’s 2016 US presidential campaign and the Brexit “leave” campaign the same year.

These events thrust online political advertising into the spotlight. In their wake, lawmakers and civil society groups around the globe have steadily ramped up pressure on the platforms regarding online political advertising.

In response, Twitter and TikTok have decided political ads aren’t worth the trouble, and banned them outright.

Facebook instead shut access to third-party data gathering – which had the side effect of making genuine, ethical scholarly research more difficult.

Does any of this work, though?

Online political advertising is but one part of an overall campaign, but it can be used to tip the balance in one party’s favour.

In other countries, online advertising may focus on encouraging supporters to vote and discouraging opponents. However, Australia’s compulsory voting means campaigns are likely to focus on persuading a relatively small number of swinging voters. This in turn means specific highly contested seats (such as Kooyong) may play an amplified role within our elections.

One thing to watch in this election and beyond is the growing call for truth in political advertising. Online election advertising will only become more intense, so we will need ways to rein in mis- and disinformation.

Daniel Angus, Professor of Digital Communication, Queensland University of Technology; Axel Bruns, Professor, Creative Industries, Queensland University of Technology, and Ehsan Dehghan, Lecturer, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

SEE ALSO

The ‘digital town square’? What does it mean when billionaires own the online spaces where we gather?

Twitter on computer screen

The ‘digital town square’? What does it mean when billionaires own the online spaces where we gather?

Authors Jean Burgess
Date 27 April 2022

The world’s richest man, Elon Musk, seems set to purchase the social media platform Twitter for around US$44 billion. He says he’s not doing it to make money (which is good, because Twitter has rarely turned a profit), but rather because, among other things, he believes in free speech.

Twitter might seem an odd place to make a stand for free speech. The service has around 217 million daily users, only a fraction of the 2.8 billion who log in each day to one of the Meta family (Facebook, Instagram and WhatsApp).

But the platform plays a disproportionately large role in society. It is essential infrastructure for journalists and academics. It has been used to coordinate emergency information, to build up communities of solidarity and protest, and to share global events and media rituals – from presidential elections to mourning celebrity deaths (and unpredictable moments at the Oscars).

Twitter’s unique role is a result of the way it combines personal media use with public debate and discussion. But this is a fragile and volatile mix – and one that has become increasingly difficult for the platform to manage.

According to Musk, “Twitter is the digital town square, where matters vital to the future of humanity are debated”. Twitter cofounder Jack Dorsey, in approving Musk’s takeover, went further, claiming “Twitter is the closest thing we have to a global consciousness”.

Are they right? Does it make sense to think of Twitter as a town square? And if so, do we want the town square to be controlled by libertarian billionaires?

What is a town square for?

As my coauthor Nancy Baym and I have detailed in our book Twitter: A Biography, Twitter’s culture emerged from the interactions between a fledgling platform with shaky infrastructure, an avid community of users who made it work for them, and the media who found in it an endless source of news and other content.

Is it a town square? When Musk and some other commentators use this term, I think they are invoking the traditional idea of the “public sphere”: a real or virtual place where everyone can argue rationally about things, and everyone is made aware of everyone else’s arguments.

Some critics think we should get rid of the idea of the “digital town square” altogether, or at least think more deeply about how it might reinforce existing divisions and hierarchies.

The ‘town square’ can be much more than just a soapbox for sounding off about the issues of the day.
Shutterstock

I think the idea of the “digital town square” can be much richer and more optimistic than this, and that early Twitter was a pretty good, if flawed, example of it.

If I think of my own ideal “town square”, it might have market stalls, quiet corners where you can have personal chats with friends, alleyways where strange (but legal!) niche interests can be pursued, a playground for the kids, some roving entertainers – and, sure, maybe a central agora with a soapbox that people can gather around when there’s some issue we all need to hear or talk about. That, in fact, is very much what early Twitter was like for me and my friends and colleagues.

I think Musk and his legion of fans have something different in mind: a free speech free-for-all, a nightmarish town square where everyone is shouting all the time and anyone who doesn’t like it just stays home.

The free-for-all is over

In recent years, the increasing prevalence of disinformation and abuse on social media, as well as their growing power over the media environment in general, has prompted governments around the world to intervene.

In Australia alone, we have seen the News Media Bargaining Code and the ACCC’s Digital Platform Services Inquiry asking tougher questions, making demands, and exerting more pressure on platforms.

Perhaps more consequentially for global players like Twitter, the European Union is set to introduce a Digital Services Act which aims “to create a safer digital space in which the fundamental rights of all users of digital services are protected”.

This will prohibit harmful advertising and “dark patterns”, and require more careful (and complex) content moderation, particularly by the larger companies. It will also require platforms to be more transparent about how they use algorithms to filter and curate the content their users see and hear.

Such moves are just the beginning of states imposing both limits and positive duties on platform companies.

So while Musk will likely push the boundaries of what he can get away with, the idea of a global platform that allows completely unfettered “free speech” (even within the limits of “the law