The News and Media Focus Area will investigate and improve the uses and impacts of automated decision-making in news work, social media platforms, and the digital media and communication environment more broadly.
Modern digital news and media platforms deploy automated decision-making systems intensively, with positive as well as problematic results. Search engines, personalised newsfeeds, content moderation systems and programmatic advertising all play integral roles in media.
This enables new forms of computer-assisted reporting and audience analytics in journalism, but it also creates risks to democratic processes and social cohesion through the automated curation of personalised information and the algorithmic amplification of misinformation and other social harms, based on selection principles that are neither transparent nor equitable.
Building on innovative research frameworks, the News and Media Focus Area investigates the impact of automated decision-making in this field.
RESEARCH PROJECTS
This project seeks to generate a better understanding of the functions, capacities, and normative role of humans within automated decision-making systems.
Our premise is that everyone should have the opportunity to benefit from digital technologies: to manage their health, access education and services, participate in cultural activities, organise their finances, follow news and media, and connect with family, friends, and the wider world.
This project investigates the extent to which automated decision-making systems affect the provision of consumer insurance via pricing algorithms that may produce unfair outcomes for particular groups through proxy and price discrimination.
Examining the ways in which automated decision-making systems impact public and shared space via sensors that produce actionable digital simulations, artefacts, and interfaces.
This project seeks to scope several approaches for addressing Automated Decision-Making and Decision-Support Systems-Related Risks (ADM/DSS RR) through norms, and to evaluate those approaches for consideration in regulatory contexts.
This project aims to develop new computational methods to prevent language-oriented automated decision-making (ADM) systems from generating harmful content.
Assessing the prospective harms and benefits associated with ADM as a first step towards amelioration.
Investigating the enablers of digital transformation and considering digital futures within the cultural sector by evaluating the outcomes of ACMI’s CEO Digital Mentoring Program.
Identifying the opportunities, enablers and barriers for public interest litigation to promote accountability and fairness in automated decision-making.
Examining the political economy of ‘sex tech’ in order to identify how sexual technologies are being governed at scale, how sexual data is being collected, stored, shared and monetised, and how the material benefits of sex tech may be more equitably distributed.
Identifying how misunderstandings of harm and safety flow into flawed data logics and ineffective automated digital platform responses.
Evaluating the moderation of social media content, which has become radically more reliant on machine learning classifiers during the Covid-19 pandemic.
What shapes the environmental impacts of data centres’ cooling infrastructures?
This project brings together expertise in digital media, platform studies, and law with data science and machine learning to study the roles and data operations of bots – pre-programmed automated agents – on social media platforms.
Unpacking biases in models that may come from the underlying data, as well as biases in software that could be designed with a specific purpose or angle from the developers’ point of view.
Recent developments in the machine learning and information access communities attempt to define fairness-aware metrics to incorporate into evaluation frameworks. This project will address a number of unexplored research questions related to quantifying and measuring bias and engagement.
Examining the challenges to, and opportunities for, liberal and democratic institutions and governance presented by ADM.
Developing a theoretically rich analysis of democracy and freedom in the context of ADM.
Considering ethical approaches in the area of automated decision-making (ADM) and civic life with a focus on civic commitments and concerns.
Operationalising new data partnerships and implementing data analysis to improve non-profit and humanitarian sector work.
Exploring the role of everyday data practices and literacies in automated decision-making.
Examining common themes with respect to the issues raised by the collection, storage, and use of data for ADM across object domains.
Measuring digital inclusion and media use in remote Aboriginal and Torres Strait Islander communities.
Examining the ways in which automated decision-making (ADM) is being integrated into the lives of diverse and non-dominant communities across Australia.
Providing strategies to address the potential harms posed by ‘dark ads’, and developing accountability and transparency mechanisms for targeted advertising.
This project will create a next-generation recommender system that enables the equitable allocation of constrained resources.
This project investigates current developments in journalistic practice by conducting in-depth interviews with news workers, including journalists, social media editors, developers, programmers, computer scientists, graphic designers and social media marketing staff.
This project is a review of the current state of ADM implementation, practices and visions in different regions in the Global South.
This research aims to assess the extent to which leading search engines and their algorithms personalise search results based on the profiles they establish for their different users.