News and Media Symposium – Facebook Advertising and the Australian Ad Observatory Project
6 October 2021

Prof Mark Andrejevic, Monash University node, ADM+S (chair)
Prof Daniel Angus, QUT node, ADM+S
Simon Elvery, Journalist, ABC
Abdul Obeid, QUT node, ADM+S
Dr Nina Li, Automated Society Working Group, Monash University
Watch the recording
Duration: 0:56:22


Prof Mark Andrejevic:

So, thank you for joining us on this action-packed morning. I’ve realised that we are the panel standing between you and lunch, unless you’re on Zoom, in which case you can be munching as we go. So I want to kick things off.

This is really the formal launch of the Australian Ad Observatory project. I’m just going to start off by doing a couple of introductions to the panel and also a couple of thanks for the project, which we will learn about in the coming moments. Just to remind myself and keep myself on track, I am going to show some slides, and we also have the premiere of our promo video for the project. So I’ll show that as well. So, bear with me just while I do my share screen thing.

Okay, so to start off with, I just want to say a little bit about the panel members who are joining us today. Daniel Angus is professor of digital communication at QUT and an associate investigator with the ADM+S Centre. Bronwyn Carlson, who is a member of this project and may tune in as we go, is a professor at Macquarie University, Head of the Department of Indigenous Studies there, and Director of the Centre for Global Indigenous Futures, and also an associate investigator with the Centre.

Simon Elvery. Simon showed up, he's here. Oh, Simon is there. Right, okay, thanks. Hey Simon. He's a Brisbane-based journalist and developer at ABC News Story, and a partner investigator with the Centre.

Abdul Obeid is a data engineer with the Centre and a PhD candidate at QUT. And Nina Li, who's a lecturer at Monash University and part of the Automated Society Working Group, which conducted the pilot version of this project, which she will speak to. Dan's going to intro the project for us, Simon will give the journalism perspective, Abdul will lead us through the tool, and Nina will talk a little bit about the types of findings that we got in the pilot project, which speaks to some of the things we might think about when we consider how to handle the data from this project.

I also want to express some thanks. First to the Automated Society Working Group at Monash who, again, did the pilot version and brought an earlier version of the project tool to the Centre. And of course Abdul and Dan, who really made that tool what it is today. To Professor Carlson, for her interest in the ways in which this tool might be used to consider how Indigenous Australians are being targeted on Facebook, against the background of the history of racism in advertising and predatory targeting, which is an ongoing concern in how advertising has treated a number of groups, including Indigenous Australians.

To ACCAN, the group which is a partner with the Centre and also helped fund the pilot project. Also to Peter Lewis at Essential Media, who conducted the poll that I'll reference at the end of this session, which really highlights the public support for the type of transparency initiative that we're engaged in. To Jenny Lee, who created the promo that you're about to see, and to the NYU Ad Observatory. The folks there have been super helpful in terms of filling us in on their experience and sharing code with us. They've been invaluable in helping us out. And of course to Kathy Nichols, who's done an amazing job in outreach and promotion for the tool. So with those thanks in mind, I'll just show the promo quickly. The other panellists haven't seen this yet because it just got finished this morning, so I hope it doesn't steal any thunder, but it's pretty short. And then we'll hand over to Dan. So here's the promo that will tell you a little bit about the purpose of this project. Sorry, I just need to find the start.



Online advertising ushers in a quantum shift in how advertising works. Traditionally, advertising was out in public view. The shift to advertising online means that ads can be targeted at very, very specific groups. The reality is that these ads are orchestrated and designed to target you using information about you that you thought was private. They are hyper-customised, often in automated ways, which makes it possible to generate thousands, or tens of thousands, of variations of ads and serve them to different groups or different individuals. One thing we know is that there are a lot of problems.

We know that there are a lot of consumer scams. We know that there is discriminatory delivery and targeting. We know that there's misinformation, and that there is wilful disinformation. In fact, online ads are sometimes called dark ads because they're only seen by the people to whom they are directed. That means they are not available to the public. They're not available to reporters or journalists. They're not available to government agencies or regulators. We can't, as a general public, know what advertising is out there. The most important step we can take right now is more transparency, so that more researchers can actually work on this problem. The goals of the Australian Ad Observatory project are to provide some measure of accountability for online targeted advertising and really get a conversation started about what the appropriate social response is.

The ad collection tool works as a plugin that is installed in a computer browser. You install the plugin and register your consent to letting us collect the advertisement data that we gather from within the plugin. This all happens while you are scrolling through your news feed, and you can later view the advertisement data within a dashboard that's provided with the plugin. Anything we find out feeds into a global conversation around the potential harms and threats that online advertising might pose to a functioning democratic society. There's no reason why digital ad platforms cannot subject ads to the same scrutiny that other industries face. The problem is really urgent. This is a real threat to democracies all over the world.

Prof Mark Andrejevic:

So, I should say that the browser tool is available to install, and the link there will take you to it, and will provide more background information and the ethics material about it. So, if you're interested and want to participate, please take note of the URL. I should note that Professor Carlson is part of the project. She's going to join us when we present results and some of the findings, so she won't be speaking on the panel today. And with that, I'll hand it over to Dan. Thanks, Dan.

Prof Daniel Angus:

Thanks so much Mark. So, Abdul and I are going to do a bit of a double team here, presenting an overview of the process of installing the plugin, and I'm going to come in towards the end, in terms of what we're potentially going to do with the data that's collected through this donation process. So Abdul, do you want to just go through? Hopefully the slides will work.

Abdul Obeid:

So let's get started. As you're all aware, in a nutshell, the Australian Ad Observatory plugin works through data donations. It's a pretty simple plugin at face value, however it's doing something, as we mentioned, that is quite urgent and quite necessary for understanding transparency and accountability in online advertising practices at present. We distribute the plugin through three online stores, for three separate web browsers: Google Chrome, Mozilla Firefox and Microsoft Edge, as you can see up here on the screen. The first step is the installation process, and I'm going to highlight to you exactly how easy it is to get started. It takes no more than a few minutes, as you will come to know, and it begins inside a web browser's web store. Anyone who's ever installed an extension before has gone to their respective web store, in this case the Chrome Web Store. Here we're using Google Chrome, however you can do the same on Mozilla Firefox or on Edge, depending on your preference. So let's go ahead and install the plugin. I'm going to click the little Add to Chrome button, and after that it's going to request a few privileges. This is all done in front of you, so that you're aware of what it is that you're consenting to. And once you've added the plugin to your browser, you can access it via the little extension pane up here in the top right corner, with the little puzzle-piece icon.

So now that it’s been added to my browser, which here is Chrome, I can click the little icon in the top right corner to manage the extension.

The plugin will only work with your consent, and we stress this very strongly. When you install the plugin you will be presented with a consent form that asks for your permission before we go ahead and do anything. So we don't do any data donations without getting your explicit okay. The way the form looks is shown on this page here. I'm going to firstly agree to the consent form, and this here highlights all the details about the ethics process. I might mention also, for those of you that might want to do a little bit of background checking, that we now provide an ethics number. This is just a demo, so I'm going to go ahead and click I agree, and thereafter what you're presented with is a request for some demographic information, which really is the majority of everything that we're gathering from you. Beyond this we don't grab anything that could be used to personally identify you. Everything is de-identified and anonymised, because we respect your privacy. And once you've filled in the remainder of the form, your plugin begins contributing the data donations.

So here we're just putting in some random details. You can see that we are interested in understanding levels of education, languages spoken, party preferences, employment status, and annual incomes. This is just to understand the various demographics that will be seeing these advertisements, to get an idea about how Facebook targets certain audiences. So once you've got the plugin installed and registered, we can begin the collection of data donations. And this is quite effortless. All you really need to do is just use your Facebook account. And here we have one particular kind of post that might be served on Facebook.

Now notice that this is not an advertisement, and the way we know is that it has no Sponsored tag on it. This is just a public post from a public page. We are not interested in posts of this kind, and I demonstrate it here to make clear that this is something we do not collect. Anything that is not an advertisement is ignored by the tool, and we indicate this to you at the top of the post, with a tag that notes: hey, this is a public post, it has not been shared with the research project. The same goes for any private posts, or any posts that do not fit the advertiser post model that Facebook has designed. So this is not an advertisement, and we don't want anything to do with it. So suppose instead that you are confronted with an advertisement, and this is ideally what it should look like to you. Great to show this just before lunch as well. Notice the Sponsored term: all advertisements on Facebook are required to carry this detail. It's been a bit tricky getting this particular tag to be recognised, and this falls in line with current countermeasures. You can understand that Facebook doesn't really appreciate the monitoring of certain features, however this particular aspect we are actively detecting within posts. And as you can see, this is an advertisement. We clearly indicate that we have collected it, along with its targeting information, and thereafter you can review your data donations. So this is the last aspect of the tool that we provide, and it offers complete transparency of the process.
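The filtering rule Abdul describes, collect only posts carrying Facebook's Sponsored label and ignore everything else, can be sketched as a simple filter. This is an illustrative sketch only, not the plugin's actual code; the `Post` structure and its `labels` field are hypothetical stand-ins for whatever the real extension reads from the page.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    author: str
    body: str
    labels: List[str] = field(default_factory=list)  # UI labels rendered with the post

def is_advertisement(post: Post) -> bool:
    """A post is treated as an ad only if it carries a 'Sponsored' label."""
    return any(label.strip().lower() == "sponsored" for label in post.labels)

def collect(posts: List[Post]) -> List[Post]:
    """Keep only advertisements; public and private posts are never collected."""
    return [p for p in posts if is_advertisement(p)]

feed = [
    Post("Local News", "Council meeting tonight."),        # public post: ignored
    Post("Chip Co", "Try our new chips!", ["Sponsored"]),  # ad: collected
]
print([p.author for p in collect(feed)])  # ['Chip Co']
```

The design point is that the check is a whitelist: anything without the Sponsored marker is dropped by default, so non-ad content never enters the donation.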

So to review what data you've donated, you firstly need to access the extension pane. This is once again done via the little plugin icon in the top right corner of the page, where you can see the extensions you've installed. I'm going to go ahead and add that little pin to the top so I can easily access it, and then on accessing the extension you'll be given some basic details about the advertisement data you've donated. So far I've seen three ads, and there are a few buttons that help me manage various features.

So to see the advertisements you've donated, you can click on My Ad Archive. Go ahead and click that, and from here you can go back and see any ad that's been shown to you on Facebook. So here we can see the advertisement that was previously served to us on Facebook. This is really interesting; in fact, for those of you that might subconsciously see advertisements and then have no recollection of them being served to you, you can actually gain a bit of insight from being able to see it all in front of you, in retrospect.

Prof Daniel Angus:

So this area here is one that we will continue to develop, in terms of building out the tool to provide more of those measures around personal insights into advertising and such, that you see within your Facebook feed.

Abdul Obeid:

So, in saying that, there is targeting information that you as users unknowingly provide to Facebook just by using it, as part of the terms and conditions of the platform. They gather information about you, like your age, your gender, your location, and what interests you have, and we let you know exactly what aspects of the targeting data have been used to serve certain ads to you within your news feed. And we show this back to you from within the tool itself. So over time your archive will fill up with all the advertisements that you've been shown, and thereafter, if you do want to adjust certain preferences, you can refer back to the preferences pane to gain a bit of further control over how you want to manage your experience. So if at any stage you want to change what data you share or how the plugin interacts with Facebook, you can do this from the preferences pane. You can share diagnostic data, or you can choose not to. You can have debug info set up to help us improve the tool, or show the collection status, or have it hidden if you feel that it might be impeding your experience. So with that said, I'm going to hand the mic over to Dan.

Prof Daniel Angus:

Thank you Abdul. So with the data that is donated, what we will be able to do is collect a public data set of advertising information that can then be subjected to broad-scale, whole-nation-level analysis. This is an example of some of the pathways through which we can start to engage in that analysis. This example here is showing logo detection. We've developed a whole range of machine learning and other general analytic approaches to begin to understand these things at scale. So in this example, from a fictitious manufacturer of potato chips, you can see there are three visible logos within the ad. On the right hand side, our system has correctly identified those three logos. Now, this uses a very lightweight training approach where we don't need a lot of examples of logos. It's one-shot learning, so we can insert a single logo and then identify any instances of that logo in advertising that people have been subjected to.
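The one-shot idea Dan describes can be illustrated with a toy example: represent the single reference logo and each candidate image region as feature vectors, then flag regions whose cosine similarity clears a threshold. The vectors and threshold below are invented for illustration; a real system would obtain embeddings from a trained vision model rather than hand-written numbers.

```python
import math
from typing import List, Sequence

def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def match_logo(reference: Sequence[float],
               regions: List[Sequence[float]],
               threshold: float = 0.9) -> List[int]:
    """Return indices of image regions whose embedding matches the single
    reference logo embedding (one-shot: no per-logo training set needed)."""
    return [i for i, r in enumerate(regions) if cosine(reference, r) >= threshold]

logo = [1.0, 0.0, 0.5]            # embedding of the one reference logo
regions = [
    [0.9, 0.1, 0.45],             # close match
    [0.0, 1.0, 0.0],              # unrelated region
    [1.0, 0.05, 0.5],             # close match
]
print(match_logo(logo, regions))  # [0, 2]
```

The appeal of this approach is exactly what Dan notes: adding a new logo to watch for means adding one embedding, not retraining a classifier.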

This is particularly important for looking at the range of political advertising that might be present on the platform. So here we have another fictitious political party, with Melvin Meador from the Meta Party, and the system has correctly identified the Meta Party's logo in that particular ad. What it can also do is optical character recognition and salience detection. So the text within this advertisement can be converted to machine-readable text, but we can also build an idea of how the placement of that text in the image might be important. The size of the text, and where it is placed within the ad, have particular importance for how people receive that information and attend to it within a specific advertisement. These tools allow the kinds of analytical processes that can only be done at a certain scale. And what that scale means is that we can start to look at collections of ads and how they might be targeted to specific, say, protected demographics, or other general demographics. So in this case here, and this leads back to some of the pilot work that I know Nina will pick up on and Mark was mentioning from ACCAN, we looked at a range of ads in that pilot project using a process called the image machine, which was built out of another Discovery project, actually with a colleague here, Jane Tan. And what we did was use the image machine to cluster ads around their general aesthetic properties. So we see appliances here in this kind of appliance cluster, and we can note how they are targeted towards very specific demographics and how that might change, and how we can then mine the information we're getting to profile and really understand how ads are being targeted towards specific demographics.
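The salience point above, that the size of text and where it sits in the ad matter, can be sketched as a crude scoring heuristic over OCR output: weight each recognised text box by its area and its distance from the image centre. This is a hypothetical illustration of the concept, not the project's actual salience model; the `TextBox` fields and the weighting formula are assumptions.

```python
from dataclasses import dataclass

@dataclass
class TextBox:
    text: str
    x: float     # normalised [0, 1] coordinates of the box's top-left corner
    y: float
    w: float     # normalised width and height
    h: float

def salience(box: TextBox) -> float:
    """Crude heuristic: bigger text closer to the image centre scores higher."""
    area = box.w * box.h
    cx, cy = box.x + box.w / 2, box.y + box.h / 2
    dist = ((cx - 0.5) ** 2 + (cy - 0.5) ** 2) ** 0.5  # distance from centre
    return area * (1.0 - min(dist, 1.0))

headline = TextBox("VOTE META PARTY", 0.1, 0.4, 0.8, 0.2)       # large, central
fine_print = TextBox("Authorised by ...", 0.0, 0.95, 0.3, 0.04)  # small, corner
print(salience(headline) > salience(fine_print))  # True
```

Even a heuristic this simple separates a headline from authorisation fine print, which is the kind of distinction that matters when analysing political ads at scale.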

Abdul Obeid:

Thank you, Dan. So, in summary, as shown, anyone in Australia can register, install and review their plugin easily. The plugin is really only interested in advertisement data shown to everyday users of Facebook, and doesn't collect anything else through your data donations. We hope to gain a clearer understanding of online advertising practices. In saying that, we'd like to thank the efforts of the Algorithmic Transparency Institute, the NYU Ad Observatory, ProPublica, and others.

The Australian Ad Observatory plugin is based on the open source software that these initiatives have relentlessly maintained. Only with their support has this research been possible. Thank you.

Prof Daniel Angus:

Back to you, Mark. Hang on, you’re on mute, Mark.

Prof Mark Andrejevic:

Yes, of course. Thanks for that. It'll be interesting to hear, maybe in the discussion, Abdul say a little bit more about "relentlessly", because there have been ongoing changes that Facebook has implemented that have posed challenges for providing this type of accountability. So now I will hand over to Simon.

Simon Elvery:

Thanks Mark. First of all, I'm really excited about this project and this research, and just thrilled to be on the panel, so thanks for having me. The main reason I'm so excited about it is that investigating big tech is really hard, and the more projects we have like this in the world, the better. Journalists collaborating with technologists and academics is a really important piece of the puzzle, I think, for moving forward in this space. It's a major challenge simply understanding and verifying the flows of data and money around big tech, and I don't think (I hope) there are too many people in the room here or on Zoom who need convincing of that. But by way of an example, I thought I'd talk for a couple of minutes about a series of stories I did a few years ago, without the advantage of collaborations with other technologists and academics. I did a series called Data Life, and the basic idea was that I wanted to take stock of the data that was flowing on, or more importantly off, my devices.

So, the two main devices I use every day are my phone and my work computer, and I wanted to see where the data was going. I sort of knew it would be a fairly extensive set of data, but I wanted to know a bit more: who was collecting it, and what kinds of activity were being collected. I basically wanted to man-in-the-middle attack myself, and just see what somebody in that position, or somebody at the end of that flow of data, was collecting. As it turns out, that's a relatively complex technical exercise. To enable that sort of process you need to have skills around understanding transport layer security, for example. You need to know what certificate pinning is, you need to be able to generate and install root certificates on your own devices, set up servers, set up a VPN, and install those VPN services on your devices so that interception can take place. And I say that just to point out that journalists really need these quite complex technical skill sets, which aren't everyday skill sets, just to scratch the surface. I mean, this really only scratches the surface of just my personal data interactions with organisations, and so it's hard to do this reporting.

So again, I just wanted to reiterate that point: collaborations with technologists and researchers are really important for getting these stories told. And we need more journalists with deeper technical skills as well. I don't think it's something that every journalist should have, but we do need more journalists with those skills working in the field.

So, doing this with any sort of consistency takes a huge amount of time and energy and money, which, as a few other panellists pointed out earlier in the sessions, is increasingly in short supply for media organisations. A bunch of interesting stats also came out of my project. I saw exactly what the volume of data going off my devices was, and where it was going. But for the most part, they kind of confirmed existing understandings. Just like on the wider web, in my personal life Google looms large and collects an enormous amount of data off my devices, about me and my activities. But the one thing that I didn't necessarily expect to find, and kind of the article that I used to wrap up the series, was that a bunch of searches that I might have actually... I take that back. One search. I made one search about a sexual health issue. I was searching for a vasectomy, for obvious reasons, and out of that one search it turned out that at least 14 different ad tech organisations and social media companies got information from the flows of data out of my devices that could, in theory, identify me with that search. For me personally, I don't feel particularly seen by that; it wasn't a huge problem. But it doesn't take too much imagination to understand how, for different searches and different people, that could actually present a big problem.
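The tally Simon describes, how many distinct organisations receive data from a single interaction, could be approximated from an interception log by counting unique registrable domains. The log lines and domain names below are entirely made up for illustration, and the domain-extraction rule is deliberately naive.

```python
from urllib.parse import urlparse

# Hypothetical interception log: URLs requested while one search was made.
log = [
    "https://www.google.com/search?q=vasectomy",
    "https://ads.tracker-one.example/pixel?uid=123",
    "https://cdn.tracker-one.example/lib.js",
    "https://collect.tracker-two.example/beacon",
    "https://graph.social-net.example/event",
]

def registrable_domain(url: str) -> str:
    """Naive 'organisation' key: last two labels of the hostname.
    (A real analysis would use the Public Suffix List instead.)"""
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

# Every organisation other than the search engine itself is a third party here.
third_parties = {registrable_domain(u) for u in log} - {"google.com"}
print(sorted(third_parties))
print(len(third_parties))  # 3 distinct third-party organisations
```

Scaling this from three toy domains up to the fourteen organisations Simon found is just more log lines; the hard part, as he notes, is the interception setup that produces the log in the first place.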

So, you know, once we've gone to all this effort of actually understanding and revealing some useful or significant piece of information in this space, how do we turn these highly technical processes and concepts into engaging stories, into engaging journalism, that people actually read and think about?

How do we tell these stories? I mean, arguably one of the more important jobs of a journalist is to attract and hold people's attention. There are obviously a lot of other important things around journalism, but without an audience, what's the point? So how do we make people care about these subjects?

Unfortunately I'm not really here to deliver answers to the question, and I'm hoping you can help. But I do have a few thoughts about it, as I've been talking and thinking about this space, you know, the data economy, privacy and automated decision making. I've started to think that it suffers from what I'm calling the climate change problem. That is, for people who are interested in it and have a bit of knowledge in the space, we can see the potential for the societal and individual harms that exist, and sort of the inevitability; it feels really inevitable that we're continuing down this path we're on. And so for people who are in that space, it can feel like a short path to dystopia, right? And to some degree, I think we have to resist that feeling, but it's also a useful and motivating feeling in some ways as well. The flip side of this, and why I think it's the climate change problem, is that for people who aren't interested, or don't have that kind of base level of knowledge, it can feel really abstract and remote from their lives. And if they see the problem at all, they think of it as intractable or inevitable, and not something that they have agency over and can do something about. So all these factors add up to an audience that's really hard to engage. And I'll finish up by saying that I think it's a bit of a truism that personal stories are really good at attracting an audience, but finding examples of those, and of where there are personal harms in this space, is actually really, really tricky. Especially examples that you can tell in media and in a big story without causing further harm or anything else like that.
So I'll finish up just by quoting somebody, actually Jeremy Merrill, who was part of the original project that this is based on at NYU, and I think he was working at ProPublica at the time. He had a really interesting insight, I thought, which is that examples are the bycatch of academic research, but they make killer stories.

So that’s one of the really great reasons for collaborations between journalism and technologists and researchers. So I’m keen to find some of those killer story examples out of this project.

Prof Mark Andrejevic:

Thanks Simon. We're so happy to be able to collaborate with the ABC, and with you, on this. Bringing together independent public service media with public institutions like the university is crucial to providing a position outside, for the most part, the commercial ecosystem, allowing us to ask these questions. And now I'll hand over to Nina, who will talk about some of the types of findings we were able to generate from the pilot project, which was very small. This is mainly to give an idea of what it means to try to make sense of this type of data, and a preliminary look at it. Thanks Nina.

Dr Nina Li:

Thanks Mark. Let me just pull up... can you see my slides? Oh, yes. So, hi everyone, my name is Nina Li. I'm part of the Monash Automated Society Working Group research team, led by Professor Mark Andrejevic. As a research team, we have done a pilot project on dark ads on Facebook, and this project was funded by the Australian Communications Consumer Action Network, ACCAN. I'm just going to share some of the findings from this project with you. There are three major things we did in our research. The first was that we used the ad collection tool, the one introduced in the promo video and by Dan and Abdul, to collect Facebook ads from our research participants. We recruited 160 research participants across Australia, and our participants were asked to install our ad collection tool as a plugin for their desktop browsers, which allows them to share the ads that appear in their feeds with us. Now, the tool we used was adapted from the tool developed by ProPublica, but our version is different from the original one because we added the capacity to collect some voluntarily provided demographic information from our research participants, so that our research project is able to show not only what ads are being served to people online, but also how these ads are being distributed across different demographic groups. With this first step, we were basically able to create a database that included all the dark ads we collected via this tool, and this is a screenshot of how the collected ads are displayed in our database. We were also able to sort and filter the ads by demographic categories such as gender, education, ethnicity, political party, etc.

So, more specifically, for example, this is a demographic table generated using our ad tagging and filtering tool. This table is able to visualise and show us the breakdown of how many times, in this case, the tech ads have been received and viewed by our research participants, based on demographic features such as (sorry, the font is a bit small here) age, education, income, ethnicity, political party and gender, etc. In terms of gender, for example, as you can see in this case, the tech ads have been overwhelmingly received by our male research participants, with over two thousand views, compared to around seven hundred for our female participants. And this is another example, of the alcohol ads, and similarly, as we can see, this demographic table is able to provide us with a view of the spread of demographics who have viewed the alcohol-related ads. Here I just wanted to point to the gender variable again, because gender is a relatively more robust category that we have identified in our research. So, as we can see here, there was a skew towards male users compared to female users in the demographic distribution of the alcohol-related ads.

Now, the images put together here are just a short selection of the ads, again varied by the category of gender. As you can see, those on the left are the type of ads that have been served to and received by our male participants, and the ones on the right are those received by our female participants.

So, that's the first thing we did in our pilot research. Let me just come back now. The second thing we did was that, once we collected all the images of the dark ads, we used the image machine developed by Dan and his research assistant Jane to classify the ads by the visual elements they have in common. So, as Dan just introduced, the image machine is basically a kind of automated image classification system that can recognise a shared pattern across a selection of images, group them together, and create a cluster of similar images. Just to give us a sense of the type of images that are being grouped and clustered together by the machine, for example, these are the images from one of the similarity clusters detected by the machine, clustered together based on the shared feature of the human face. So, that's the second thing we did in our pilot research: we used the image machine to classify and cluster the ad images we collected.
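The image machine itself is a research system, but the core idea, grouping images whose feature vectors are close, can be sketched with a simple greedy threshold clustering. The embeddings and threshold below are invented for illustration; the real system works from learned visual features, not two-dimensional toy vectors.

```python
import math
from typing import List, Sequence

def distance(a: Sequence[float], b: Sequence[float]) -> float:
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster(embeddings: List[Sequence[float]], threshold: float = 0.5) -> List[List[int]]:
    """Greedy clustering: each image joins the first cluster whose seed
    it is close to, otherwise it starts a new cluster of its own."""
    clusters: List[List[int]] = []
    for i, e in enumerate(embeddings):
        for members in clusters:
            if distance(embeddings[members[0]], e) <= threshold:
                members.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Toy ad-image embeddings: two 'face' ads and one 'appliance' ad.
embeddings = [[0.1, 0.9], [0.15, 0.85], [0.9, 0.1]]
print(cluster(embeddings))  # [[0, 1], [2]]
```

The output is the shape the pilot worked with: clusters of visually similar ads, each of which can then be cross-referenced against participant demographics.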

Now, once the image machine created the ad clusters, the last thing we did was to use the demographic data provided by our research participants to see if the clusters had a demographic skew. So this step basically allows us to visually detect the demographic skew of the ad clusters. So we have some examples here.
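That last step, checking whether a cluster skews towards one demographic group, amounts to tallying views per group within the cluster. A minimal sketch, using made-up view records shaped like the tech-ads figures mentioned earlier (roughly 2,100 male views versus 700 female):

```python
from collections import Counter

# Hypothetical per-view records for one ad cluster: the viewing participant's gender.
views = ["male"] * 2100 + ["female"] * 700

def skew(views, group_a="male", group_b="female"):
    """Ratio of group_a views to group_b views for a cluster.
    A value above 1 indicates a skew towards group_a."""
    counts = Counter(views)
    return counts[group_a] / counts[group_b]

counts = Counter(views)
print(counts["male"], counts["female"])  # 2100 700
print(round(skew(views), 1))             # 3.0
```

With a small pilot sample, as Nina notes below, only a strong ratio like this is robust; finer demographic combinations need a much larger participant pool.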

For example we have a cluster of sleek looking car ads, and as we can see here, this cluster has a strong masculine skew in terms of the participants who were targeted by this cluster.

And by contrast, we have a sleep-related cluster we identified, which has a very strong female skew. We were also able to compare this pattern with the dining cluster we identified, which, as we can see, mostly features scenes of domestic sociality, and this cluster has been primarily seen by our female participants.

So, yeah, briefly, these are the three things we did in our pilot research, and before I conclude, I'm just going to quickly share some general observations and some limitations of our pilot project.

Now, current studies on dark ads focus more on the context of political advertising, whereas ours looks at the commercial context. And we believe that our research has revealed some potential for us to assess how existing stereotypes, along different lines of social difference, might be reinforced by commercial messaging, which nowadays is delivered in the more personalised context of targeted advertising. On the other hand, the limitations of our pilot research are, first, that we have a small pilot sample of 160, and, as I just mentioned, the most robust demographic category from our pilot research is gender. So we believe that with a larger-scale project it would be possible to explore patterns across a more diverse range of demographic categories and associated combinations.

We are also thinking that the techniques we have been using for collecting and classifying ads in this pilot research could be repurposed for other uses. For example, to identify and examine different versions of ads for the same story: for the same commercial brand, for example, or for the same politician. So, the hope is that collectively these studies could provide a stronger evidence base for pushing ad transparency and accountability, because the biggest challenge with targeted dark ads is really that they are generally not available for public scrutiny. And oftentimes they are also short-lived, and often go through constant transformations, which makes deliberation and collective response really hard. So, yes, just these two thoughts from our pilot research for future collaborations. That’s all from us, so thanks, and I’ll hand over to Mark.

Prof Mark Andrejevic:

Thanks so much. I think it might be worth just pointing out that previous iterations and uses of this tool have focused primarily on political ads. We have deliberately expanded this because we’re thinking about that broader history of the role that advertising plays in culture, when it comes to questions of stereotyping, discrimination, predatory advertising, all of which are not limited to the realm of the political sphere. And historically, as we pointed out a couple of times, advertising is associated with the notion of publicity. One of the things that’s taking place now is the goal of being able to reach people in non-public ways, and that has real significance when set against the background of the history of some of the social issues raised by advertising. And so, thanks again to all our panellists, and we now open it up to questions. I’ll just take a look at what we’ve got here.

Jean asks Abdul – can you expand on the potential for participants to export and collate the collected ads for use in close qualitative or participatory research?

Abdul Obeid:

Yeah, so presently, as you would have seen, we do offer the advertisements within a simple dashboard interface. However, we are teasing out ideas of allowing users to firstly group what they are seeing, and beyond what we’re doing with our research, and how we are going to be investigating some of the clustering methods used in the pilot project demonstrated by Nina, it would be very interesting to allow end users to do their own analysis, to have their own export functions. This could take the form of simple exports in CSV or image-based formats. At the same time, given that this is advertisement data, there are legalities to think about when it comes to the reproduction of information that might be under specific copyrights, and we are conscious of the various bits of red tape that we do have to work our way around when dealing with this.

So, we are very open to the idea of having end users in control of what they are seeing and being able to analyse it, and be aware of it. Doing so in an organised and systematic fashion would be something truly incredible, and it’s a very thought-provoking question. Thank you for that, Jean.
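The simple CSV export Abdul floats here could look something like the following sketch. The field names and record shape are hypothetical, not the Observatory's real schema:

```python
import csv
import io

def export_ads_csv(ads):
    """Serialise a participant's collected ads to CSV text so they can
    run their own analysis. Columns are illustrative placeholders."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["ad_id", "advertiser", "seen_at", "image_url"]
    )
    writer.writeheader()
    for ad in ads:
        writer.writerow(ad)
    return buf.getvalue()

# One invented record in the assumed shape.
ads = [
    {
        "ad_id": "a1",
        "advertiser": "ExampleCo",
        "seen_at": "2021-10-06T10:02:00",
        "image_url": "https://example.com/a1.png",
    },
]
print(export_ads_csv(ads))
```

Any real export would also need to respect the copyright and consent constraints mentioned above, for instance by limiting exports to the participant's own collected ads.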

Prof Daniel Angus:

If I can add on the back of that too, Jean. I know this would be of interest to you and many here: it would be awesome to actually have people access a kind of algorithmic vernacular behind this, right, to narrate why they feel they are seeing particular ads within their feed. And, as we’ve been showing with the kind of little tags at the bottom of the tool that you can see as a user of the plugin, to connect that vernacular with the actual raw data and say, okay, here is your account. You think you’re seeing this ad for one reason; this is why we believe you are actually seeing it, because this is what you’ve provided to Facebook in terms of data around your potential habits or interests. So that would be a particularly powerful way to explore this further.

Prof Mark Andrejevic:

Great, thank you. Brooke Coco asks: can this data donation tool be used on mobile devices as well? If not, are you planning to build one out? That’s such an important question, and it is a real challenge for this type of research. I’ll let Dan or Abdul handle that.

Prof Daniel Angus:

Can I jump in there, Abdul, is that all right? So, no, it can’t be used on a mobile. There are restrictions, particularly in mobile development environments, on what you can and can’t do, and so this form of observation is really only possible through a browser plug-in. However, I do want to say that there is a whole range of methods being developed currently in other projects. So, for example, Nick Carah, Brady Robards, Amy Dobson and others are developing data donation methods for mobile, where they’re getting participants, paid informants, to screenshot ads and send them into a central kind of repository. So I think really stretching the imagination around the ways in which we can begin to access and share these kinds of ads, and the data we’re being subjected to, in a way that opens them up to this kind of broader critical scrutiny, is really fruitful for researchers and for a whole research community to think about and consider. There are many ways in which we can begin to collect and collate this data, and our tool is just one way. It’s a very necessary approach to bring that kind of important transparency, but it’s not the only way. And so I think the big platforms need to accept that this kind of transparency is absolutely necessary, and no matter what they might try to do to counter this watching, what happened at NYU and such, we’re going to find ways to get it. So it would be far easier if they were just more open and transparent about what they’re providing, to allow this kind of independent scrutiny to take place. And really, you know, we’re doing this in an ethically guided way, and in a way that’s got the full consent of the users. They should welcome this kind of transparency, because it can only lead to us discovering things which will help them, if they’re truly honest that they don’t want to do harm to their user base.

Prof Mark Andrejevic:

Thanks, Dan. We should say that there are a bunch of other initiatives to provide forms of ad transparency and accountability that we’re at work on in the Centre. Including, we’re in discussion with Nick, who’s a partner investigator in the Centre, about possible collaboration, but also tools that would look more generally at ads online and on other social media platforms as well. So, you know, Facebook has been the subject of a lot of the scrutiny, but it is obviously a larger issue. Nick Carah asks: how confident are we that the information Facebook provides us about how the ads are targeted is accurate slash meaningful?

Somebody want to field that?

Abdul Obeid:

Yeah, I can answer this one.

So presently, Facebook has a WAIST (‘Why Am I Seeing This?’) standard that they are legally required to disseminate to users, about why they see certain ads. I’m not going to go into depth about how I interpret these legalities, but what I do understand is that there is a requirement to let people know why they see certain advertisements, and this is a global standard. You’ll see the same thing on Google sites, on Reddit, or on Twitter, when you are served a certain piece of content that has been sponsored. You deserve to know why that was shown to you, and this information cannot be sugar-coated. It cannot be put through a filter, on account of the fact that this material is not just part of the marketing initiative, but is also there so that, when an advertisement goes awry, or when an advertising practice breaches certain rules, we know exactly who it was targeted at, and to what extent that targeting has had some sort of implication.

So these standards at the moment, they’re pretty robust. And as you would have seen in the demo, we have access to this. I don’t perceive, for any particular reason, that these would at any stage become unreliable, unless Facebook becomes questionable in their practices. However, I’m not going to answer that question.

Prof Daniel Angus:

But I guess the point is that we will have that data, and we’ll also have the volunteered demographic data, if users who have downloaded the plug-in decide to provide that to us, and that gives an assurance, essentially, that the WAIST data matches the demographic data. And so once again, I feel that any platform engaging in this would welcome this kind of independent scrutiny, to say yes, these things match, we are disclosing, and we are being accurate in those disclosures. If they are indeed doing that, then to say yes, this is a robust standard that we’re using and adhering to, absolutely.
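The consistency check Dan describes, comparing a platform's disclosed targeting attributes against what participants volunteered, could be sketched as follows. The attribute names and values are invented for the example:

```python
def waist_matches_demographics(waist_attrs, profile):
    """Compare each attribute disclosed in a 'Why am I seeing this?'
    panel against the participant's volunteered profile, returning
    any mismatches as {attribute: (disclosed, volunteered)}.
    Attributes absent from the profile are skipped."""
    mismatches = {}
    for attr, disclosed in waist_attrs.items():
        volunteered = profile.get(attr)
        if volunteered is not None and volunteered != disclosed:
            mismatches[attr] = (disclosed, volunteered)
    return mismatches

# Hypothetical disclosure vs. hypothetical volunteered profile.
waist = {"gender": "female", "age_bracket": "25-34"}
profile = {"gender": "female", "age_bracket": "35-44"}
print(waist_matches_demographics(waist, profile))
# flags the age_bracket discrepancy
```

Aggregated over many participants, the rate of such mismatches would give a rough, independent measure of how accurate the platform's disclosures are.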

Prof Mark Andrejevic:

Thanks. I might just add to that, that if you want to see overall patterns, the place where Facebook provides some information about that is in the ad library. If you go to the Australian version of the ad library, they do provide some very basic info about how political and issue ads are distributed, but not about other advertising. So, from an overall perspective, there’s no publicly available information, at least not through that venue.

Axel asked how many participants would you like to attract with this project?

Prof Daniel Angus:

Every Facebook user in Australia. More is great, right? And obviously, same as with the Australian Search Experience, which is about to tip over a thousand users, hopefully by the end of today. With that project, I would love to see a similar number, and indeed even an order of magnitude more users, on the Ad Observatory. The more users we have, the more potential there is for us to expose many problems that might be there. But also, yeah, if everything’s working absolutely tickety-boo, then that’s great, right. We can see that at scale. So, yeah, the more users the better. And I guess for everyone here: engage everyone you speak to after the event and say, hey look, there’s this great plugin I heard about. It’s awesome. You want to know why you’re getting all these particular ads for underwear, or beer, or whatever in your feed? Download this and it will show you. And I think there is a value there. There’s a value added, right. And I think one of the critical and really valuable things about the tool is that it shows you, hey, these are the ads you’ve seen, and that’s a really interesting little walk through your own experience of these platforms. So there is, I think, a little bit of sugar-coating there, for a user to be able to see that really instantaneously.

Prof Mark Andrejevic:

Thanks. We realise we’re never going to get the type of visibility into what’s going on that Facebook has, but the power that we do have is in numbers, and that’s the power of data donation projects like this one. And it’s twofold; one, it gives us more information with which to get a conversation going about universal ad transparency at the regulatory level, and two, it raises public awareness and public literacy.

Sorry. To Simon: to what extent do commercial outlets, in your experience as a journalist, behave in the same ways as tech platforms by personalising ads and news content?

Simon Elvery:

Good question. I’m not sure I’m in the best position to answer that, since I’ve never worked at a commercial outlet, but I will say that, for the most part, the vast majority of commercial news outlets, both in Australia and globally, use these ad tech platforms that are run by Google and Facebook. So, in a lot of ways, they are identical. You know, they do the same sorts of targeting that the big tech platforms do on their own social media platforms. There are a few exceptions; I’ve seen a couple of articles about bigger media outlets taking back a little bit of control in that space and running their own advertising programs. But for the bulk of them, they’re still just using the same ad tech.

Prof Mark Andrejevic:

And yeah, this is why it’s important not only to have that broader conversation, but also to develop tools that enable us to provide some transparency in those areas as well. I’ve gotten word from Jean that we need to wrap up before 1:15. There are a bunch of really interesting questions still, so please feel free to contact us if you have any questions about the project. If you’re interested in helping out, please consider installing the tool, or sharing it with those who might be interested in using it. Thanks so much for your time and interest, and a huge thanks to our panellists and everybody who has contributed to getting us to this point. Thanks so much.