PODCAST DETAILS

The Implications of Dark Ads
26 October 2022

Speakers:
Mathew Warren, ADM+S
Nic Carah, University of Queensland
Aimee Brownbill, FARE
Kate Bower, CHOICE
Lucy Westerman, VicHealth
Erin Turner, Lawyer
Sam Kininmonth, ACCAN
Daniel Angus, Queensland University of Technology
Simon Elvery, ABC
Lizzie O’Shea, Digital Rights Watch
Jane Tan, Queensland University of Technology
Verity Trott, Monash University
Listen on Anchor
Duration: 0:43:58

TRANSCRIPT

Mathew Warren:
Welcome back to the ADM+S podcast. I’m Matt Warren, and in today’s episode we’re revisiting the recent ADM+S dark ads public panel, where we were joined by consumer advocates, expert researchers and government representatives to discuss online harms and the future of advertising accountability.

In the first panel of the evening, Dr Aimee Brownbill, Lucy Westerman, Erin Turner, and Kate Bower discuss the harms of dark ads from a consumer standpoint, and how organisations like their own are working to minimise the negative effects.

Nic Carah:
So, I want to start out tonight by asking this wonderful panel of experts, why do dark ads matter to you? Why do you spend so much time thinking about them, what concerns you about them? And I might start with Kate.

Kate Bower:
So, I think the first prompt that you gave us to think about this evening was a provocation. Obviously, what you’ve pointed out is that things that were once public are now hidden. But I took the liberty of looking up a few definitions of advertising. Though you can never trust Wikipedia – it’s not going to work now, I’m just going to wing it. One of the things that came up is that ads were in fact public, but the other word that came up in many definitions was non-personal. So advertising is something that is in public, and is non-personal. And there’s also a creative element to it. So the provocation I would bring to this evening is not just the darkness of ads, but whether we are even talking about advertising when we’re talking about the hyper-personalisation that’s happening in social media and on digital platforms. It’s not so much just the hiddenness of it, but the fact that it may in fact be AI generated, it may not even have a person’s involvement, and it might be hyper-personalised – not just to the extent of, say, women in a certain category. It could be people with this interest, that interest and this interest combined, who live in this specific location, who have two cats, one dog, and three children. That’s the kind of personalisation that’s possible because of the huge database of information. So if this hyper-personalised thing we’re looking at is not just an ad – well, what other kind of thing is it? And what is it doing to us? What kind of harms might be possible when we’re thinking about it?

Think about the history of advertising – from that hyper-creative Mad Men era to the Meta era, where creativity kind of goes out the window and instead it’s whatever is the most manipulative form that gets you to do the thing.

I had the pleasure of seeing Kevin Roose from The New York Times speak about a week and a half ago in Sydney, and he talked about this idea that it’s not just that the machines read us, they lead us. It’s not just that we’re seeing certain types of ads; they’re actually contributing to our consumer behaviour and the choices we make in our lives. So from a policy angle, from a consumer organisation angle, that’s the relevant bit. We have a current framework around – I’m going to let Erin talk about the consumer law angle more, that’s her expertise – but this idea of false and misleading, where certain things are not allowed in public advertising. But if we think about this not as advertising, well, what is this thing, what is it doing to us, what kinds of manipulations and enticements are happening, and what kind of policy settings and consumer frameworks do we need to make sure that we have the same protections that we currently enjoy?

Lucy Westerman:
There’s a professional side, obviously. I’m really passionate about preventing disease, and when we can do it, for example through tackling things like alcohol and ultra-processed foods, and reducing the amounts that are in people’s lives, then I think that’s a great thing. I think increasingly we’re realising that the producers of those products aren’t the only problem, so we actually have to start looking at the vehicles that are getting those products into our lives. And in the public health space, I don’t know that that has fully dawned on everybody who is trying to focus on improved nutrition, reducing alcohol use, and tackling gambling problems. I think dawn is still arriving.

So what I love about these conversations is that they are engaging different voices and perspectives and skills and expertise to come together and really explore the different ways we see consumer rights, and the opportunities we see, from different perspectives, to have safe spaces to operate in.

I also come at this from a very personal perspective, as mentioned in my introduction. I have two teenagers, one nearly 16 and one nearly 18, and they see completely different things online to what I see, and they tell me about it. So I hear that they are seeing a sports betting ad when they watch things, whereas I’m seeing advertisements for going and sunning ourselves in a holiday destination in the Pacific Islands. They may be seeing fried chicken and Coke deals, whereas I’m seeing summer cocktail advertisements. And one of my sons recently told me about miniature sushi rollers. Now, I don’t know if anyone else knows what I’m talking about here, but it doesn’t have sushi rice in it. And I find it incredibly unsettling that my kids are being exposed to products and concepts and ideas, and I can’t just switch it off. They have been exposed, and the Department of Education has said get online – the only way that you’re going to learn in these past couple of years is online – so they’re online. You can’t switch it off as a parent. It’s out of my control, and it’s out of so many parents’ control, and we need to regain that as a community, and not let big businesses, whose profits are the main incentive behind what they are doing, drive what our children, and we, are seeing.

Erin Turner:
I think of this in two ways: there’s the dark part, and the ads part. If you want to boil it down, I think it matters because ultimately, in the dataset you’re looking at, you’re looking at a series of points that have the potential to cause harm, and quite frankly are causing harm. These are things that can hurt people. And when we think about why ads are regulated, or why the consumer law catches bad business behaviour that can express itself in ads or things we might call ads, it’s because of that potential to cause harm. So one of the main reasons why it matters is because you’re likely going to find things like misleading statements, outright lies – you know, soft or hard claims around the sustainability of a product that may or may not be true. And this is something that manipulates people, it causes them to lose money, and it also manipulates markets. Honest businesses have to compete against other businesses who aren’t doing the right thing. And the other half of the equation is the dark part. The reason why this matters so much – and you put it so well – is that when I started my career, I could see what businesses were doing as a consumer advocate. I could open the paper, I could watch TV and be like, oh, that dodgy Telstra ad, I know exactly what they’re telling all of their customers. I could go on their website and the same ad that I saw in the paper would be on the same page. But there’s no transparency now. There’s no accountability. So academics, interested parties in the media – we can’t hold very powerful groups, big businesses, to account the same way we could even five or ten years ago. And that’s why this matters so much. It means we can’t look at the harms the way we once did, but also we can’t look at the harms that are now exacerbated and new because of this technology. We can’t look at personalised pricing in the same way, we can’t look at the manipulation that comes with large data sets and digital platforms. So, there’s so much potential harm, and it’s why it’s so great to see a project like this where you are looking pretty closely, because I think you’re going to find some shocking things.

Nic Carah:
How dark are they? And I’m thinking here of the work that we did with FARE this year, where we set out to audit the darkness of the ad models of the platforms. We thought at the start of that, that we’d give them a kind of rating of how dark they were. Anyway, what did we find?

Aimee Brownbill:
Yes, so this bit of work we’ve just done really came about because we were trying to get a sense of how much alcohol advertising is on these digital platforms, and we were discovering it’s really hard to do that. So we wanted to get a better sense of what advertising transparency the major digital platforms are providing, and I guess the top line is: not much. These digital platforms are saying that they’re providing advertising transparency, but they really are doing the bare minimum – some of them aren’t even doing that. One of the things we were assessing for is whether they have an archive of ads, as a basic level of transparency, and most of them don’t. When you consider the definition of an archive, as we were discussing while doing this project, you need some sense of it being a comprehensive record, with some permanence. And what little there is for the ads going out on these platforms – I mean, we can’t even call it an archive – the Facebook Ad Library, for example, disappears once the ad isn’t live anymore. So it really doesn’t count as an archive.

The other thing we were looking for is what information about the ads we could gather from what’s being given by these platforms, and again, very little. In terms of what the targeting criteria are, what the spend is, what the reach of these ads is, there’s really very little that they’re providing. Although I’ll note the exception of the narrow category of political or other issues-based advertising: for that there is a bit of an archive, there’s a bit of this information. Some – most of us, probably – would argue there’s not enough information provided even for these. But what that really showed us is that it’s technically feasible. They can give us this information; they’re choosing not to give us this information. So you really have to ask why. Why aren’t they giving us this information?

Kate Bower:
I think it’s really good to start this conversation with this idea of harmful products and advertising, because the question that I struggle with is: OK, with alcohol ads hidden, the harm to me is quite clear. But what if it’s a choice of toothpaste or shampoo? Does it matter if I’m only seeing a limited choice of products, or that the products I’m seeing have been hyper-targeted, hyper-personalised, to me, and to what extent? Now we’re starting to live in a world where it’s not just the ads, if we’re going to call them that, but the media itself: synthetic media, increasing hyper-personalisation, personalised pricing, frictionless supermarkets. There are examples, say in airports, where they’re now using facial recognition to show 100 people their flight information on a personal scale. So this idea of hyper-personalisation is not just advertising, it’s across the board. And I kind of think about it in that framework, and I think that’s where choice comes in – choice is the consumer superpower, if you think about it. When we’re thinking about a world of big businesses and big corporations with all of the power, with your data, where consumers have been able to exercise their power is through choice, through consumer choice. Having that bargaining power of knowing what the products are, voting with your feet – you’ll just shop somewhere else. I certainly saw that in some of the responses we got on facial recognition: you know, “I’m never going into Kmart again.” So this idea that you can choose differently is really crucial, and the centrepiece of consumer rights in this country. So when that choice is taken away, or severely limited to the point where hyper-personalisation means you only see what Facebook wants you to see, or what Amazon wants to show you on their platform, and we get market concentration, that’s when those competition issues come to the fore. But consumers also lose their power. They don’t know that what they’re seeing is hyper-personalised, they don’t know how it was decided to be there, or what other options are available; you’re kind of limited to what you see on the screen. And I think that has fundamental consequences for consumer rights. It really means we have to think more broadly – not just whether that ad is false and misleading, but how the whole system is misleading or manipulative – and start thinking about what kind of cross-sectional policy rules we need to deal with the competition and consumer issues together, now that they’re all part of this hyper-personalised, integrated system that we’re all forced to live in.

Nic Carah:
And you get this shift from thinking of choice as an individual capacity you might have to form a judgement and make a decision, to choice as an environment, a kind of architecture that’s available to us.

Kate Bower:
I think it’s two things. I think it matters what the specific media or ad is that you’re seeing – looking at that individually and what potential harms could come from it – and I certainly think when we’re talking about gambling ads and alcohol, the ad itself can be harmful. But I do think there needs to be a strong focus on the architecture of the entire system and the way that data is being used against people, and the fact that so much of our economy is now driven by people sharing their data, and so many ad tech businesses – things that are other businesses: media businesses, real estate businesses, and buy now pay later businesses – are also data businesses. And all of that is contributing to this culture we have now, where our data is consistently sold and used against us. So in anything we do, we need to look at both problems at once.

Erin Turner:
So, we’ve actually done a study on dark patterns, which for us is manipulative web design. If you did a Venn diagram, it would cut across some dark ads but not all dark ads. Essentially it’s a point where a business uses its power – a knowledge of web design, a knowledge of human weaknesses and decision-making, the information it has about you – to steer your choices. That information architecture. They use that so that you are more likely to make a decision where you lose, but they benefit. You’re more likely to keep your subscription, you’re more likely to click on an ad that didn’t actually look like an ad – maybe it looked like a news article, or a post from your friend, and it was cleverly disguised. You’re more likely to accept those cookies, or hand over your email address or other data. Essentially, dark patterns are when a business uses its power so they win, and you lose. And I think there are a lot of commonalities with dark ads. I suspect you might see some dark patterns in your work – I really encourage you to look for them – things like false scarcity, for example: ads that might say there’s only two left, or there’s only three in Melbourne, that create that fear of missing out, that really tap into that human urge of, oh gosh, I have to act now, that push you into it. So I think there are potentially some really dark pattern elements in the data sets you’re looking at.
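[For hackathon readers wondering what acting on that suggestion might look like, here is a minimal sketch of flagging candidate false-scarcity language in a collection of ad texts. The phrase list, and the assumption that each ad record carries a `text` field, are illustrative guesses, not CHOICE’s actual methodology.]

```python
# A minimal sketch of flagging candidate false-scarcity cues in ad text.
# The phrase patterns and the shape of the ad records are illustrative
# assumptions, not CHOICE's study methodology.
import re

# Common scarcity/urgency cues; a real study would validate these by hand.
SCARCITY_PATTERNS = [
    r"\bonly \d+ left\b",
    r"\bselling fast\b",
    r"\blimited time\b",
    r"\bwhile stocks? last\b",
    r"\bends (?:today|tonight|soon)\b",
    r"\bhurry\b",
]
SCARCITY_RE = re.compile("|".join(SCARCITY_PATTERNS), re.IGNORECASE)

def flag_scarcity(ads):
    """Return (ad, matched_phrases) pairs for ads whose text uses scarcity cues."""
    flagged = []
    for ad in ads:
        matches = SCARCITY_RE.findall(ad.get("text", ""))
        if matches:
            flagged.append((ad, matches))
    return flagged

# Toy usage with hypothetical records:
ads = [
    {"id": 1, "text": "Summer sale! Only 2 left in Melbourne - hurry!"},
    {"id": 2, "text": "Our new toothpaste is now available nationwide."},
]
for ad, cues in flag_scarcity(ads):
    print(ad["id"], cues)
```

[A keyword pass like this only surfaces candidates; whether the scarcity claim is actually false still requires manual checking, which is part of why the lack of an ad archive matters.]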

Nic Carah:
And how do you know as an individual that you’re the subject of a dark pattern?

Erin Turner:
I think this is one of the trickiest things around dark ads and dark patterns: you may not. It’s where businesses are using their power in a way that really isolates consumers and removes their power of choice. You might not know that you are receiving a different price from someone else, or that your decision is being steered, or that you might have made a different decision if it was presented in a different way. Sometimes you might know. One of the dark patterns we looked at is the Hotel California: you can get in, you can never leave. The subscription traps. It’s pretty obvious when it’s really easy to subscribe to something and very hard to figure out how to leave – how many hoops you have to jump through, if you can at all.

But other dark patterns are deliberately obfuscating what’s happening, like hidden advertising, that’s something that’s quite common on a lot of platforms. Ads that don’t look like ads. Or they might look like ads for one thing but they’re for something else. And they’re the kind of things that really take away our power to choose and our power to engage in a market.

Kate Bower:
Can I jump in there? I’ll answer that too. Influencer culture on social media – is that advertising or is it not? It’s a very blurred line. There are people getting paid to talk about things or use things, but they’re not shown as ads. So again, that’s part of the whole framework of ways that you’re tricked or manipulated, or enticed into doing certain behaviours, which are so far removed from what we would traditionally see as advertising, and from what our rules were originally created for.

Aimee Brownbill:
In terms of what we know, we know that the data-driven, programmatic advertising model is designed in a way that it’s actively looking for the people who are most likely to interact with an advertisement, and particularly those who are going to either click through or engage in other ways, like going on to purchase a product. So they are looking for the high-value customers, who are purchasing more, purchasing more frequently. And when you think about harmful and addictive products like alcohol, that means the people who are using the most alcohol are the ones who are going to be targeted with these ads the most. And they’re already consuming alcohol in amounts that are quite risky and are putting them at risk. And Nic’s alluded there to the fact that there is a concentration of a lot of alcohol going to a smaller group of people.

But more generally speaking, I think it’s all by design. And it’s not just this group of people: the model is trying to find any personal susceptibilities that you might have, and then trying to learn and generate content that’s going to appeal to any one person as well.

In terms of alcohol marketing online, if most of the advertising is going to people who are using high amounts, we need to think about what harm could be done to people who might be experiencing or recovering from alcohol addiction, for example. And at the moment there’s no way for us to know that these people aren’t being targeted with alcohol advertising. Everything that we know points to it being extremely likely that that’s what’s happening.
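[To make the concentration dynamic Aimee describes concrete, here is a toy simulation of an engagement-optimising delivery loop – a construction for illustration only, not FARE’s analysis or any platform’s actual algorithm. A simple explore/exploit rule that favours users with the highest observed click-through rate ends up funnelling most impressions to the small group most likely to engage.]

```python
# Toy simulation of engagement-optimised ad delivery (an illustrative
# construction, not any platform's real system). Users who engage more
# get shown more, so impressions concentrate on the heaviest users.
import random

random.seed(1)

# Hypothetical population: click propensity loosely tracks consumption level.
users = [{"id": i, "p_click": p, "impressions": 0, "clicks": 0}
         for i, p in enumerate([0.02] * 80 + [0.10] * 15 + [0.40] * 5)]

for _ in range(10_000):
    if random.random() < 0.1:
        # Small exploration budget: occasionally show the ad to anyone.
        user = random.choice(users)
    else:
        # Exploit: show the ad to the best observed click-through rate so far
        # (unseen users start optimistically at 1.0 so everyone gets sampled).
        user = max(users, key=lambda u: (u["clicks"] / u["impressions"])
                   if u["impressions"] else 1.0)
    user["impressions"] += 1
    if random.random() < user["p_click"]:
        user["clicks"] += 1

heavy = sum(u["impressions"] for u in users if u["p_click"] == 0.40)
print(f"Heaviest 5% of users received {heavy / 10_000:.0%} of all impressions")
```

[Nothing in the loop singles those users out explicitly; the concentration falls out of optimising for engagement alone, which is the “by design” point being made here.]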

Mathew Warren:
In the second panel, Simon Elvery, Sam Kininmonth, Verity Trott, Jane Tan and Lizzie O’Shea discuss how research, advocacy, and the regulation of greater advertising transparency can create a safer and more accountable advertising future.

Daniel Angus:
Where can we go from here? What do we need to do to start to tackle the mammoth task of bringing greater transparency and accountability back into the ad space?

Lizzie O’Shea:
There are three things I want to say. The first is that there are currently laws that exist that we know companies like Meta and Google have breached. And the obvious thing is discrimination law. We know that Facebook has engaged in digital redlining, to exclude certain people from seeing certain kinds of ads based on race. We know that Google has had an automated ads category that included Nazi sympathisers, for example. Automatically generated, but there’s still a question around whether that’s in breach of discrimination law.

So one of the things I would say is that there’s pre-existing law that can deal with some of the harms caused by advertising in digital spaces; however, there is a problem with enforcement. To some degree, it’s very difficult as a lawyer to know where these harms are occurring, so we rely on people doing hackathons, we rely on journalists, we rely on whistleblowers to tell us about these instances of wrongdoing, because as an everyday consumer, as I’m sure you’re aware, it’s a very atomising experience being online, and you can’t necessarily have a line of sight on how the company is operating. So, there are some laws that exist, is I suppose the first thing I’d say. And it’s interesting to think about that as you guys are looking at some of this data. I don’t think it’s enough to expect the regulator to be able to tackle these problems alone. I think it is very important that individuals have the right to complain about this kind of conduct, and have the option of pursuing it in court. I’m a bit of a hammer, and everything looks like a nail – I like litigating – but I think it does actually play a really important role in holding companies to account. Even if it’s at an individual scale, it can have long-standing, broader implications. And we need to look for ways to enable people to do that, and that will involve work with lawyers and others.

Having said that, of course, there’s still plenty of room for improvement in how we regulate online spaces, to eliminate some of the harms caused particularly by advertising, but more generally by the data economy – the secondary data market in particular. And there are various concepts and methods: things like purpose limitation on the collection of information; creating fiduciary obligations, so obligations to act in the best interests of somebody if you’re collecting their information; and data minimisation – concepts that might limit the capacity for companies to collect this information. And among other things, I suppose I would say, as a human rights person, that we do need a charter of rights in Australia. Australia is the only liberal democracy without a charter of rights, and that has an impact on the rest of the laws that we have, on people’s sense of citizenship, and on the capacity to have these discussions in public spaces about what the role of corporations is over our lives.

And so, it seems silly not to talk about the Optus data hack in this context, because I think it’s instructive for a few different reasons. Firstly, it sets out clearly that our policy environment requires, but also allows, huge amounts of information to be collected about us and held by corporate entities. And this is not just dangerous for us as individuals – especially if you’re now a victim of this hack, you may be facing the consequences of identity theft. It’s actually also really damaging to our democracy, and this is where you guys come in, and this question around dark advertising is particularly pertinent, because it creates environments that are hostile to constructive public debate, because there’s a clear financial motivation against that.

So, I think one of the things that comes out of this is that we do need strong penalties in this space. It’s not enough to just frame wrongdoing around harm caused, because harm can quite clearly be amorphous, it can be delayed, it can be collective rather than individual. And in that context we absolutely need penalties for misuse of personal information, and for conduct that is otherwise quite difficult to detect and is in breach of various laws, whether they’re discrimination laws or privacy laws.

And this is my last point that I wanted to make: privacy is actually the critical space that I think we need to focus on. And part of that is because I see it as the source for stopping the online ad market and the secondary data market. If we can stop the information from being taken from people, then we can stop all the negative externalities that arise as a result of the secondary data market, but also internal to some of these enormous platforms like Facebook and Google.

The other thing I’d say about privacy reform is that we’re actually in an environment where this is actively being discussed, so it’s a reform that we can demand and discuss, because we are going through a review of the Privacy Act for the first time in a long time. So here’s a clear opportunity where we can start talking to our lawmakers about what we think privacy reform should look like. And I’m very much an advocate for giving people control over their information as individuals, to be able to be seen and treated with dignity, not just as consumers. We’re not just trying to make the ad market work better. We’re trying to treat people with respect, give them autonomy, and the right to determine their own destiny. And that’s what human rights is really about: creating the circumstances for that to occur. I think privacy is often perceived as being very narrow, about information management, when it is really about freedom and autonomy, and that’s how we should think about this. And the secondary data market and dark advertising are key influences working against people’s autonomy and rights. So I think it’s critically important work to figure out how to create space for people to make use of the incredible creation that is networked computing, without being subservient to the profit motive of advertising companies and the associated externalities of that marketplace, which limit people’s dignity and autonomy. That’s hopefully what you guys will be doing tonight, and what we’re talking about tonight. But that’s my perspective: people are rights holders, and they’re entitled to public life online with dignity and respect.

Daniel Angus:
Jane, maybe you want to offer a few thoughts – and perhaps some from the hackathon participants as well – around the difficulties you’ve faced in collecting this data, and things you’d like to see done better.

Jane Tan:
Thanks Dan. With my technical knowledge of these machine learning tools, on top of building machine vision tools, I find that a lot of the time the first component of what I have to do is data collection from all these social media platforms, and the majority of it is through Meta – Facebook and Instagram. And there’s been a bit of an upheaval throughout the process; that’s an understatement. Facebook has this tool called the Facebook Ad Library: if you go to your browser and look up this tool, you can see that it provides public access to the ads that are currently running on Facebook – which means that if ads run for a short time and expire, they soon cease to exist in this library. So that’s issue number one: the lack of access to an ad archive. We often find ourselves having to set up frequent ad collections so we don’t miss out on any advertisements that are short-lived.

And also, after the previous issues with the US election, Facebook has a category of ads around issues, elections, and politics, and it has a separate programmatic interface, an API, which allows public researchers to collect data. So essentially what we do is send a request to a web URL, and we get back a list of advertisements tied to that subset of all the ads that Facebook is running. The metadata we get from this API is a lot more detailed than what you see for ads on the interface itself. One of the most important attributes you get from the metadata is the demographic information about the targeted audience, which is really useful information for public interest research to hold these platforms accountable for what they’re doing. But because of the lack of this detailed information for everything else on these platforms, it is really challenging for us to collect enough data to support this research in general.
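[For readers curious what that collection looks like in practice, here is a minimal sketch of polling the API Jane describes. The endpoint and field names follow Meta’s publicly documented Ad Library API, but parameter names shift between API versions, and the access token, search term, country code and version number here are placeholders rather than details from the panel.]

```python
# Minimal sketch of polling the Meta Ad Library API for political/issue
# ads. Field names follow Meta's public documentation, but may change
# between API versions; the token, search term, country and version are
# placeholders, not values from the panel.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # requires verified Meta developer access
ENDPOINT = "https://graph.facebook.com/v15.0/ads_archive"

params = {
    "access_token": ACCESS_TOKEN,
    "ad_type": "POLITICAL_AND_ISSUE_ADS",  # the only archived category
    "ad_reached_countries": '["AU"]',
    "search_terms": "alcohol",
    # demographic_distribution is the audience metadata Jane mentions.
    "fields": "id,page_name,ad_creative_bodies,ad_delivery_start_time,"
              "ad_delivery_stop_time,demographic_distribution,spend,impressions",
    "limit": 100,
}

ads = []
url, query = ENDPOINT, params
while url:
    resp = requests.get(url, params=query)
    resp.raise_for_status()
    payload = resp.json()
    ads.extend(payload.get("data", []))
    # Follow cursor-based pagination until the result set is exhausted.
    url = payload.get("paging", {}).get("next")
    query = None  # the "next" URL already carries all parameters

print(f"Collected {len(ads)} ads")
```

[Only ads in the issues/elections/politics category are served by this endpoint at all, which is exactly the asymmetry the panel describes: for ordinary commercial ads, no equivalent request exists, hence the frequent scraping runs Jane mentions.]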

Daniel Angus:
Verity, do you want to kick us off with some things we can think about, and ways we can start to push back?

Verity Trott:
I guess there are a couple of things, jumping off that scandal prompt and also some of the things that Erin mentioned about scams. When we first collected the pilot data back in 2019, ahead of the Australian federal election that year, we were browsing through some of the samples we’d collected, and there were some really weird patterns of fake ads – which were kind of like fake news – but also some really weird screenshots of broadcast media, like a Channel 9 screenshot of the Channel 9 news. And there were two types of fake ads we were seeing. One was a mock-up of a news article, and if you followed the links it would take you to this ghost Facebook page, and it would have some random name like Autumn Leaf, or Wolf of the Dark. It was so nondescript and weird. And if you followed the links further, it would actually take you to a website that was a fake mainstream newspaper website – like The New York Times, The Guardian, the Sydney Morning Herald. And it was so clearly fake: none of the links on the website worked, it was just the front page, and it was filled with all of these nonsensical articles. And at the time we were like, what does this actually mean? Because none of it was actively political. There was no real agenda behind it, it was all just nonsense. So we thought maybe it was being used to just disrupt the feed of certain people who might be targeted with political ads, or it might have been preliminary trial runs of some misinformation ecosystem. But we didn’t really reach a conclusion about that; it’s really speculative. The second type of fake ad that we saw – and this ties into what Erin was saying in the previous panel – were those screenshots of David Koch, or Andrew Forrest, and I think there was also one of Kylie Minogue, and underneath was a cryptocurrency scam wanting people to go and sign up.

So, that was 2019 and we didn’t really do anything with it; people weren’t that interested in it. But I saw there were some UK celebrities raging about their faces being used in these ads. And then this year, in March, the ACCC brought a Federal Court case against Meta over these scam ads. I don’t think it’s been resolved yet – I think there’s been some legal hold-up; the last update I think was in August, and maybe you know about it Lizzie, I don’t know where it got to – but it will have implications in terms of who’s actually responsible for the content. And it reminds me of the current conversations and debates Australia has around who’s responsible for news content and the posting of comments. So that will have a massive implication, I guess: if Facebook is responsible for scam ads, what does that mean for other types of ads that we’re seeing? What does that mean for prices changing, or adverts, or any other kind of scam that might pop up?

Sam Kininmonth:
So, a bit of context: ACCAN was formed in 2008 to represent consumers on phones and the internet. As how we communicate has progressed, ACCAN has increasingly found itself representing consumers in the internet and digital platforms space. As many of you would be aware, digital platforms are how we communicate for the most part. When we send a text message, when we make a phone call, the boundaries of what we mean by that are increasingly blurred – a text might be Facebook Messenger, it might be WhatsApp, it might be other ways of communicating. And many of these ways of communicating, which we can almost think of as a platform layer on top of the telecommunications infrastructure, are supported by advertising. So it’s an interesting one, because I’m kind of wearing two hats here: PhD candidate, researching programmatic advertising for a while, and policy consumer advocate. The way ACCAN will think about it, I think, is more as a consumer experience – there’s obviously the stuff that Erin and Kate were talking about with discriminatory pricing and the selling of internet and phone goods, and there has historically been lots of shady dealing. Erin was actually at ACCAN for a while, so she would be able to tell many great stories. But I think ACCAN would also be really interested in this broader shift – what Plantin and colleagues call the platformisation of infrastructure and the infrastructuralisation of platforms – that is, shifting from one way of dealing with things to another. Where we think about this in the policy space: when we’re thinking about the regulation of advertising, as many of you would know, most of the regulation happens through the Broadcasting Services Act, through this big infrastructural turn – this idea that we have this infrastructure that runs through the nation, and that we all take part in it and regulate it together. And then in the ’90s you get the kind of free-market internet wave, and that leads to platformisation – platforms which are far more individualised, as Lizzie was saying.

So I guess what I’m particularly interested in is: what does it mean when the communications infrastructure is funded by advertising – with the affordances, or biases I guess you could say, that come with that?

Daniel Angus:
Simon, I want to give you a chance to talk a bit about your perspective as a journalist, and some of the ways you’ve been covering this as an issue, and perhaps any other thoughts you have.

Simon Elvery:
Yeah sure, thanks Dan. This panel is really interesting to me because it’s such a broad mix of disciplines, and I’m really interested to get into the conversation as well, but I definitely have a few opening thoughts. The first thing I thought worth talking about: thinking over the last five years or so, I think there’s been quite a shift in general public awareness of data and privacy, and the complications and trouble that can involve. People are becoming ever more alarmed – thank you, Optus – about the privacy threats that exist in digital spaces. But we’re here to talk about advertising markets, and I do wonder whether that same kind of worry and concern extends to advertising in the way that it does to things like big data breaches or leaks from large companies or government organisations. And I don’t think it does. I don’t think people care about that as much. So despite generally becoming more aware of privacy issues, I think many people, especially through that lens of targeted advertising, really don’t think it affects them that much, so they don’t care.

Some people – and I am repeatedly reminded about this whenever I talk about it in public – even think targeted advertising is great. They actually like it. The common refrain is that if they get a good deal, or find out about something they like and wouldn’t have known about otherwise, then isn’t that good for them? And to that my response would be, essentially: think about some of the discussion in the last panel – how confident are you that you’re getting a good deal? Discriminatory pricing is definitely a thing, but we don’t know to what extent, because of the lack of transparency around this stuff.

So if people don’t care about it, from a journalistic storytelling point of view, telling stories about targeted advertising is a bit of a heavy lift sometimes. It’s a pretty dry topic to start with, so we need all the strategies we can get to be engaging in this space. You asked for provocations earlier, Dan, and one I will put out there is: maybe we can leverage self-interest to generate some interest in this space, and think about the ways we can tell stories of dark ads that affect people individually, and make clear to them that it’s not just other people who might run up against discriminatory practices.

One of the people involved in the forerunner of this project at NYU and ProPublica was Jeremy Merrill. And I’m quite fond of a quote from him that I heard him say once – something like, he loves working with academics because case studies are the bycatch of the academic process, and they make great stories. And to echo earlier suggestions: find us a scandal. That’s the kind of thing he’s talking about there. The case studies that can be exposed through the academic research process can be scandalous, and they can be really interesting stories. And because they’ve been exposed through the academic process, when you pair that with the stronger, more generalisable conclusions that the research hopes to put forward, that’s something really worth writing about as a journalist. So I’m definitely interested in that.

Another issue I think is worth bringing up, as you’re working through your ideas on this space – the data economy, privacy and digital ads – is that media organisations are hopelessly compromised here. We collect an awful lot of data about our audience. The most prolific offenders, I think it’s fair to say, are commercial media, but even public media organisations like the ABC are definitely not very good at this stuff. And the perverse thing is that often we’re not even doing it deliberately; it’s not even for a purpose, just incidental to some other activity we do as part of our work. Embedding a social media post in a story that you’re reading is a data collection process for that social media organisation. I’d go as far as to say most media organisations don’t even know what data they’re collecting, where it’s going, who’s collecting it, or who benefits most from that data collection. So that’s something that I think we as a media industry really need to think about, but it would also be good to think about in the context of the hackathon.
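[As a rough illustration of how incidental that collection can be, here is a small sketch that lists the third-party hosts a single article page asks a reader’s browser to contact. The URL is a placeholder, and a static fetch like this undercounts trackers added later by JavaScript; it is an illustrative probe, not any organisation’s audit tooling.]

```python
# A rough sketch (not any newsroom's actual tooling): list the third-party
# hosts a news page asks a reader's browser to contact, e.g. via embedded
# social media posts. The URL is a placeholder; real pages add further
# trackers via JavaScript, which a static fetch like this will undercount.
from urllib.parse import urlparse
import re
import requests

PAGE_URL = "https://www.example-news-site.com/some-article"  # placeholder

html = requests.get(PAGE_URL, timeout=30).text
first_party = urlparse(PAGE_URL).netloc

# Crude extraction of src/href targets from script, iframe and img tags.
hosts = set()
for target in re.findall(
        r'<(?:script|iframe|img)[^>]+(?:src|href)="(https?://[^"]+)"', html):
    host = urlparse(target).netloc
    if host and not host.endswith(first_party):
        hosts.add(host)

# Every host listed receives the reader's IP address and request headers
# the moment the page loads, whether or not the publisher intended it.
for host in sorted(hosts):
    print(host)
```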

There’s a media organisation in the US called The Markup. If you’re not familiar with it, please Google it after this – they do some really great stories in this space. But one of the things I found really interesting about them is that they went to huge lengths to minimise the data they collect about their audience. If you go back and read some of Julia Angwin’s earlier posts, written around the time she was setting up the company, they really illustrate how difficult it is to actually not collect data. Even when you’re trying not to, it’s kind of a hard thing to do. That speaks to the whole ecosystem, and the system that has been set up: it makes it difficult sometimes to do the thing you’re trying to do.

A final thought is that, as reporters and journalists, I think we need to think carefully about who we are reporting for. Digital privacy – who has access to it and who benefits from it – is a reflection of existing power structures and existing privileges, and understanding and reporting on how these issues affect the least powerful among us is really important. But it’s also important to know how to tell these stories in a way that speaks to a large audience, and that’s a real challenge from a journalistic point of view.

Mathew Warren:
Thank you for listening. I hope you enjoyed this episode of the ADM+S podcast. To watch the full recording, visit our YouTube channel at admscentre.org/youtube.


END OF RECORDING
