EVENT DETAILS
News and Media Symposium – Automated Content Curation and Moderation: Problematic and ‘Borderline’ Content
6 October 2021
Speakers:
Prof Jean Burgess, QUT node, ADM+S (chair)
Dr Robyn Caplan, Senior Researcher at Data & Society
Dr Timothy Graham, QUT node, ADM+S
Dr Ariadna Matamoros-Fernández, QUT node, ADM+S
Russell Skelton, Founding Director, RMIT ABC Fact Check
Watch the recording
Duration: 0:59:45
TRANSCRIPT
Prof Jean Burgess:
So, this first session today is all about the shadowy border zones between obviously problematic or harmful content and behaviour on online platforms, and benign, positive, or progressive behaviour. It’s in this grey zone that most of the energy, innovation, and policymaking is going, right, in terms of the platformed media environment.
I’m delighted to have four fantastic experts from the ADM+S community to speak with you this morning. I’m going to briefly read each of their bios and then we’re going to go in alphabetical order. So, this is the order in which people will speak.
So, first of all we’re really grateful to have Robyn Caplan, recently awarded her PhD. Robyn’s affiliated with one of the centre’s partner organisations, Data & Society – the Data & Society Research Institute. Robyn conducts research at the intersection of platform governance and media policy. The work takes an organisational approach towards understanding the development of content standards by platforms, looking specifically at the role platforms play in mediating between different stakeholder groups in the creation and enforcement of content policies. And her most recent work is examining the role verification will play in the future of platform governance. So, some links with many of the topics about news and media we were discussing yesterday.
Our next speaker is QUT’s Dr Timothy Graham. Tim’s a senior lecturer in digital media here at QUT in the Digital Media Research Centre, and an associate investigator in ADM+S. His research combines computational methods with social theory to study online networks and platforms, with a particular interest in online bots and trolls, disinformation, and online ratings and rankings devices. He develops open software tools for big data analysis and has published in journals such as Information, Communication & Society, Information Policy, Big Data & Society, and Critical Social Policy. He’s going to talk to us about bots.
Ariadna Matamoros-Fernández is a lecturer in digital media, also at QUT, an associate investigator in ADM+S, and a chief investigator at the DMRC. Her research focuses on the interplay between user practices and platform politics in co-shaping contemporary racism. She’s setting up a research agenda around platform governance in relation to memes and other controversial humorous content, and has extensive experience in building digital methods to study digital platforms. Ari is going to talk to us about the problems of identifying and dealing with harmful humour, I think.
And our fourth speaker, Russell Skelton, also on Zoom, coming to us from Melbourne, is the director of another one of our collaborators in the ADM+S centre, RMIT ABC Fact Check, and its research partner RMIT FactLab. He was the founding editor of ABC Fact Check, and the ABC, of course, is one of ADM+S’s partners. Russell has also held senior editorial positions at Fairfax, the ABC, and News Limited. So, I’m looking forward to discussing the full suite of issues around this emerging area of content moderation. I think my mic’s a bit dodgy, isn’t it. I’m going to hand over to Robyn. Welcome, Robyn. Let’s give her a hand for joining us from the evening in New York, I think.
Dr Robyn Caplan:
Thank you. I was worried your mic was my internet coming in and out, so I’m very glad to hear that is not the case. But sorry about your microphone. I want to thank everybody for having me here, in particular Jean and Axel. I’m a huge fan of the work of everyone at ADM+S, so it’s very exciting to get to join from abroad. I am currently sitting in Brooklyn, New York. It is Thursday night where I am, and I feel the need to warn everybody that pretty soon my toddler is going to be coming in through the front door and he might barge in at any time. So, hopefully that won’t happen. He will be distracted by The Wiggles, which is the only show he will watch. So, we have the Australian spirit in our whole house right now.
Jean said to keep our remarks short and I’m going to do my best. I’ve done a couple of papers on this topic, looking at how organisational dynamics at platforms contribute to how they address borderline content. One paper I did, published back in 2018 and called ‘Content or Context Moderation’, was a comparative study of all of the major platforms. It looked at how the size of the company – whether they used industrial, smaller-scale, or community-reliant approaches – impacted how they balance this tension between their need to operate at scale and their need to address the needs of their various sub-communities and the cultural contexts in which they were operating. But I’m not going to talk about that paper today. I’m happy to answer questions about it; I’m just not talking about it mostly because it’s pretty old, though it might be relevant for this panel. I’m going to take a slightly different approach to this topic and summarise some work I did with Tarleton Gillespie on what strategies platforms are using to determine how they distribute resources in content moderation, which certainly applies when we’re talking about borderline content and the role that plays in who, when, and what gets moderated – and by what, meaning manual, automated, and semi-automated tools. So, this work doesn’t necessarily address borderline content head on in the way that I think many of my fellow panellists will, but it does highlight how decisions about what gets moderated and what doesn’t are made, and how they can privilege certain players on platforms. With this, I want to highlight the ways in which who is doing the speaking can complicate discussions about borderline content in moderation.
So, a couple of weeks ago the Wall Street Journal published an investigation by the journalist Jeff Horwitz into Facebook’s cross-check program. This program reportedly allowed nearly any Facebook employee, at their own discretion, to whitelist users who were newsworthy, influential, popular, or ‘PR risky’. The result of this program, as his reporting showed, was that 5.8 million users were moderated according to different rules than ordinary Facebook users, or hardly moderated at all. This led to a system of invisible elite tiers at Facebook, which meant that the speech of powerful and influential actors was protected, while ordinary people’s speech was moderated by automated algorithms and by overworked humans. So, Facebook has said in the past that its rules apply to all. But these documents proved otherwise, and we all knew it – we could see it all the time. So, it was always kind of laughable that they said this. In June 2020, when Trump wrote in a post ‘when the looting starts, the shooting starts’ – an overtly and historically racist phrase, especially within the United States – there was widespread outrage, and confusion about how Facebook could allow such clearly violating content to remain on its platform. This investigation showed that the content was indeed flagged, scoring 90 out of 100, indicating a high likelihood the content violated the platform’s rules. This would have led to a clear removal for an ordinary user, but for Trump, and for a lot of other public officials and celebrities, the rules were different. So, as Tarleton’s and my research shows, this is actually not uncommon at platforms.
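To make the tiered enforcement Robyn describes concrete, here is a minimal, hypothetical sketch of that kind of decision logic – the scores, threshold, whitelist, and account names are invented for illustration and are not Facebook’s actual cross-check system:

```python
# Hypothetical sketch of tiered enforcement: ordinary accounts are actioned
# automatically once a classifier score crosses a threshold, while whitelisted
# ("cross-checked") accounts are routed to a separate review queue instead.
from dataclasses import dataclass

REMOVAL_THRESHOLD = 80  # assumed violation score (0-100) above which content is auto-removed


@dataclass
class Post:
    author_id: str
    violation_score: int  # e.g. 90 out of 100, as in the reported example


def moderate(post: Post, whitelist: set) -> str:
    """Return the action applied, depending on whether the author is whitelisted."""
    if post.violation_score < REMOVAL_THRESHOLD:
        return "leave up"
    if post.author_id in whitelist:
        # High-profile accounts bypass automated removal and wait for manual review.
        return "escalate to cross-check review"
    return "remove automatically"


whitelist = {"high_profile_account"}
print(moderate(Post("ordinary_user", 90), whitelist))         # remove automatically
print(moderate(Post("high_profile_account", 90), whitelist))  # escalate to cross-check review
```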
Other platforms besides Facebook are enforcing different standards for different users as part of their business model. So, in our article which we published last year in Social Media + Society, called ‘Tiered Governance and Demonetization: The Shifting Terms of Labour and Compensation in the Platform Economy’, we explained how YouTube takes what we call a tiered governance approach, separating users into categories and applying different rules for each category’s videos. We explored this issue in relation to demonetisation. So, that’s basically how they determine whether or not ad revenue should be taken away from somebody who’s in the partner program and gets a share of the ad revenue.
What we found was that these platforms have lots of different user groups that they use to distinguish when and how rules apply – established media, creators in the partner program (and there are more subtle tiers within that), and then other users. For Facebook, it appeared their program developed as a stopgap measure to avoid public relations issues that might happen if the company deleted content from powerful figures. For YouTube, it began when it created a special category of paid users – the YouTube Partner Program – to give popular youtubers incentives to stay on the site and make more content. YouTube then made more intricate tiers: established media, who had more direct financial relationships with the company, were allowed to sell their own ads and were therefore removed from the need to conform to advertiser-friendly guidelines. Influential members of YouTube’s own creator economy were also given special privileges, which led them to develop closer relationships with YouTube employees, which could help them deal with demonetisation and content moderation issues quickly. What this meant was that, like Facebook, YouTube differentiated how they applied their guidelines. So, for the majority of users, algorithms were being used to determine whether a video posted to the platform violated these rules, while those who were whitelisted – those with strong ties to the company – could find workarounds much faster, and that had real social, financial, and reputational impact. Constantly unfolding concerns from another important stakeholder group here – advertisers – also impacted who got caught up in this system and who didn’t. So, over 2016 to 2018 we counted eight distinct periods of demonetisation where the advertiser-friendly guidelines were– of course, it’s happening-
I’m so sorry, I knew this was going to happen. I warned everybody. One moment.
Prof Jean Burgess:
Okay, I might just take the opportunity, Robyn, to remind everyone that they should be putting their questions and comments into the Slido app – slido.com, event number 101. Anyway, are you right? We don’t mind a little bit of toddler action.
Dr Robyn Caplan:
I have to say, I mean this is intense chaos toddler action, this is coming home looking for mom toddler action. So, I’m going to just summarise quickly.
Creators were frustrated. They didn’t object to different rules and perks for different user groups, kind of surprisingly, but they didn’t like the unpredictability of YouTube’s decisions, and they constantly pointed to examples that highlighted the unfairness of how rules were applied, and to whom. So, Jimmy Kimmel was often brought up as, like, the main example of this. His videos on certain topics were untouched, whereas other creators had their content demonetised. So, there are lots of reasons that you can think of to tier different user groups and to treat them differently. Covid is a great example – people really demanded that platforms take a different approach to that kind of content and prioritise information from official sources. But what our research shows is that no matter what the reasons are, any sort of setup that is not transparent and can lead to unequal treatment can breed suspicion and distrust, and particularly with borderline content. What I think this shows is that platforms are already making decisions that benefit certain values in trying to distinguish between different types of content and users, and they could be doing this to benefit different groups.
I’m just going to stop there and hand it off to my fellow panellists. Thank you so much everyone.
Prof Jean Burgess:
Let’s thank Robyn. Okay, Tim. Let’s hear about bots.
Dr Tim Graham:
Okay. Thanks, Robyn. I’m amazed that I actually made it this morning. I have a former toddler who’s now four, and I had to get him ready and get in the car, and he woke up at eight o’clock. So, it’s really great to be here and to be able to actually physically add something to this conversation. What I’d like to talk about really briefly is the issue of social bots. When it comes to problematic content and automation, this phenomenon has gained a lot of attention over the past half a decade: the issue of accounts on social media that are automated or mostly automated, you know – controlled by a computer, controlled by a script – and what kind of effect these kinds of actors are having within our information ecologies, and how prevalent they are. And what I’d like to propose here – not really controversially, I don’t think – what I want to put on the table, following on from what Robyn mentioned and from what was discussed yesterday, is a question around to what extent – and I say this very, very carefully – to what extent do social bots exist.
Something that we’ve found in the research that we’ve been doing over the last half a decade, and even over the last couple of years, is how difficult it is to find accounts on any platform – whether it be Reddit, whether it be Twitter – that actually fit the archetype of a persona, like a profile that’s controlled completely by a machine, you know. This popular imaginary of the social bot. You know, you go online and you see something that catches your attention, and you think – I’m just going to say it was a Russian bot – or something like that. Something that really factors into the popular imagination. And I think what really shaped my conceptual understanding of this, and my empirical approach to analysing social bots, was a really instructive and kind of funny, but also problematic, case study.
So, on Reddit – it’s a little bit early in the morning to be talking about this, but I just want to put it out there because I need to talk about it. We’re embarking on a project soon and I’ve done a little bit of case study analysis of a major problem that Reddit had for a while. And the problem was this. There were these accounts – it began with one account, and the account was called anus fungi, heinous anus fungi, with a whole bunch of letters and strings and underscores in the name. When anus fungi started to post, all anus fungi did was post a single mushroom emoji randomly into threads on Reddit – and Reddit’s a very thread-based kind of platform, right; everything is about these discussions that are ranked. And at first, from what I could see based on the small amount of analysis that I’m doing with colleagues here in the ADM+S on this project, people found it funny. It was hilarious. They’re like, oh, we’ve been blessed by anus fungi, anus fungi came to our thread and spammed a few mushroom emojis, and this is great. And they upvoted it a lot, right. So, anus fungi – these mushroom-spamming accounts – actually ended up at the top of the threads. But things soon took a turn for the worse. Fast forward about six months and you had entire subreddits – you know, forums on Reddit – that were dedicated to eradicating the anus fungi problem on Reddit. The mushrooms were out of control. And although it may be somewhat entertaining, it was a serious moderation issue, and what began as a kind of fun, culturally infused with the kinds of vernaculars that Reddit has, soon turned into something that was really problematic.
So, you had thousands and thousands of copycat anus fungi accounts, all with anus fungi in the name but with some string of characters and underscores in there, and they were posting emojis in places where people were talking about serious issues. I mean, they were talking about depression; there were subreddits that were about mental health issues, subreddits like teenagers, for example, where people come looking for mentors and things like this. You know, places on Reddit that are not appropriate, I think in a normative kind of way, for this kind of fun, troll-like behaviour. And I’m not sure – I’ve kind of lost track of time; this is what this case study does to a person. So, what are we going to do about it? The real takeaway from this phenomenon for me, and I think this is why I’m so interested to study this and to work on it with a diverse cast of scholars who can provide different perspectives, is really to look at, well, to what extent do people behave like bots, and in what circumstances is that okay? In what circumstances do we encourage that, and how do we go about trying to, together, come to a sort of almost whole-of-society response to moderating and making sense of this content.
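As an aside, the crude pattern-matching a moderator might reach for here can be sketched in a few lines – which also illustrates the point Tim makes next, that such rules say nothing about whether a human or a script is behind an account. The naming pattern, fields, and threshold below are assumptions for illustration only:

```python
# Hypothetical heuristic for flagging copycat "mushroom-spam" accounts:
# a username containing the copied string plus extra characters, whose recent
# comments are almost entirely a lone mushroom emoji.
import re

COPYCAT_PATTERN = re.compile(r"anus[\s_]*fungi", re.IGNORECASE)
MUSHROOM = "\U0001F344"  # the mushroom emoji


def looks_like_copycat(username: str, recent_comments: list) -> bool:
    """Flag accounts matching the naming pattern whose comments are mostly mushroom-only."""
    if not COPYCAT_PATTERN.search(username) or not recent_comments:
        return False
    mushroom_only = sum(1 for c in recent_comments if c.strip() == MUSHROOM)
    return mushroom_only / len(recent_comments) > 0.8  # assumed threshold


print(looks_like_copycat("xX_anus_fungi_77", [MUSHROOM, MUSHROOM, MUSHROOM]))  # True
print(looks_like_copycat("regular_user", ["love mushrooms", MUSHROOM]))        # False
```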
This microphone keeps breaking. And just to pick up really briefly on some of the other points: there are platform specifics to this. Reddit is one particular platform, but I see so many instances of this, and given the frustrations that I have trying to determine whether an account is a bot or not, I really think that this is a poor way to frame the problem – using machine learning tools and things like that is a sort of impoverished approach to trying to understand this. So, really what I wanted to do was spend this time raising to your attention the kind of difficulties, the conceptual slipperiness, that we have trying to understand to what extent something is problematic, or whether it’s something that is fun and makes life worth living, as Jean mentioned yesterday. So, yeah. That’s all I have to say.
Prof Jean Burgess:
Thanks, Tim. So, what is it that we worry about when we worry about bots, might be the sort of question. Yeah, right. From that little bit of comedy over to a more serious discussion of humour and how it could be problematic.
Dr Ariadna Matamoros-Fernández:
Yeah, thanks. It’s a bit difficult to intervene now after the anus fungi discussion, but I’ll try my best. I have something to say about emoji too. So, I’ve been looking at borderline content – that is, content that comes close to but does not violate platform policies – for a few years now. And currently I’m looking at humorous expression on social media and how platforms fail to differentiate between humour that punches up, as socially beneficial, and humour that punches down, as socially harmful. And I’m looking into this problem especially in the context of online harms regulation and how different governments are pushing legislation to really push platforms to tackle the issue of how harm is carried out through their networks. And the way that I’m looking into this is by proposing the term harmful humour. I define harmful humour as humour that punches down on historically marginalised groups, and where there’s a clear power imbalance between who is the speaker and who is the target of the joke. And I think that this is important, and it builds on what Robyn was saying about how Facebook really looks into the power of the speaker when they are moderating content. But I’m looking into power not so much in terms of how authoritative someone is because they are a public figure or a politician, but power in terms of what Patricia Hill Collins calls the matrix of privilege and oppression.
So, how our social positions, how our positionality, really impacts meaning and truth, especially when we are engaging in humorous expression online. So, if I’m speaking from a position of race privilege and I’m engaging in a racist joke, this is problematic. It’s not merely offensive but harmful, and it requires platforms to really intervene. And one of the problems is how platforms conceptualise what is harmful, and that’s why we have to use alternative frameworks to first define harm properly, starting from the experiences of those who are often targeted by online abuse, including harmful humour, and then think about different remedies that we could apply to really minimise the harm. It doesn’t have to be just takedowns or account suspensions; there are other remedies that platforms could implement to really minimise the harms. And one example – sorry, I don’t know what’s going on – so, on what Tim was saying about emoji, we have a recent example of Black soccer players in the UK who were abused by fans sending them monkey and banana emojis. And because platforms don’t think about how online abuse is mediated through this kind of humorous expression and playful behaviour, they didn’t have mechanisms in place to really minimise this kind of abuse, which the footballers were saying was racist abuse.
I’ve done some research around how right-wing political parties use emojis for Islamophobic discourse – weaponising, for instance, the pig emoji to antagonise the Muslim community. So, I think it’s important that we think about how to really moderate this borderline content. There’s another point that I would like to raise, and maybe we can discuss it later: in this push to regulate online harms, there’s the emergence of middleware companies. So, if I could have a bit of water – yeah, thanks. I’m sorry. Thank you.
There are these middleware companies – like Sentropy or Two Hat; Discord recently bought Sentropy – and these companies are basically saying, okay, platforms are really not good at moderating borderline content and online abuse, but we are going to solve this problem. And I think that, from a standpoint perspective, it’s really interesting to look at these companies to see whether they are approaching the issue of online harms, of what is harmful, differently, using other frameworks, or if they are going to reproduce all the mistakes that mainstream platforms are already making when they tackle online abuse.
So, I think that this is an area that requires further investigation – these companies are really lobbying regulators to say, okay, you know, it’s better that platforms hire these third-party companies to solve the issue. And the last point that I would like to raise, which maybe we could also discuss in this panel, is not only borderline content but borderline conduct, and this is related to the inauthentic behaviour that Tim is looking at, and the bots. Recently, in a study that I did with Louisa Bartolo here, we looked at Twitter discussions around Covid and how Twitter users were sharing YouTube URLs in a suspicious way. So, these were Twitter users that were sharing URLs maybe 50 times, 100 times, 200 times. At the account level they were exhibiting this kind of bot-like behaviour, but when we did the qualitative analysis, what we realised was that around 35 percent of these users were not engaging in this bot-like behaviour to mislead – they were YouTube content creators trying to professionalise on YouTube and really trying to promote their content in an increasingly platformed media system in which it’s really difficult to get seen. And the problem is that we don’t know how platforms really regulate inauthentic behaviour. We don’t know the thresholds. And because we don’t know how they operationalise their policies around inauthentic or borderline behaviour, we can’t really analyse whether they are also harming emerging content creators, whose only aim in using the infrastructure of these platforms is to really be seen and to amass an audience. So, yeah. That’s what I have to say.
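A rough sketch of the first-pass analysis Ariadna describes – counting how many times each account shares the same YouTube URL and flagging high-frequency sharers for closer qualitative inspection – might look like the following; the tweet structure, field names, and the threshold of 50 shares are assumptions for illustration, not the study’s actual method:

```python
# Hypothetical first pass: surface (account, URL) pairs shared suspiciously often.
# As the panel notes, a count like this only surfaces bot-like *behaviour*;
# qualitative analysis is still needed to tell spam from creators promoting
# their own channels.
from collections import Counter


def flag_high_frequency_sharers(tweets, min_shares=50):
    """Return (account, url) pairs that appear at least `min_shares` times."""
    counts = Counter((t["account"], t["youtube_url"]) for t in tweets)
    return {pair: n for pair, n in counts.items() if n >= min_shares}


example = [{"account": "creator_a", "youtube_url": "https://youtu.be/xyz"}] * 60
print(flag_high_frequency_sharers(example))  # {('creator_a', 'https://youtu.be/xyz'): 60}
```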
Prof Jean Burgess:
Thank you so much, Ari.
So, now back to Zoom and over to you, Russell. Welcome.
I think you’re on mute.
Russell Skelton:
I’ll try again. There we go. Okay. Thanks very much for having me; it’s a great privilege to be here, part of this panel. I come from a slightly different perspective, of course. Our background is that we started out as traditional fact checkers, checking statements made on the public record by public figures to test their accuracy in terms of the public policy debate. But when we got to the 2018 election, we discovered that we were late to the party – that a lot of the serious debates that were taking place around policy, and around the misinformation/disinformation space, were happening on Facebook. So, we were focusing entirely on the public record and we were missing what I think was probably a crucial part of the election outcome, which was what was appearing on Facebook and Instagram and other platforms – and the conservatives had a team and a network of people all working to push out this information. As we know, that election was won in Queensland and it was a very close one. So, we thought this is a very important area on which we’re missing in action, in a way. And also our remit with the ABC prevented us from actually getting into that space in a major way. And then we had the season of wildfires which swept through the east coast of Australia and parts of Western Australia, and we saw the ‘it’s arsonists, not climate change’ debate, and all the false and misleading information that was pushed out around that. We responded to that a bit by doing fact files and analysis pieces, but in a fairly superficial way. And then of course we arrived at Covid, and the conclusion I reached was that traditional fact checking is not very effective in this space. It is in terms of our policymakers, but it’s not very effective in terms of addressing the impact of harmful information circulating on social media platforms.
So, we’ve come up with the concept of RMIT FactLab, which means that we don’t have to operate within the ABC remit – we can expand into this new space – and we just launched it, actually. And the big challenge we have before us, of course, is the coming election. So, we want to be an effective player in that space and try and track some of the misinformation and disinformation, and call it out when we see it. And so FactLab is very different to RMIT ABC Fact Check, in the sense that we’re involved in debunking posts on social media. We also very much want to get into the research space, and we’ve already embarked on the teaching and learning space – so we’re out there teaching journalists and media managers in the Pacific at the moment how to go about fact checking and how to go about debunking misinformation, and that project is just unfolding now. And we’ve had other offers from people wanting to transpose our methodology of fact checking. The other thing we’re about to embark on – some people say it’s like taking drug money – is that we’re about to involve ourselves with Facebook as official fact checkers. Because we think that getting into that space is pretty important, and we get access to the bucket of claims that people want checked. And it also gives us a source of income, because as you know, the ABC and RMIT have no money to expand, so we have to find an income flow through donations and maybe through this fee-for-service arrangement. And as there are no editorial guidelines or parameters imposed on us by Facebook – we’re totally independent in our operations – we thought this was a good space to get into if we can. And also, you know, although we’re still looking very much at open sites on Facebook, and not the closed ones, we still get a sense of the trends and what’s happening, and the way information is being pushed around, by being part of that partnership.
But I think the big challenge for us – and maybe we can bring all these things together – is how we do this federal election differently, you know. We’re thinking of setting up – not thinking, but we’re exploring the idea of creating tip lines, and maybe, if we get into the Facebook arrangement, looking at the sort of data that’s flowing around there and feeding it in, and then setting up a team where we can effectively debunk information through the election campaign, as well as maintaining our traditional fact-checking of what the politicians are saying out on the stump. That’s probably all I want to say at this stage, but you know, we’re very new to this research space and it’s great to be here and to hear the sort of work that everybody’s been doing, because that all absolutely feeds into our knowledge of this ecosystem, which we’re trying to explore. Thank you.
Prof Jean Burgess:
Thanks very much, Russell. I might come back to you quickly, and then move on to the questions on Slido, give people time to think about your talk and respond to it. Russell, given the theme around borderline content, I suppose in the fact checking/misinformation space, when you’re involved in these kinds of activities in social media I guess, how do you deal with the challenge of perhaps not wanting to buy into or amplify conspiracy theories? How do you decide what to take on and what not to take on, I suppose.
Russell Skelton:
Okay, well we meet together as a team and we discuss what content we’re going to do, and a lot of this content’s already appearing in our CoronaCheck newsletter. We’re making these decisions, but we will not go after anything which is not being shared widely – that’s an absolute first. You know, if it’s been widely shared, we feel we’ve got to get into that space and debunk it as quickly as we can. And I suppose when it comes to, say, the Betoota Advocate and things like that, that are pushing out something which is semi-humorous but has been taken up as serious, we use our own editorial judgment to make a call: no, we’re not going to check that. I think what’s important is the significance of what’s being said. I mean, some things that we debunk are just blatant fake news – like, it’ll be a press release in the name of the Minister for Health saying that people don’t need to get vaccinated anymore, or that he’s withdrawn a particular vaccine, which is a very simple exercise to debunk. There are grey areas, I guess, and we discuss them. We’re all very experienced journalists and we don’t want to amplify or make the situation worse by going after stuff which might seem extreme and ridiculous but which nobody’s actually looking at, because we’d just be drawing attention to it. I don’t know if that answers your question.
Prof Jean Burgess:
Yeah, that’s great. Thanks very much for that. Alright, let’s have a look at what we’ve got here. There’s a question that I think is for Tim, from an account called anus fungi. The question appears – what’s the level of– this is an amazing panel with these amazing speakers, down to– I’m sorry, I don’t even want to read the question. It’s just two mushroom emoji. That’s an easy one. Let’s– I’ll go chronologically.
Robyn, question from Brooke. Do you get the sense that there’s a public appetite to close these moderation loopholes and if so, is any work being done on that front?
Dr Robyn Caplan:
No, I’d actually say the opposite. I’d actually say there’s been kind of a bigger push, particularly around certain issues of public safety or public health, to expand this. And I mean, it really goes both ways – in a lot of these debates you end up with kind of equal voices on both sides of it. But yeah, you see many calls to expand platforms’ use of editorial judgment in how they’re prioritising different users and how they’re treating their content, and a lot of those demands end up getting focused around how they should be prioritising or highlighting official sources. So, we saw that with Covid, we saw that in the US with the US census and with the election, and there are lots of different cases of this. I think one of the main concerns that I am seeing is that platforms can’t really be trusted to make these sorts of distinctions, and that they’re often making them in ways that benefit them financially. That is what our research has shown, and I’d probably fall into that camp of being highly sceptical of platforms and their capacity to make these distinctions all the time, but you’re seeing calls on both sides.
Prof Jean Burgess:
Thank you. There’s a couple of questions, which I’ll kind of smoosh together, for both Ari and Robyn, about geographical and cultural differences. So, apologies, I can’t remember who asked which. For Robyn, I guess, the question is: as well as differences between elite or insider actors within the US – you know, YouTube – what about internationally? What kind of disparities do we see there? And then for Ari: how do we deal with the situatedness and cultural specificity of what constitutes harmful humour? So, what might the research strategy be to understand how geographical and cultural differences for different groups play out. So, I don’t know, do you want to go first?
Dr Ariadna Matamoros-Fernández:
Well, that’s the million-dollar question. Like, it’s really difficult to account for the contextual specificity of humorous expression, but also of hate speech. So, I think that there are different issues here. The leaked documents that the Wall Street Journal published show how US companies prioritise content moderation in the US. This is something that we know, and we know that the advocacy groups and civil society organisations lobbying these companies – most of the changes these companies make, if they improve their policies, the result benefits how, for instance, hate speech plays out in the US. We saw how Facebook failed completely in Myanmar and in other regions of the world where conflict is going on, and you have material impacts of what’s happening on these networks – this materialises, for instance, in violence. But the thing is that, for me, if these platforms are benefiting from these emerging markets and they are global, then they have power – because they are really just getting advertising money from operating globally – and they have to be more mindful of where they are operating: really knowing more, having representatives there, and working with civil society and advocacy groups in these places to really tackle the issues. And sometimes you can start with controversies – when there are crises, you can be more responsive to those crises – and I’m sure that every time they implement a new policy, we have NGOs writing responses to it. So, it’s just listening more to what they have to say and then acting accordingly. Because, as Robyn has said, there’s no real interest, in the end, in improving this globally. It’s an exercise most of the time. For example, after the Black Lives Matter protests in 2020 they made a change to their hate speech policies to ban blackface and to ban racial stereotypes that are really linked to US culture. And if you talk to representatives inside Facebook, they will say, yeah, in our plans we will consult with Aboriginal and Torres Strait Islander people in Australia to see how these racial stereotypes play out here too, and extend our list of racial stereotypes – that could be one step – but we haven’t seen much of this. And I think it comes down to the fact that maybe there’s not the political will to be responsive to this contextual specificity. Yeah, maybe when the Wall Street Journal does a big exposé of racism against Aboriginal and Torres Strait Islander people on Facebook, then we’ll see action.
Prof Jean Burgess:
Robyn do you want to comment on the geographical or international dimensions of what you’re talking about?
Dr Robyn Caplan:
Yes. So, I agree with everything that was just said. The report I did on content or context moderation three years ago found something very similar. It found that there’s now another industry standard that we’re seeing across platforms: this emergent engagement with external stakeholder groups. And how that engagement happens is heavily biased globally, and often takes shape, at least initially for most of the major platforms, out of ‘who they know’ kinds of informal relationships that then start building up. I actually just finished my dissertation work on this topic as well, and I don’t think it’ll ever be published, because this chapter was written pre and post the toddler that is screaming outside this door, so it needs a lot of editing. But in that work, what I look at is this kind of general orientation towards networked platform governance that we are seeing at these companies – an emergent standard of engaging with external stakeholder groups in a way that kind of fulfills a PR mission. Many of those engagements end up being incredibly unequal and are based entirely on where the platforms themselves have offices, who they are connected with on the ground, and where their representatives are located. And they can introduce new forms of opacity into platform content policy-making over time – not just in terms of inequities of who gets to speak to platforms, but also because when these platforms are engaging with all of these different groups, they’re often not telling everybody who else they’re engaging with.
And so, you have no idea as an external stakeholder, who your kind of feedback and expertise is being weighed against. So, yes, there’s a considerable amount of inequality in terms of how this engagement is taking shape.
Prof Jean Burgess:
Okay. I have two questions for Russell, then two for Tim, and one for the whole panel, which may see us out, I think. So, Russell first, from Brooke Coco: how do fact checkers maintain a public reputation for trustworthiness, especially when addressing controversial content and perhaps wading into this whole social media space? And the other one is – I’ve lost it, but it was about the development of fact-checking mechanisms and tools by the major platforms. What might be the opportunities for outlets to use those tools or develop their own tools, in addition to the sort of work you’re doing, with the issue of trust?
Russell Skelton:
I think you just have to be accurate. If you’re wrong, they jump all over you, and we have at the ABC an incredibly comprehensive complaints system which people are very quick to use. And I think it’s sort of interesting, in this deeply polarised space in which we operate, we’ll put something out and we’ll get a pile-on from the left or the right depending on who we’ve checked – this is in traditional fact checking. And we went through a stage where we were funded by the ABC; the ABC lost funding, and that’s why we ended up with RMIT ABC Fact Check. But when the ABC said they were going to discontinue fact checking because they had no money, a whole lot of people in the middle came out – people we hadn’t heard of, people who didn’t necessarily follow us – and came to our support.
So, I think what we’re trying to do is inform people. We’re not saying this person, this politician is saying bad things, because he made a particular claim. We just look at the claim and we put out all of our data, we put out all of our sources, we’re totally transparent, there’s no confidential sources involved. And people can make up their own mind and they’re perfectly free to disagree. Our purpose is to inform, not to persuade, if you like, in terms of traditional fact checking.
And I think we’ve never had to correct a verdict in almost nine years of fact checking through the complaints system of the ABC. So, you know, we have nuanced verdicts, and we try and get as close to the facts as we possibly can with all the data that we’ve produced.
I’m sorry, what was the second question?
Prof Jean Burgess:
The second one was about the more recent development of all kinds of fact checking tools by the major platforms, or independently, and outlets engaging on their own. I suppose it’s about verification within news-making, perhaps.
Russell Skelton:
Well, we’re still exploring the tools that are available, and there are a few, but the main one which I think all fact checkers have been dependent on is CrowdTangle. And CrowdTangle obviously has huge limitations in what it can actually tell us and what it can deliver in terms of our fact checking. At the moment we’re having some confidential discussions with a couple of developers about different tools which we might adopt for this federal election as we go forward. At the moment it’s sort of manual, and from our point of view it’s what comes to our notice, what people send us, which is, you know, totally inadequate when you think of the thousands, millions of bits of misinformation floating around the system. So, I think the challenge for us at RMIT FactLab is to actually get some decent tools in place. If you look at Full Fact in the UK, they had a huge manual team of people doing stuff in the last UK election, which was quite effective – but they hired heaps of people. We can’t afford to do that, so we need the tools.
Prof Jean Burgess:
Great, thank you. Can we have the air con button pressed?
Tim, now from Ashwin: Tim, the visibility and logic of upvoting and downvoting on the Reddit platform – does that perhaps make it easy to gamify, and perhaps lead to some of the behaviour that you’re talking about? And then from PJ: Tim, you talk about the fuzzy line between bot identity and human identity in online environments. Do you see a future in which bots become public figures?
Dr Tim Graham:
What a great question. Okay, well, I’ll start with the first. It’s funny – I think the upvote/downvote mechanism on Reddit is, to some extent, Reddit. Which is to say that what passes for discourse on Reddit, how culture plays out on Reddit, is in a sense inseparable from it; you can’t really make sense of it outside that functionality. And funnily enough – I’ll shout out to Alicia Rodriguez, I’m not sure if she’s watching – we actually had a paper just published, and I haven’t had a chance to respond to the email, that actually makes that argument. So, thank you for a question that actually speaks to something that I know about and can offer something for. But to your question, yes, I do think that the particular rating and ranking system on Reddit fundamentally shapes – sorry, what was it – well, what’s the relationship, I suppose, between that kind of mechanism, that economics of attention, and the existence of such bots. So, yeah, it can be gamed and it is a major part. If you look at Reddit’s rules and its terms of service and things, platform manipulation really centres around the upvote and downvote buttons, and things like brigading, you know. So, there’s years and years of scholarship that looks at the way that groups of people – whether they’re very small or very large – can enrol this kind of functionality into their mobilisation, into their collective action, for harm and for good as well. So, I think that one cannot really make sense of bots or gamification, or the political economy, or the culture of Reddit, without also taking into account the way that the vote button kind of shapes that. Reddit is ‘the front page of the internet’, in its own terms – although it has changed that recently, I think – because of this fundamental implication of the upvote/downvote button, from which it has now amassed fortunes. This was its business model. It’s the lifeblood of Reddit. So, everything that passes on Reddit must pass through that sociotechnical kind of arrangement that they have. I’m not sure if that answers the question.
Yeah, and PJ’s question. Man, I’m still waking up, so this keys towards – it’s a very fundamental question, I think. Much of the science fiction and the popular imagination around social bots – sorry, not social bots but bots in general – has centred not upon what the bots do, what the machines do, what AI does, but on what it means to be human, and with that question, how we can differentiate ourselves from bots, you know. So, films like Blade Runner ask that question. Thanks, Jean – I want to pick up on a point that Axel made yesterday, and I think that hopefully this does some service to answering your question, PJ. I think that hybridity is really the concept on the horizon of social bot studies when it comes to questions about identity and selfhood and political action and political communication. I really think that it makes very little sense to try and purify this binary distinction between what is a bot and what is a human. I really think that you fail to operationalise that concept when you apply it analytically to study any kind of communication or sociality on platforms. So, I think hybridity is where it’s at. And to finish the question about whether a bot may become a public figure – you know, I think to some extent that is already the case. And to pick up on the work that Ariadna mentioned before, and Louisa’s work as well, I think that the professionalisation of influence on social media is the professionalisation, to some extent, of different techniques that enable you to get ahead of the curve – to not be in the 99 per cent, the long tail that no one pays attention to – and automation is one way to do that. And so, you know, this work is picking up on that, and I think it shows that if you make it as a youtuber – I’ll just pick a silly example like PewDiePie or something like that; I don’t know, I’m so sorry, perhaps that’s not a good example, but talk to Louisa and Ariadna about the particularities of that work and these kinds of accounts – I think you look at them and you think they’re successful, they made it. But one needs to peer into the black box of how they utilised automation, or at least how they set up kind of low-effort automated practices – like an Excel spreadsheet that they just copy and paste these links out of hundreds of times per hour, like a robot – because they want to make it so badly. And so I think the answer to the question, PJ, is that perhaps you’ve got kinds of cyborgs, you know, who may be public figures in the future, and I think that, to me, is what Blade Runner became about – actually, films like that, I think that’s what we ended up with. So, I’ll leave it there. Yeah.
Prof Jean Burgess:
There are legal scholars theorising about protecting bot expression under the First Amendment, so in the legal space there are things going on in this regard, too. Also, I was just going to add that half-awake Tim is so awesome – you should come to work half awake all the time.
The final one, for whoever on the panel wants to address it, is: how committed do you think the major platforms are to having people with diverse lived experiences – who may be affected by some of the things that we’re talking about – on their content moderation teams?
Whoever wants to take that one?
Dr Ariadna Matamoros-Fernández:
Very briefly, I think that there are people inside these companies who are really interested in bringing this perspective into play. But these companies are quite hierarchical too, so all those efforts get lost if the final say just sits with the CEO of these big companies. So, yeah, I think it’s really difficult to get these platforms to change when the change probably will not serve their commercial interests. Giving up power and privilege to accommodate and to be more fair – I don’t think that this will happen unless – I mean, I’m not a lawyer, but coming from a European background, I think that regulation is important. So, yeah, good regulation that is really sensitive to these perspectives, to historically marginalised groups and people who are the target of online abuse in this case. It’s quite important to really make these places safe for everyone, or places where everyone can express themselves – like Zahra’s work, where sexual content isn’t just assumed to be harmful, but expression and joy are really allowed, as you say, Jean. So, yeah, it’s a complex, long answer. I suppose the question is what you can do, you know, what the trade-offs are, what you can achieve politically through diversity within the workforce working on these things, and what you achieve through external, exogenous drivers like regulation and so on.
Prof Jean Burgess:
Robyn, last word from you?
Dr Robyn Caplan:
I had a feeling I’d get called on once more. I understand this is a really complicated question that I don’t think we have enough information about to answer properly. So, absolutely, these platforms are incredibly hierarchical, and a lot of the missteps that we’ve seen in the past actually emerge from this at once distributed – almost too distributed – never quite bureaucratising nature of some of these companies like Facebook, which simultaneously have an extreme hierarchy where the founder has a lot of control over the final decision. We’ve seen a lot of missteps that come out of what happens when you create a platform from the perspective of very privileged white male users in America; that has led to a lot of problems. I don’t know if there are a lot of diversity statistics being captured by the trust and safety community. It would be really interesting to ask the Trust & Safety Professional Association – that’s a new association emerging for people within the trust and safety world, which includes people who are making policy and people who are enforcing policy – to do a big survey of all of the people at all of the major companies. I can’t really speculate as to what is going on there. One of the big concerns I have, though, is that we are not moving towards more diversity in this workforce; we are moving more towards automation. Yes, Facebook is adding on more moderators – not enough – and YouTube has added on more moderators – not enough. But largely, the move that we are seeing is towards creating standardised rules that are not context-specific, so that automated systems can continue to at least flag content that might then be moderated by humans.
Prof Jean Burgess:
Great. Thank you so much to all of our panellists. We need to move to the next session but please give everyone a hand. Thank you. Thank you, Robyn and Russell.