Using Fake Followers to Shape Public Opinion
2 July 2021
Kathy Nickels, QUT
Marcel Schliebs, The Oxford Internet Institute
Dr Timothy Graham, QUT
Listen on Anchor
Duration: 39:20


Kathy: Welcome to the ARC Centre of Excellence for Automated Decision-Making and Society podcast. My name is Kathy Nickels, and today we are investigating how fake followers can shape global public opinion. In particular, we will be discussing recent news reports about the People’s Republic of China and its use of social media networks to shape public opinion.

Joining me in this episode is Marcel Schliebs, lead author of the working paper China’s Public Diplomacy Operations, published by the Oxford Internet Institute. Marcel is a Political Data Scientist and PhD candidate at the Oxford Internet Institute. His research is located at the intersection of the social sciences, statistics, and computer science, and he develops novel data-driven methodology for studying a variety of phenomena including voting behaviour, disinformation, digital text forensics, and AI in warfare.

Kathy: Marcel, thank you for joining us today.

Marcel: Thank you for having me, it’s a pleasure.

Kathy: We also have Dr Timothy Graham from the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S). Dr Timothy Graham is Senior Lecturer in Digital Media at the Queensland University of Technology (QUT). His research combines computational methods with social theory to study online networks and platforms, with a particular interest in online bots and trolls, disinformation, and online ratings and rankings devices.

Kathy: Thank you for joining us today, Tim.

Tim: Thanks Kathy, great to be here.

Intro: A seven-month investigation conducted by the Oxford Internet Institute and The Associated Press found the People’s Republic of China had been using fake followers on Twitter to amplify its messages in what appeared to be an effort to shape public opinion in foreign countries. This is not the first time we have seen the use of fake accounts to magnify opinions on Twitter. Late last year in Australia we saw the Dictator Dan and I Stand With Dan hashtags trending on Twitter, boosted by fake accounts.

Kathy: Firstly, Marcel, can you tell us about your study and what you found?

Marcel: Yeah, sure. So in our study, we examined multiple things. Let me briefly try to summarise it. For several months already we had observed that PRC diplomats en masse had joined Western, or let’s say global, social media platforms such as Twitter and Facebook, with nearly 200 diplomats on Twitter alone, many of them joining within the last few months. We wanted, for the first time, to comprehensively measure how active they were on these platforms and how they were using them. And we did find that the PRC diplomats on Twitter were highly active, posting hundreds of thousands of times over a few months, sometimes several hundred times per day, and that they were highly engaged in amplifying messages from Chinese state-controlled media outlets into the world, sort of acting as bridges into the different communities around the world that the PRC wants to reach.

But for us, that wasn’t enough, because effectively we had, for the first time, shown that they have built a large megaphone, if you want, that allows them to reach out into the world. We wanted to know whether it’s a properly effective megaphone, actually reaching genuine users. That suspicion was of course informed by previous studies by our colleagues at other universities, and by the tech platforms, finding inauthentic engagement and inauthentic amplification of certain pro-China narratives.

And therefore, we wanted to ask whether this megaphone, as we like to describe it, that the PRC has built is a genuine one, or whether it is artificially inflating its retweet and engagement counts. We did that by collecting every retweet any diplomat got over nine months, then waiting a few months, going back to each account and tweet, and checking whether they were still active or had been subsequently suspended by Twitter. By this measure alone, which is definitely a very conservative estimate of the problematic activity going on, because it only captures what Twitter itself removes, above 10% of all retweets, over 75,000 in total, were from accounts that were subsequently suspended. Then we looked at several other things: how the platforms label these accounts, how the engagement is concentrated within a small number of highly active superspreaders, as we call them, and we also conducted a more detailed case study on engagement with diplomats in the UK, looking at micro-level coordination between networks of accounts in particular.
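The conservative metric Marcel describes, counting only retweets whose authors Twitter itself later suspended, can be sketched in a few lines. This is an illustrative reconstruction, not code from the study; the field names and the status lookup are assumptions.

```python
# Sketch of the conservative suspension metric: collect every retweet a
# diplomat received, wait, then re-check each retweeting account and count
# the share made by since-suspended accounts. All IDs are invented.

def suspended_retweet_share(retweets, status_at_recheck):
    """retweets: list of dicts with a 'user_id' key (one entry per retweet).
    status_at_recheck: user_id -> 'active' | 'suspended' | 'deleted', as
    observed when the accounts were revisited months later.
    Returns the fraction of retweets made by later-suspended accounts."""
    if not retweets:
        return 0.0
    suspended = sum(
        1 for rt in retweets
        if status_at_recheck.get(rt["user_id"]) == "suspended"
    )
    return suspended / len(retweets)

# Toy example: 10 retweets, one from an account later suspended.
retweets = [{"user_id": f"u{i}"} for i in range(9)] + [{"user_id": "u_sus"}]
status = {f"u{i}": "active" for i in range(9)}
status["u_sus"] = "suspended"
print(suspended_retweet_share(retweets, status))  # 0.1
```

The two-snapshot design (collect now, re-check later) is what makes the estimate a lower bound: any problematic account Twitter never acts on is counted as clean.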

Kathy: Yeah, so with that particular study, how many tweets and retweets did you end up analysing?

Marcel: In total, our whole data set that we scraped over several months was in the single-digit millions, I would say close to 10 million. But after filtering on different selectors, I think we were at close to 200,000 tweets authored by the diplomats themselves, and over 700,000 retweets that they got, plus a lot of replies as well. So we were definitely looking at figures in the millions.

Kathy: How could you tell these were fake accounts, and where did these fake accounts come from?

Marcel: That’s a really tough and difficult question. Already conceptually, on Twitter I find it very difficult to determine what, in my qualitative understanding, constitutes a fake or real account, because, contrary to, maybe to some degree, other platforms like Facebook, Twitter doesn’t have what I would call a culture of visible authenticity at scale. Users like journalists, researchers and public figures of course use their real name, their picture and a biography because they want to be recognised on the platform as who they are.

I would say, however, if we look at engagement with topics including the Chinese diplomats and Chinese state media, but also a variety of other things, and this also includes mainstream topics in Western democracies like sports or entertainment, a large number of accounts, just in line with the platform culture, choose to use the platform anonymously or pseudonymously. They don’t have a real picture or a real first name and last name, and I struggle sometimes to conceptualise that: is this a sign of inauthenticity, or is it just genuine users not wanting to use their real name when commenting on the internet?

And that makes it really difficult. So rather than coming up with some account attribute, like when it was created or how many followers it has or whatever else might be suspicious, we said it’s not for us to decide, just by looking at the account itself, whether it is good or bad, authentic or inauthentic, although many of them may be.

So what we did instead is we used Twitter’s own measure of whether it had suspended an account for violations of its rules. Our partners at AP also contacted Twitter, and Twitter responded saying that many of these accounts were explicitly suspended for what is called platform manipulation, which includes using multiple accounts, using fake personas, and other things like automation. So that’s one approach: relying on Twitter’s own existing suspensions. That answer, again, is likely not enough, because it only catches what has already been caught by them.

So in our UK case study, in addition to that, we looked at patterns of behaviour between accounts, not looking at any one account in isolation, but at groups of accounts and how overlapping their behaviour was. Once we determined that a group of accounts was acting so similarly, in short- and long-term temporal patterns, in language, and in other things, that it was too similar to be happening by chance, then we established that there was some coordination going on. That’s another concept we used to classify accounts, as inauthentically coordinated, which doesn’t necessarily mean that they are all fake.
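A minimal sketch of this kind of coordination test, flagging pairs of accounts whose retweets of the same tweets land suspiciously close together too often, might look as follows. The window and threshold values are invented for illustration and are not the paper’s actual parameters.

```python
# Flag account pairs that co-retweet the same tweets within seconds of each
# other on several distinct tweets. Thresholds and names are illustrative.
from collections import defaultdict
from itertools import combinations

def coordinated_pairs(actions, window_s=60, min_shared=3):
    """actions: list of (account, tweet_id, unix_timestamp) retweet events.
    Returns the set of account pairs that co-retweeted at least
    `min_shared` distinct tweets within `window_s` seconds of each other."""
    by_tweet = defaultdict(list)
    for account, tweet_id, ts in actions:
        by_tweet[tweet_id].append((account, ts))

    pair_hits = defaultdict(set)  # pair -> tweets co-retweeted fast
    for tweet_id, events in by_tweet.items():
        for (a, ta), (b, tb) in combinations(events, 2):
            if a != b and abs(ta - tb) <= window_s:
                pair_hits[tuple(sorted((a, b)))].add(tweet_id)

    return {pair for pair, tweets in pair_hits.items()
            if len(tweets) >= min_shared}

# A and B retweet three tweets seconds apart; C arrives hours later.
actions = [
    ("A", "t1", 0),   ("B", "t1", 5),   ("C", "t1", 7200),
    ("A", "t2", 100), ("B", "t2", 103), ("C", "t2", 9000),
    ("A", "t3", 200), ("B", "t3", 210),
]
print(coordinated_pairs(actions))  # {('A', 'B')}
```

Note that, as Marcel says in the interview, a positive result here establishes coordination, not fakeness: genuine activists coordinating deliberately would trip the same signal.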

Kathy: So Tim, can you tell us about your research in Australia related to the Dictator Dan and I Stand With Dan hashtags?

Tim: Yeah, sure. So in many ways there are a lot of parallels to the study that Marcel and his team conducted. The difference is that in this study we looked at Twitter communication in the middle of last year, during arguably the worst COVID outbreak in Australia. The state of Victoria was trying to handle this outbreak, and there was a series of restrictions, and then lockdowns, that occurred in Victoria.

The big difference, I think, is that what I noticed to begin with was that there was quite a lot of suspicious behaviour occurring in Twitter discussions in parallel to what was happening in the news during that time. What spurred this research was really me doing qualitative analysis and monitoring. Well, it was qualitative and large-scale quantitative analysis of Australian political discussions on Twitter, but then doing a deep dive into these communities of far-right, and I wouldn’t call them extremist, but certainly extremely problematic, accounts that were engaging in really divisive, emotionally charged discussions about opposition to the lockdowns. And so I saw these hashtags emerging, like the Dictator Dan hashtag and Dan Lied People Died (the Victorian premier’s name is Dan Andrews), and these hashtags were keying into other discourses on Twitter that were really problematic around race and xenophobia, as well as COVID denialism and a number of other related narratives.

So what we thought was that we should do a more comprehensive study of this, rather than focusing on the little bits of analysis I had been discussing with the media and publishing via Twitter. What it resulted in was collecting nearly half a million tweets containing three hashtags: two oppositional, anti-lockdown hashtags, Dictator Dan and Dan Lied People Died, and one in support of the premier, the I Stand With Dan hashtag. And what we undertook was a battery of different analytical methods, both qualitative and quantitative. Keying into what Marcel talked about before, what we tried to do was use these approaches to understand to what extent, if any, we can establish anything about the authenticity or the organic activity occurring around these hashtags, and to what extent we can establish reasonable patterns that would reflect what we would broadly describe as, although this is a very difficult concept and probably not a very good concept in my opinion, inauthentic behaviour.

So a lot of this was driven out of some of the patterns that I think Marcel was talking about before, which really resonate here. I had noticed that there was a much larger proportion of newly created accounts for the anti-lockdown hashtags compared to the pro-Dan-Andrews hashtag.

That in itself is not cause to label something as inauthentic, but when we start to stack up the different behavioural, activity and engagement patterns, what it adds up to is certainly, if not large-scale, then a very strong, very active core who actually did manage to get quite a lot of engagement across different spheres of the media ecosystem, even beyond Twitter; a set of accounts that were later suspended by Twitter. I don’t know the reason for their suspension; I suspect it may be platform manipulation, but it may also be that they were simply posting content that goes against several other rules. As I mentioned before, there was racism and other very problematic discourse occurring. So really this paper and this work was an attempt to try to understand where the line is between hashtag activism by people who are genuinely interested in a topic, in this case COVID-19-related discussions and lockdowns, things that affect their lives, and where that graduates into, or bleeds into, broadly inauthentic activity, and potentially also, and we wouldn’t jump to any conclusions, information-operation-type activities: coordinated activity that is centrally governed in some capacity.

I mean, I’m not going to say that we have any concrete answers to this, because, like Marcel said before, the ultimate ground truth cannot be known, especially not using the kind of data that researchers like me have access to. We actually made a very conscious choice to provide the names of some of the fringe, hyper-partisan, suspected inauthentic accounts in our paper. So the paper reads a bit more like a cybersecurity paper in some ways, and less like a communication or sociology piece.

But, you know, it doesn’t please me to say this, but many of the accounts that are central within that paper, when you go and look them up, are suspended now. So to some extent that does provide some validation for the methods and for what we were trying to do.

Kathy: So, Marcel was your research approach similar to what Tim just described?

Marcel: For us as well. Of the seven months we spent researching this, roughly four months were spent trying to find and uncover everything, and the last three were just very careful due diligence, trying not to make a mistake and to be very conservative. And it’s very difficult, because we are making quite delicate claims about accounts being inauthentic, accounts engaging in problematic behaviour, et cetera. I’ve read the part in the methods section of your paper about the sock puppets, and each of those attributes I recognised from the accounts I’m looking at: recent account creation dates, no real profile picture, or pictures of flowers or nature or water or whatever, no real name, very few followers, or following only the same set of recurring users.

I think it’s anecdotally interesting that many of these accounts often follow the same accounts; even accounts that solely promote and amplify a Chinese diplomat often followed the same set of textbook-style, stereotypical American accounts, like Bill Gates, Lady Gaga, Katy Perry, Obama of course, and Trump. And I was wondering for a long time: is it a deception move, to rather badly try to imitate being a real American? Or may it be that these are just the accounts Twitter proposes when you sign up, and they just click the first few? It’s very difficult. But I agree the challenge is, of course, that there’s a ground truth in the background that’s binary, whether an account is authentic or inauthentic, whatever that means, or coordinated or not. I like to call it the behind-the-screen ground truth. But we see only a continuous scale of the different attributes you mentioned, and you have to make either a quantitative or a qualitative coding decision, and that’s, of course, a really difficult one.

Tim: And I think your report is really interesting, because going into the state-linked information operation space adds this whole new dimension, which is really interesting and difficult to unpack. In the Australian context, I saw that there were these accounts that were replying a lot, that were using reply strategies, and had many of the profile characteristics you spoke about before; they follow Bill Gates and Oprah Winfrey, they are recently created accounts, and things like that. But then when it comes down to it, there’s this question: is this a PRC-linked troll account, or is it just the mobilised Chinese diaspora participating on Twitter? And there are complex and very nuanced reasons why they may have pro-China or pro-nationalist alignments and narratives.

So I guess I kind of sympathise, or empathise, with that added dynamic, and I thought your report did such a rigorous job of being very clear about these kinds of distinctions and of using these quantitative metrics carefully to just show the evidence. It just seems like such a complex space, and I still don’t know what’s going on, at least in the Australian context rather than the UK one.

Marcel: Definitely, yes. And if you look at the Australian context a little, which we haven’t done in detail yet, I looked a little at probably the most famous Chinese diplomat tweet regarding Australia, from the Foreign Ministry spokesperson Zhao Lijian in November last year, who tweeted a forged image of an Australian soldier, I think with a knife and a child or something like that, which got a huge diplomatic response. If you looked at the accounts that had amplified this tweet, there were lots of accounts among them which were created basically the day that tweet happened, or the day after, retweeted that tweet, and then never did anything again. And even if looking at one account in isolation can’t tell me who’s running it, because there’s so little verifiable information in the account, whether it’s PRC state-operated, funded, encouraged, whatever, I can still somewhat intuitively say it’s a problematic use of the platform if someone just sets up an account to retweet one tweet and then never uses it again.

And in addition to what you said, I think what makes it even more complicated in the Chinese case is that there are not only the previous operations that Twitter itself labelled as state-linked or state-operated in the takedowns in 2019 and 2020, and yes, the diaspora that you mentioned, or other people who may be sympathetic to Chinese geopolitical interests, but there are also increasing reports of mainland-China-based internet users, so-called jumping the firewall, going onto Western, or global, social media, often young nationalistic youth. You might call them Diba forums: forums that organise to coordinate and jump the wall to engage in astroturfing over here. That makes it even more difficult, because there’s a large general user base that doesn’t require bot networks or simplistic automation, if you have tens of thousands of people sympathetic to your cause going over, but in a coordinated manner.

Tim: Yeah, yeah

Kathy: Yeah, that’s so true. And I guess one of my questions was: did you find that many bots were actually amplifying these messages? But from what you’re saying, Marcel, they probably didn’t need bots, because they had enough people to amplify the messages themselves.

Marcel: Yeah, the bot question is very difficult as well, because again there’s a binary ground truth behind the screen, right? An account is automated or not, or somewhere in the middle, but we only see patterns that may point more in one direction or the other. What we did see is highly active, high-frequency super-spreading, as I call it. So one account logging on a couple of times per day and retweeting one or multiple Chinese diplomats with two, three, four seconds in between. My suspicion is that it’s likely either semi- or fully automated, or people manually just clicking very fast; it’s difficult to tell, but the pattern seems to suggest that sometimes one actual person is operating multiple accounts. We sometimes saw the same five accounts being used in the same order day after day, with one account retweeting the PRC diplomat’s messages to the UK, say, 30 times in 60 seconds or something like that, then taking seven to eight seconds to switch to the next account, which did the same: retweet, retweet, retweet, then switch to the next account. And the same accounts were used in the same order, day after day after day. For me, that’s more likely a model with maybe some automation-aid software, where you can have multiple accounts logged in and click. But honestly, it could also be just someone opening five browser windows each day and then clicking in the same order.
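The rotation pattern Marcel describes, the same handful of accounts acting in the same order each day, can be checked mechanically. This is a hypothetical sketch; the account names and timestamps are invented for illustration.

```python
# Detect "same accounts, same order, day after day": group retweet events by
# day, record the order in which accounts first act, and keep orderings that
# recur across multiple days. All data here is invented.
from collections import Counter

def daily_account_orders(events, day_s=86400):
    """events: list of (unix_timestamp, account).
    Returns one tuple per day: the accounts in order of first activity."""
    days = {}
    for ts, account in sorted(events):
        order = days.setdefault(ts // day_s, [])
        if account not in order:
            order.append(account)
    return [tuple(order) for _, order in sorted(days.items())]

def repeated_orders(events, min_days=2):
    """Orderings of 2+ accounts that recur on at least `min_days` days."""
    counts = Counter(daily_account_orders(events))
    return {order for order, n in counts.items()
            if n >= min_days and len(order) > 1}

# Accounts a1..a3 act in the same sequence on three consecutive days.
events = [(day * 86400 + i * 10, acct)
          for day in range(3)
          for i, acct in enumerate(["a1", "a2", "a3"])]
print(repeated_orders(events))  # {('a1', 'a2', 'a3')}
```

As with the co-retweet signal, this pattern is consistent with either automation software or one person cycling through browser windows; the data alone cannot distinguish the two, which is exactly Marcel’s point.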

Kathy: So I guess another question with all this amplification of the message, does this affect the algorithm at all and change what people are seeing?

Marcel: I have that question as well; I want to know too. It’s really difficult to answer. One frequent caveat I mention when I’m being asked, and certainly across all the interviews and post-report briefings that we did, the question we were asked the most, is about real-world impact, and it’s justifiably the question people are most interested in. Do these retweets, by accounts which are primitive, which don’t have their own genuine following networks, et cetera, actually make a real-world difference in making this content visible to genuine users, in distributing and amplifying it through the Twitter network?

On the one hand, you could say that, for example, replies to Chinese diplomats by coordinated inauthentic accounts get very little genuine engagement of their own, like further likes on the replies, or retweets, et cetera. On the other hand, we don’t know, or maybe Tim you have some insights, since you have probably studied Twitter for longer than me, whether every retweet that goes into nowhere, because the retweeter has no followers of their own, influences the algorithm to a large degree, or whether it basically just increases the number under the tweet that counts the retweets. Which, again, in the Chinese context could be hugely important, because diplomats have a reputation to defend towards their own central authorities at home; they want to seem popular. It might also affect how Western politicians view them and the legitimacy of their claims. So if a Chinese diplomat says something outrageous, saying the BBC is lying or Australian soldiers are murderers, et cetera, and gets zero retweets, then a Western politician, who is obviously worried about their own population’s feelings and opinions, might care much less about it than if it had 1,000 retweets, even without any change to the Twitter algorithm. So with Chinese information operations often targeting Western so-called elites, decision-makers or influencers, that might be something that plays a role. But I really don’t have a conclusive answer yet about the algorithmic, the mechanical if you want, effect of this. It needs further study, and it needs more data from Twitter about how many people see the content, exposure and reach data, et cetera.

Kathy: That’s a really good point, actually, about the diplomats themselves wanting to be seen as hugely followed and liked. Have you got any views on whether it affects the algorithm at all, Tim?

Tim: Well, I agree completely that we ultimately don’t know the rules these algorithms are programmed with. We don’t know, for example, the threshold by which a hashtag enters into the trending topics. I’ve been tracking this: you can collect this data through the Twitter API; for a given country you can collect all of the trending topics, every few seconds, all day long, and then you can go back and try to quantitatively, forensically understand what governs, what the determinants are of, these hashtags or terms entering into the trending topics, and for how long they stay there. But it’s a really difficult question. It’s basically like trying to pull back the curtain on the Wizard of Oz and understand what happens, but you can’t see behind the curtain, because these are trade secrets and there are commercial and legislative, copyright kinds of issues around them. So we don’t know how it works.
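The polling approach Tim describes can be turned into dwell-time estimates once snapshots have been collected. This sketch works on already-collected snapshots; actually fetching them would require the Twitter trends API, which is omitted here, and the snapshot data is invented.

```python
# Reconstruct when a hashtag entered the trending list and how long it
# stayed, from periodic snapshots of the list. Data below is invented.

def trending_spans(snapshots, topic):
    """snapshots: list of (unix_timestamp, set_of_trending_topics), sorted
    by time. Returns (enter_ts, exit_ts) spans during which `topic` was
    continuously trending (exit_ts is the first poll where it was absent)."""
    spans, enter = [], None
    for ts, topics in snapshots:
        if topic in topics and enter is None:
            enter = ts
        elif topic not in topics and enter is not None:
            spans.append((enter, ts))
            enter = None
    if enter is not None:  # still trending at the last poll
        spans.append((enter, snapshots[-1][0]))
    return spans

# Polled every 60 s; the hashtag trends from t=60 until the t=180 poll.
snaps = [
    (0,   {"#cricket"}),
    (60,  {"#cricket", "#DictatorDan"}),
    (120, {"#DictatorDan"}),
    (180, {"#cricket"}),
]
print(trending_spans(snaps, "#DictatorDan"))  # [(60, 180)]
```

Dwell times computed this way are only as precise as the polling interval, which is why Tim mentions collecting the list every few seconds rather than, say, hourly.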

Tim: What I’ve seen, at least in the Australian context, which is a bit parochial, is not so much what the impact of a retweet is, but just how much individuals who are trying to exploit the affordances of the design architecture of Twitter explicitly game the trending-topics features and explicitly try to get engagement. We saw this play out a lot with the Dictator Dan and I Stand With Dan campaigns, but particularly with the anti-lockdown, broadly right-wing campaigns, you see these fringe accounts trying; accounts that in many ways are a bit like the hapless PRC-linked information operations from 2019 and 2020 that you mentioned before, which don’t tend to get a lot of engagement.

These guys are a bit like that. They don’t tend to get a lot of engagement, but what they try to do is get the attention of influencers and celebrities. In the Australian context there’s a far-right commentator who’s very well known, Avi Yemini, who has been banned from Facebook and posts very extremist kinds of content; they try to get his attention, knowing that he has 130,000 followers, putting together all the pieces and playing a probability game to try to get to a situation where a very divisive piece of content or hashtag takes off. The Dan Lied People Died hashtag, for example, has very problematic memetic roots in the China Lied People Died hashtag, and therefore carries these kinds of xenophobic connotations.

So it really factors into some problematic, very emotionally charged discourses around race and around China. And so they play this game, knowing that if the hashtag enters into the trending list, it not only gets attention, it also, in a sense, validates what’s happening, suggests there’s something to it. And this is where things really took off, at least for the analysis that we did: once these accounts, coordinating together in a loose fashion, managed to get the hashtags they were pushing onto the national trending list, this then set in motion a series of very swift events by which mainstream media, particularly News Corp media in Australia, and these online influencers or journalists like Avi Yemini came into play, and it went viral. So it’s kind of like nothing happens until suddenly everything happens.

So I guess, on the question of the role of algorithms, I don’t know either, and I’m keen to see what the answer is; I’m very slowly trying to understand it as well. But certainly there’s the gamification of trending topics, and the difficulty that Twitter has, I think, in trying to moderate this, because the activities these accounts were doing, perhaps inauthentically, are very similar to what, say, Black Lives Matter activists do, who will also coordinate retweets, who will also be posting at a rate that at a glance would look like a bot. But actually, when you run the experiments yourself, and I’ve actually done this...

So I’ve sat there, and it’s not particularly rigorous, and it’s not peer-reviewed research at the moment, but I’ve sat there with a mouse, and on my laptop as well, and tried to match the rate of retweeting of many of these accounts, to see whether or not it’s humanly possible, and in many cases, anecdotally, it is. I’m just sharing a bit of experimental work that I’ve done. I’ve also done it with my left hand, my non-dominant hand, to validate it a bit, and even with that limitation it is still possible to retweet two to three times a second, when everything in your feed is the same topic or the same content, because you’re following accounts that are like yourself. So, yeah, this provides absolutely no answer to the question at all, but these are just some of my observations in this space.

Kathy: It kind of just shows how easy it could be to amplify your message. So how does Twitter currently detect and prevent the creation of fake accounts?

Marcel: It’s a great question; I want to know as well. In the case of Twitter, well, they have very extensively defined, I’m not sure how clear, rules and guidelines for what’s allowed or not. For example, you’re not allowed to use multiple accounts run by the same user to amplify content; you’re not allowed to coordinate with others to bulk-amplify certain content, et cetera. I don’t need to read out all the rules. But the problem, I think, is that I’m myself not sure whether, under the current policies, it would be prohibited on Twitter to go on the platform and set up an account with a completely made-up name and maybe a flower as a profile picture. I know you can’t impersonate another account, so I couldn’t call my account Tim Graham from Australia and use his picture, because that would be impersonation. But I could just call myself Tim Flower and use a flower as a picture, I think. And that’s obviously a little bit problematic, because it makes it much more difficult for us to understand what their policies are and to work with them. But, unfortunately, I’ve never been there or worked there, so I don’t have clear inside knowledge; maybe Tim has another opinion on that. In addition, I think we should also give Twitter at least some credit for the level of data access they provide, and recognise that this micro-level analysis of retweet engagement and coordination is sort of only possible on Twitter, because they give us enhanced data access compared to platforms like Facebook, for example. Which in return means that we could only detect what was going on there, and apply this level of scrutiny, because it was possible; we still very much have a blind spot for other large platforms like Facebook, YouTube and whatnot, where we don’t have micro-level engagement data and can’t even analyse this.

Kathy: And I guess my next question is: what more can social platforms do to prevent fake followers? I guess it’s hard to answer when we don’t know what they’re doing now. Do you have any ideas for how they could prevent fake followers in the future?

Marcel: So I think it will be very important to define clear thresholds, characteristics and attributes of accounts that constitute behaviour regarded as problematic, and not only selectively apply them in certain contexts that happen to get attention at a specific point in time. And I mean, we offer some other ways of detecting, say, coordination, which is a special form of inauthentic behaviour if you use multiple accounts in coordination, and we do find that Twitter has suspended, or is, sometimes slowly, suspending, some of the accounts that we detect engaging in this behaviour, and some other accounts where they don’t detect it, or detect it only slowly. If we can’t find coordination with at least one other account, then it’s basically innocent until proven guilty of violating something else. But I think conceptually that’s the most important question.

Kathy: Yeah, those are some really good points. Tim, did you want to add anything to that?

Tim: Well, what do you mean by fake? How do you define fake? I think this is a long-standing question, philosophically and socially, and it’s not necessarily a new or technological problem. I think one of the difficulties for broadly Western social media is that pseudonymity, anonymity, freedom of expression, and a lack of surveillance mechanisms are baked into the very design of these platforms. They’re open by design, and I think this contrasts with the design principles and the regulatory and legislative context for platforms in other countries, like Weibo in China, for example. So I think a lot of this activity that sits on the boundary, or that you might constitute as fake, is prevalent because of the design; it affords this kind of participation. And it’s a really difficult question. All I can do is highlight or problematise where this goes wrong, or how difficult it is to do. So, look at the concept of anonymity. Say somebody creates a new account called flower123 and decides that they want to start really supporting one side or another of a political debate.

When researchers actually go and talk to these people and try to understand what’s happening, often they’ll find that it’s women and marginalised minorities who are using the affordances of anonymity in order to express something that might otherwise cause harm to themselves. And this becomes perhaps even more complicated when it comes to foreign affairs and foreign diplomacy, discussing state policy and things like that, where the risk is that you might be detained or arrested indefinitely, or something like that. So just having a flower photo and a recently created account, like Marcel said before, is perhaps suspicious if you see that there’s a sequence of accounts like this all posting one after another, or posting an extreme number of times per day, or always at the same UK diplomat, or something like that. But there is so much of this on the platform that I feel sorry for Twitter in some ways. Do you hire an army of 50,000 librarians to try to govern this content, and try to get in amongst it like the researchers do and understand it contextually from the inside? They can’t do that at scale, and so they still have to rely on these blunt-force technical instruments to moderate, or in some cases they depend on us: if enough people report the flower123 account, then suddenly Twitter is paying attention to it. So it’s almost like a hidden self-moderation process akin to Reddit, but it’s not built that way. I’m skirting around the question here, but I think all of these factors are at play.
You know, at least when I’m trying to understand this, what I learned from the report that Marcel published with his team is just how difficult this is to do, and how much we need both qualitative and quantitative methods. We need to develop, not a standardised set of methods exactly, but almost a qualitative science that is agreed upon and supported, so that we have a common grammar, a common set of parameters, by which we can put our heads together with the platforms and ask: what’s OK in the public sphere? Five hundred tweets per hour from anonymous flower accounts? Is that OK?

You know, that kind of thing. It’s almost a question that we need to put to ourselves about what we permit as participation in the public sphere, in much the same way as we’ve done for centuries with democracy. Marcel, you mentioned astroturfing before: you can go and print flyers en masse and dump them all over the city, but there are regulations in place to ensure that can’t happen, or at least parameters around it, certain places you can put them, things like that. That’s not the case on social media, so we still have to figure out the path to an acceptable sort of deliberative, participatory democracy.

Kathy: Really good points, thank you. Marcel, thank you very much for joining us today and sharing the research behind this report, and Tim, thank you for sharing your research in this area as well.

Tim: It’s a pleasure.

End: You’ve been listening to a podcast from the ARC Centre of Excellence for Automated Decision-Making and Society. For more information on the Centre go to www.admscentre.org.au.