EVENT DETAILS
News and Media Symposium – Platform Governance: Race, Gender, and Sexuality
6 October 2021
Speakers:
Dr Emma Quilty, Monash University node, ADM+S
Prof Kath Albury, Swinburne node, ADM+S
Dr Rosalie Gillett, QUT node, ADM+S
Dr Thao Phan, Monash University node, ADM+S
Dr Zahra Stardust, QUT node, ADM+S
Watch the recording
Duration: 0:58:47
TRANSCRIPT
Dr Emma Quilty:
Thank you everyone, for having us.
My name is Emma Quilty and I’m coming to you live from sunny Melbourne. I am an anthropologist specialising in gender, trust and technology, and I have the great pleasure of moderating this wonderful panel this morning. So today our speakers will be discussing how users understand online harm and safety, and consider what kinds of legal, policy and platform governance practices are necessary to help foster safe and inclusive digital environments.
First up we have Dr Zahra Stardust, who is a socio-legal scholar working at the intersections of sexuality, technology, law and social justice. Then we will hear from Dr Rosalie Gillett. Her research expertise includes online gender-based violence, the normalisation of abuse, and platform governance. We will also hear from Kath Albury. Kath’s research is grounded in the fields of media, communication and cultural studies. And our fourth and fabulous speaker, Thao Phan, is a feminist technoscience researcher who specialises in the study of gender and race in algorithmic culture.
Zahra, what is your provocation to revolutionise platform governance?
Dr Zahra Stardust:
Thanks so much Emma, and yes, we’re starting with provocations because I think so many of us are busy diagnosing problems. But actually, we wanted to talk today about solutions and ideas and how we could make the internet a better place. So my provocation is, I want to see a world- oh, I’m breaking up a little bit. Shall we swap?
Let’s take two. My provocation is, I want to see a world where sex positive social media is the norm, where platforms value sexual content creators and actively facilitate a vibrant and thriving online space for artists, queers, sex workers, sex educators, trans folk, kinky folk, people of colour, fat people, people with disability. And in thinking about sex positivity, I’m drawing upon a long history of scholarship from sex positive and sex radical scholars. It’s not compulsory sexuality, it’s not toxic positivity. It’s bounded in consent and it’s really about dismantling some of the systems of erotic stratification and persecution that we see in current platform governance measures. So at present, social media platforms govern sexual content through a dual process I like to think about as, on the one hand, extraction, and on the other hand, exclusion: they extract a lot of value from sexual content but then dispose of sexual content creators as if they are simply collateral damage. Social media companies are under a lot of pressure to address issues like non-consensual intimate imagery, child sexual abuse material, and other kinds of harm, and they’ve largely done so by introducing blanket bans on sexuality, nudity and sexual solicitation. Much of the screening and detection software that they’ve developed and deployed picks up a range of users, from sex educators, sex workers, pole dancers and burlesque artists to queer folk. And in my research over the last 18 years, which has involved speaking to a number of different sexual content creators, my participants invariably described experiences of being shadowbanned, suspended, de-platformed, maliciously flagged, or having their content repeatedly removed even if it complied with the terms of service or the community standards.
And then even on platforms where sexual content is permitted, they described very arbitrary rules and standards that completely prohibited activities that were lawful and consensual, in ways that really didn’t have much to do with harm but mostly appeared to be about corporate and reputational risk, or the heteronormative sensibilities of platforms themselves. And at the same time, my participants described these experiences of extraction as being used by platforms to build up their commercial base - populating their content, increasing their size and viability - only later to be disposed of when those same platforms introduce policies to remove sex entirely. And we can think about Tumblr or even OnlyFans as examples of this. So in June this year I organised a session at RightsCon, which is a summit on human rights in the digital age, and we spoke about alternative frameworks for sexual content moderation. I facilitated the discussion with five other sexuality scholars - Jiz Lee, Emily Coombes, Emily van der Nagel, Katrin Tiidenberg and Mireille Miller-Young - along with 60 other participants. And as a result of that, we’re now drafting a manifesto for sex positive social media. It spells out some of the basics: that the regulation of sex is political, that platform regulation shapes our erotic imaginations and our sexual possibilities. That sex shouldn’t simply be conflated with harm or risk. Kink is not violence, nudity is different to sexuality. Body fluids are not offensive. Sex is cultural, sex is social, and sex can be integrated into social media. So creating sex positive social media also means that platforms need to be transparent about the values that guide their decision making, rather than simply pretending they’re neutral. So instead of targeting sex, they ought to be aiming to prevent sexism, racism, transphobia, whorephobia, ableism, and other manifestations of structural oppression on their platforms. And it’s also about transparent decision making. So not only having accessible appeals, individualised reasons and independent dispute resolution processes, but also avenues so that users can understand - thinking about the panel yesterday on recommender systems - how their sexual content is actually being ranked or organised, or sorted or curated. It’s important to note that platforms need to recognise that legality and illegality are not necessarily determinative of whether sexual content is consensual, or even ethical. Many forms of consensual sexual content are unlawful in various parts of the world, and social media companies don’t need to turn to these very restrictive, repressive regulatory regimes as their kind of lowest common denominator baseline for determining their global community standards. Platforms actually have a really great opportunity now to improve upon the poor sexual ethics of governments. So the manifesto, which is in draft form at the moment, also sets out how sex positive social media requires shifts in the structures and the businesses and the revenue models of platforms, who need to start valuing sexual content creators in material ways, rather than just treating them as an untapped resource. And this means prioritising marginalised communities as stakeholders, as decision makers, and as leaders in their governance models. And finally, if we really want sex positive social media, it also requires an enabling legal environment.
A lot of platforms and governments have over-focused, I think, on swift takedowns, on quantitative measures, on increased surveillance, on a carceral turn, on new automated tools, and Australia’s Online Safety Act is a good example of this. It provides a lot of power to the eSafety Commissioner to take down any sexual content that she thinks fit, without any requirement to give reasons, no criteria for what warrants removal, no process for users to be notified or have an opportunity to respond to complaints, and also no requirement to publish transparent enforcement data that would allow researchers and civil society to hold them to account. So instead of all of this privatised decision making, we need governments to decriminalise consensual sexual activity, to repeal laws that hinder access to sex education, health promotion, safety information and harm reduction materials, to regulate to prevent the formation of media monopolies, and to materially support the proliferation of independent media, platform cooperatives and alternative economies. The resources that platforms are spending on screening and detection and carceral approaches could be redirected and re-invested into community responses and initiatives that support prevention efforts, including comprehensive, culturally relevant and tailored education on sex, respectful relationships, and consent.
Dr Rosalie Gillett:
Okay, hey everyone. So platforms must re-envision their approach to self-regulation if they’re ever going to foster safe and inclusive digital environments. So in an effort to prevent online harms to women and other vulnerable and marginalised populations, you may be aware that platforms have really focused on increasing the accuracy of their human and automated moderation systems, to detect and remove discrete pieces of content. So platforms delete the content and they ban the users. And I should note that content moderation is a really important service that platforms provide. But I think we’ve been misled to think that it’s an appropriate way to solve online gender-based violence and other online harms. And so these messages have really served to align the notion of justice for gender-based violence with the logic of platforms. And importantly, this understanding does little to consider the structural and systemic conditions that actually lead to harm, such as inequality and hierarchies of oppression. And as long as platforms are thinking about harm in terms of content, I think that they’re never going to be able to prevent the things that actually cause harm. And this is exactly the way that our criminal justice system treats crime, and it really hasn’t been an effective approach. Many of you here may be aware that our criminal justice system is resource intensive and does little to rehabilitate offenders and centre victim survivors’ needs. Like this approach, platforms leave victims out of decision-making processes, and they rely on punitive punishment, often removing those who break the rules with little to no effective reintegration strategies. So to give you an example of the similarities between our criminal justice system and platform self-regulation, Twitter for instance has a strike system whereby users receive punishment based on the number of rule violations that have been scored against their accounts. But everything we know about changing behaviour tells us that this carceral mindset is never going to tackle the underlying structural conditions that actually cause harm. So what if platforms reimagined their fundamental approach to content moderation and self-regulation more broadly? Instead of focusing on detecting and moderating discrete pieces of content and users, what if we investigated ways to rehabilitate those who cause harm, transform communities, centre victim survivors, and actually enable people to feel safe online? Because there are significant similarities here between how platforms regulate and criminal models of justice, I think there’s a really important opportunity to draw on theories of justice, and in particular transformative justice, which shows us how we can better repair harm. So this alternative version of justice is not a new concept; rather, its principles are grounded in the experiences of people of colour, and it really centres marginalised communities. Most simply, transformative justice looks towards collective, community-based responses to interpersonal violence, rather than punitive criminal legal forms of justice. And transformative justice advocates say that we really can’t end violence by using violence.
I should note that this approach is really not about excusing people for the harmful behaviour and the harm that they’ve actually caused, but it recognises that we need to understand this really important context, and these conditions that lead to harm, to really prevent the violence and promote healing and accountability. And so I think there’s an important role for automation here, but to date we’ve seen some pretty bad uses of this technology. And this is because the logics of platforms only allow them to see moderation answers which are scalable. But how can platforms move beyond using automation to detect ostensibly harmful words, and actually use it to create tools that cultivate a sense of community and community accountability, which has actually been shown to mitigate harm? And what should platforms do instead, where these approaches won’t work? I think that these are really crucial questions, and if answered they could actually enable platforms to cultivate safe and inclusive environments for all. Thank you.
Prof Kath Albury:
Thanks, I think I’m next. Yes, so my thoughts today draw on my recent reflections on the ARC Linkage project, Safety, Risk and Well-being on Dating Apps, which I co-facilitated with my Swinburne colleague Anthony McCosker and some others, and in which we invited app users to share aspects of dating app culture that made them feel safer and less safe. They also draw on some collaboration I’m currently working on with Zahra and Rosie, and, more importantly, on some of the recent discussions of risk associated with dating apps that have seemed to me to be quite vague and ill-defined. So my provocation in relation to platform governance today is to unpack or investigate the notion of online safety. I began to investigate the concept of safety after running a project that had safety in the title, and realising that I had invited users to self-define safety but hadn’t really thought about the context or the concept much myself. This investigation led me to some really interesting collaborative work that brought together researchers from the fields of linguistics, philosophy and risk studies, to take a closer look at the ways that safety and risk have been defined in both public debates - in kind of vernacular language - but also in expert or technical documents like policy documents or risk audits.
So this team of researchers, led by Nicholas Mueller, have observed that the term safety is generally very poorly defined in relation to risk. There is a huge field of risk studies, there is a sociology of risk that many of you will be well aware of, but mostly safety is framed as simply the inverse of risk, as if we know what that is. However, most risk modelling, as it’s done in a kind of formal sense, assumes that risk can only ever be reduced as opposed to eradicated, which implies that absolute safety is unattainable. So the notion of safety actually makes no sense without understanding that some level of risk is always already present. And Mueller’s team discusses the findings of multiple studies to begin to think about whether it’s reasonable to simply say, okay, well safety is subjective then. But they conclude that most often safety, or the perception of safety, correlates with the perception of control. So for example, in the case of the United States, many people will say they feel safer subjectively when they carry a gun, even though statistical evidence suggests that gun owners are more likely to be killed or injured, probably by gunshot, than those who do not own guns.
So subjectively, the perception of safety is not necessarily statistically aligned with safety per se. They conclude that safety cannot be understood in subjective terms, but nor is there an objective way of defining safety. Instead, Mueller and his colleagues propose a third position, which they define as inter-subjective safety. And in this context we would evaluate varying scenarios or states of being in relation to one another; it’s neither objective nor subjective. Rather than defining one scenario as 100 percent safe and another as risky, Mueller and colleagues argue that it’s more appropriate to consider whether one scenario might be at least as safe as another, given the probability of dangerous or risky outcomes. Another paper from members of this same research team studies everyday or vernacular uses of the terms risk, safety, and security in news reporting, and in this study the researchers note that there are tendencies to associate safety with unintentional harm, and security with intentional harm. So what this means is that the term safety, in kind of vernacular language, is most often used in relation to discussions of accidents, and when it’s associated with protective technology it’s things like safety belts or safety nets that can’t guarantee there will be no harm, but can mitigate harm in an accidental setting. In contrast, the term security is more often associated with deliberate attempts at harm - things like military action or data breaches, or assault for example. So given that risk is often seen as desirable in an everyday sense - you know, no risk no reward, for example - the researchers suggest that where policy makers, academics and others want to convey a technical as opposed to vernacular understanding of risk, they are better off using the term expected damage, both to think through the concept and to communicate it. So this was a provoking thought for me, and my provocation then - back to people who interview me, for example, about dating apps and whether they are safe or not - is to reframe the question, in both technical and academic discussions of safety in relation to platform governance, away from asking are dating apps safe or not, or are they risky or not, towards asking: is dating using apps at least as safe as dating when you’re not using apps? If it isn’t, why not? What’s the expected damage associated with the use of the platform? Or is dating app X at least as safe as dating app Y? If not, why not? What is the expected damage we associate with one platform as opposed to another? And if we’re considering the risks of dating app use in general, or the use of other platforms, what specifically is the expected damage? Not the risk - what is the expected damage? And who is going to be damaged, and then how is that best mitigated? I’ll leave it there, thanks very much.
Dr Thao Phan:
So, thanks so much everyone. As Emma mentioned, my name is Thao Phan and I’m beaming to you today from unceded Wurundjeri country in Melbourne. I’d like to focus on the topic of race and platform harm. So there’s no shortage of examples of things like racial vilification, racial discrimination and other racist actions that take place on platforms today. And most of this is done in very explicit ways, in ways that we can see: through the use of, say, directed racial slurs, the reward and circulation of inflammatory racist content, or the routine delivery of demeaning results for racialised search terms, say for things like ‘black girls’ or ‘Asian girls’. These racist acts are rightly understood as the online continuation of a pre-existing state of affairs, the symptom of an overall racist society. The problem of bias in algorithms is often characterised in this way, as a problem of history repeating itself, the flaws of the past being encoded within infrastructure, which then traps the future inside of a recursive loop. In Wendy Chun’s words, a model of prediction that closes the world it pretends to open. So for example, a principle like garbage in, garbage out captures this well. You know, any model that’s been trained on a flawed or dirty data set will reproduce the problems within that data set. In this case it’s racism in, racism out. You put racist data into a model, and it’ll reproduce and perpetuate that same racism. But the provocation I’d like to make today is that while platforms are undeniably reproducing racism, they’re also doing something which I think is much more subtle and arguably much more radical. They are fundamentally transforming the category of race itself, and they’re doing it in a way that has significant consequences for how we traditionally think about, identify, and confront racism and racialisation. So let me use an example to illustrate. Between 2016 and 2020, Facebook very controversially allowed advertisers to target users in the US using three broad categories of what they called ethnic affinities. And these were African-American, US Hispanic, and Asian American. And they were controversial for two main reasons. The first was that advertisers could use these categories not only to target, but to exclude racial groups from receiving advertising messages. So for example, advertisers could purchase ads targeted at Facebook users who are house hunting and then exclude anyone within an African-American, US Hispanic or Asian American affinity from receiving those ads - which, as many commentators pointed out, is effectively a violation of US federal anti-discrimination laws. The second has to do with the way people are being categorised. So Facebook does not explicitly ask users to identify themselves according to race; instead these categories are algorithmically determined using behavioural data and indirect proxy indicators - things like language, interests, IP address, and so on. Now there’s been a lot of attention and analysis placed on that first issue, the ways in which algorithmic techniques are being used to continue the work of racism, in this example through racial segregation and systematic housing discrimination. It’s an exemplary case study for what Safiya Noble calls technological redlining: the use of algorithmically driven software to reinforce oppressive social relationships. But what receives less attention is that second issue, which is about how platforms, and algorithmic culture more broadly, are producing new modes of racialisation.
New ways of categorising and recognising people as raced, and as a result transforming what it means to be raced today. So why is this important? Unlike the explicit racisms that you can see, that you can pinpoint, that you can confront, this is something that operates in ways that are completely opaque - either because they are deliberately obscured by companies like Facebook, or because they operate at levels that are literally imperceptible to us, that are beyond human scrutiny. And this is of course the great appeal of using techniques like machine learning: they can do things at speeds and scales that humans are unable to do. The direct consequence of this, though, is that, in Louise Amoore’s words, they operate on a plane in excess of human visibility.
So race isn’t what it used to be. It’s not just about the way one looks or about the community to which you ascribe. Race is now also an emergent epiphenomenon of large-scale automated data processing, and this is a particularly significant point for activists and scholars invested in racial justice, because it confronts us with some important questions.
How can we advocate against a process that operates beyond our perception? How can we keep up with the pace of dynamic classification? You know, classifications that are being assessed and reassessed with every new piece of behavioural data, and how do these issues make redundant the traditional tools of resistance? So advocating for inclusion or representation or diversity makes no sense within this opaque post-visual regime. How can you resist a category you don’t even know you’ve been placed within, and how can we form communities of solidarity under those conditions? Thanks very much.
Dr Emma Quilty:
Great, thank you so much to each of our wonderful panellists, for each of your responses to the provocations. You are all such incredible researchers and you are doing such incredible work in the world and it just makes my heart so happy, and I’m just so, so lucky to be able to moderate this panel. I just adore all of you, and I actually have a question for the entire panel. I’m going to start with Zahra and Rosie. What are the challenges and barriers to realising these provocations?
Dr Zahra Stardust:
I think one of the barriers at the moment, in terms of sexual content, is the policies of payment processors, and many platforms are concerned about their payment processors withdrawing. So the power and the monopoly of Visa and Mastercard especially, in setting policies - and very often arbitrary policies, which position any kind of sexual use or sexual transaction as being against their acceptable use policies - is a major concern, and a massive barrier to many people working in this industry. And again, they often take the most restrictive policies. For example, some of you will know they won’t permit businesses to sell sex toys, because in Texas you can’t own, say, six dildos. And so they will take examples like that and then base their global policy off very particular jurisdictional issues, instead of looking to more progressive cultures and standards. So I think that’s a big thing that’s happening at the moment. And there’s a big push back at the moment - there’s actually a campaign, I think the hashtag is ‘acceptance matters’, which is all about financial discrimination against sex workers and sex industry businesses. And there have actually been a number of workers who are now pushing back and making claims against banks and payment processors. There were some sex workers in Queensland who were successful in having a settlement with NAB, so that’s a big movement that’s happening all over the world. And I’m doing some work with Hacking//Hustling and Decoding Stigma, and some others in the US, about this issue. I think another one of the barriers is the sensationalist kind of media reporting around this, and the kind of political pressure. Because social media companies are so often concerned with reputational risks and market pressure, their response is most frequently just the bare minimum standards. But it’s also these kind of protectionist narratives that are emerging - you know, the use of ‘we need to protect women and children’ as a guise for expanding state power and expanding platform power. And you know, when you look back at the ways in which pornography has been regulated, our obscenity laws emerged when the printing press was invented and the regulators were concerned with the democratisation of culture, with getting this material into the hands of the masses, which they thought would corrupt people. And so this focus on harm - and I know Jean talks a lot about this too - like why are we focusing on harm, why don’t we focus on creative agency, like you say. And this focus on individual content, or interpersonal behaviour, really draws attention away from platforms themselves as perpetrators of harm. I think that’s a really big problem. And also, coming from a public health background and a sexualities background, a lot of our focus is often on building capacity and supporting development, and how do we foster environments where people can unlearn racism and unlearn sexism. And there are so many different kinds of harm that are happening on these platforms that just aren’t really being recognised as harmful or worthy of attention, you know. So it’s creating or perpetuating structural inequalities. It’s presenting barriers for sex education and harm reduction material. It’s sanitising and gentrifying the internet. It’s diminishing our conversations about sex.
It’s resulting in discrimination and keeping people in precarious kinds of labour. It’s interrupting movement work, and there’s some great community research now by Hacking//Hustling, who wrote a report called Posting Into the Void, where they found that 51 percent of sex workers who were also activists reported that shadowbanning had interrupted their social justice movement organising work. So I think it’s really about where the attention lies, and which issues are worthy, and which people are worthy of protection.
Dr Rosalie Gillett:
Yeah, so as a few of us have spoken about today, in this panel and other panels, platforms have this really huge problem of scale. And as I mentioned earlier, platforms can only really see the problems that they can fix, essentially. So they only see moderation answers which are scalable, and because of this platforms really focus on improving the technologies that they already have. They treat harmful behaviour as a classification problem, and this means that they’ve developed tools that are based on classifiers. But as we know, classifiers can’t understand context, and they can only really detect the content that they’ve actually been trained to identify. And so I think that this means that platforms have this real tunnel vision, and they’re focusing on developing these tools that are scalable but may not be particularly effective. And we can see an example of this with the dating app Tinder. So Tinder, within the past year, has deployed a couple of new automated speech detection tools, called Are You Sure? and Does This Bother You? Essentially these are behavioural nudges that they deploy in the platform’s instant messaging service. So if somebody sends something that the platform thinks might be harmful, a message comes up saying are you sure you want to send that, and likewise, if somebody receives a message that might be harmful they’ll get a little prompt that says does this bother you. And so I think that this just kind of falls into that same trap of really thinking about just discrete pieces of content. And while it’s unclear how these work in practice, I think that, you know, they won’t be picking up on the more mundane, everyday behaviours that people do experience online. And that’s something that I found in my PhD research, which investigated women’s experiences of harassment and abuse on Tinder. And I think that this shows us that there are two really significant problems here: that platforms focus on content rather than behaviour, but also that they fundamentally misunderstand how their users actually experience and understand harm and safety online. And this then becomes embedded in their automated tools. But because measures like these are scalable, it’s going to be a real challenge to get platforms to think about how they might be able to use automation in new and creative ways that actually centre the needs of victim survivors, and their users more broadly.
Dr Zahra Stardust:
Can I say something very, very quickly about scale? Because I think this comes up a lot, and I wonder a lot: is scale the problem? You know, some things are just not scalable. And last week, when we did a presentation, Aaron Snoswell, who is a postdoc here with us, asked: is the solution slow tech? And I just wanted to pick up on that, because there’s actually a big slow porn movement - the term was coined by a partnership here called Sensate Films. And they were very much drawing upon a shift towards ethical and conscious consumption, and upon movements like slow food and slow fashion, to think about slow pornography. And yeah, so I just wanted to pick up on that point that Aaron had made last week about slow technology, and perhaps that’s a way to think about resistance to surveillance capitalism.
Prof Kath Albury:
You want me to jump in?
Dr Emma Quilty:
Yeah, thanks Kath.
Prof Kath Albury:
Okay, so the question was about barriers to realising the joyful fulfilment of our provocations. Yes, and I think in terms of my idea that we might rethink some of the common-sense understandings of safety and harm, or risk, when it comes to dating apps, one of the things that often strikes me is that even though dating apps have really only been mainstream since around 2014, there’s already a kind of nostalgic belief in a pre-digital before-times of dating that was somehow intrinsically safe and transparent, where no one misled each other, or coerced each other, or was in any way abusive. So I think we really need to unpack some of our assumptions. If we think about platform governance in relation to safety and risk, and relationships and intimacy, we need to unpack some of our nostalgic assumptions around aspects of sexuality - the idea, for example, that marriage is intrinsically safe whereas hooking up with a casual partner is inherently risky. There have been suggestions, for example, that what’s needed on apps is stricter identity verification, or increased police access to user chats and profiles. And I think the speakers so far have been very clear on why these are not guarantees of safety for many users. We know that there are documented harms associated with these things. They don’t make sex workers safer, for example. Automated identity verification systems don’t make trans people feel safer on apps; in fact they single them out for all kinds of marginalisation and targeting and harassment. Aboriginal and Torres Strait Islander people certainly don’t feel safer knowing that the police can access their chats. So there are lots of things we need to think about in relation to safety in digital spaces. And the question is, safer for whom? What does safety feel like and look like, and what do we need to really think about in relation to the distinct differences between platform spaces and other spaces, rather than assuming certain kinds of sexual relationships are inherently harmful? Thank you.
Dr Thao Phan:
I’m going to jump in. Yeah, cool. So I think when it comes to the provocation of, you know, how do we recognise the different forms of race and racialisation that are occurring today, I do think it is a problem of accessing better tools of recognition, of identification. Some of those tools really don’t yet exist. You know, we need lots of different kinds of tools: theoretical, methodological, and yes, technical tools as well. I mean, I think some of the projects that have been outlined over the course of the News and Media Symposium are really exemplary in this regard. So the Australian Search project that was outlined yesterday by Axel, and the Facebook Ad Observatory project that is going to be discussed this afternoon - these kinds of projects, I think, are really what we need. Because at the moment our current way of recognising when there’s a problem with race is just purely through controversy, right - when things start to fall apart, when things go wrong. And that’s really the only means of recognition that we have. I also wanted to just touch on what Zahra was saying around slowness. I think slow scholarship is a really big part of that as well - giving people the tools and resources and time to think about really, really difficult subjects like race and racism. I mean, one thing that I always find really interesting is that nobody really acknowledges how hard it is to talk about race, and to talk about race in this particular country as well, where for most of us our schooling did not equip us with any of the tools of racial literacy. So when we come to these problems of race, thinking about them later in life, we don’t necessarily have the language. We don’t have the concepts to unpack and deal with them. So a lot of us have to acquire that expert knowledge on the side. And you know, for most of us as well, thinking about race and racism is incredibly uncomfortable, because you need to confront feelings or experiences that you haven’t really yet reckoned with in your life, and that really does take additional support and time. So what does it mean for us then? It means that we need to treat critical race studies as a legitimate field of study, an expertise that we need to invest in and grow. And it might mean that we need to be open to the fact that we will make mistakes, and that others will make mistakes, and that maybe we need to be more forgiving around that - you know, as Rosalie was outlining, letting go of this carceral mindset, this punitive mindset that there is an easy answer to race, and that if you don’t get it right then you need to be punished in some way.
Dr Emma Quilty:
Great. Thank you so much Thao. That is such a good point. We really are missing that level of racial literacy in our education system and that would be a fantastic research project actually. I think we’ve run out of time, is that Craig? Are we finishing up at 11, yes?
11:15. Oh great. Fantastic. Sorry, brilliant moderating skills. So we have time for some questions from the audience, and I’m going to direct this one at Kath. This is from Jean. So, expected damage still appears to align with legal liability - so building on your fieldwork, how can we think about safety in more social and collective ways?
Prof Kath Albury:
Yeah, thank you, that’s a good question. The research group whose work I’ve been using to think about this is indeed working in the field of risk studies, and most of their publications are about industrial spaces and kind of OH&S, so that’s been quite an interesting foray into new literature for me. In terms of the research that we did, it was really interesting that by inviting app users to self-define safety - what makes you feel safer or less safe - and to do it in those kind of relative as opposed to absolute terms, people offered a whole lot of personal strategies that are very much about weighing up the kinds of cues that they receive from other people, what they want from the intimate encounter, and what they believe is possible, which might be limited in some ways. So the term satisficing has been used in some publications in relation to the ways that trans people, for example, negotiate the app space, where they can in some cases encounter hostility or unwanted fetishisation, but also great connection, and joy, and friendship building, and network building, and all kinds of emotions and cultures that contribute to safety in a kind of broader, collective sense. So yeah, it is a space where legal frameworks or risk management frameworks are, I think, insufficient. But often those terms and those modes of understanding safety and risk get very blurry when platform governance is discussed, because the legal and the common-sense frameworks begin to blur and mesh, and it can be quite tricky to unravel the tangle. And I know, even in the Slido, a lot of people have heard the question I asked about dating and dating apps in very different ways - a very different wording has been heard from what I said. The question I asked was: is dating with apps at least as safe as dating without them? And if you look in the Slido, that’s not, I think, what some of the questioners heard. So yeah, there’s a lot still to unpack in our conception of safety and risk, and how we translate that into a sense of where, for want of a better term, the guard rails should be in relation to platform governance.
Dr Emma Quilty:
Great, thank you Kath. There is so much that needs to be untangled, and there’s so much work that needs to be done and needs to be improved. So this next question from Slido is for Zahra, but really could be answered by anyone on the panel. So you’ve covered so much in your overview in terms of what needs to be improved- is there a starting point for you in terms of low-hanging fruit?
Dr Zahra Stardust:
Yes, well look, a number of us do consultations every now and then with the Facebook content moderation team, and that’s one place where we have been starting. Obviously Facebook and Instagram are notorious for their policies on sex, nudity, and solicitation - and there’s some great research, by the way, by Alice Witt around Instagram’s moderation of women’s bodies in particular. So those are key platforms that impact many people. Recently I did a talk at Google on non-consensual intimate imagery, and how we need to be attuned to reframing that conversation to think about the range of different ways in which that affects many different users, in terms of extraction and piracy, and theft of people’s content.
I mentioned payment processors as well. There’s a project happening at the moment around how we can have ethical and consensual adult content distribution while also keeping in mind issues around child protection and intimate imagery, and that is being worked up to be presented at the Internet Governance Forum in December. So there are a number of events coming up, and there’s a lot happening in this space. But I also wanted to say that this is really about rebalancing the internet economy, to take a phrase from Digital Rights Watch, who have a whole series at the moment on rebalancing the internet economy. And so it’s about thinking about where the profits and benefits can be redistributed. And there are many local projects that are doing this really well at the moment, in exciting ways, so it’s about generating attention towards them and investing in them. Assembly Four is one - they are a sex worker collective who have created an alternative to Twitter called Switter. There’s another platform cooperative called Peep Me, which is a sex worker platform cooperative that gives ten percent of its profits back to sex worker organisations. There’s Lip Social, which is a new social media platform that is deliberately pushing back against Facebook and Instagram and welcoming sexual expression on its platform. And there are also many indie porn websites which have existed for a long time, like Pink Label TV, which actually have a really nuanced way of thinking about content and kind of queer community standards. So I think we ought to be looking to those platforms for guidance, for both governments and tech platforms, in terms of how we can do better. And the other thing I wanted to say is that, in addition to changing the framework from thinking about risk and safety and harm, perhaps we could think more about liberatory imagining and speculative dreaming, and opening up those more generative conversations that are less reactive. That’s a really productive space at the moment, and there’s some great work to look at coming out of the World Association for Sexual Health, which created a Declaration on Sexual Rights in 2014. Their focus is really on pleasure and sexual pleasure. We often think about this in terms of freedom of expression, but actually their focus is around the human right to the highest attainable standard of health, and sexual pleasure is part of achieving that. So there’s some great work coming out of that from an international human rights perspective, about how that can provide some kind of framework for the responsibilities of platforms in this space.
Dr Rosalie Gillett:
I might also just add something there quickly too. I think that - I mean, this is really hard, what I’m about to say - but I think we need to figure out how we can actually centre victim survivors and their needs, repair harm, rehabilitate those who have caused harm, and achieve a sense of community online. And so in a work-in-progress study that I’m conducting with Professor Nic Suzor, we’ve been examining the challenges of combating misogyny and harmful social norms on Reddit. And we argue that Reddit must focus on improving the culture of its subreddits, and that driving this real change requires the participation of the platform’s moderators, who can undertake the really important work of challenging the hateful ideology that actually brings these communities together. And so I won’t go into the details of that study now, but I think the first step to changing how people behave isn’t content moderation, but trying to instil a respect for women and other vulnerable and marginalised groups. Of course Reddit is governed differently to most large dominant platforms, given that its subreddit moderators regulate the subreddits, but I think there’s a lot that we can learn from this work as well. But I think there are also clearer opportunities at the moment for how we can centre victim survivors, and the dating app Bumble, I think, is a really interesting example of how platforms are currently trialling this survivor-led care. So Bumble recently announced a partnership with the non-profit, feminist, survivor-led organisation called Chayn, and this is a remote trauma counselling service that they’ve now deployed on Bumble. And so people who report having been sexually assaulted by someone who they met on Bumble can then use this remote trauma counselling service, which uses automation in some kind of chat system, but also one-on-one counselling should they need it. And so this has only been on the platform for a couple of months now, so we really don’t know how effective it’s going to be, but I think that this is a really interesting and important example of how we can use automation and how we can move beyond thinking about content moderation, in ways that really centre victim survivors.
Dr Emma Quilty:
Great, thank you Zahra. Thank you Rosie. Yes for pleasure, yes for respect, yes to all of that. I’m going to throw it over to Thao now. Can you give me some of your juicy, low hanging fruits?
Dr Thao Phan:
Always after my low-hanging fruits, Emma. Yeah, I mean, earlier I said as part of my provocation that race isn’t what it used to be, you know. And I think to make a statement like that you really need to know that race never really was what you thought it was. It is constantly being made and remade, and it’s something that requires constant attention - as with gender, as with sexuality, as with any kind of form of classification and identification. I mean, this is the thing: when you have forms of classification that are rigid, that are operating at scale and automatically, they don’t account for these shifting concepts and shifting ways that we see ourselves. I think the real challenge there, again, is returning to this question of racial literacy, you know. Because to reckon with the fact that race is different now, we need to know what race was before, and before that, and before that. It’s a historically contextualised thing, and it’s something that we can study through tried methods, like doing genealogical analyses, and so on. And we can look to lots of other fields that are doing good work there and that are able to have those difficult conversations. Yeah.
Dr Emma Quilty:
Beautiful thank you Thao. Juicy as always. Kath.
Prof Kath Albury:
The low hanging fruit. Yeah look, it’s an interesting question, because the low-hanging fruit, I think, if we’re thinking about dating app cultures for example, the low-hanging fruit in Australia, which is where I’m working, would be to actually implement consent education in schools. You know, it has nothing to do with platforms really, if we’re thinking about safety in dating culture, or being at least as safe in dating culture. We don’t have a great culture around consent in this country, and so, you know, the thrill of the chase and a kind of very strange, polarised set of beliefs about gender lead to really nasty and often predatory cultures in relation to dating and intimate relationships and sexual relationships. And we have sexuality education policies in this country that aren’t actually enacted in individual schools, and we have the same controversies coming up over and over and over again. I’ve been researching in this field for 20 years and it’s still controversial to discuss pleasure in school sex education, or, you know, to think about actually making it compulsory to talk about consent in schools. So just use the curriculum you’ve got, experiment, see if it works. I think that’s kind of the first step in the Australian context. It’s not low hanging fruit in a platform governance context, but it’s kind of going to the root of a lot of the issues with dating apps, I would say.
Dr Thao Phan:
Can I just jump in really quickly. Kath’s comments just reminded me, you know, I think one of the greatest tragedies is the politicisation of topics like race, like gender, like sexuality, because it stops us from doing the work that needs to be done. Because, you know, in Toni Morrison’s great words, the very serious function of racism is distraction - we constantly have to say the same things over and over again, because we have to bring the conversation back, and it really takes up a lot of our time. Time that could be spent doing other things, really putting in the hard work that we know needs to be done.
Dr Emma Quilty:
Yeah, absolutely. Absolutely, 100 percent. So, I think, okay, that is definitely our time now. Can we get a huge round of applause for this incredible panel?