PODCAST DETAILS

Making Sense of Deepfakes
13 June 2022
Speakers:
Dr Jenny Kennedy, RMIT University
Prof Anthony McCosker, Swinburne University
Duration: 19:27

TRANSCRIPT

Jenny: Welcome to the Automated Decision-Making and Society podcast. My name is Jenny Kennedy and in this episode, we're talking about deepfakes.

Deepfakes are a type of synthetic media created by replacing faces and voices in digital videos. You may have seen the very convincing Tom Cruise deepfakes that went viral on TikTok in 2021. These AI-generated deepfakes are becoming more common and harder to spot, with the potential to create convincing footage of any person doing anything, anywhere. Joining us today is Anthony McCosker to help us better make sense of deepfakes. Anthony is a professor in media and communication at Swinburne University and a chief investigator here at the Centre. He researches the impact and uses of social media and new communication technologies, with a focus on digital inclusion, participation and digital literacy.

Thanks for joining us Anthony.

 Anthony: No, thank you. 

 Jenny: Can you tell us more about deepfakes and how they’re generated?

Anthony: Sure, the term comes from a combination of 'deep' from deep neural networks and 'fake' from fake news, so it's already invoking something technical and something scary or dangerous.

So deepfakes, as you said, are a form of synthetic media where there's a convincing replacement of faces and voices in digital video. And the fact that it's video is something new; we're used to seeing this with still images with Photoshop, but not so used to seeing it with video. This is made possible by developments in deep learning AI systems, but it's also worth remembering that it relies on large video datasets that are used to train generative learning models to produce synthesized outputs. So they apply what are called convolutional neural networks, or generative adversarial networks, and similar kinds of techniques for automating image classification and transformation.

I know that’s a lot of technical terms, but we can kind of get a sense of what they do as we go. 

Jenny: You're right, they are very technical terms, but in terms of how we then experience them, are they very similar to what we might see in special effects in movies?

Anthony: Yeah, there's crossover, but it's also different. So Chris Umé, who created the super high-quality Tom Cruise deepfakes that are popular on TikTok, describes himself as a VFX or visual effects artist as well as a deepfake artist. He's had a long career building visual effects for movies, for television, for all sorts of different environments.

So special effects are certainly computer-manipulated imagery, and they often use machine learning and deep learning techniques to achieve certain effects. There might be things like realistically animated water or snow or rain, for example, or a landscape generator, so that you don't have to find all of that in real life. But deepfakes make use of the vast amounts of visual and video data available as inputs into a learning model that goes on to generate new outputs.

And this is most often combining features of one dataset with those of another. So this means that the opportunity for creativity really comes from the vast availability of video data on the Internet; that's probably the big difference.

Jenny: Right, and how can we recognize a deepfake when we see one?

Anthony: Look, it's very tricky, this is a big issue. Australia's eSafety Office and others provide advice about what to look for in deepfake images; they sometimes refer to things like the slight blurring of images as they move in different ways, or little glitches, or little differences in a synthetic voice, for example. But the issue is that with higher-production deepfakes it's not that easy, so at the moment it's really about context and awareness; they're the key things. And the talk surrounding deepfakes is really crucial to building that understanding of where they might appear and when a video is a deepfake.

Jenny: If it's difficult for the human eye to detect a deepfake, can we use AI to detect them?

Anthony: Yeah, that's a good question. So to come back to the technical bit with GANs, or generative adversarial networks: the way that they work, essentially, is that a generator network creates new data instances, usually based on training data or parameters that have been set by the person creating the model. It might use a whole heap of Tom Cruise faces, for example, to train on. At the same time, a discriminator network evaluates these new instances and tries to identify anomalies against the training data, to assess whether the outputs actually look like Tom Cruise or not.

So the whole process pushes the generator to create more accurate outputs to fool the discriminator, and when you try to build a detection system, you're essentially competing against the same processes, which makes it really difficult to do. Yeah, there's a bit of what's sometimes called an arms race in terms of detecting deepfakes and producing better ones.
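To make that generator-versus-discriminator dynamic concrete, here is a heavily simplified sketch of a GAN training loop. The tiny fully connected networks and the random batch standing in for real images are assumptions made for illustration; real deepfake systems use large convolutional models trained on huge face and video datasets:

import torch
import torch.nn as nn

# Toy generator and discriminator; real systems use deep convolutional nets.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))  # noise -> fake sample
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> real/fake score

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
real_batch = torch.randn(8, 32)  # stand-in for a batch of real training images

for step in range(100):
    # 1. The discriminator learns to separate real samples from generated ones.
    fake_batch = G(torch.randn(8, 16)).detach()
    d_loss = (loss_fn(D(real_batch), torch.ones(8, 1))
              + loss_fn(D(fake_batch), torch.zeros(8, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. The generator learns to fool the discriminator into scoring fakes as real.
    fake_batch = G(torch.randn(8, 16))
    g_loss = loss_fn(D(fake_batch), torch.ones(8, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

The loop also shows why detection is so hard: a deepfake detector is essentially another discriminator, facing a generator that has already been trained to beat one.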

Jenny: There's been coverage of concerns about deepfakes and the ways they might be used. Can you give us an overview of some of the obvious, and perhaps more subtle, harms or potentials of deepfakes?

Anthony: Yeah, so deepfakes essentially had their origins in amateur porn production cultures; they spread initially through some now-banned subreddits on Reddit and through Pornhub. So, like many emerging technologies, use in porn has driven the initial innovation to some extent, and where this is nonconsensual, it's deeply harmful and falls under the harms of image-based abuse.

There are some other issues around identity theft in particular, and it's another reason that our personal data, our images and our voices are vulnerable and need greater protection. So we've sort of broadened out from those initial harms around image-based abuse to harms that sit more within the broader sphere of misinformation and disinformation, and those kinds of processes that are disrupting our trust in media, media systems and media institutions as well.

 Jenny: Are there any regulations around deep fakes and their potential harms?

Anthony: Not really, no. If you think about the vast amount of manipulated media out there, there's no specific regulation that's been created to address manipulations of media through deepfake production. What we do have is an emerging set of regulations around image-based abuse.

So it's really about the abuse, the harms and the nonconsensual arrangement that takes place there, and some of our colleagues in the Centre are looking at that really broadly. I think this is an emerging area. What has happened is that some of the platforms have adopted different policies around restricting access to deepfakes. Facebook, for example, has created policy around essentially banning deepfakes, with some caveats around parody and those kinds of uses.

Jenny: So there's a kind of self-imposed responsibility in terms of managing or mitigating the harms of deepfakes?

Anthony: Yeah, absolutely. I think there's a sense that there's a bit of a responsibility for creators as well as platforms, and some responsibility for the larger audiences of those platforms, whether it's YouTube or Reddit as well.

Jenny: And you’ve been doing research in this space for some time. Can you tell us about what research you are currently doing in this area? 

Anthony: Yeah, sure. As part of the work that we're doing in the ADM+S Centre, my team is focusing on the new kinds of learning and literacy that are needed to understand and live with AI and automated or algorithmic systems.

These technologies are fast becoming part of our everyday life, so we need new ways of learning. The way that I see it is learning with AI, and my hunch is that we can make far better use of informal learning environments and social learning environments in order to do this.

So this knowledge and learning has to be shared. It can't be left to a handful of specialists and academics, and it especially can't be left to the big tech corporations. That's the key for us. So we're looking at a whole heap of different issues around building capability in the community, building learning, and really understanding the kinds of AI and data literacies that we need to develop.

Jenny: And you've been observing GitHub repositories and YouTube accounts that teach how to deepfake. What sort of things have you observed through analyzing these?

Anthony: Look, to go back a little bit: I remember in my undergraduate days studying the emergence of radio and television as new communication technologies. The different forms of radio persisted, right? There were homemade transistor setups, there was two-way radio allowing anyone to create a network all over the world, long-range radio, and then AM and FM radio. Later on, with TV, that two-way setup was almost possible early on, before the systems were institutionalized and locked down with licensing requirements and heavy regulation. And so what I've seen with deepfakes is that it's one of those areas of AI and machine learning and deep learning where we've seen some really interesting, wide-reaching experimentation.

This experimentation hasn't come out of nowhere. Clearly there's been plenty of generative AI art, which you might have seen before the emergence of deepfakes, and it has included people building code interfaces for mashing, say, images of flowers with self-portraits to mix the images in interesting ways. But that used to be kind of static. So I saw the growth of a huge network of people engaging in this kind of thing, and when deepfakes came along, or when that initial code was produced and made available through GitHub and through Reddit, there was a huge explosion of interest. There were channels set up on YouTube to show outputs, there were Discord server groups talking about how to do this. It was really interesting just in that sense of the community building around this kind of work. So I was initially following the harms, as we've talked about, but this broadened out pretty quickly.

 Jenny: So what’s the most interesting or surprising thing you found?

Anthony: Well, look, from the start I was surprised about how lateral and educational many of the GitHub repositories were. For listeners who don't know, a GitHub repository is just a place to store code that can be shared openly, so that other people can take an instance of it, build on it, and then share it themselves. It's a way of building a community around software development, but you also get a whole heap of other supporting material, like readme files which explain what's going on and talk about the context of the software development or the thing that people are working on.

So these repositories, as they're called, sort of started with deepfake code and those generative models and training datasets, but often open out to explain a huge amount more about AI techniques and deep learning models and their processes, but also the ethics and the aspects of responsibility and accountability that surround them, so I was really interested in that side as well. This is not to say that that necessarily flows through, but the fact that it was there in the first place, and people were having these conversations or posting this kind of material, was really important and interesting, I thought. I was also amazed at some of the communication taking place around these; some of the great communicators in the developer world are amazing to watch. These are essentially software developers, or people who understand code and data science, who have fantastic ways of explaining how it's done, how the models work and how to produce them.

 Jenny: So, what lessons can we take from these observations? 

Anthony: Yeah, look, we need to use this kind of enthusiasm for social learning. First of all, we can widen our knowledge and understanding about AI systems, their outputs and their outcomes using these kinds of examples. Some of our colleagues in New York talk about this as 'objects to think with'. If you take a deepfake as an object to think with, you can really unpack it in a learning environment and try to understand how it's put together: what training data is used, what dataset it's built on, how we can understand the kind of model used and then the outputs in that context, and how it actually disrupts the media environment or the media ecosystem through all of those processes.

But the other side of things is that I think we can look at these spaces as places to intervene as it's happening, to embed that kind of explainability, accountability and ethics into AI processes and into processes of learning AI. If that's there from the start, I think we can achieve a lot through these kinds of objects to think with. And then more broadly, I think the more familiar we are with media manipulation, the more we have those kinds of schemas, the preconceived understandings that what we're looking at can be altered, and then we're better equipped, I think, as a society to address any harms and misuse; they can be found and spotted and flagged much more easily.

As I said, we're not really used to doing this with moving images and video and faces and bodies; we tend to see them as truthful, as an instance of a true person, but I think we can get there the more that we talk about what's happening with deepfakes. Finally, I'd also like to emphasize that these sorts of social learning responses still need to be combined with regulation, regulation within digital platforms, but also that kind of technical oversight that's about breaking down the way that deepfakes are built and created and produced and circulated.

 Jenny: So one last question then, are there any existing or potential good uses of deepfakes?

Anthony: Yeah, it has been really interesting to see some of the new uses that have been emerging. Just recently, for example, we've seen Kendrick Lamar start using deepfakes in some of his music videos, and it's a really powerful way for him, in that instance, to explore issues of race and oppression and power throughout history, by combining the words of his lyrics, his ideas, his music, with images of great African American people of the past as they merge and meld with his face, essentially using deepfake technology to present his lyrics.

So I think while those harms are always going to be there, this is a kind of technology that opens a lot of doors to creativity and innovation in media production. If you think about deepfakes as just one of a number of automated and synthetic media types that are emerging at the moment, we're going to see a huge amount of creativity and action in this space. This includes language-based generative technology as well: anyone who creates content for the Internet using AI to help with that process, to assist it in some way, just by putting in a few indicators, themes, parameters and letting the AI produce material. It's similar with images; some of those big image generation models we're seeing will be put to use all over the place very quickly.

So there's actually a lot of work to do for us here, as media and communications scholars and researchers, and as part of our Centre, in trying to keep pace with those developments and trying to understand our media and communication ecosystem, because how this is going to change is going to be massive.

Jenny: Indeed, thank you for joining us and talking to us today. I really appreciate how, in your work, you're shifting the dialogue around deepfakery away from alarm and towards using this space to think more about how we might build inclusive and accountable AI, but also about the kinds of cultures of social learning and knowledge building that are occurring in these spaces.

Anthony: Thanks Jenny, it's been a great talk.

 Jenny: So you’ve been listening to a podcast from the ARC Centre of Excellence for Automated Decision-Making and Society. For more information on the Centre, go to admscenter.org.au
