Making sense of deepfakes
Author Kathy Nickels
Date 11 May 2022
AI-generated deepfakes are becoming more common and harder to spot. The technology can produce convincing footage of almost any person doing anything, anywhere.
Deepfakes are a type of synthetic media created by replacing a person’s face and voice in digital video. You may have seen the highly convincing Tom Cruise deepfakes that went viral on TikTok in 2021.
These videos are made possible by advances in deep learning and the availability of extensive video datasets, which are used to “train” generative models that then produce synthesized outputs.
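As a rough illustration of the underlying technique, many early face-swap tools paired a single shared encoder with one small decoder per identity. The sketch below (in PyTorch, with hypothetical names and toy dimensions, not any specific tool’s implementation) shows the idea: encoding a frame of one person and decoding it with the other person’s decoder is what produces the “swap”.

```python
# Minimal sketch of the shared-encoder / dual-decoder idea behind many
# face-swap deepfakes. Illustrative only: real pipelines add face detection
# and alignment, adversarial losses, and far larger networks.
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    """One encoder shared by both identities, one decoder per identity."""
    def __init__(self):
        super().__init__()
        # The shared encoder learns identity-agnostic features
        # (pose, expression, lighting).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Each decoder learns to reconstruct one person's face.
        self.decoder_a = self._make_decoder()
        self.decoder_b = self._make_decoder()

    @staticmethod
    def _make_decoder() -> nn.Sequential:
        return nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, identity: str) -> torch.Tensor:
        z = self.encoder(x)  # shared latent code
        return self.decoder_a(z) if identity == "a" else self.decoder_b(z)

model = FaceSwapAutoencoder()
frames_b = torch.rand(8, 3, 64, 64)  # stand-in for cropped frames of person B
# During training, frames of A are reconstructed with decoder_a and frames of
# B with decoder_b. At generation time, decoding B's frames with decoder_a
# renders person A's face with person B's pose and expression.
swapped = model(frames_b, identity="a")
print(swapped.shape)  # torch.Size([8, 3, 64, 64])
```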
Deepfakes bring new types of informational harm and new possibilities for image-based abuse, especially given their historical origins in porn production cultures.
Working out how to detect, ban, regulate, or educate against the harms of deepfakes means addressing the multiple dimensions of AI and data literacies, as well as the contexts in which deepfakes are developed and deployed.
In the article “Making sense of deepfakes: Socializing AI and building data literacy on GitHub and YouTube”, ADM+S researcher Professor Anthony McCosker focuses on educational and social-learning responses, asking what kind of AI and data literacy might make a difference in addressing deepfake harms.