Deepfakes on Twitter: Which Actors Control Their Spread?
Date: 2021-03-03
Media and Communication 9(1): 301-312 (2021)
Abstract
The term deepfake was first used in a Reddit post in 2017 to refer to videos manipulated using artificial intelligence techniques, and since then it has become increasingly easy to create such fake videos. An investigation by the cybersecurity company Deeptrace in September 2019 indicated that the number of these fake videos had doubled in the previous nine months and that most were pornographic videos used as revenge to harm women. The report also highlighted the potential of this technology to be used in political campaigns, as in Gabon and Malaysia. The deepfake phenomenon has therefore become a concern for governments, because it poses a short-term threat not only to politics but also in the form of fraud and cyberbullying. The starting point of this research was Twitter's announcement of a change in its protocols to fight fake news and deepfakes. We used the Social Network Analysis technique, with visualization as a key component, to analyze the conversation on Twitter about the deepfake phenomenon. NodeXL was used to identify the main actors and the network of connections between their accounts. In addition, the semantic networks of the tweets were analyzed to uncover hidden patterns of meaning. The results show that half of the actors who function as bridges in the interactions that shape the network are journalists and media outlets, a sign of the concern that this sophisticated form of manipulation generates among these professionals.
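The two analytical steps described above (identifying bridge actors and building semantic networks) can be illustrated with a minimal sketch. The study itself was carried out in NodeXL; the Python/networkx code below is only a hypothetical equivalent, and the account handles and tweet texts are invented placeholders, not data from the paper.

```python
# Minimal sketch of the two analysis steps, assuming a networkx-based
# reimplementation of what the paper did in NodeXL. All data is illustrative.
from itertools import combinations
from collections import Counter
import networkx as nx

# Hypothetical interaction edges (retweets, replies, mentions) in the
# Twitter conversation about deepfakes.
edges = [
    ("@journalist_a", "@media_outlet"),
    ("@media_outlet", "@researcher_b"),
    ("@user_c", "@journalist_a"),
    ("@user_d", "@media_outlet"),
]
G = nx.DiGraph()
G.add_edges_from(edges)

# Betweenness centrality flags "bridge" accounts that sit on many shortest
# paths between otherwise separate parts of the conversation network.
bridges = nx.betweenness_centrality(G)
for account, score in sorted(bridges.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{account}: betweenness = {score:.3f}")

# Semantic network: link words that co-occur within the same tweet, so that
# frequently repeated word pairs reveal latent themes in the conversation.
tweets = [
    "deepfake videos threaten political campaigns",
    "new deepfake detection tools for political videos",
]
cooccurrence = Counter()
for text in tweets:
    words = set(text.lower().split())
    cooccurrence.update(combinations(sorted(words), 2))

S = nx.Graph()
for (w1, w2), weight in cooccurrence.items():
    S.add_edge(w1, w2, weight=weight)
print(S.number_of_nodes(), "terms,", S.number_of_edges(), "co-occurrence links")
```

In this sketch, ranking accounts by betweenness corresponds to the paper's identification of journalists and media as bridges, while the weighted word co-occurrence graph stands in for the semantic-network analysis of the tweets.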