The Cyber Helpline

Taylor Swift Deepfakes & the misuse of AI. We need to talk.

In the last few days, news broke that explicit deepfake images of Taylor Swift were being spread across the internet, including on mainstream platforms. The Guardian reported that one X (Twitter) post had at least 47 million views before the content was taken down. This has sparked media and public outrage, leading to a bill being introduced in the US and the inclusion of deepfakes in the UK's Online Safety Act, both of which would criminalise the spread of non-consensual, sexualised images generated by AI.


The Cyber Helpline’s Statement:

We are saddened to hear about what has happened to Taylor Swift over the last few days. No one should have to suffer the consequences of technology being used to objectify and harm them in this way. Regrettably, the use of AI as a tool for abuse is nothing new; wherever technology goes, perpetrators will find a way to exploit it for harm. But silencing women by blocking all their content on social media platforms (as X has done by blocking the hashtag #TaylorSwift) is not the answer. This just perpetuates the underlying societal problem of misogyny, silencing the victim instead of addressing the abusive behaviour.

We need to welcome conversations with high-profile platforms about the steps they could take to better protect women because, simply put, there are measures that can be put in place to make the internet a more empowering and safer space for everyone.

At The Cyber Helpline, we stand behind harnessing AI for good. Every day, we continue supporting people who have experienced these types of online crimes, enabling them to rebuild their lives through the positive use of technology, as well as supporting investigations that bring the perpetrators of harm before the justice system. That's why we have launched the Global Online Harms Alliance, a network of organisations that work together to mitigate this type of harm globally. Ultimately, these crimes do not have borders, and our approach to resolving them needs to reflect that.

We also welcome initiatives such as StopNCII, which empower people who have experienced this type of abuse to take back control by giving them a way to prevent these images from being shared. But for this to work, big tech and social media platforms need to get on board with such initiatives and embrace the use of technology to prevent this harm.

New Legislative Measures: As of 31 January 2024, deepfakes fall within the legal framework of the Sexual Offences Act 2003, through amendments introduced by the Online Safety Act. This legislative development underscores the United Kingdom's proactive stance in addressing the potential harms associated with deepfakes, positioning the country at the forefront of global efforts to regulate this emerging threat.

The incidents involving Taylor Swift have underscored the urgent need for collaborative efforts to address the misuse of technology, particularly AI, to cause harm. Ultimately, a collective and global commitment is necessary to harness technology for the better and protect individuals from online abuse.

Our team is here to provide the guidance you need to navigate the digital landscape. If you have experienced cybercrime or online harm, visit The Cyber Helpline for further support.


Author: Kathryn Goldby

Access our guides for dealing with cybercrime: