Grok AI’s “Spicy” Mode Creates Explicit Deepfakes Without Prompting, Raising Safety Concerns

Elon Musk’s AI video generation tool has come under fire for automatically creating sexually explicit content of celebrities without being explicitly prompted to do so, highlighting growing concerns about AI safety and consent.


The Problem with “Spicy” Mode

According to a detailed investigation by The Verge, Grok Imagine’s new “spicy” mode has been generating fully explicit video content of public figures, including Taylor Swift, without users specifically requesting such material. The investigation revealed that simply selecting the “spicy” animation option for otherwise innocent prompts would result in explicit content being generated automatically.

Clare McGlynn, a law professor at Durham University who specialises in online abuse, described this as a systematic issue rather than an accident. “This is not misogyny by accident, it is by design,” McGlynn said, emphasising that the technology reflects deliberate choices in its programming.

How the Testing Revealed the Issue

During testing by The Verge journalist Jess Weatherbed, a seemingly innocent prompt about Taylor Swift at Coachella was entered into the system. While the initially generated images were appropriate, selecting the “spicy” animation mode immediately produced explicit content without any additional prompting.

“It was shocking how fast I was just met with it – I in no way asked it to remove her clothing, all I did was select the ‘spicy’ option,”

Weatherbed explained to BBC News.

Similar results were reported by Gizmodo, which found comparable explicit content generation involving other female celebrities, though some searches did return moderated or blurred results.

Age Verification Concerns

The investigation also revealed inadequate age verification systems. Despite UK laws that took effect in July 2025 requiring robust age verification for platforms displaying explicit content, Grok Imagine only requested a date of birth without implementing more stringent verification methods.

According to media regulator Ofcom, platforms with generative AI tools capable of creating pornographic material fall under the new Online Safety Act rules. The regulator said it is “working to ensure platforms put appropriate safeguards in place to mitigate these risks,” particularly regarding children’s safety.

Legal and Policy Context

This incident occurs against the backdrop of evolving legislation around deepfake technology. Currently, generating pornographic deepfakes is illegal in the UK when used for revenge porn or when depicting minors. However, Professor McGlynn has helped draft broader amendments that would make all non-consensual pornographic deepfakes illegal.

Baroness Owen, who proposed related amendments in the House of Lords, emphasised the importance of consent:

“Every woman should have the right to choose who owns intimate images of her… whether she be a celebrity or not.”

The UK government has committed to implementing these amendments, with a Ministry of Justice spokesperson stating:

“Sexually explicit deepfakes created without consent are degrading and harmful. We refuse to tolerate the violence against women and girls that stains our society.”

Previous Incidents and Platform Response

This isn’t the first time Taylor Swift has been targeted by deepfake technology. In January 2024, sexually explicit deepfakes using her likeness went viral on X (formerly Twitter) and Telegram, accumulating millions of views. At that time, X temporarily blocked searches for Swift’s name and stated they were actively removing the content.

Notably, xAI’s own acceptable use policy explicitly prohibits “depicting likenesses of persons in a pornographic manner,” raising questions about the company’s enforcement of its own rules.

The Broader Implications

Professor McGlynn argues that this incident represents a broader pattern in AI development:

“That this content is produced without prompting demonstrates the misogynistic bias of much AI technology. Platforms like X could have prevented this if they had chosen to, but they have made a deliberate choice not to.”

The incident highlights the ongoing challenges in AI governance, particularly around consent, safety measures, and the protection of individuals’ likenesses in an era of increasingly sophisticated generative technology.

Bottom Line: As AI generation tools become more powerful and accessible, the need for robust safeguards, proper age verification, and respect for consent becomes increasingly critical. This incident serves as a stark reminder that technical capability must be balanced with ethical responsibility and legal compliance.

Sources:

The Verge – Original investigation into Grok Imagine’s “spicy” mode
BBC News – Reporting and expert interviews
Gizmodo – Additional testing and verification
Deadline – Independent verification testing
Ofcom – Regulatory statements
Durham University – Expert commentary from Professor Clare McGlynn
UK Ministry of Justice – Official statements on legislation
