Summary: A backlash has erupted in the UK over the ability of Elon Musk’s Grok AI to generate image edits that effectively “undress” people. After criticism, X limited the feature so that only paying users can use it. UK ministers called the move “insulting” to victims of misogyny and sexual violence.
This isn’t a niche product controversy. It’s a preview of the next regulatory and platform governance fight: what happens when powerful generative tools make harassment cheap, scalable, and hard to trace.
What happened
From the BBC video explainer:
- Grok AI was used to create edited images that digitally undress people.
- Following backlash, X restricted Grok image editing so it’s available only to users who pay a monthly fee.
- The UK government criticised the move as “insulting” to victims of misogyny and sexual violence.
Even without every technical detail, the shape of the problem is clear: a generative tool made it easy to create abusive sexualised imagery.
Why the paywall makes people angrier, not calmer
At first glance, “limit it to paying users” sounds like a safety control.
But it sends two bad signals:
- Monetisation of harm: it looks like you’re charging for a capability widely viewed as abusive.
- Misaligned incentives: if revenue comes from the feature, the platform has less incentive to eliminate it.
It’s similar to how some spam and fraud ecosystems work: a small group is willing to pay for capabilities that most users never want.
This is part of a larger category: non-consensual intimate imagery
Digitally “undressing” people sits in the same harm family as:
- deepfake pornography
- revenge porn
- sexual harassment using synthetic media
The key element is non-consent.
The internet already struggles with this harm at human scale. Generative AI pushes it into industrial scale.
The technical issue: models don’t “understand” consent
A model can be trained to follow rules (“don’t do X”), but:
- it can be prompted around restrictions
- it can generalise in unexpected ways
- it can be fine-tuned or jailbroken
That means safety cannot rely only on “model behaviour.” It also requires:
- product design constraints
- detection and enforcement
- user identity and traceability
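The layering above can be sketched as a minimal pipeline. Everything here (`check_prompt`, `classify_output`, `handle_request`, the keyword list) is an illustrative assumption, not any real platform or model API:

```python
# Minimal sketch of defence-in-depth around an image-editing endpoint.
# All names are illustrative stand-ins, not a real API.

BLOCKED_INTENTS = {"undress", "nudify", "remove clothing"}

audit_log: list = []  # traceability layer: record who generated what


def check_prompt(prompt: str) -> bool:
    """Product-design layer: refuse known abusive requests up front.
    Keyword matching is trivially prompted around, which is exactly
    why it cannot be the only layer."""
    return not any(term in prompt.lower() for term in BLOCKED_INTENTS)


def classify_output(image: bytes) -> str:
    """Detection layer: run a classifier on the generated image.
    Stubbed here; a real system would use a trained NCII classifier."""
    return "ok"


def handle_request(user_id: str, prompt: str, generate) -> dict:
    if not check_prompt(prompt):               # layer 1: capability limit
        return {"status": "refused", "reason": "blocked capability"}
    image = generate(prompt)
    if classify_output(image) != "ok":         # layer 2: output detection
        return {"status": "blocked", "reason": "abusive output detected"}
    audit_log.append({"user": user_id, "prompt": prompt})  # layer 3: traceability
    return {"status": "ok", "image": image}
```

The point of the sketch is that refusal, detection, and traceability are independent checks: a prompt that slips past one layer can still be caught or attributed by another.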
The platform governance issue: where does responsibility sit?
When a tool enables abuse, responsibility often fragments:
- “the user did it”
- “the model just generates images”
- “we restricted it behind a paywall”
Regulators are increasingly rejecting this buck-passing.
The likely direction of policy: platforms must show they designed their systems to reduce foreseeable harms, not merely that they responded after outrage.
What effective controls could look like
If a platform wants to demonstrate seriousness, the control stack typically includes:
- Hard capability limits: don’t allow certain transformations at all (e.g., nudification).
- Strong detection: detect and block generation of non-consensual sexualised imagery.
- Watermarking and provenance: make synthetic media easier to identify and trace.
- Reporting and rapid takedown: fast user reporting tools and dedicated enforcement.
- Meaningful consequences: account penalties that deter repeat abuse.
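The last item, “meaningful consequences,” is often implemented as an escalating enforcement ladder. A minimal sketch follows; the strike thresholds and penalty names are illustrative assumptions, not any platform’s actual policy:

```python
# Sketch of an escalating enforcement ladder, checked from most to
# least severe. Thresholds and penalty names are made up for illustration.

PENALTIES = [
    (3, "account_ban"),
    (2, "feature_suspension"),
    (1, "warning"),
]


def penalty_for(confirmed_strikes: int) -> str:
    """Map a user's count of confirmed abuse incidents to a consequence."""
    for threshold, action in PENALTIES:
        if confirmed_strikes >= threshold:
            return action
    return "none"
```

The design choice that matters is that consequences compound: a user who pays the fee but abuses the feature repeatedly loses access entirely, which is what separates a deterrent from a price tag.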
A paywall is not inherently a safety measure; it’s a distribution choice.
The cultural issue: “just a joke” isn’t a defence
A common pattern in online harms:
- abusers frame it as humour
- victims experience it as violation
Generative tools amplify this dynamic by reducing effort and increasing reach.
Why this is likely to escalate in 2026
Three trends converge:
- generative tools are getting easier to use
- image editing is becoming a default platform feature
- victims’ source images are widely available online
The combination makes abuse low-friction.
Bottom line
The Grok controversy is a warning that platform safety debates are moving from content moderation (what users post) to capability moderation (what tools can easily produce).
If platforms treat abusive synthetic imagery as a paid feature to be managed rather than a harm to be eliminated, governments will step in—and not gently.