X restricts Grok AI amid deepfake legal pressure
X has announced it will geoblock its AI tool Grok's ability to edit images of real people into revealing clothing in jurisdictions where such edits are illegal. The move follows investigations by California and the UK into the spread of sexualized AI deepfakes, including those of children.
The move was hailed by the UK government as a vindication of its demands for platform accountability.
A spokesperson for the UK regulator Ofcom called it a “welcome development,” though the watchdog's investigation into potential legal breaches remains active.
Elon Musk defended the platform earlier this week, asserting that critics “want to suppress free speech,” and shared AI-generated images of UK Prime Minister Keir Starmer in a bikini. He stated that Grok’s “not safe for work” settings are designed to align with regional laws, allowing only fictional adult nudity comparable to R-rated films in the U.S.
Campaigners welcomed the change but stressed that it comes too late for many victims. “The damage has already been done,” said Professor Clare McGlynn, while advocates called for proactive measures against evolving AI harms rather than reactive fixes.
Questions remain about how effectively X can enforce these geographic restrictions.
