The European Union has launched a formal investigation into Elon Musk’s platform X, focusing on its AI chatbot Grok, which has allegedly been used to create sexualised deepfake images of women and children.
Quick Summary – TLDR:
- The EU is investigating X over Grok’s generation of explicit deepfake images.
- Allegations include image edits using prompts like “remove her clothes.”
- Officials cite risks to women and children and violations of EU law.
- X introduced new restrictions but remains under scrutiny.
What Happened?
The European Commission opened a formal probe into X, formerly known as Twitter, after Grok, its AI chatbot, was found to be generating non-consensual sexually explicit images. The investigation aims to determine whether the company violated the Digital Services Act (DSA), a European law that holds tech firms accountable for illegal and harmful online content.
“Commission investigates Grok and X’s recommender systems under the #DSA. The new investigation will assess risks related to the dissemination of manipulated sexually explicit images, including content that may involve child sexual abuse material.”
— Digital EU 🇪🇺 (@DigitalEU) January 26, 2026
EU Concern Over Grok’s Capabilities
According to the European Commission, Grok has enabled users to digitally manipulate images of women and children, replacing their clothing with revealing outfits such as bikinis or removing it entirely through simple prompts. These capabilities triggered widespread backlash, raising concerns about privacy, consent, and potential child exploitation.
European Commission President Ursula von der Leyen issued a stern statement, emphasizing that consent and child protection cannot be sacrificed for platform engagement or monetization.
Henna Virkkunen, EU Digital Commissioner, added, “The rights of women and children should not be collateral damage.” She called deepfake images a “violent, unacceptable form of degradation.”
Violations Under Digital Services Act
The investigation will assess whether X fulfilled its duties under the DSA, especially regarding risk assessment, mitigation, and content moderation. These rules require platforms to actively prevent the spread of illegal or harmful content, including material that may amount to child sexual abuse imagery.
A statement from the Commission explained, “These risks seem to have materialised, exposing citizens in the EU to serious harm.” The investigation will explore whether X failed to manage these risks responsibly and whether Grok’s rollout lacked proper safeguards.
Regina Doherty, an Irish Member of the European Parliament, also weighed in, stating, “No company operating in the EU is above the law.” She argued that this case highlights broader challenges in regulating emerging AI tools and the urgent need for enforcement that matches the scale and speed of AI deployments.
Grok’s Restrictions and Ongoing Scrutiny
In response to growing criticism, X restricted Grok’s image editing capabilities in mid-January. The platform announced that Grok would no longer allow real people’s images to be edited to show them in revealing clothing, and it limited image generation to paid subscribers.
Despite these changes, the EU investigation remains active. Officials stress that opening a probe does not prejudge its outcome, but the move indicates the seriousness with which regulators are treating the issue.
This case also adds to the EU’s broader scrutiny of X. In December 2023, the platform was fined €120 million for breaching transparency rules, including misleading verification practices and lack of cooperation with researchers.
SQ Magazine Takeaway
Honestly, this was bound to happen. AI tools like Grok can do amazing things, but when powerful technology is deployed without tight safeguards, it can quickly cross the line from fun to harmful. Letting users virtually undress real people, especially children, is not just irresponsible, it’s dangerous. I think the EU is absolutely right to step in hard here. If tech companies want to play in the EU’s digital space, they need to follow the rules. And let’s be real. Basic human dignity and child protection shouldn’t even be negotiable.