India has given Elon Musk’s X a 72-hour ultimatum to act against obscene, AI-altered images of women allegedly generated by its chatbot, Grok.
Quick Summary – TLDR:
- India’s IT Ministry issued a notice to X over Grok AI generating explicit images of women and minors.
- X must submit an action taken report within 72 hours or risk losing legal immunity.
- Lawmaker Priyanka Chaturvedi’s complaint triggered the government’s response.
- Non-compliance may lead to severe legal action under multiple Indian laws.
What Happened?
The Indian government has directed X, formerly known as Twitter, to take immediate corrective steps after reports emerged that users were misusing Grok AI to create sexually explicit and degrading images, mainly of women and minors. The Ministry of Electronics and Information Technology (MeitY) has warned that failure to comply could result in legal consequences, including the loss of X’s safe harbor protections.
India’s MeitY minister asks X to immediately review its Grok AI chatbot & submit an Action Taken Report within 72 hours over misuse to generate explicit content.
— Sidhant Sibal (@sidhant) January 2, 2026
Government Turns Up the Heat Over AI Misuse
The Ministry issued a stern notice to X Corp on Friday, citing serious lapses in the company’s compliance with the Information Technology Act, 2000, and IT Rules, 2021. The issue came to light after Member of Parliament Priyanka Chaturvedi raised a complaint with IT Minister Ashwini Vaishnaw, flagging how users were exploiting Grok to generate obscene content by altering women’s photos.
Key directives from the Indian government include:
- A 72-hour deadline for X to submit an action taken report (ATR).
- Immediate removal or disabling of any content violating Indian laws.
- A complete review of Grok’s technical, procedural, and governance safeguards.
- Strict enforcement of platform policies and possible account suspensions.
The ministry highlighted that such AI-enabled features were being used to depict women in minimal clothing and in sexually suggestive poses, based on photos users uploaded to the platform. Some of the altered content also reportedly involved minors, prompting urgent government intervention.
Legal Risks for Musk’s Platform
The notice clearly stated that non-compliance could lead to loss of protection under Section 79 of the IT Act, which shields platforms from liability for user-generated content. It may also trigger prosecution under:
- Bharatiya Nyaya Sanhita (BNS).
- Bharatiya Nagarik Suraksha Sanhita (BNSS).
- Indecent Representation of Women (Prohibition) Act.
- Protection of Children from Sexual Offences (POCSO) Act.
MeitY also raised concerns about the role and responsibility of X’s chief compliance officer, questioning whether the officer’s legal obligations under the BNSS, 2023 had been met. Copies of the notice were circulated to various ministries and state authorities, hinting at a coordinated crackdown on AI-generated obscenity.
Rising Political and Legal Tensions for X
This controversy adds to a growing list of regulatory clashes between X and the Indian government. The platform is already challenging certain content takedown rules in court, arguing that they amount to government overreach. Yet this case shows that compliance expectations are tightening, especially as AI tools like Grok enter mainstream use.
Grok has been promoted as a tool for real-time fact-checking and user interaction, but the lack of adequate safeguards has now put X in the crosshairs of Indian regulators.
SQ Magazine Takeaway
I think this case is a wake-up call for any company working with generative AI. When your tools can produce content that violates people’s privacy and dignity, you have to build strong guardrails, not just launch cool features. India’s response shows how seriously governments are beginning to treat AI misuse, especially when it harms women or minors. If X doesn’t clean up Grok fast, this case could set a precedent for how AI regulation unfolds globally, and not in a way tech companies would like.
