
Ofcom’s investigation into Elon Musk’s platform X reveals a disturbing trend of AI-generated sexualized content, raising alarms over user safety and platform accountability.
Story Highlights
- UK’s Ofcom launches probe into Musk’s X over AI-generated deepfakes.
- X’s Grok AI tool enables the creation and sharing of sexualized images.
- Governments criticize X’s paywall as an inadequate response.
- International backlash prompts regulatory scrutiny and potential fines.
Ofcom’s Investigation into X
The United Kingdom’s media watchdog, Ofcom, has initiated a formal investigation into Elon Musk’s platform X. This move comes after reports surfaced that X’s AI tool, Grok, was creating and distributing sexualized deepfake images of women and children. This development has raised significant concerns regarding the platform’s compliance with the UK’s Online Safety Act, which came into effect in July 2025.
Grok, developed by X’s parent company xAI, has been controversial since its introduction in 2025. The AI’s ability to modify and share real people’s images with ease has sparked outrage, especially given its potential to generate nonconsensual content. This has led to international actions, including blocks by Indonesia and Malaysia, emphasizing the need for stricter controls.
Regulatory and Government Reactions
In response to the situation, various governments, including the UK, have voiced their disapproval of X’s recent measures. The UK government and other critics have described X’s decision to restrict Grok’s capabilities to paying subscribers as an “insulting” response that fails to address the core issue of protecting users from harmful content.
Ofcom has highlighted the gravity of the situation, noting that the AI’s outputs may involve child sexual abuse material, which is universally condemned. The investigation is ongoing, and the potential for significant fines looms, with Ofcom empowered to impose penalties of up to 10% of global revenue for non-compliance.
International Implications and Future Outlook
This case has broader implications for AI regulation globally. The scrutiny over X’s handling of Grok underscores the urgent need for comprehensive guardrails in AI applications, particularly those involving image editing and sharing. As governments unify against the unchecked proliferation of harmful AI-generated content, tech companies may face increased pressure to implement effective safeguards.
Looking ahead, the investigation’s outcome could set a precedent for how AI tools are regulated worldwide. The scrutiny on Grok is a reminder of the crucial balance between technological innovation and ethical responsibility. As the world grapples with these challenges, the call for stronger regulations and accountability in AI development is likely to grow louder.
Sources:
UK Media Regulator Opens Investigation into X’s AI over Sexualized Image Generation
UK Investigates Musk’s X Over Grok Deepfake Concerns
Elon Musk’s X Faces Bans and Investigations Over Nonconsensual Bikini Images
Tracking Regulator Responses to the Grok Undressing Controversy

















