French ministers referred X (formerly Twitter) to prosecutors following reports that Grok AI, the platform's generative AI tool, created sexualized deepfake images of women and minors. The government action reflects escalating European concerns about AI-generated inappropriate content and platform accountability for generative AI tools integrated into social media services.

Grok AI Content Generation Concerns

Reports emerged that users exploited Grok AI's image generation capabilities to create deepfake images depicting real individuals in sexualized or inappropriate contexts without their consent. French authorities expressed particular alarm regarding images involving minors, which raise serious child protection concerns alongside broader privacy and dignity violations.

The incident demonstrates the challenges platforms face when they deploy powerful generative AI tools to millions of users without comprehensive content filtering or usage restrictions. X implemented Grok AI as a premium feature differentiating its paid subscription tiers, but its safeguards appear inadequate to prevent malicious use.

Similar concerns have emerged in other jurisdictions, with Brazilian musicians discovering AI-altered, inappropriate images of themselves circulating after users ran their photos through Grok's generation tools. The pattern suggests systemic rather than isolated problems with the tool's deployment.

French Regulatory Response

France's prosecutor referral represents escalation beyond preliminary investigations or warnings, indicating government willingness to pursue criminal charges if evidence supports violations of laws protecting minors, privacy rights, or dignity. French law provides substantial protections against non-consensual intimate imagery and child exploitation materials.

The action aligns with broader European Union regulatory approaches emphasizing platform accountability for content enabled by their systems. The EU's Digital Services Act and AI Act both address generative content risks, though implementation timelines vary across member states.

Meta previously paused the international rollout of its Ray-Ban Display smart glasses in France, the UK, Italy, and Canada due to demand constraints, but French officials have separately scrutinized AI-related privacy and content concerns across technology platforms.

Cross-Border Coordination Questions

Officials from Ireland, South Korea, and Canada argued at CES 2026 that shared rules are essential as AI scales across borders. France's unilateral action against X illustrates the tensions that arise when individual nations pursue enforcement without comprehensive international coordination.

X's global operations complicate jurisdictional questions about which nations can enforce regulations against a platform headquartered elsewhere. However, EU regulations increasingly assert extraterritorial reach when European residents are affected, establishing precedents for aggressive enforcement even against companies based outside Europe.

Source: Gizmodo