A bipartisan group of U.S. senators has launched a formal inquiry demanding that X, Meta, Alphabet, and other major tech platforms explain their policies and safeguards regarding the proliferation of non-consensual sexualized deepfakes. This legislative pressure follows escalating concerns over AI-generated imagery and the apparent failure of existing content moderation systems to protect users from malicious synthetic media.
A Growing Crisis of Non-Consensual Content
The prevalence of AI-generated abuse is well-documented. Meta’s Oversight Board previously highlighted two cases of explicit AI images targeting female public figures, and platforms have struggled with “nudify” apps advertising on their networks, leading Meta to sue a company called CrushAI. Beyond public figures, there have been multiple reports of kids spreading deepfakes of peers on Snapchat, while Telegram has gained notoriety for hosting bots designed to undress photos of women.
Corporate Responses and Evasive Action
In response to the inquiry, X pointed to its recent announcement regarding updates to Grok. Reddit took a more direct stance, with a spokesperson stating, “We do not and will not allow any non-consensual intimate media (NCIM) on Reddit. Reddit strictly prohibits NCIM, including depictions that have been faked or AI-generated. We also prohibit soliciting this content from others, sharing links to ‘nudify’ apps, or discussing how to create this content on other platforms.” Alphabet, Snap, TikTok, and Meta did not immediately respond to requests for comment.
Specific Demands from Lawmakers
The letter, signed by Senators Lisa Blunt Rochester, Tammy Baldwin, Richard Blumenthal, Kirsten Gillibrand, Mark Kelly, Ben Ray Luján, Brian Schatz, and Adam Schiff, demands comprehensive data on how these companies manage synthetic media. Key requirements include:
- Policy definitions of “deepfake” content and “non-consensual intimate imagery.”
- Enforcement approaches for non-nude alterations and “virtual undressing.”
- Internal guidance provided to human moderators.
- Technical guardrails used to prevent the generation of deepfakes.
- Mechanisms to prevent the re-uploading of identified deepfake content.
- Procedures for notifying victims of non-consensual sexual deepfakes.
The Shadow of xAI and Grok
The senators’ inquiry follows comments from Elon Musk, who claimed he was “not aware of any naked underage images generated by Grok.” Shortly thereafter, the California attorney general opened an investigation into the chatbot. While xAI has maintained that it removes illegal content, the company has yet to explain why its systems were capable of generating such harmful material in the first place.
Beyond Sexualized Imagery: The Broader AI Threat
The scope of the problem extends beyond non-consensual sexual content. OpenAI’s Sora 2 reportedly allowed users to generate explicit videos involving minors, while Google’s Nano Banana tool generated controversial political imagery. Furthermore, racist videos produced via Google’s AI video model have gained significant traction on social media.
Regulatory Challenges and Future Outlook
The regulatory challenge is compounded by global competition. Many image and video generators linked to companies like ByteDance offer advanced editing features that sidestep Western safety standards. While the U.S. enacted the “Take It Down Act” in May to criminalize the dissemination of non-consensual intimate imagery, critics argue it places too much of the burden on individual victims rather than holding platforms accountable. In response, states like New York are pursuing their own legislation, with Governor Kathy Hochul proposing laws that would mandate the labeling of AI-generated content and ban deepfakes during election cycles.
