On January 13, 2026, eight United States Senators sent a letter to Alphabet, Meta, Reddit, Snap, TikTok, and X stating that they "are alarmed by reports of users exploiting generative AI tools to produce sexualized 'bikini' or 'non-nude' images of individuals without their consent and distributing them on platforms including X and others." The senators requested that the companies provide information and documents relating to their policies on deepfakes, non-consensual intimate imagery, and non-nude manipulations; their governance of AI tools related to "sexually suggestive or intimate content"; and their preventative measures to identify and block non-consensual deepfakes. The senators noted that fake images generated and shared without the knowledge or consent of the individuals depicted raise "serious concerns about harassment, privacy violations, and user safety."
The letter was prompted by a Wired Magazine article that “described users taking photos of fully clothed women and using AI chatbots to ‘undress’ them into bikini-clad deepfakes, including by exchanging tips to bypass content filters.”
The letter emphasizes that in late December 2025, "X was filled with requests for Grok, its AI platform, to create non-consensual bikini photos based on users' uploaded images." As a result, xAI, the company behind Grok, has been sued over sexually exploitative deepfake images that were generated without consent.
The California Attorney General has launched an investigation into xAI over non-consensual deepfake pornography, including parties that "facilitate its distribution," and has issued a cease-and-desist letter to xAI over the creation and distribution of non-consensual images created by Grok. If you believe you have been a victim of a bikini or non-nude deepfake, contact your state Attorney General's office.