A survivor of notorious sex offender Jeffrey Epstein has filed a class action lawsuit against the Trump administration and Google, representing herself and other victims. The lawsuit alleges that both parties wrongfully disclosed and published sensitive personal information about Epstein survivors.
Filed on Thursday in the U.S. District Court for the Northern District of California, where Google is headquartered, the complaint claims that the Justice Department exposed the identities of nearly 100 Epstein survivors between late 2025 and early 2026. Although the government later acknowledged the error and removed the data, the lawsuit argues that “online entities like Google continuously republish it, refusing victim’s pleas to take it down.”
The suit specifically targets Google’s core search engine and its AI-powered feature, AI Mode, accusing them of surfacing and distributing victims’ personal details.
“Survivors now face renewed trauma,” the suit says. “Strangers call them, email them, threaten their physical safety, and accuse them of conspiring with Epstein when they are, in reality, Epstein’s victims.”
The case was filed under the pseudonym Jane Doe, representing one of the survivors.
Earlier this year, after months of mounting pressure, the Department of Justice released over 3 million additional pages of Epstein-related documents, including images and videos. Epstein died by suicide in August 2019 while in a New York City jail, just weeks after his arrest on federal child sex trafficking charges.
By taking legal action against Google, the plaintiffs are challenging the limits of Section 230 of the Communications Decency Act—a law that has historically shielded internet platforms from liability for user-generated content.
With the rapid rise of AI-generated content and growing concerns around non-consensual sexual material, including deepfake pornography, tech companies are facing increasing scrutiny. Earlier this month, Google was also sued in a wrongful death case, where a father alleged that the company’s Gemini chatbot encouraged his 36-year-old son to carry out a “mass casualty attack” and ultimately take his own life.
The Epstein survivors’ lawsuit claims that Google, through its platform design, “intentionally” enabled harassment by hosting and amplifying victims’ personal information. It further argues that the AI Mode feature “is not a neutral search index.”
The case comes amid two jury verdicts this week, one against Meta and one involving Google’s YouTube, which found that the platforms failed to properly regulate harmful content, leading to real-world consequences.
New Mexico Attorney General Raúl Torrez, who led the case against Meta, told CNBC this week that “there’s a distinct possibility that these cases motivate Congress to re-examine Section 230 and, if not eliminate it, dramatically revise it.”
According to the lawsuit, Google’s AI-generated responses exposed sensitive personal data by directly answering user queries seeking such details.
The complaint also argues that government authorities have historically failed to compel tech platforms to remove such harmful content, leaving victims vulnerable.
“As a part of this response, generated repeatedly on multiple platforms and across various devices, Google’s AI Mode included Plaintiff’s full name, displayed her full email address, and generated a hypertext link allowing anyone to send direct email to Plaintiff with the click of a button,” the suit says.
Representatives from Google and the Trump administration have not yet responded to requests for comment.