Meta Lawsuit Targets AI App Behind Fake Nudes

Lisa Chang

I’ve just wrapped up a call with Meta’s legal team that left me more than a little disturbed. As someone who’s been covering AI ethics for the past decade, today’s development feels like a watershed moment in the battle against AI’s darkest applications.

Meta has filed a federal lawsuit against the operators of an AI-powered app that allegedly generated over 27,000 fake nude images of women and teenage girls. The complaint targets the individuals behind “BiobiAI,” accusing them of scraping Instagram photos to create and distribute these unauthorized, sexually explicit images.

“This is precisely the kind of harmful, exploitative use of our services and user content that we’re committed to fighting,” a Meta spokesperson told me when I reached out for comment. The lawsuit marks one of the first major legal actions by a tech giant specifically targeting AI-generated non-consensual intimate imagery.

What makes this case particularly troubling is how the defendants marketed their service. According to court documents I reviewed, they explicitly promoted the app’s ability to “undress anyone” and “see your classmates and friends naked.” The operators apparently charged between $9 and $19 for their services while operating through a network of Telegram channels.

The implications here extend far beyond a single app. As Eric Goldman, professor at Santa Clara University School of Law, explained when I interviewed him last month for an unrelated story on AI regulation: “We’re entering uncharted territory where technology can create hyper-realistic false content faster than our legal frameworks can adapt.”

Meta’s lawsuit alleges multiple violations, including breach of terms of service, unauthorized access to computers and services, and trademark infringement. The company is seeking damages and a permanent injunction to shut down the operation.

What struck me during my reporting was the sheer scale of the operation. Meta claims the defendants processed over 23,000 unique photos in just a few months earlier this year. I’ve been covering deepfakes since they first emerged in 2017, and the industrialization of this harmful technology is accelerating at a frightening pace.

The rise of generative AI tools has dramatically lowered the technical barriers to creating convincing fake imagery. What once required specialized expertise now needs little more than a smartphone and a subscription. While legitimate AI developers like OpenAI and Anthropic have implemented safeguards, malicious actors continue finding workarounds.

When I spoke with victims of similar AI-generated fake nudes for a piece last fall, it became clear that the psychological impact cannot be overstated. One woman described the experience as “a violation that follows you everywhere.” The trauma of having one’s likeness weaponized in this manner can lead to profound anxiety, depression, and in some cases, suicidal ideation.

Legal experts I’ve consulted note that U.S. laws remain woefully inadequate in addressing this emerging threat. While some states have passed specific legislation targeting deepfakes, federal protections remain limited and fragmented.

The BiobiAI case highlights the urgent need for comprehensive legislation addressing AI-generated intimate imagery. Several bills have been introduced in Congress, including the Preventing Deepfakes of Intimate Images Act, but progress has been slow.

For now, platform enforcement and lawsuits like Meta’s represent the front line of defense. “We’re using every tool at our disposal to hold these bad actors accountable,” Meta’s representative told me.

What’s particularly chilling about this case is how it exploits trust. The defendants allegedly used legitimate social media photos—images never intended to be sexualized—as raw material for their algorithm. This fundamentally undermines the social contract of digital spaces.

As I’ve watched AI technology evolve over my years covering Silicon Valley, I’ve seen both its tremendous promise and its potential for harm. The BiobiAI case represents one of the most troubling examples of the latter.

For users concerned about protecting their images, experts recommend regularly reviewing privacy settings, being selective about what you share online, and using tools like Google Alerts to monitor for unauthorized use of your name.

The outcome of Meta’s lawsuit could set important precedents for how we address AI-generated harms. But regardless of the legal result, one thing is clear: as a society, we need better guardrails around these powerful technologies before more people are hurt.

Lisa is a tech journalist based in San Francisco. A graduate of Stanford with a degree in Computer Science, Lisa began her career at a Silicon Valley startup before moving into journalism. She focuses on emerging technologies like AI, blockchain, and AR/VR, making them accessible to a broad audience.