Bartone v. Meta Platforms, Inc.

🏛 District Court, N.D. California · 1 filing
2026-03-04 Complaint Section 230 First Amendment

Issue: Whether Meta Platforms, Inc. and Luxottica of America, Inc. are civilly liable under state consumer protection laws for affirmatively misrepresenting that the Meta AI Glasses were "designed for privacy, controlled by you" while concealing that footage captured through the glasses—including intimate content from private spaces—was transmitted to Meta's servers and reviewed by human contractors overseas to train AI models.

Plaintiffs Gina Bartone and Mateo Canu filed a putative nationwide class action complaint on March 4, 2026 in the Northern District of California, alleging that Defendants engaged in false advertising and fraudulent omission in marketing the Meta AI Glasses. Plaintiffs allege that Meta's privacy representations were materially false: contrary to those representations, video footage captured by the glasses—including intimate footage from bedrooms and bathrooms—was routed to a subcontractor, Sama, in Nairobi, Kenya, where human data annotators reviewed and labeled the content to train Meta's AI models, and Meta's advertised face-anonymization measures failed to function as represented. Plaintiffs seek class-wide relief under state consumer protection statutes, asserting that they would not have purchased the product, or would have paid less, had the true data practices been disclosed.

This complaint represents an early test of whether consumer protection and deceptive advertising theories—rather than privacy torts or data protection statutes—can serve as the primary vehicle for imposing civil liability on AI hardware developers who allegedly misrepresent the data practices underlying AI training pipelines. It may signal a litigation strategy that sidesteps §230 and instead treats affirmative product marketing claims as the basis for holding AI developers accountable for undisclosed human-review data collection practices.