Section 230

Beltran v. Meta Platforms, Inc.

District Court, N.D. California
2026-03-16 Complaint Section 230 First Amendment

Complaint

Issue: Whether Meta Platforms, Inc., Sama, and Luxottica violated the federal Wiretap Act (ECPA), California's Invasion of Privacy Act, and multiple state consumer protection statutes by capturing, transmitting, and routing to third-party human annotators the private audiovisual recordings of Meta AI Glasses users without their informed consent, while affirmatively marketing the device as "designed for privacy" and "built for your privacy."

Plaintiffs Steven Beltran, Alicia Perez, and Terrance Moore filed a putative class action complaint on March 16, 2026, in the Northern District of California, seeking relief on behalf of themselves and similarly situated purchasers of Meta AI Glasses. Plaintiffs allege that when users activate the glasses by pressing a button or speaking "Hey Meta," high-resolution audio and video are captured and transmitted via the user's smartphone to Meta's cloud servers, then routed to Sama's data-annotation facility in Nairobi, Kenya, where thousands of human workers review, label, and analyze the footage, including intimate recordings from bedrooms and bathrooms, to train Meta's AI systems. Plaintiffs further allege that Meta affirmatively represented the device was "controlled by you" and "built for your privacy" while concealing that overseas human contractors would review the captured footage, that anonymization measures failed to consistently obscure identifying information, and that Meta altered its policies to prevent users from opting out of recording and storage features.

The complaint asserts claims under the ECPA (18 U.S.C. § 2510 et seq.), CIPA (Cal. Penal Code § 631), the Illinois Eavesdropping Act, consumer protection statutes in California, New York, and Illinois, and common law theories of invasion of privacy, negligence, and unjust enrichment.

This complaint presents an early test of civil liability exposure for AI hardware developers whose training-data pipelines involve undisclosed human review of sensitive user-generated recordings. A ruling for plaintiffs could establish that wiretapping and consumer protection statutes reach wearable AI devices that funnel private audiovisual data to offshore annotators without adequate disclosure. The case may also signal growing judicial and legislative scrutiny of the intersection between AI training-data collection practices and informed-consent requirements under both federal and state privacy law.