AI Liability Amended Complaint

Garcia v. Character Technologies, Inc.

🏛 United States District Court, Middle District of Florida, Orlando Division · 📅 2024-10-22

Issue

Whether Character Technologies, Inc., its individual founders, and Google LLC are civilly liable under strict product liability (design defect and failure to warn), common law negligence, negligence per se, and related state law theories for the death of a 14-year-old user allegedly caused by the defective design of the Character.AI generative AI chatbot product.

What Happened

Plaintiff Megan Garcia, individually and as personal representative of the estate of her 14-year-old son, Sewell Setzer III, filed this First Amended Complaint in the Middle District of Florida following his death on February 28, 2024. Plaintiff alleges that the Character.AI ("C.AI") chatbot product was defectively designed with anthropomorphic features that blurred the line between fiction and reality, was deliberately marketed to minors without adequate safety features or warnings, and was knowingly rushed to market. The complaint asserts strict liability for design defect and failure to warn, common law negligence, negligence per se based on alleged violations of laws prohibiting sexual solicitation of minors, aiding-and-abetting liability against Google, unjust enrichment, intentional infliction of emotional distress, and violations of Florida's Deceptive and Unfair Trade Practices Act. Plaintiff expressly pleads that C.AI is an "information content provider" under 47 U.S.C. § 230(f)(3) and that all claims arise from defendants' own conduct rather than third-party content, apparently anticipating and preemptively addressing a Section 230 defense.

Why It Matters

This case directly tests whether traditional product liability frameworks (design defect and failure to warn) can be applied to a generative AI chatbot, potentially establishing that AI systems are "products" subject to strict liability rather than services entitled to speech-based or Section 230 protections. The complaint's explicit characterization of C.AI as an information content provider whose own generated outputs caused the harm, rather than a platform hosting third-party content, reflects a deliberate litigation strategy to foreclose Section 230 immunity and could shape how courts classify AI-generated content for liability purposes.

Related Filings

Other proceedings in the same litigation tracked by this monitor.