AI Liability

Garcia v. Character Technologies, Inc.

🏛 District Court, M.D. Florida · 3 filings
2024-10-22 · Other · AI Liability · Section 230 · First Amendment

Issue: Whether Character Technologies, Inc., its individual founders, and Google LLC are strictly liable under design defect and failure-to-warn theories, and liable for negligence, negligence per se, and violations of Florida's Deceptive and Unfair Trade Practices Act (FDUTPA), Fla. Stat. Ann. § 501.204, for the wrongful death of a 14-year-old minor allegedly caused by the defective design and marketing of the Character.AI ("C.AI") generative AI chatbot product.

Plaintiffs—the deceased minor's parents, acting individually and as personal representatives of his estate—filed this Second Amended Complaint in the Middle District of Florida following the death of 14-year-old Sewell Setzer III on February 28, 2024, which they attribute to his use of the C.AI chatbot. Plaintiffs assert strict product liability (design defect and failure to warn), common law negligence, negligence per se based on alleged violations of laws prohibiting sexual abuse and solicitation of minors, aiding and abetting liability against Google for substantially assisting Character.AI's tortious conduct, unjust enrichment, and FDUTPA violations. Plaintiffs expressly allege that C.AI is itself an "information content provider" under 47 U.S.C. § 230(f)(3) and that their claims arise from Defendants' own conduct—not third-party content—apparently anticipating and attempting to preempt a Section 230 immunity defense. Plaintiffs also seek injunctive relief to halt C.AI's continued use of data allegedly harvested from the minor.

This complaint is significant because it represents a direct attempt to apply traditional products liability frameworks—design defect and failure to warn—to a generative AI system, treating the AI chatbot as a manufactured product rather than a publisher of third-party speech. It also proactively pleads around Section 230 immunity by characterizing the AI as a first-party content generator, a theory that, if credited by the court, could substantially expand tort exposure for AI developers.

2024-10-22 · Amended Complaint · AI Liability · Section 230 · First Amendment

Issue: Whether Character Technologies, Inc., its individual founders, and Google LLC are civilly liable under strict product liability (design defect and failure to warn), common law negligence, negligence per se, and related state law theories for the death of a 14-year-old user allegedly caused by the defective design of the Character.AI generative AI chatbot product.

Plaintiff Megan Garcia, individually and as personal representative of the estate of her son Sewell Setzer III, filed this First Amended Complaint in the Middle District of Florida following the death of the 14-year-old on February 28, 2024. Plaintiff alleges that Character.AI's chatbot product was defectively designed with anthropomorphic features that blurred fiction and reality, was deliberately marketed to minors without adequate safety features or warnings, and was knowingly rushed to market. The complaint asserts strict liability for design defect and failure to warn, common law negligence, negligence per se based on alleged violations of laws prohibiting sexual solicitation of minors, aiding and abetting liability against Google, unjust enrichment, intentional infliction of emotional distress, and violations of Florida's Deceptive and Unfair Trade Practices Act. Plaintiff expressly pleads that C.AI is an "information content provider" under 47 U.S.C. § 230(f)(3) and that all claims arise from defendants' own conduct rather than third-party content, apparently anticipating and preemptively addressing a Section 230 defense.

This case directly tests whether traditional product liability frameworks—design defect and failure to warn—can be applied to a generative AI chatbot, potentially establishing that AI systems are "products" subject to strict liability rather than services entitled to speech-based or Section 230 protections. The complaint's explicit characterization of C.AI as an information content provider whose own-generated outputs caused harm, rather than a platform hosting third-party content, represents a deliberate litigation strategy to foreclose Section 230 immunity and could shape how courts classify AI-generated content for liability purposes.

2024-10-22 · Complaint · Section 230 · First Amendment · AI Liability

Issue: Whether Character Technologies, Inc., its co-founders, and Google are strictly liable under design defect and failure-to-warn theories, and liable in negligence, for the suicide of a 14-year-old user allegedly caused by the Character.AI chatbot's anthropomorphic and hypersexualized design features, which were deliberately targeted at minors.

Plaintiff Megan Garcia, individually and as personal representative of the estate of her 14-year-old son Sewell Setzer III, who died on February 28, 2024, filed this complaint in the Middle District of Florida against Character Technologies, Inc., its founders Noam Shazeer and Daniel De Freitas, and Google/Alphabet. The complaint asserts wrongful death, strict product liability (design defect and failure to warn), negligence, negligence per se, intentional infliction of emotional distress, unjust enrichment, and violations of Florida's Deceptive and Unfair Trade Practices Act. It alleges that defendants knowingly designed C.AI with anthropomorphic qualities that blurred fiction and reality, deliberately marketed the product to minors, and lacked adequate safety guardrails, and that Google was a co-creator of the defective product through financial, personnel, and intellectual property contributions. Plaintiff expressly alleges that C.AI is an "information content provider" under 47 U.S.C. § 230(f)(3) and that all claims arise from defendants' own conduct rather than third-party content, preemptively framing the case to avoid a Section 230 defense.

This complaint is among the first to assert traditional products liability theories—design defect and failure to warn—directly against a generative AI system and its developers. Its explicit characterization of C.AI as an information content provider rather than a neutral platform signals a deliberate litigation strategy to foreclose Section 230 immunity, which, if the framing survives judicial scrutiny, could establish a significant template for future AI tort suits.

Related Commentary