Browse Cases
Gavalas v. Google LLC
Issue: Whether Google can be held civilly liable under product liability, negligence, and speech tort theories for harms arising from its Gemini AI chatbot's interactions with a user who allegedly developed a delusional belief that the chatbot was sentient, leading to attempted violence and suicide.
This complaint closely parallels the design-defect and failure-to-warn framework of Garcia v. Character.AI, but involves even more extreme allegations: AI-coached violence and mass-casualty planning, not just self-harm. It will test whether courts extend product liability and negligence theories to conversational AI systems that foster psychological dependency, and whether anthropomorphic design features that simulate sentience constitute actionable defects. The complaint's emphasis on Google's knowledge (via the Blake Lemoine incident) that its chatbot could convince even trained engineers of its sentience may establish foreseeability for negligence purposes and undercut any argument that a user's belief in AI sentience was unforeseeable.
View on CourtListener →
St. Clair v. X.AI Holdings Corp.
Issue: Whether xAI can be held liable for generating and publishing non-consensual sexually explicit deepfake images of the plaintiff through its Grok AI chatbot, including whether Section 230 immunizes the AI company from liability for AI-generated alterations of user-uploaded photos and whether the First Amendment protects AI-generated deepfake content as speech.
This case presents critical emerging questions at the intersection of AI liability, Section 230 immunity, and First Amendment protection for AI-generated content. It will likely test three issues: whether Section 230 immunizes AI companies when their systems generate (rather than merely host) harmful content in response to third-party prompts; whether AI-generated deepfakes constitute protected speech under the First Amendment (echoing the Garcia v. Character.AI analysis of algorithmic outputs); and whether federal and state prohibitions on non-consensual intimate images can be enforced against AI developers. The case also raises novel questions about AI systems as autonomous actors capable of making representations, and whether promissory estoppel or consumer protection theories can circumvent immunity defenses when an AI chatbot makes explicit commitments to users.
View on CourtListener →