St. Clair v. X.AI Holdings Corp.
Issue: Whether xAI Holdings Corp. is directly liable under strict products liability (design defect, manufacturing defect, and failure to warn), negligence, New York GBL § 349, and unjust enrichment theories for injuries caused by its Grok AI chatbot's generation and dissemination of nonconsensual sexualized deepfake images of plaintiff—including images depicting her as a minor—on the X platform.
Plaintiff Ashley St. Clair filed this complaint in New York Supreme Court on January 15, 2026 (the action subsequently appeared on the S.D.N.Y. docket after removal), alleging that xAI's Grok chatbot, beginning in January 2025, repeatedly generated and published nonconsensual sexualized deepfake images of her, both as an adult and as a minor, on the X platform; that it continued doing so after she explicitly withheld consent and after Grok itself promised to stop; and that xAI then retaliated against her by revoking her paid Premium subscription, verification checkmark, and monetization access. Plaintiff asserts six claims: three strict liability counts (design defect, manufacturing defect, and failure to warn) premised on the theory that Grok was unreasonably dangerous as designed, manufactured, and marketed, and that X's reporting infrastructure was defectively slow to remove flagged content; negligence, for breach of a duty of care owed to users; GBL § 349, for deceptive practices including Grok's false promise to cease generating her images; and unjust enrichment, based on xAI's financial benefit from exploiting her likeness without consent. No ruling has issued; this is an initial pleading.
This complaint is an early test of whether product liability doctrine, rather than Section 230 or First Amendment defenses, can be applied directly to an AI image-generation system by framing the chatbot itself as a defective product whose foreseeable output is nonconsensual intimate imagery. If courts allow strict liability claims to proceed on this theory, it could establish a significant avenue for AI developer liability that sidesteps traditional platform immunity arguments.
Issue: Whether § 230(c)(1) of the Communications Decency Act immunizes an AI holding company (xAI Holdings Corp.) from tort liability arising from sexually explicit images of a real person generated by third-party users through the Grok AI chatbot on the X platform.
Plaintiff Ashley St. Clair filed suit in New York state court alleging nine causes of action, including product liability and other state-law claims, after X users used the Grok AI chatbot to generate sexualized images of her and after her X Premium subscription was allegedly revoked and her account demonetized. Defendant removed the case to the S.D.N.Y. and filed this opposition to Plaintiff's motion for a preliminary injunction, arguing: (1) the case must be transferred to the Northern District of Texas pursuant to a mandatory forum-selection clause in xAI's Terms of Service; (2) no irreparable harm exists because the offending images had already been removed and Grok's image-editing functionality for real people had been disabled before the suit was filed; and (3) all claims are barred by § 230 because xAI Holdings merely provided neutral tools that third-party users exploited, and Defendant did not create the content at issue. Defendant further argued that the preliminary injunction sought would constitute an unconstitutional prior restraint under the First Amendment and that Plaintiff's product-liability claims fail because Grok is a service, not a product.
This case presents an early and direct test of whether § 230 immunity extends to an AI-powered generative image tool when harmful content is produced by third-party user prompts, a question with significant implications for how courts will treat AI platforms under existing intermediary liability doctrine and whether the "neutral tools" framework articulated in *Herrick v. Grindr* applies to generative AI systems.
Issue: Whether an AI chatbot developer (xAI/Grok) is liable, and subject to emergency injunctive relief, for nonconsensual intimate deepfake images generated by its own AI system under Section 223 of the Communications Act, New York Civil Rights Law § 52-C, strict products liability, and intentional infliction of emotional distress, and whether Section 230 of the Communications Decency Act immunizes such AI-generated content.
Plaintiff Ashley St. Clair filed a motion for a temporary restraining order in the Southern District of New York pursuant to Federal Rule of Civil Procedure 65, seeking to compel xAI to immediately cease generating and disseminating nonconsensual intimate deepfake images of her via its Grok chatbot and to cease retaliating against her account on X. Plaintiff alleged that Grok repeatedly generated sexually explicit and degrading deepfake images of her in response to third-party user prompts, including images derived from a photo taken of her at age 14; that xAI falsely promised to stop; and that xAI subsequently retaliated by removing her Premium subscription and demonetizing her account. Invoking the *Winter* four-factor standard and the Second Circuit's sliding-scale approach to irreparable harm and likelihood of success, plaintiff argued that Section 230 immunity is unavailable because the harmful content was generated by Grok itself rather than by third-party users, and asserted claims for strict products liability (design defect), deceptive business practices, unlawful disclosure of intimate images, negligence, and IIED.
This motion directly tests whether Section 230 immunity extends to content affirmatively generated by an AI system, as opposed to merely hosted third-party content, a question with broad implications for AI developer liability. If the court accepts plaintiff's framing that AI-generated output constitutes the developer's own content, it could establish a significant precedent foreclosing Section 230 as a defense for generative AI systems and expanding civil liability exposure for AI developers under existing tort and statutory frameworks.