St. Clair v. X.AI Holdings Corp.
Issue
Whether §230(c)(1) of the Communications Decency Act immunizes an AI holding company (xAI Holdings Corp.) from tort liability arising from sexually explicit images of a real person generated by third-party users through the Grok AI chatbot on the X platform.
What Happened
Plaintiff Ashley St. Clair filed suit in New York state court alleging nine causes of action, including product liability and other state-law claims, after X users employed the Grok AI chatbot to generate sexualized images of her, and after her X Premium subscription was allegedly revoked and her account demonetized. Defendant removed the case to the S.D.N.Y. and filed this opposition to Plaintiff's motion for a preliminary injunction, arguing: (1) the case must be transferred to the Northern District of Texas under a mandatory forum-selection clause in xAI's Terms of Service; (2) there is no irreparable harm because the offending images had already been removed and Grok's image-editing functionality for real people had been disabled before the suit was filed; and (3) all claims are barred by §230 because xAI Holdings merely provided neutral tools that third-party users exploited, and did not itself create the content at issue. Defendant further argued that the preliminary injunction sought would constitute an unconstitutional prior restraint under the First Amendment, and that Plaintiff's product-liability claims fail because Grok is a service, not a product.
Why It Matters
This case presents an early and direct test of whether §230 immunity extends to an AI-powered generative image tool when harmful content is produced by third-party user prompts. The answer carries significant implications for how courts will treat AI platforms under existing intermediary-liability doctrine, and for whether the "neutral tools" framework articulated in *Herrick v. Grindr* applies to generative AI systems.
Related Filings
Other proceedings in the same litigation tracked by this monitor.