Recent Cases
Gavalas v. Google LLC
Issue: Whether Google can be held civilly liable under product liability, negligence, and speech tort theories for harms arising from its Gemini AI chatbot's interactions with a user who allegedly developed a delusional belief that the chatbot was sentient, leading to attempted violence and suicide.
This complaint directly parallels *Garcia v. Character.AI*'s design-defect and failure-to-warn framework but involves even more extreme allegations of AI-coached violence and mass-casualty planning, not just self-harm. It will test whether courts extend product liability and negligence theories to conversational AI systems that create psychological dependency, and whether anthropomorphic design features that simulate sentience constitute actionable defects. The complaint's emphasis on Google's knowledge (via the Blake Lemoine incident) that its chatbot could convince even trained engineers of its sentience may establish foreseeability for negligence purposes and undercut any argument that a user's belief in AI sentience was unforeseeable.
Uber Technologies, Inc. v. City of Seattle
Issue: Whether Seattle's App-Based Worker Deactivation Rights Ordinance, which requires network companies to inform workers in writing of deactivation policies that must be "reasonably related" to "safe and efficient operations," violates the First Amendment by compelling speech or regulating protected editorial activity.
This decision extends compelled-disclosure doctrine from traditional content platforms to gig economy apps, holding that requirements to communicate deactivation standards constitute regulation of conduct (or at most commercial speech subject to *Zauderer*) rather than editorial expression. The split reasoning—with the dissent arguing for intermediate scrutiny—reflects ongoing uncertainty about whether platform operational communications receive full First Amendment protection, particularly relevant as states increasingly regulate platform account termination and moderation explanation requirements post-*Moody v. NetChoice*.
Dowey v. Siems
Issue: Whether Meta is liable under product liability (design defect, failure to warn) and negligence theories for the deaths of minors who were sextorted by predators whom Meta's recommendation systems allegedly connected to the victims, or whether such claims are barred by Section 230 immunity.
This case directly tests the boundaries of Section 230's design-defect carve-out post-*Moody v. NetChoice* and in light of the Supreme Court's non-decision in *Gonzalez v. Google*. Plaintiffs invoke the emerging theory—successful in *Garcia v. Character.AI*—that platform architectural choices, recommendation algorithms, and data-sharing features constitute the platform's own product design decisions outside Section 230's scope, particularly where the platform allegedly knew its systems were connecting minors to predators and declined to implement identified safeguards. If the court permits these claims to proceed past a motion to dismiss, it would reinforce a narrowing of Section 230 immunity for algorithmic harms and establish that platforms face tort exposure for design decisions that foreseeably facilitate criminal exploitation, even when the harmful content itself is user-generated.
Woodlands Pride v. Paxton
Issue: Whether Texas Senate Bill 12, which regulates "sexually oriented performances" on public property and in the presence of minors, facially violates the First Amendment and is unconstitutionally void for vagueness.
This case addresses core First Amendment questions about government regulation of expressive performances based on content, including vagueness and overbreadth challenges to statutory definitions that could chill protected speech. The outcome affects states' ability to regulate expressive conduct through broad definitions of sexual content, with implications for how courts assess content-based restrictions on speech in the digital age where performances may be recorded and distributed on online platforms.
Armendariz v. City of Colorado Springs
Issue: Whether search warrants seeking (1) electronic devices and data from a protest organizer and (2) Facebook posts, chats, and events from a nonprofit organization's profile were overbroad in violation of the Fourth Amendment's particularity requirement.
This case implicates First Amendment associational rights and the limits on government investigation of online platform content related to protest activities. The decision establishes that warrants seeking broad categories of social media data (posts, chats, events) from advocacy organizations may violate Fourth Amendment particularity requirements, with implications for government access to platform-hosted speech and organizing activity. The involvement of major digital rights organizations as amici (EFF, CDT, EPIC, Knight Institute) signals broader concerns about investigatory overreach into digital speech and association.
State v. Andreas W. Rauch Sharak
Issue: Whether Google acted as a government agent (implicating Fourth Amendment protections) when it scanned user files for CSAM and reported flagged content to law enforcement pursuant to federal reporting requirements.
This case addresses Section 230's role in incentivizing platform content moderation by providing immunity from liability for voluntary scanning and reporting of illegal content. The court's interpretation that Section 230 was designed to encourage electronic service providers (ESPs) to engage in proactive content moderation—including automated scanning—without fear of liability directly implicates ongoing debates about the scope of Section 230 protections for active versus passive moderation practices and whether such activities transform platforms into "information content providers" or government agents.
Trupia v. X Corp.
Issue: Whether X Corp. is immune under Section 230 and the First Amendment from claims challenging its alleged suppression or moderation of a user's posts on its social media platform.
This case directly implicates the scope of Section 230 immunity and First Amendment protection for platform content-moderation decisions post-*Moody v. NetChoice*. X Corp.'s invocation of both Section 230 publisher immunity and First Amendment editorial discretion as independent bars to liability is the standard defense posture for platforms facing user grievances over deplatforming or suppression. The outcome will reflect how courts apply *Moody*'s editorial-discretion framework to individual users' content-moderation disputes on major social media platforms.
Doe v. Meta Platforms, Inc.
Issue: Whether Meta/Instagram can be held liable for injuries to a minor allegedly groomed by a sexual predator through a fake Instagram account and subsequently assaulted, based on claims that appear to involve platform design, recommendation features, and failure to prevent predatory use of the service.
This case has potential significance for Section 230's application to platform design and safety features, particularly age verification, fake account detection, and grooming prevention systems. If plaintiffs frame claims around Instagram's product design choices (rather than traditional publisher liability for user content), the case could test the boundary between immune editorial functions and non-immune product liability theories post-*Gonzalez*, similar to recent social media harm litigation involving minors.
NetChoice v. Wilson
Issue: Whether South Carolina's Age-Appropriate Design Code Act violates the First Amendment by imposing content-based restrictions requiring websites to "exercise reasonable care" to prevent harms to minors, mandating specific design features and controls, prohibiting facilitation of certain commercial speech, and compelling submission to third-party audits and public reporting.
This case represents the next generation of state attempts to regulate social media platforms' content curation and design practices under the guise of child safety, testing the boundaries established in *Moody v. NetChoice*. The South Carolina statute's "duty of care" framework attempts to impose tort liability for editorial choices that cause specified harms to minors, directly implicating the question left open in *Moody* about whether content-neutral design regulation can avoid First Amendment scrutiny—and whether framing speech restrictions as product safety obligations evades constitutional protection for platform editorial judgment.
Stokinger v. Armslist, LLC
Issue: Whether Armslist.com, an online firearms marketplace, is subject to personal jurisdiction in New Hampshire based on its website design and operation, and whether claims alleging that Armslist negligently designed its website to facilitate illegal firearms sales are barred by Section 230 of the Communications Decency Act.
This case presents the design-defect theory of platform liability seen in cases like *Garcia v. Character.AI*—plaintiffs allege that the platform's design choices, not merely its hosting of third-party content, created liability exposure. The jurisdictional posture may interact with Section 230's scope: if design claims fall outside Section 230 immunity, platforms face multi-jurisdictional exposure based on purposeful availment through website architecture that targets specific states' users for harmful transactions.
Recent Commentary
The administration's threatened destruction of Anthropic through national security designations in response to the company's ethical stance on autonomous weapons raises constitutional questions about government coercion of AI companies' speech, product design choices, and editorial guidelines under the First Amendment jawboning doctrine.
Utah's proposed law would criminalize adult websites' speech about VPN use to circumvent age verification blocks, raising novel First Amendment questions about government restrictions on platforms' communications with users regarding lawful circumvention of state content regulations.
The post chronicles fifteen years of Section 230 legislative threats, encryption policy battles, and platform liability disputes, demonstrating recurring patterns in government attempts to regulate online intermediaries and weaken immunity protections.
The FTC has formally acknowledged that age verification technology involves COPPA-violating collection of children's personal information, but resolved the contradiction through a non-enforcement pledge rather than legal reform or honest acknowledgment that the mandates cannot be reconciled with existing law.
Section 230 reform proposals aimed at addressing internet harms would paradoxically worsen those problems by raising compliance costs that advantage incumbent platforms over potential competitors offering better alternatives.