Browse Cases

Brief AI Liability Section 230 First Amendment Complaint

Garcia v. Character Technologies, Inc.

District Court, M.D. Florida · 2024-10-22 · Character Technologies, Inc. (Character.AI)

Issue: Whether Character Technologies, Inc., its co-founders, and Google are strictly liable under design defect and failure-to-warn theories, and liable in negligence, for the suicide of a 14-year-old user allegedly caused by the Character.AI generative AI chatbot product's anthropomorphic and hypersexualized design features that were deliberately targeted at minors.

Why It Matters: This complaint is among the first to assert traditional products liability theories—design defect and failure to warn—directly against a generative AI system and its developers, and its explicit characterization of Character.AI as an information content provider rather than a neutral platform signals a deliberate litigation strategy to foreclose Section 230 immunity, which could establish a significant template for future AI tort suits if the framing survives judicial scrutiny.

View on CourtListener →
Brief Section 230 First Amendment Other

Stebbins v. Rumble Inc.

District Court, D. Delaware · 2024-10-21 · Rumble Inc.

Issue: In *Stebbins v. Rumble Inc.*, plaintiff David Stebbins argues that a statement Rumble made in a related miscellaneous proceeding — acknowledging an editorial decision to permit anonymous posting — constitutes newly discovered evidence sufficient under FRCP 60(b)(2) to reopen the court's prior dismissal of Rumble as a defendant. The non-obvious dimension is whether a platform's litigation statement made to *resist* a third-party subpoena on First Amendment grounds can be repurposed as an affirmative admission of tortious editorial control, and whether such an admission could itself defeat § 230 immunity by recharacterizing a general anonymity policy as the platform's "own conduct" causally contributing to the alleged harm.

Why It Matters: This motion illustrates a strategy plaintiffs have repeatedly attempted with limited success: taking a platform's statement made in an unrelated legal context to protect its users and repackaging it as a confession of liability. The legal obstacle is twofold — courts have consistently treated decisions about anonymous posting as quintessential editorial functions protected by § 230, and statements made to assert a procedural or constitutional right are not equivalent to admissions of underlying tortious conduct. The motion also tests the outer boundary of the "platform's own conduct" exception established in cases like *Roommates.com*: whether a documented platform policy enabling anonymity could ever constitute material contribution to the *unlawfulness* of specific content, rather than merely to its delivery — a question that remains theoretically open but has yet to find a receptive court on analogous facts. More broadly, the filing is a useful marker of how the procedural vehicle of FRCP 60(b) is being used in pro se platform-liability litigation to challenge interlocutory § 230 dismissals, a recurring posture that existing doctrinal commentary has not yet systematically addressed.

View on CourtListener →
Brief First Amendment Other

Stebbins v. Google LLC

District Court, D. Delaware · 2024-10-01 · Rumble Inc.

Issue: In *Stebbins v. Google LLC*, Rumble Inc. argues that a DMCA § 512(h) subpoena seeking to identify an anonymous user must be quashed both because its return date preceded service by 19 days — affording Rumble negative time to comply — and because compelling disclosure of the user's identity would violate the First Amendment right to speak anonymously, particularly where the content at issue appears to constitute political commentary on judicial accountability. The case raises the non-obvious question of whether a copyright enforcement tool expressly authorized by Congress in 1998 must nonetheless satisfy a constitutional balancing test before a court will compel a platform to unmask one of its users.

Why It Matters: DMCA § 512(h) subpoenas are a routinely used mechanism for copyright holders to identify anonymous alleged infringers, but they simultaneously function as tools for unmasking internet users who may be engaged in protected speech — a tension Congress did not resolve when it enacted the statute in 1998. This brief illustrates an emerging litigation strategy in which platforms assert both user-side anonymity rights and their own editorial First Amendment interests as independent grounds to resist identity subpoenas, a combination that no circuit court has yet validated in this context. If courts in circuits without settled precedent begin adopting the *Art of Living* balancing framework, copyright holders will face a meaningfully higher threshold to obtain user identities through § 512(h). The ulterior-motive theory is also worth watching: if credited by courts, it could eventually support sanctions or abuse-of-process arguments against serial DMCA filers who use the subpoena mechanism to identify critics rather than remedy genuine infringement.

View on CourtListener →
Opinion First Amendment Section 230 Appellate Opinion

Computer & Communications Industry Association v. Paxton

Court of Appeals for the Fifth Circuit · 2024-09-13 · Social media platforms (case involves trade associations representing Meta, Google, X Corp., and other major platforms)

Issue: Whether Texas House Bill 18's requirements that covered digital service providers monitor and block broadly defined categories of content accessible to minors violate the First Amendment as content-based and viewpoint-based prior restraints on protected speech, and whether those requirements are preempted by 47 U.S.C. § 230.

Why It Matters: The case presents a direct First Amendment challenge to state-mandated content filtering for minors—an emerging category of legislation enacted across multiple states—and the Fifth Circuit's ruling could establish binding precedent on whether such monitoring-and-blocking mandates survive strict scrutiny and on the scope of § 230 preemption of state child-safety internet laws.

View on CourtListener →
Opinion Section 230 Motion to Dismiss (Reversed)

Anderson v. TikTok, Inc.

3d Cir. · 2024-09-04 · TikTok (ByteDance)

Issue: Whether § 230 bars wrongful death claims against TikTok based on the platform's algorithm recommending the "Blackout Challenge" — a dangerous viral trend — to a 10-year-old girl who died attempting it.

Why It Matters: The first federal court of appeals decision to hold that algorithmic content recommendations fall outside § 230's protection as the platform's own independent speech. Directly conflicts with the Second Circuit's Force v. Facebook and is the leading authority for plaintiffs arguing that AI-powered content recommendation is not publisher activity. Represents the most significant circuit split in current § 230 doctrine and raises fundamental questions about the future scope of platform immunity as algorithms become the dominant mechanism of content distribution.

View on CourtListener →
Opinion Section 230 Motion to Dismiss (Reversed)

Estate of Bride v. Yolo Technologies, Inc.

9th Cir. · 2024-08-08 · Yolo Technologies (anonymous messaging app)

Issue: Whether § 230 bars wrongful death claims against Yolo based on design-defect theories targeting Yolo's anonymity features, and on assumption-of-duty theories arising from Yolo's promises in its terms of service to prevent cyberbullying.

Why It Matters: Extended both the Lemmon design-defect framework and the Barnes assumption-of-duty doctrine in the same case. Established that a platform's contractual promises to users about safety features — even in standard ToS language — can give rise to an independent duty of care that § 230 does not preempt. A leading case in the § 230 litigation over anonymous messaging apps and cyberbullying-related youth harms.

View on CourtListener →
Opinion Section 230 Preliminary Injunction (Granted)

NetChoice LLC v. Reyes

D. Utah · 2024-07-10 · Social media platforms (collectively)

Issue: Whether Utah's Social Media Regulation Act — requiring platforms to verify user ages, restrict minors' access to certain features, and give parents supervisory access — violated the First Amendment and was preempted by § 230.

Why It Matters: Part of the wave of state child online safety legislation enacted in 2023–2024. The court's First Amendment and § 230 preemption analysis reflects the complex intersection of constitutional law and federal preemption doctrine in the youth social media regulation context. A precursor to the broader national legal battle over state-level children's online safety laws.

View on CourtListener →
Opinion Section 230 Motion to Dismiss (Affirmed in Part, Reversed in Part)

Calise v. Meta Platforms, Inc.

9th Cir. · 2024-06-05 · Meta (Facebook)

Issue: Whether § 230 bars claims that Meta's advertising targeting algorithm matched vulnerable users with fraudulent investment and romance scam advertisements, causing financial losses.

Why It Matters: Applied and extended Barnes v. Yahoo! to Meta's advertising infrastructure, distinguishing between Meta-as-publisher (immune) and Meta-as-developer of its own targeting product (not immune). An important precedent for claims that a platform's monetization algorithms — not just its content-hosting function — can constitute independent conduct outside § 230's reach.

View on CourtListener →
Opinion Section 230 Demurrer (Overruled)

Neville v. Snap, Inc.

Cal. Superior Ct. · 2024-01-02 · Snapchat (Snap, Inc.)

Issue: Whether § 230 bars California state law products liability and negligence claims against Snap for design features that allegedly facilitated the drug trafficking death of a minor.

Why It Matters: A California state court application of the Lemmon / design-defect framework in the context of the fentanyl crisis. Part of the wave of state court litigation applying design-defect theories to social media features in cases involving drug trafficking and minor victims.

View on CourtListener →
Opinion Section 230 Motion to Dismiss (Denied in Substantial Part)

Commonwealth v. Meta Platforms, Inc.

Mass. Superior Ct. · 2024-01-01 · Meta (Instagram, Facebook)

Issue: Whether § 230 bars the Massachusetts Attorney General's parens patriae claims that Meta designed its platforms to be addictive to children and to expose them to harmful content, in violation of Massachusetts consumer protection law.

Why It Matters: Part of the wave of state attorney general actions against social media platforms for child safety violations. The court's refusal to dismiss on § 230 grounds reflects the growing judicial receptivity to design-defect and deceptive-business-practice theories that target platform architecture rather than content moderation decisions.

View on CourtListener →
Opinion First Amendment Section 230 Trial Court Opinion

Ayyadurai v. United States of America

District Court, District of Columbia · 2023-12-04 · Meta (Facebook), Google (YouTube), X Corp. (Twitter)

Issue: *Ayyadurai v. United States of America* asks whether a pro se plaintiff can sustain constitutional, statutory, and common-law claims against social media platforms and federal government defendants based on an alleged conspiracy to suppress his political speech, arising from his deplatforming and shadowbanning following posts questioning ballot-image destruction in a prior election. The case requires the court to determine whether Article III standing survives where the alleged suppression stems from claimed government coercion of private platforms, whether § 230 immunizes the platforms' content-moderation decisions, and whether sovereign immunity bars the federal claims — each a distinct threshold that must be cleared before any merits analysis begins.

Why It Matters: The ruling makes two meaningful contributions to § 230 doctrine: it reaffirms that conclusory bad-faith allegations cannot pierce § 230(c)(2)'s good-faith safe harbor at the pleading stage, and it deliberately declines to extend § 230(c)(1) to cover affirmative content-removal decisions — flagging that such an extension would render § 230(c)(2)'s good-faith requirement superfluous, a structural concern previously voiced only in Justice Thomas's *Malwarebytes* cert-denial statement. By resolving all platform claims under (c)(2) alone, the court consciously preserves the (c)(1)-removal question, creating a potential development opportunity in future litigation where a plaintiff pleads bad faith with sufficient specificity to survive (c)(2) and force the (c)(1) issue to appeal. The court's application of *Murthy v. Missouri* to defeat standing on the government-coercion theory also signals that such claims now face an exceptionally high traceability burden in social-media suppression cases, reinforcing *Murthy*'s practical reach well beyond its original First Amendment context.

View on CourtListener →
Section 230

People of the State of California v. Meta Platforms, Inc.

District Court, N.D. California · 3 filings
2023-10-24 · Other

Why It Matters: Meta's central defense at summary judgment is that Section 230 extinguishes the states' consumer protection claims before they can reach a jury, on the theory that those claims would effectively hold Meta liable as a publisher of harmful user-generated content. The Massachusetts Supreme Judicial Court — one of the most respected state courts of last resort in the country — recently rejected that argument in a case involving the same defendant and a structurally similar legal theory, and Plaintiffs are placing that ruling before the MDL judge at the earliest opportunity. Whether it moves the needle depends on how closely the Massachusetts claims and pleadings track those at issue in the MDL, a question the filing conspicuously leaves unaddressed and that Meta will almost certainly contest. The filing also signals a deliberate multi-forum strategy by the state AGs: collecting appellate-level authority across jurisdictions to build persuasive momentum against Section 230 preemption — a campaign worth watching as similar litigation proceeds in other states.

View on CourtListener →
2023-10-24 · Appellate Opinion

Why It Matters: The opinion is the most prominent state appellate-court decision to date to categorically hold that Section 230 does not immunize platform design-defect or product-deception claims, and it does so through a textually grounded, common-law publisher framework that is methodologically distinct from — and directly contests — the reasoning this MDL court has previously applied. By naming and rejecting the MDL court's prior rulings, the SJC supplies a reasoned, appellate-level counter-analysis that, while not binding in federal court, materially reduces those rulings' persuasive authority and gives this Court a fully developed alternative framework to consider when resolving the pending motion. The procedural holding — that Section 230 immunity supports interlocutory appeal under the present-execution doctrine — also signals that state courts of last resort are prepared to treat Section 230 as a true immunity from suit, consistent with federal consensus but now carrying explicit state appellate endorsement. What remains open is whether this Court will credit the SJC's common-law publisher test over its own prior analysis, a question that will be resolved when it rules on Related Doc. 266.

View on CourtListener →
2023-10-24 · Other

Why It Matters: This filing advances the critical and unsettled question of whether § 230 immunizes a platform's affirmative design decisions—such as algorithmic features allegedly engineered to maximize adolescent engagement—when challenged by state enforcement authorities rather than private plaintiffs, potentially establishing that state AG consumer protection actions targeting platform architecture fall outside § 230's immunity.

View on CourtListener →
Opinion Section 230 Motion to Dismiss (Denied in Substantial Part)

L.W. v. Snap Inc.

S.D. Cal. · 2023-06-22 · Snapchat, TikTok, Instagram/Meta, YouTube

Issue: Whether § 230 bars products liability and negligence claims against social media platforms for designing features — including addictive engagement loops, infinite scroll, and content recommendation — that allegedly caused serious psychological harm to minor users.

Why It Matters: An important district court application of the post-Lemmon design-defect doctrine to the broader youth mental health litigation against social media platforms. The court's willingness to allow design claims to proceed past a motion to dismiss reflected the growing judicial recognition that § 230 does not immunize all harms that can be traced to social media platform design.

View on CourtListener →
Opinion Section 230 Certiorari (Vacated and remanded)

Gonzalez v. Google LLC

U.S. · 2023-05-18 · Google LLC (YouTube)

Issue: Whether § 230(c)(1) immunizes Google from Anti-Terrorism Act liability for YouTube's algorithmic recommendations of ISIS videos, on the theory that targeted algorithmic recommendations constitute Google's own expressive conduct rather than merely hosting third-party content.

Why It Matters: The Supreme Court's first opportunity to definitively address § 230's application to algorithmic content recommendations produced no ruling on that question. The Court's restraint left unresolved the question now at the heart of the circuit split between Force v. Facebook (Second Circuit: algorithmic recommendations are publisher activity) and Anderson v. TikTok (Third Circuit: targeted recommendations are platform speech). Gonzalez is significant as much for what it did not decide as for what it held — the most pressing open question in § 230 doctrine remains unanswered at the Supreme Court level.

View on CourtListener →
Opinion Section 230 Certiorari (Reversed)

Twitter, Inc. v. Taamneh

U.S. · 2023-05-18 · Twitter, Facebook, Google

Issue: Whether Twitter, Facebook, and Google aided and abetted an ISIS terrorist attack under 18 U.S.C. § 2333(d)(2) by hosting ISIS content, allowing ISIS to recruit and raise funds on their platforms, and algorithmically recommending ISIS-related content to users.

Why It Matters: Established that online platforms do not face ATA aiding-and-abetting liability merely by knowingly hosting content from a terrorist organization or operating recommendation algorithms that surface that content, without evidence of specific, targeted assistance to the tortious act at issue. The decision effectively disposed of most terrorism-based ATA claims against social media platforms on the merits, without reaching § 230 — the companion case Gonzalez v. Google addressed § 230 but declined to decide it, leaving that question open.

View on CourtListener →
Opinion Section 230 Motion to Dismiss (Denied in Relevant Part)

Bride v. Snap, Inc.

C.D. Cal. · 2023-01-10 · Snapchat (Snap, Inc.)

Issue: Whether § 230 bars wrongful death claims against Snap arising from Snapchat's design features — including its anonymous messaging and ephemeral content features — allegedly used to facilitate drug trafficking that resulted in a teenager's death.

Why It Matters: A significant application of the Lemmon design-defect framework to the fentanyl trafficking epidemic on social media platforms. Part of a growing body of litigation testing whether the Lemmon exception is limited to specific features like speed filters or extends broadly to platform design choices that facilitate offline criminal conduct. The case contributed to the litigation that eventually produced Estate of Bride v. Yolo Technologies in the Ninth Circuit.

View on CourtListener →
Opinion Section 230 Demurrer (Sustained — Affirmed)

Prager Univ. v. Google LLC

Cal. App. Ct. · 2022-12-20 · Google (YouTube)

Issue: Whether YouTube, as a private company, violated the First Amendment or California unfair competition law by restricting PragerU's videos through its "Restricted Mode" and "Ad Friendly" content policies.

Why It Matters: Rejected both constitutional and statutory challenges to viewpoint-based content moderation by private platforms. Confirmed that private social media companies are not state actors bound by the First Amendment. The decision also illustrates how platforms' terms of service — which expressly reserve broad editorial discretion — can defeat contract-based and consumer protection challenges to content moderation.

View on CourtListener →
Opinion First Amendment Section 230 Appellate Opinion

NetChoice, LLC v. Bonta

District Court, N.D. California · 2022-12-14 · Online platforms generally (NetChoice members include Amazon, Google, Meta, Netflix)

Issue: *NetChoice v. Bonta* asks whether a facial First Amendment challenge can sustain a wholesale injunction against California's Age-Appropriate Design Code Act when the statute's coverage definition reaches businesses through both content-based and purely demographic indicators — meaning some covered services may have no expressive dimension at all. The case also asks whether terms like "best interests of children" and "materially detrimental," borrowed from individualized family-law proceedings, are unconstitutionally vague when applied as prospective, industry-wide compliance standards. The answers turn on how courts measure the proportion of unconstitutional applications against all applications under the demanding framework the Supreme Court established in *Moody v. NetChoice* (2024).

Why It Matters: This ruling significantly raises the evidentiary bar for industry coalitions seeking to block child online-safety laws through facial First Amendment challenges: plaintiffs must now map a statute's *entire* universe of applications — including non-expressive ones — before a court can find unconstitutional applications substantial enough to justify a wholesale injunction. The vagueness holding breaks new ground by applying *FCC v. Fox*'s void-for-vagueness standard to child-welfare design mandates, establishing that family-law welfare terms cannot be transplanted into prospective regulatory compliance obligations without adequate definitional grounding. Significant questions remain open on remand, including whether the age-estimation requirement implicates First Amendment-protected speech and whether the CAADCA's valid provisions are severable — determinations that will shape California's enforcement posture and may influence how courts in other circuits assess analogous age-gating laws.

View on CourtListener →