Browse Cases
207 results

Lemmon v. Snap, Inc.
Issue: Whether § 230(c)(1) bars a negligent design products liability claim against Snap for creating a "Speed Filter" feature that allegedly incentivized users to drive at dangerously high speeds by displaying their real-time speed and rewarding high-speed posts.
Why It Matters: The Ninth Circuit's leading decision establishing that § 230 does not immunize a platform from products liability claims targeting the design of the platform's own features. The design-defect claim targets what the platform built — not what users post — and therefore falls outside § 230's scope. Lemmon is the foundational precedent for the wave of social media design-defect litigation, including cases involving fentanyl trafficking on Snapchat, TikTok's Blackout Challenge, and youth mental health harms from social media features.
View on CourtListener →

Loomis v. Amazon.com LLC
Issue: Whether Amazon could be strictly liable as a seller for defective hoverboards sold by third-party merchants through the Amazon Marketplace, where Amazon took a more passive role in the transaction than it did in Bolger.
Why It Matters: Extended Bolger beyond the FBA context, establishing that Amazon's marketplace model more broadly — not just its fulfillment services — can give rise to seller status in California. Clarified that strict products liability in the e-commerce marketplace context turns on the totality of the platform's commercial involvement, not solely on whether it physically handled the product.
View on CourtListener →

Murphy v. Twitter, Inc.
Issue: Whether § 230 bars a California state civil rights claim against Twitter for suspending a user's account and allegedly discriminating against conservative political viewpoints in its content moderation.
Why It Matters: Applied § 230(c)(2)(A) to defeat a state civil rights challenge to social media content moderation, reinforcing that California's Unruh Act cannot be used to force platforms to reinstate suspended accounts or to impose viewpoint neutrality requirements on editorial decisions. A key case in the debate over whether § 230 forecloses state public accommodations law as a tool for challenging platform moderation.
View on CourtListener →

Bolger v. Amazon.com, Inc.
Issue: Whether Amazon was strictly liable under California products liability law as a seller in the chain of distribution for a defective product sold by a third-party merchant through the Amazon Marketplace.
Why It Matters: A significant products liability decision establishing that marketplace platforms that take an active role in fulfilling consumer transactions can be treated as sellers subject to strict liability, independent of § 230. The case is important in the e-commerce liability context because it does not rest on § 230 — it applies traditional products liability doctrine to Amazon's fulfillment activities. Subsequent California cases (Loomis, Lee) have refined the standard.
View on CourtListener →

Dyroff v. The Ultimate Software Grp., Inc.
Issue: Whether § 230 bars wrongful death claims against an online community platform whose recommendation features allegedly connected a user with the drug dealer who sold him the heroin that killed him.
Why It Matters: Applied § 230 to algorithmic connection-recommendation features, distinguishing the neutral recommendation tools of the Experience Project platform from the structured, discriminatory questionnaire in Roommates.com. The case illustrates the line between passive publication of connections (protected) and active development of harmful content (not protected), in a context involving serious offline harm from drug dealing facilitated through the platform.
View on CourtListener →

Enigma Software Grp. USA, LLC v. Malwarebytes, Inc.
Issue: Whether § 230(c)(2)(B) immunizes a cybersecurity company that blocks a competitor's software products as "potentially unwanted programs," where the plaintiff alleges the blocking was motivated by anticompetitive rather than content-quality reasons.
Why It Matters: The leading case establishing that § 230(c)(2) immunity is not unlimited — the "good faith" requirement has teeth. Filtering tools and platforms cannot invoke § 230(c)(2) when blocking or filtering decisions are driven by anticompetitive motivations rather than genuine content-quality concerns. Important for understanding the limits of the Good Samaritan provision of § 230 in the cybersecurity and software industry context.
View on CourtListener →

Force v. Facebook, Inc.
Issue: Whether Facebook was liable under the Anti-Terrorism Act for allegedly providing Hamas with a communications platform, and whether Facebook's content recommendation algorithm constituted independent tortious conduct not shielded by § 230(c)(1).
Why It Matters: The Second Circuit's definitive pre-Gonzalez holding that algorithmic content recommendation is publisher activity protected by § 230(c)(1). Force v. Facebook directly conflicts with the Third Circuit's subsequent Anderson v. TikTok decision, which held that a platform's targeted algorithmic recommendations constitute the platform's own speech. The resulting circuit split on whether recommendation algorithms are publisher functions or independent platform speech is the central unresolved question in § 230 doctrine post-Gonzalez.
View on CourtListener →

Marshall's Locksmith Serv. Inc. v. Google LLC
Issue: Whether § 230 bars claims against search and directory platforms for listing and promoting fraudulent locksmith businesses that deceived customers with bait-and-switch pricing.
Why It Matters: Applied § 230 to a marketplace fraud context, holding that listing and promoting third-party businesses that later defraud customers does not strip a platform of publisher immunity. Consistent with the broad reading of § 230 as immunizing platforms for harms traceable to the conduct of third-party users or service providers listed on the platform.
View on CourtListener →

Herrick v. Grindr, LLC
Issue: Whether § 230 bars tort claims against Grindr for failing to remove fake profiles and implement safety features, given that a malicious third party used the platform to orchestrate a harassment campaign against the plaintiff.
Why It Matters: The sharpest circuit-level refusal to extend the Internet Brands framework. The Second Circuit effectively held that a platform's failure to remove user content after notice — even notice of a coordinated offline harm campaign — is publisher activity within § 230(c)(1). The case illustrates the breadth of § 230 immunity in the Second Circuit and the difficulty plaintiffs face in finding a viable legal theory when a platform's inaction causes serious offline harm.
View on CourtListener →

HomeAway.com, Inc. v. City of Santa Monica
Issue: Whether Santa Monica's short-term rental ordinance — which prohibited hosting platforms from processing booking transactions for unregistered properties — was preempted by § 230(e)(3) or violated the First Amendment.
Why It Matters: An important § 230(e)(3) preemption decision establishing that state and local laws imposing transactional or conduct-based obligations on platforms are not preempted by § 230, even if compliance requires the platform to check information related to user listings. Demonstrates the limits of § 230 preemption as a blanket shield against generally applicable regulatory obligations — the statute preempts liability for publishing third-party content, but not regulation of a platform's own commercial conduct.
View on CourtListener →

Hassell v. Bird
Issue: Whether a state court can enforce a defamation judgment by ordering Yelp — which was not a party to the underlying lawsuit — to remove reviews posted by the defendant from its platform.
Why It Matters: Established that § 230 preempts state court attempts to conscript non-party platforms into removing content through injunctions entered in litigation to which the platform was not a party. The decision is significant at the intersection of § 230, injunctions, and due process: a plaintiff cannot circumvent § 230 by obtaining a content-removal order against a platform without naming it as a defendant. The case is frequently cited in debates about procedural mechanisms for victims of online defamation to obtain meaningful relief.
View on CourtListener →

Bennett v. Google, LLC
Issue: Whether § 230 bars a defamation claim against Google for autocomplete search suggestions that displayed allegedly defamatory phrases when users began typing the plaintiff's name.
Why It Matters: Applied § 230 to search engine autocomplete, holding that algorithmically aggregated suggestions based on third-party search patterns are publisher activity, not independent content creation. Consistent with Force v. Facebook in the Second Circuit on the treatment of algorithmic functions as publisher activity.
View on CourtListener →

Pennie v. Twitter, Inc.
Issue: Whether Twitter, Facebook, and Google were liable under the Anti-Terrorism Act for knowingly providing material support to ISIS by allowing the organization to use their platforms to spread propaganda, recruit members, and raise funds.
Why It Matters: An early application of § 230 to anti-terrorism material support litigation, foreshadowing the broader circuit-level debate that culminated in Gonzalez v. Google and Twitter v. Taamneh at the Supreme Court. The district court held that however the claims were framed, the underlying theory required treating the platforms as responsible for user-generated content and account activity — conduct immunized by § 230(c)(1).
View on CourtListener →

Doe v. Internet Brands, Inc.
Issue: Whether § 230 bars a California negligence failure-to-warn claim against a platform that had actual independent knowledge of an ongoing offline predatory scheme targeting its users, where the plaintiff's claim did not seek to hold the platform liable as the publisher of any user-generated content.
Why It Matters: The most significant post-Zeran departure from a maximalist reading of § 230 at the circuit level. Doe v. Internet Brands established that § 230(c)(1) is limited to claims that would hold a platform liable as a publisher or speaker of third-party content — it does not reach claims grounded in the platform's own first-party conduct or independent knowledge. The case created a doctrinal fault line: the Second Circuit in Herrick v. Grindr declined to apply the reasoning where the platform's failure to act was itself treated as a failure to remove content.
View on CourtListener →

Sikhs for Justice, Inc. v. Facebook, Inc.
Issue: Whether § 230(c)(1) bars a Title II Civil Rights Act claim against Facebook for blocking access to a Sikh advocacy group's Facebook page in India, allegedly at the request of the Indian government.
Why It Matters: Applied § 230(c)(1) to bar a federal civil rights challenge to a platform's content removal decision, even where the removal allegedly occurred at the behest of a foreign government and was motivated by discriminatory animus. The case established that § 230's publisher immunity applies to a platform's decision to block or restrict access to third-party content regardless of the reason for that decision — including allegations of politically or discriminatorily motivated moderation. Frequently cited in debates over whether § 230 should be conditioned on neutral content moderation.
View on CourtListener →

Jones v. Dirty World Entm't Recordings LLC
Issue: Whether TheDirty.com and its operator Nik Richie materially contributed to the defamatory content of two user-submitted posts about the plaintiff such that § 230(c)(1) immunity did not apply.
Why It Matters: The Sixth Circuit's leading § 230 decision, formally adopting the material contribution standard and clarifying what it requires. A platform must do something specifically to shape or facilitate the unlawful character of the particular content at issue — general encouragement of harmful user submissions is not enough. The case sets a high bar for defeating § 230 under the material contribution theory and remains the definitive Sixth Circuit authority on the scope of platform immunity.
View on CourtListener →

Fraley v. Facebook, Inc.
Issue: Whether § 230 immunizes Facebook's "Sponsored Stories" feature, which automatically converted users' "like" activity into paid commercial endorsement advertisements displayed to their friends without consent or compensation.
Why It Matters: An important early decision holding that § 230 does not protect a platform from claims arising from content the platform itself creates by combining user data with advertiser content. The case foreshadowed later disputes about whether algorithmic targeting and data-driven ad products constitute independent platform conduct outside § 230's scope, a line of reasoning developed further in Calise v. Meta.
View on CourtListener →

FTC v. Accusearch Inc.
Issue: Whether Accusearch was immune under § 230 for operating a data-broker service that sold confidential telephone records obtained by paid third-party researchers through illegal pretexting.
Why It Matters: Applied the material contribution standard in the Tenth Circuit, confirming that a data broker loses § 230 immunity when its business model directs third parties to acquire content through inherently illegal means. The Fourth Circuit reached a similar result in Henderson v. Source for Public Data, holding that a data broker compiling and selling background reports was not shielded by § 230. Together, Accusearch and Henderson mark the outer limits of § 230 immunity for the data-broker industry.
View on CourtListener →

Barnes v. Yahoo!, Inc.
Issue: Whether § 230 bars a promissory estoppel claim against Yahoo! arising from a Yahoo! employee's specific oral promise to remove unauthorized, harassing profiles — as distinct from a negligence claim based on Yahoo!'s failure to monitor its platform.
Why It Matters: The foundational Ninth Circuit authority establishing that § 230 does not bar claims arising from a platform's own first-party conduct and representations, as opposed to claims that hold the platform responsible for third-party content. Barnes drew the line between publisher liability (barred) and independent duty arising from a platform's own words and conduct (not barred). The case is the direct ancestor of Estate of Bride v. Yolo Technologies, in which the Ninth Circuit extended this reasoning to promises in terms of service.
View on CourtListener →

Doe v. MySpace, Inc.
Issue: Whether § 230 bars negligence and gross negligence claims against MySpace for failing to implement adequate age-verification measures that allegedly would have prevented a minor user from being sexually assaulted by an adult she met on the platform.
Why It Matters: An important early Fifth Circuit application of § 230 to negligence claims arising from online predatory conduct. The court held that § 230 applies regardless of how a plaintiff frames a claim — if the underlying duty would require the platform to monitor, screen, or regulate user content and communications, the claim is barred. Doe v. MySpace reflects the broad pre-Lemmon approach to § 230 that treated platform safety measures as editorial functions immune from liability claims.
View on CourtListener →