First Amendment · Section 230 · AI Liability
Nerdy Skynet!
March 24, 2026
Coverage: 2026-03-20 through 2026-03-24 · 4 new developments this period
▷ Speech Regulation / Platform Autonomy
State of Texas v. Snap Inc.
District Court, E.D. Texas · 2026-03-21 · Snap Inc. (Snapchat)
Speech Regulation / Platform Autonomy · Section 230 · Other
Issue: Whether Snap may remove to federal court under the federal officer removal statute, and whether the First Amendment and Section 230 constitute colorable federal defenses against Texas DTPA and SCOPE Act claims targeting Snapchat's content ratings, safety disclosures, and parental control obligations.
The State of Texas filed a twelve-count action in Texas state court alleging that Snap violated the Texas Deceptive Trade Practices Act by misrepresenting Snapchat's content maturity ratings in the Apple, Google Play, and Microsoft app stores, and violated the SCOPE Act by failing to provide adequate parental verification and supervision tools. Snap removed to federal court under 28 U.S.C. § 1442(a)(1), the federal officer removal statute, arguing that its work assisting DHS (Know2Protect, Pledge2Protect, and Blue Campaign) and the FDA (tobacco/vaping public health campaigns) constitutes action "under color of federal office" sufficient to confer federal jurisdiction. In its notice of removal, Snap expressly identified Section 230 and the First Amendment as colorable federal defenses, arguing that the DTPA claims impermissibly compel disclosure of highly subjective opinions about content harms (citing NetChoice v. Bonta), that platform features like notifications and lenses implicate Snap's own protected speech (citing Moody v. NetChoice), and that the SCOPE Act's content-based exemptions render it unconstitutional under strict scrutiny.
Why it matters: This case sits at a significant intersection of First Amendment compelled-speech doctrine and state child-safety platform regulation, directly implicating the Moody v. NetChoice framework as applied to disclosure and content-rating mandates. Snap's explicit invocation of Section 230 as a colorable federal defense to state consumer-protection claims targeting platform safety representations also tracks the growing debate over whether Section 230 and First Amendment defenses can preempt state AG enforcement actions aimed at platform design and content policies.
Read full opinion →
Commentary & Analysis · 3 items
Eric Goldman (Technology & Marketing Law Blog)
Section 230’s Application to Account Terminations, CSAM, and More
2026-03-20 · Commentary
The post surveys several recent Section 230 decisions, including a California appellate court applying §230(c)(1) to immunize Google's suspension of a financial services advertiser's account (Weiss v. Google), an Eastern District of Pennsylvania case analyzing the elements of §230(c)(2)(A)'s good-faith defense in the context of a deactivated livestreaming account (Thompson v. The Meet Group), and a Northern District of California case addressing right-of-publicity claims against Ancestry.com for use of yearbook images in promotional emails (Gehringer v. Ancestry.com). Goldman highlights doctrinal tensions including the content moderation dilemma created by liability in both directions for ad removals, the strategic choice between §230(c)(1) and §230(c)(2)(A) defenses (particularly given Anderson v. TikTok's constraining effect in the Third Circuit), and the interplay between copyright preemption and state right-of-publicity claims. These cases collectively illuminate how §230 operates — and sometimes fails to operate — in account termination and advertising contexts.
Key point: Courts continue to confirm broad §230(c)(1) immunity for platform account terminations and content removal decisions, while Goldman underscores the practical and doctrinal risks of defendants electing §230(c)(2)(A) over §230(c)(1), especially in circuits bound by Anderson v. TikTok.
Read post →
Tech Policy Press
Transcript: Senate Commerce Hearing on 30 Years of Section 230
2026-03-20 · Commentary
This transcript covers a Senate Commerce Committee hearing marking the 30th anniversary of Section 230 of the Communications Decency Act, providing a primary-source record of congressional testimony and debate over the statute's scope, reform proposals, and continued relevance to platform liability. Senate hearings on Section 230 reform bear directly on the newsletter's core tracking area — the statutory framework governing platform immunity — and frequently surface the contested doctrinal questions (algorithmic amplification, AI-generated content, the ICP exception) that remain unresolved in litigation. The hearing canvasses the full landscape of proposed reforms, including FOSTA, KOSA, and AI-specific amendments, making the transcript a significant policy document for readers tracking where §230 doctrine may be heading legislatively.
Key point: A Senate Commerce Committee hearing on Section 230's 30th anniversary is a high-priority primary source for tracking congressional sentiment and potential reform proposals that could reshape platform immunity doctrine across all three pillars of the newsletter's coverage.
Read post →
Techdirt
Rep. Finke Was Right: Age-Gating Isn’t About Kids, It’s About Control
2026-03-21 · Commentary
The post analyzes Minnesota's proposed HF1434 age-verification bill, arguing it unconstitutionally burdens First Amendment rights of both adults and minors by mandating identity verification (via government ID or biometrics) for access to broadly defined "harmful to minors" content that sweeps in protected speech about sexual orientation, gender identity, and reproductive health. It contextualizes the proposal against *Free Speech Coalition v. Paxton*, in which the Supreme Court upheld Texas's narrower age-verification law for explicit sexual content, warning that Minnesota's broader definitional scope goes well beyond what *Paxton* sanctions. The post is directly relevant to the newsletter's tracking of government-mandated access controls on internet platforms, compelled disclosure of user identity, and First Amendment limits on state regulation of online speech.
Key point: Age-verification mandates like Minnesota's HF1434 exceed the constitutional bounds established in *FSC v. Paxton* by extending "harmful to minors" definitions to cover protected speech about sexuality and gender identity, threatening both user anonymity and adults' First Amendment right to access lawful online content.
Read post →
Sources: CourtListener API · All 13 federal circuit RSS feeds · All 50 state supreme courts + intermediate appellate courts (8 states) via Justia · Eric Goldman · Techdirt
◆ Generated automatically. Next edition in approximately 3–4 days. ◆