ILS Legal Monitor

First Amendment · Section 230 · AI Liability

Nerdy Skynet!

March 24, 2026

Coverage: 2026-03-17 through 2026-03-24   ·   7 new developments this period

First Amendment 2 items
▷ Defamation and Speech Torts

Dr. Lana Foster v. Shannon King

Eleventh Circuit  · Date not identified  · No platform identified (excerpt suggests a traditional defamation case rather than a platform-focused one)

Defamation and Speech Torts · Appellate Opinion

Issue: Whether statements made by defendant Shannon King about plaintiff Dr. Lana Foster constitute actionable defamation under applicable First Amendment standards.

The Eleventh Circuit is reviewing a defamation claim brought by Dr. Lana Foster against Shannon King. The available excerpt contains only the case caption and document header, indicating an appellate opinion in a defamation dispute. The substantive analysis, the parties' arguments, and the court's holding are not visible, so it cannot be determined from the excerpt whether the case involves a technology platform, algorithmic amplification, Section 230 immunity, or any other issue within the newsletter's core scope.

Why it matters: Indeterminate from the excerpt. If the allegedly defamatory statements were published or amplified through a social media platform asserting Section 230 immunity, or if AI-generated defamatory content is involved, the case would be highly significant; the excerpt does not establish either. Human review of the full opinion is recommended to determine whether this is a traditional interpersonal defamation case (outside scope) or a platform/AI defamation case (within scope).

Read full opinion →

▷ Speech Regulation / Platform Autonomy

State of Texas v. Snap Inc.

District Court, E.D. Texas  · 2026-03-21  · Snap Inc. (Snapchat)

Speech Regulation / Platform Autonomy · Section 230 · Other

Issue: Whether Snap may remove to federal court under the federal officer removal statute, and whether the First Amendment and Section 230 constitute colorable federal defenses against Texas DTPA and SCOPE Act claims targeting Snapchat's content ratings, safety disclosures, and parental control obligations.

The State of Texas filed a twelve-count action in Texas state court alleging that Snap violated the Texas Deceptive Trade Practices Act by misrepresenting Snapchat's content maturity ratings in the Apple, Google Play, and Microsoft app stores, and violated the SCOPE Act by failing to provide adequate parental verification and supervision tools. Snap removed to federal court under 28 U.S.C. § 1442(a)(1), the federal officer removal statute, arguing that its work assisting DHS (Know2Protect, Pledge2Protect, and Blue Campaign) and the FDA (tobacco/vaping public health campaigns) constitutes action "under color of federal office" sufficient to confer federal jurisdiction. In its notice of removal, Snap expressly identified Section 230 and the First Amendment as colorable federal defenses, arguing that the DTPA claims impermissibly compel disclosure of highly subjective opinions about content harms (citing NetChoice v. Bonta), that platform features like notifications and lenses implicate Snap's own protected speech (citing Moody v. NetChoice), and that the SCOPE Act's content-based exemptions render it unconstitutional under strict scrutiny.

Why it matters: This case sits at the intersection of First Amendment compelled-speech doctrine and state child-safety platform regulation, directly implicating the Moody v. NetChoice framework as applied to disclosure and content-rating mandates. Snap's explicit invocation of Section 230 as a colorable federal defense to state consumer protection claims targeting its safety representations also tracks the growing debate over whether Section 230 and First Amendment defenses can preempt state AG enforcement actions aimed at platform design and content policies.

Read full opinion →

Commentary & Analysis 5 items

Techdirt

Brendan Carr Pretends To Be Tough, Demands Broadcasters Support Disastrous War

Techdirt  · 2026-03-17

Commentary

This post analyzes FCC Chairman Brendan Carr's public threat to revoke broadcast licenses of news outlets that report critically on the Trump administration's Iran war, framing it as government jawboning designed to chill protected journalistic speech. The author argues that Carr's threat (claiming broadcasters reporting "hoaxes and news distortions" risk losing licenses for failing to operate "in the public interest") constitutes unconstitutional coercion under the First Amendment's prohibition on government officials pressuring private intermediaries to suppress disfavored speech, even though the FCC's actual enforcement authority is limited and any license revocation would face insurmountable legal challenges. The post connects this to a broader pattern of Trump administration officials threatening media companies with regulatory retaliation for critical coverage, applying the Bantam Books/Backpage/Murthy framework for distinguishing permissible government criticism from unconstitutional threats that leverage official power.

Key point: FCC Chair Carr's public threat to deny broadcast license renewals to outlets reporting critically on government war policy represents potential First Amendment jawboning—using regulatory authority to coerce editorial compliance—though the threat's practical enforceability is limited by decades of First Amendment precedent protecting editorial independence.

Read post →

Tech Policy Press

A Building Code for Digital Infrastructures

Tech Policy Press  · 2026-03-17

Commentary

This post appears to propose a regulatory framework analogizing platform governance to building codes, suggesting that digital infrastructures should be subject to mandatory design standards similar to physical-infrastructure safety requirements. To the extent the post argues for government-mandated platform design requirements, it implicates the Moody v. NetChoice framework governing whether such mandates burden platforms' editorial discretion and expressive choices. The "building code" metaphor frames platforms as non-expressive infrastructure subject to common-carrier-style obligations, directly contesting the editorial-discretion doctrine established in Moody and raising questions about whether compelled design features amount to compelled speech.

Key point: The post likely advocates for treating platform architecture as regulable infrastructure rather than protected editorial expression, engaging the core doctrinal tension between common-carrier and publisher models that Moody v. NetChoice addressed but left partially unresolved for non-expressive platform functions.

Read post →

Eric Goldman (Technology & Marketing Law Blog)

Section 230’s Application to Account Terminations, CSAM, and More

Eric Goldman (Technology & Marketing Law Blog)  · 2026-03-20

Commentary

The post surveys several recent Section 230 decisions, including a California appellate court applying §230(c)(1) to immunize Google's suspension of a financial services advertiser's account (Weiss v. Google), an Eastern District of Pennsylvania case analyzing the elements of §230(c)(2)(A)'s good-faith defense in the context of a deactivated livestreaming account (Thompson v. The Meet Group), and a Northern District of California case addressing right-of-publicity claims against Ancestry.com for use of yearbook images in promotional emails (Gehringer v. Ancestry.com). Goldman highlights doctrinal tensions including the content moderation dilemma created by liability in both directions for ad removals, the strategic choice between §230(c)(1) and §230(c)(2)(A) defenses (particularly given Anderson v. TikTok's constraining effect in the Third Circuit), and the interplay between copyright preemption and state right-of-publicity claims. These cases collectively illuminate how §230 operates — and sometimes fails to operate — in account termination and advertising contexts.

Key point: Courts continue to confirm broad §230(c)(1) immunity for platform account terminations and content removal decisions, while Goldman underscores the practical and doctrinal risks of defendants electing §230(c)(2)(A) over §230(c)(1), especially in circuits bound by Anderson v. TikTok.

Read post →

Tech Policy Press

Transcript: Senate Commerce Hearing on 30 Years of Section 230

Tech Policy Press  · 2026-03-20

Commentary

This transcript covers a Senate Commerce Committee hearing marking the 30th anniversary of Section 230 of the Communications Decency Act, providing a primary-source record of congressional testimony and debate on the statute's scope, reform proposals, and continued relevance to platform liability. Senate hearings on Section 230 reform bear directly on the newsletter's core tracking area — the statutory framework governing platform immunity — and frequently surface the contested doctrinal questions (algorithmic amplification, AI-generated content, the ICP exception) that remain unresolved in litigation. A 30th-anniversary hearing is likely to canvass the full landscape of proposed reforms, including FOSTA, KOSA, and AI-specific amendments, making it a significant policy document for readers tracking where §230 doctrine may be heading legislatively.

Key point: A Senate Commerce Committee hearing on Section 230's 30th anniversary is a high-priority primary source for tracking congressional sentiment and potential reform proposals that could reshape platform immunity doctrine across all three pillars of the newsletter's coverage.

Read post →

Techdirt

Rep. Finke Was Right: Age-Gating Isn’t About Kids, It’s About Control

Techdirt  · 2026-03-21

Commentary

The post analyzes Minnesota's proposed HF1434 age-verification bill, arguing it unconstitutionally burdens First Amendment rights of both adults and minors by mandating identity verification (via government ID or biometrics) for access to broadly defined "harmful to minors" content that sweeps in protected speech about sexual orientation, gender identity, and reproductive health. It contextualizes the proposal against *Free Speech Coalition v. Paxton*, in which the Supreme Court upheld Texas's narrower age-verification law for explicit sexual content, warning that Minnesota's broader definitional scope goes well beyond what *Paxton* sanctions. The post is directly relevant to the newsletter's tracking of government-mandated access controls on internet platforms, compelled disclosure of user identity, and First Amendment limits on state regulation of online speech.

Key point: Age-verification mandates like Minnesota's HF1434 exceed the constitutional bounds established in *FSC v. Paxton* by extending "harmful to minors" definitions to cover protected speech about sexuality and gender identity, threatening both user anonymity and adults' First Amendment right to access lawful online content.

Read post →

Sources: CourtListener API  ·  All 13 federal circuit RSS feeds  ·  All 50 state supreme courts + intermediate appellate courts (8 states) via Justia  ·  Eric Goldman  ·  Techdirt
Generated automatically. Next edition in approximately 3–4 days.

Unsubscribe