ILS Legal Monitor

First Amendment · Section 230 · AI Liability

Nerdy Skynet!

March 31, 2026

Coverage: 2026-03-27 through 2026-03-31   ·   5 new developments this period

Commentary & Analysis   ·   5 items

Techdirt

Funniest/Most Insightful Comments Of The Week At Techdirt

Techdirt  · 2026-03-30

Commentary

The post features a substantive reader comment on the design-defect vs. content distinction in Section 230 litigation against Meta/Instagram. The comment directly addresses whether addictive design features (infinite scroll, autoplay, algorithmic recommendations) are separable from user-generated content for purposes of §230 immunity. The commenter argues that the "product design" carve-out used by courts in the California and New Mexico cases is analytically flawed: the allegedly harmful design features are inert without the underlying content they deliver, making the design/content distinction a false dichotomy. This goes to the heart of the most contested current §230 question, namely whether platform architecture and algorithmic recommendation systems constitute publisher activity immune under §230 or fall outside the statute's scope as independent design choices.

Key point: The comment contends that framing Instagram's addictive design as a non-content product liability claim misreads §230, because the allegedly harmful design features are inseparable from the user-generated content they curate and deliver — undermining the product-design carve-out that allowed the Meta cases to proceed to trial.

Read post →

Hegseth’s War On Anthropic Encounters The First Amendment

Techdirt  · 2026-03-30

Commentary

This post covers a preliminary injunction issued against the Trump administration and Secretary Hegseth over retaliatory actions taken against Anthropic, an AI developer, after it publicly disputed Department of Defense contract terms. The challenged actions included a government-wide contract ban, a requirement that defense contractors sever ties with Anthropic, and a "supply chain risk" designation. The court found these measures likely violated the Fifth Amendment's due process protections and the Administrative Procedure Act, and the post analyzes the constitutional and statutory limits on government retaliation against a disfavored technology company. The case is directly relevant as a jawboning/government-coercion matter involving an AI developer, sitting at the intersection of First Amendment retaliation doctrine and government pressure on technology companies.

Key point: The court concluded that the government's retaliatory escalation against Anthropic, which went beyond merely terminating its contracts, likely crossed constitutional and statutory lines; the preliminary injunction restores the status quo while preserving the government's right to lawfully stop doing business with the AI company.

Read post →

The Missouri v. Biden ‘Settlement’ Is A Fake Victory For A Case They Lost

Techdirt  · 2026-03-30

Commentary

The post analyzes the Trump administration's "settlement" in Missouri v. Biden (decided by the Supreme Court as Murthy v. Missouri), arguing that the settlement misrepresents a case the plaintiffs definitively lost on standing grounds. It walks through the Court's reasoning that the platforms were exercising independent editorial judgment rather than capitulating to government coercion. The piece is directly relevant to the newsletter's jawboning/government-coercion category: it examines the doctrinal core of Murthy v. Missouri (what level of government pressure on social media platforms constitutes unconstitutional coercion versus permissible communication) and explains why the Supreme Court found no traceable link between government contact and platform moderation decisions. This matters for tracking the post-Murthy landscape, since the settlement and its political framing could affect how future plaintiffs construct standing and coercion arguments in government-jawboning cases.

Key point: The Techdirt post argues that the Missouri v. Biden "settlement" is political theater masking a complete doctrinal defeat: the Supreme Court said five separate times that there was "no evidence" of government-coerced censorship and found that the platforms were making independent editorial decisions, establishing a demanding evidentiary standard for future government-jawboning claims.

Read post →

Eric Goldman (Technology & Marketing Law Blog)

Comments on the Jury Verdict in the Los Angeles Social Media Addiction Bellwether Trial (Expanded/Updated)

Eric Goldman (Technology & Marketing Law Blog)  · 2026-03-30

Commentary

Eric Goldman analyzes a $3M Los Angeles jury verdict against Meta and YouTube in a social media addiction bellwether trial, discussing the appellate grounds available to the defendants: whether products liability doctrine applies to intangible services, causation challenges, and, critically, whether Section 230 immunity was correctly rejected below on the theory that platform design choices are distinct from publisher decisions about third-party content. Goldman argues that the trial court's line between "design" choices and editorial publication decisions about third-party content is illusory, and that the appellate court will have to resolve the distinction, which directly implicates the core Section 230 question of whether algorithmic curation and platform configuration constitute immune publisher activity. The post also flags First Amendment implications and notes that the verdict, combined with parallel legislation, poses existential liability risk to the social media industry.

Key point: The central appellate issue is whether Section 230 immunizes social media platforms' configuration and design choices — including algorithmic curation — as publisher decisions about third-party content, or whether plaintiffs can reframe those choices as non-immune product design defects, a question that goes to the heart of the Gonzalez-era debate over algorithmic recommendation liability.

Read post →

Tech Policy Press

Landmark Verdicts Could Unleash New Legal Playbook Over Social Media Harms

Tech Policy Press  · 2026-03-30

Commentary

The post discusses how recent landmark jury verdicts against social media platforms could signal a shift in litigation strategy for social media harm cases, potentially circumventing Section 230 immunity through novel legal theories targeting platform design and product liability rather than content moderation decisions. This is directly relevant to the newsletter's coverage of Section 230's scope, particularly the evolving question of whether product liability and design defect claims against platforms can survive Section 230 immunity by framing harms as arising from platform architecture rather than third-party content. The piece likely engages with the tension between Section 230 publisher immunity and the growing judicial and legislative appetite for holding platforms liable for the foreseeable harms caused by their algorithmic and design choices.

Key point: Recent verdicts may validate a product liability litigation playbook that reframes social media harms as design defect claims, potentially carving out space for plaintiff recovery outside Section 230's immunity shield.

Read post →

Sources: CourtListener API  ·  All 13 federal circuit RSS feeds  ·  All 50 state supreme courts + intermediate appellate courts (8 states) via Justia  ·  Eric Goldman  ·  Techdirt
Generated automatically. Next edition in approximately 3–4 days.
