ILS Legal Monitor

First Amendment · Section 230 · AI Liability

Nerdy Skynet!

April 14, 2026

Coverage: 2026-04-07 through 2026-04-14   ·   5 new developments this period

Section 230  ·  1 item

Commonwealth v. Meta Platforms, Inc.

Massachusetts Supreme Judicial Court  · 2026-04-10  · Meta (Instagram)

Section 230 · First Amendment · Appellate Opinion

Issue: *Commonwealth v. Meta Platforms, Inc.* asks whether Section 230 of the Communications Decency Act bars Massachusetts consumer protection and public nuisance claims against Meta arising from Instagram's deliberate engineering of features—including infinite scroll, autoplay, intermittent variable-reward notifications, and ephemeral content—designed to exploit adolescent neurological vulnerabilities. The question is non-obvious because Meta's algorithmic and design choices are intertwined with the platform's publication of third-party content, and federal courts have divided sharply on whether claims targeting such features are shielded as inherent to a publisher's role or survive as challenges to a platform's independent engineering decisions.

The Commonwealth filed suit in Superior Court in October 2023, alleging Meta knowingly built addictive product features into Instagram while publicly misrepresenting the platform's safety and maintaining ineffective age-verification systems despite knowing minors used it at scale. Judge Krupp denied Meta's Rule 12(b)(6) motion to dismiss on Section 230 grounds, and the Massachusetts Supreme Judicial Court granted direct appellate review limited to that immunity question. In a fully precedential published opinion dated April 10, 2026, the SJC affirmed the denial of dismissal, holding that Section 230(c)(1) bars only claims satisfying both a dissemination element and a content element rooted in third-party material—and that neither the addictive-design claims nor the affirmative-misrepresentation claims meet that dual requirement. The court grounded its analysis in the common-law defamation principles that informed Section 230's enactment and expressly rejected the broader immunity reading applied by the Northern District of California MDL handling parallel social media addiction litigation.

Why it matters: This ruling introduces a structurally distinct analytical framework—requiring both a dissemination element and a content element to trigger Section 230 immunity—that most federal courts have not articulated at this level of precision, and it squarely holds that addictive-design features are content-neutral as a matter of law because their alleged harm is independent of what any third party posts. By explicitly criticizing the N.D. Cal. MDL decisions and flagging the pending Ninth Circuit appeal in *California v. Meta Platforms* as presenting the same issues, the SJC openly anticipates a federal-state conflict that could fragment the national legal landscape for every state attorney general pursuing analogous claims. Significant questions remain open on remand, including Meta's dormant Commerce Clause, First Amendment, and other preemption defenses—any of which could independently limit or defeat the claims—and the opinion leaves unresolved where the line falls for features that curate or rank third-party content rather than merely delivering it through an engineered format.

Read full opinion →

First Amendment  ·  1 item

X.AI LLC v. Weiser

District Court, D. Colorado  · 2026-04-09  · xAI (Grok)

First Amendment · AI Liability · Complaint

Issue: In *X.AI LLC v. Weiser*, xAI argues that Colorado SB24-205—a state statute regulating "high-risk" AI systems that takes effect June 30, 2026—unconstitutionally compels the company to redesign its Grok AI to reflect Colorado's preferred positions on diversity and equity, restricts protected editorial decisions about training data and model outputs, and extraterritorially burdens AI development conducted entirely outside Colorado. The case raises the unsettled question of whether a state anti-discrimination mandate applied to AI development constitutes compelled speech under the First Amendment or a permissible conduct regulation with only incidental effects on speech—a distinction courts have not resolved in the AI context.

X.AI LLC filed this complaint on April 9, 2026, in the U.S. District Court for the District of Colorado, naming Colorado Attorney General Philip J. Weiser as defendant in a pre-enforcement challenge to SB24-205 before the statute's June 30, 2026 effective date. The complaint asserts four constitutional claims: First Amendment compelled speech, Dormant Commerce Clause extraterritoriality, void for vagueness under the Due Process Clause, and equal protection. xAI's core theory is that every training-data selection and fine-tuning decision embedded in Grok is protected expressive conduct, and that requiring "algorithmic discrimination" mitigation forces xAI to adopt the state's ideological preferences, relying principally on *303 Creative v. Elenis* (2023) and *Moody v. NetChoice* (2024). The complaint also contends that statutory terms such as "high-risk AI system" and "algorithmic discrimination" are left to future Attorney General rulemaking, creating unconstitutional vagueness compounded by exposure of up to $20,000 per violation. Relief sought is a declaratory judgment of unconstitutionality and preliminary and permanent injunctive relief barring enforcement.

Why it matters: This is among the first direct constitutional challenges to a state AI-regulation statute, and the court's treatment of xAI's compelled-speech theory will signal how far *303 Creative* and *Moody v. NetChoice* extend into the emerging AI regulatory space. The case puts in direct tension two competing post-*NIFLA* frameworks: the state's likely characterization of SB24-205 as conduct-based consumer protection, and xAI's characterization of algorithmic curation as protected editorial judgment—a question with implications for every AI company subject to state AI laws modeled on Colorado's. The Dormant Commerce Clause and vagueness claims, if successful, could invalidate "doing business in state" AI compliance triggers more broadly and constrain how states may delegate definitional authority to regulators in technology statutes. Colorado is not alone—similar legislation is advancing in other states—so the outcome here is likely to be watched as a template for or against constitutional challenges to the AI regulatory wave.

Read full complaint →

Commentary & Analysis 3 items

Techdirt

Techdirt Podcast Episode 449: The Dangers Of Product Design Liability For Social Media

Techdirt  · 2026-04-08

Commentary

The post summarizes a podcast discussion about recent court rulings in California and New Mexico holding that social media companies can be held liable under product design liability theories for certain platform features. The conversation, featuring Techdirt's Mike Masnick on FIRE's So to Speak podcast, addresses the First Amendment concerns raised by imposing product design liability on social media platforms — a doctrinal area intersecting with Section 230 immunity and platform editorial discretion under Moody v. NetChoice. These rulings are directly relevant to the newsletter's tracking of how product liability theories interact with First Amendment protections for platform design choices and whether such claims are preempted by Section 230.

Key point: Courts in California and New Mexico applying product design liability to social media platforms raise significant First Amendment and Section 230 concerns about whether platform architecture and design decisions can be regulated through tort law without infringing on protected editorial discretion.

Read post →

Trump’s Two-Faced AI Policy

Techdirt  · 2026-04-11

Commentary

The post examines the Trump administration's contradictory AI policy — publicly embracing deregulation while simultaneously threatening AI companies like Anthropic with supply chain risk designations and executive coercion to remove ideological content restrictions from their models. This raises significant First Amendment jawboning concerns, as the administration's threats to impose "major civil and criminal consequences" and procurement bans unless Anthropic removed usage constraints mirror the government coercion doctrine addressed in Bantam Books and Backpage.com v. Dart. Anthropic's resulting federal lawsuit challenging the designation directly implicates whether government pressure on AI providers to alter content policies constitutes unconstitutional compelled speech or unlawful retaliation.

Key point: The Trump administration's use of supply chain security designations and procurement bans to coerce Anthropic into removing content constraints on its AI models presents a novel jawboning scenario — applying government-pressure-on-intermediary doctrine to an AI developer rather than a traditional content platform.

Read post →

Eric Goldman (Technology & Marketing Law Blog)

With Opinions Like This, Congress Doesn’t Need to Repeal Section 230–Massachusetts v. Meta

Eric Goldman (Technology & Marketing Law Blog)  · 2026-04-12

Commentary

Eric Goldman analyzes the Massachusetts Supreme Judicial Court's unanimous holding that Section 230 does not immunize Meta from state AG claims alleging unfair and deceptive business practices arising from Instagram's design targeting children and Meta's alleged misrepresentations about platform safety. Goldman finds the court's statutory analysis results-oriented and deeply flawed — particularly its omission of the word "speaker" from § 230(c)(1) and its reliance on the much-criticized Fourth Circuit Henderson decision. The post identifies the case as part of a broader wave of state AG lawsuits running parallel to the federal social media addiction MDL, and flags the court's novel two-part "dissemination and content" test for publisher liability as a significant doctrinal development that could severely limit Section 230's scope in Massachusetts. This matters because the decision represents a state high court substantially narrowing § 230 immunity for platform design and deceptive-practice claims — directly implicating the core Zeran publisher/speaker framework and the product design carve-out question central to the newsletter's coverage.

Key point: The Massachusetts Supreme Judicial Court's holding that § 230 immunity requires both a "dissemination element" and a "content element" effectively strips immunity from platform design-defect and deceptive-practice claims, representing one of the most significant state-court limitations on Section 230 in the statute's history.

Read post →

Sources: CourtListener API  ·  All 13 federal circuit RSS feeds  ·  All 50 state supreme courts + intermediate appellate courts (8 states) via Justia  ·  Eric Goldman  ·  Techdirt
 Generated automatically. Next edition in approximately 3–4 days. 

Unsubscribe