ILS Legal Monitor

First Amendment · Section 230 · AI Liability

Nerdy Skynet!

March 13, 2026

Coverage: 2026-03-11 through 2026-03-13   ·   18 new developments this period

First Amendment 3 items
▷ Class Action Complaint (theories to be determined from full complaint)

Angwin v. Superhuman Platform, Inc.

District Court, S.D. New York  · 2026-03-11  · Superhuman Platform, Inc.

Class Action Complaint (theories to be determined from full complaint) Complaint

Issue: Whether Superhuman Platform, Inc. can be held liable under theories alleged in a class action complaint brought by Julia Angwin on behalf of herself and a putative class (specific theories not provided in excerpt).

Plaintiff Julia Angwin filed a class action complaint against Superhuman Platform, Inc. in the Southern District of New York with a jury trial demand. The excerpt provides only the case caption and procedural header; the substantive allegations, legal theories, and factual basis are not included in the provided text. Based on the class action format and the defendant's identification as a "Platform," the case likely involves claims arising from the platform's AI systems, services, or outputs affecting a class of users or third parties.

Why it matters: This appears to be a new AI liability case against a platform entity, potentially involving product liability, negligence, consumer protection, or speech tort theories arising from AI-generated content or platform design. The class action format suggests allegations of systematic harm rather than an individual incident, which could have broader implications for AI platform liability standards if the case survives a motion to dismiss and proceeds to class certification. Without the substantive allegations, the precise doctrinal significance cannot yet be assessed, but a class liability action against an AI platform is a potentially significant development in the emerging AI tort landscape.

Read document →

▷ Speech Regulation / Platform Autonomy

NetChoice, LLC v. Bonta

Court of Appeals for the Ninth Circuit  · 2 filings

2026-03-12  ·  Online platforms generally (represented by NetChoice trade association)

Speech Regulation / Platform Autonomy Appellate Opinion

Issue: Whether California's Age-Appropriate Design Code Act (CAADCA) — which imposes coverage requirements, age estimation mandates, data use restrictions, and dark patterns prohibitions on online services likely to be accessed by children — violates the First Amendment on its face.

The Ninth Circuit affirmed in part and vacated in part the district court's preliminary injunction against California's CAADCA on remand from a prior appeal. The panel held that NetChoice failed to carry its burden on facial challenges to the coverage definition and age estimation requirement because it did not develop a sufficient record cataloging the statute's full set of applications, as required by Moody v. NetChoice's facial-challenge framework. However, the court affirmed the injunction as to the data use and dark patterns restrictions on vagueness grounds, holding those provisions do not clearly delineate proscribed conduct. The court vacated the injunction insofar as it enjoined the entire statute, remanding for severability analysis and further proceedings.

Why it matters: This is a major post-Moody application of the Supreme Court's facial-challenge framework to state social media regulation, raising the evidentiary bar for platforms seeking to facially invalidate child-safety design mandates and holding that vague behavioral restrictions (data use, dark patterns) cannot survive First Amendment scrutiny. The decision signals that platforms challenging age-verification and design-code statutes must develop detailed records showing unconstitutional applications across the regulatory landscape — a demanding standard that may allow more narrowly tailored child-protection laws to survive preliminary review.

Read document →

Social media platforms (represented by NetChoice trade association)

Speech Regulation / Platform Autonomy Appellate Opinion

Issue: Whether California's law regulating social media platforms violates the First Amendment rights of platforms represented by NetChoice.

This appears to be a published Ninth Circuit opinion in NetChoice's facial First Amendment challenge to a California statute regulating social media platforms, with California Attorney General Rob Bonta appealing Judge Freeman's ruling in the Northern District of California. The case follows the pattern of NetChoice's challenges to state social media laws in Texas, Florida, Ohio, and Utah.

Why it matters: This is a Ninth Circuit opinion applying the Moody v. NetChoice framework to California's platform regulation statute, representing the first major circuit court application of the Supreme Court's 2024 editorial-discretion framework to a state social media law outside the Fifth and Eleventh Circuits. The published opinion will provide critical guidance on how intermediate scrutiny applies to state content-moderation mandates in the post-Moody era and may create or resolve circuit splits on the scope of platform editorial protection.

Read document →

Commentary & Analysis 15 items

Techdirt

Human Problems: It’s Not Always The Technology’s Fault

Techdirt

Commentary

The post examines lawsuits against Character.AI and OpenAI alleging that AI chatbots caused youth suicides and other harms, arguing that framing these tragedies as "technology problems" obscures underlying societal failures in mental health care, systemic support structures, and suicide prevention. The author contends that AI chatbot liability litigation follows a historical pattern of scapegoating new technologies (printing press, rock music, video games, social media) for complex human problems rooted in individual, relational, and societal factors. The post directly engages with the Garcia v. Character.AI case and related AI product liability theories, challenging the premise that chatbot design defects or algorithmic targeting—rather than systemic mental health infrastructure failures—should bear primary responsibility for youth suicide.

Key point: The post argues that AI chatbot liability lawsuits misattribute complex, multifactorial human tragedies like youth suicide to technology design rather than confronting systemic societal failures in mental health care and support systems.

Read post →

Congressional Republicans Push Bills That Would Block Kids Access To Content For Ideological Reasons

Techdirt

Commentary

The post analyzes two bills advancing in the House Energy & Commerce Committee—the App Store Accountability Act and the KIDS Act—that would require parental consent before minors can install apps or use direct messaging features on social media platforms. The author argues these measures raise serious First Amendment concerns by enabling ideological censorship: parents could use the consent requirements to block teens' access to LGBTQ support resources, contraception information, and other constitutionally protected content that conflicts with parental views, particularly affecting vulnerable teens in unsupportive households. The bills' application to non-profit educational platforms (unlike COPPA's commercial-only scope) and their potential use to enforce viewpoint-based restrictions implicate both compelled-speech doctrine and minors' independent First Amendment rights to receive information.

Key point: The bills would effectively deputize parents as government-empowered censors, allowing ideological filtering of teens' access to constitutionally protected speech on platforms and apps, raising First Amendment concerns about viewpoint-based access restrictions and minors' rights to receive information.

Read post →

EFF To Court: Don’t Make Embedding Illegal

Techdirt

Commentary

The EFF filed an amicus brief in the Fifth Circuit arguing against Emmerich Newspapers' attempt to overturn the "server test" for direct copyright infringement liability. Emmerich argues that entities embedding links to content should be directly liable for infringement, while EFF defends the longstanding rule that only the server host controlling the content can be directly liable. The post also discusses DMCA copyright management information claims related to URL modification.

Key point: The case challenges the server test for copyright liability when embedding content, potentially making common linking practices legally risky.

Read post →

Tech Policy Press

Defining Moral Reasoning as ‘Supply Chain Risk’ Threatens America’s AI Advantage—and Democracy

Tech Policy Press

Commentary

This post appears to address government regulation or characterization of AI system design choices—specifically moral reasoning capabilities—as a national security or supply chain concern. That framing implicates First Amendment questions about whether the government can restrict or mandate particular value judgments or reasoning architectures in AI systems based on their expressive content. Treating "moral reasoning" as regulable "supply chain risk" raises questions at the intersection of AI regulation, compelled or prohibited expressive architectural choices, and government content-based restriction of AI capabilities—issues left partially open by Justice Barrett's Moody concurrence on whether AI-generated outputs and AI system design choices qualify as protected expression.

Key point: The post argues that classifying AI moral reasoning capabilities as supply chain risk threatens both competitive advantage and constitutional protections for expressive AI system design, presenting a novel First Amendment frontier question about government power to regulate AI architectures based on their normative content.

Read post →

Trials Probe Tech Companies' Responsibility for Sexual Assaults and Abuse

Tech Policy Press

Commentary

This post discusses ongoing trials examining whether technology platforms can be held liable for sexual assaults and abuse facilitated through their services—a question that directly implicates Section 230's immunity scope and emerging product liability theories for platform design features. The cases likely address whether platforms' design choices (recommendation systems, anonymity features, inadequate safety measures) transform them from immune intermediaries into liable product designers, echoing the design-defect framework from Garcia v. Character Technologies and related cases challenging platforms' duty of care to prevent foreseeable harms. This represents a critical frontier in both Section 230 doctrine (whether immunity extends to design-based claims) and platform tort liability (whether platforms owe a duty to implement safeguards against facilitated crimes).

Key point: The trials test whether Section 230 immunizes platforms against liability for sexual assaults facilitated through their design features, or whether product liability and negligence theories can hold platforms accountable for failing to implement reasonable safeguards.

Read post →

Disinformation on Private Messaging Platforms Requires New Regulatory Approach

Tech Policy Press

Commentary

This post argues that disinformation propagated through private messaging platforms (e.g., WhatsApp, Signal, Telegram) requires a regulatory framework distinct from public social media moderation regimes, likely addressing tensions between encryption, user privacy, platform editorial discretion, and government content regulation mandates. The piece is relevant because any proposal to regulate content on messaging platforms implicates the Moody v. NetChoice compelled-speech framework (whether platforms can be required to monitor or moderate private communications), potential Section 230 questions if liability is proposed for user-generated harmful content in encrypted channels, and First Amendment limits on government mandates that would require platforms to undermine end-to-end encryption or implement proactive content scanning.

Key point: The post proposes or analyzes regulatory approaches to disinformation in private messaging contexts, directly implicating platform editorial autonomy, encryption policy, and First Amendment constraints on government-mandated content moderation in non-public communication channels.

Read post →

Unpacking The FTC’s Double-Edged Age-Verification Gamble

Tech Policy Press

Commentary

This post analyzes the FTC's push for age-verification requirements on digital platforms, examining the tension between child safety objectives and First Amendment concerns about compelled identity disclosure and anonymous access to protected speech. The analysis engages with the doctrinal framework from McIntyre v. Ohio Elections Commission on anonymous speech rights, Packingham v. North Carolina on platform access as a First Amendment interest, and the application of Zauderer's compelled-disclosure framework to age-gating mandates. The post is directly relevant because age-verification mandates imposed on platforms implicate both the compelled-speech doctrine (requiring platforms to collect and verify identity information) and users' constitutional right to access platforms anonymously, positioning the FTC's enforcement approach within the contested terrain of government regulation of platform editorial autonomy and user speech rights.

Key point: The FTC's age-verification enforcement strategy raises unresolved First Amendment questions about whether mandated identity collection constitutes compelled speech by platforms and whether verification requirements unconstitutionally burden users' anonymous access to digital speech forums under McIntyre and Packingham.

Read post →

If an Agent Extension Can Act as You, Marketplaces Need Minimum Duties

Tech Policy Press

Commentary

This blog post argues that as AI agent extensions gain the capability to act autonomously on behalf of users within digital marketplaces and platforms, those marketplaces should be subject to minimum regulatory duties to protect users from harmful or fraudulent agent behavior. The post likely engages with questions of platform liability, duty of care, and regulatory frameworks for AI-mediated transactions — issues that intersect with Section 230 immunity (whether platforms hosting AI agents are liable for agent-generated harms) and emerging questions about AI system accountability. The argument for "minimum duties" suggests a regulatory intervention that would impose affirmative obligations on platforms, potentially conflicting with traditional Section 230 immunity or raising First Amendment questions if those duties compel particular moderation or oversight practices.

Key point: The post advocates for imposing affirmative regulatory duties on platforms that host AI agents capable of acting autonomously on users' behalf, raising questions about the limits of platform immunity and the regulatory treatment of AI-mediated marketplace transactions.

Read post →

Shareholder Control and the New Politics of Platform Regulation

Tech Policy Press

Commentary

This post appears to address the intersection of corporate governance (shareholder control) and platform content moderation or regulatory policy. Insofar as it examines how shareholder pressure, proxy contests, or ownership structure shapes platforms' editorial decisions, content moderation practices, or responses to government regulation, it implicates both the editorial discretion protected under Moody v. NetChoice and the broader political economy of platform speech governance.

Key point: Corporate governance and shareholder control mechanisms may be emerging as indirect vectors for influencing platform content moderation and editorial policy, with potential implications for platforms' First Amendment defenses and regulatory compliance strategies.

Read post →

House GOP Moves Ahead with Kids Online Safety Package as Democrats Balk

Tech Policy Press

Commentary

The article reports on House Republican efforts to advance a kids online safety legislative package (likely including KOSA or similar bills), with Democratic opposition emerging. This directly implicates First Amendment platform autonomy questions from Moody v. NetChoice, potential compelled content moderation or design requirements, and the intersection of age-appropriate design codes with platforms' editorial discretion. Legislative proposals mandating how platforms must treat youth users, restrict recommendation algorithms, or implement design features trigger the same First Amendment framework analyzed in the Texas and Florida social media cases.

Key point: Federal legislative proposals regulating platform design and content moderation in the name of child safety raise unsettled First Amendment questions about government power to compel or restrict platform editorial choices, particularly after Moody v. NetChoice established that platform curation is protected expressive activity.

Read post →

People Have the Right to Refuse AI

Tech Policy Press

Commentary

This blog post appears to discuss individual rights to refuse interactions with AI systems, which may implicate emerging questions about AI liability frameworks, consent requirements for AI deployment, and potentially First Amendment considerations regarding compelled engagement with AI-generated speech or AI-mediated services. The post likely addresses policy and legal frameworks for opt-out rights, user autonomy, and the boundaries of mandatory AI integration in consumer-facing services. This intersects with the newsletter's AI liability pillar if it discusses tort theories (duty to obtain consent, negligence in deployment), and potentially the First Amendment pillar if it addresses compelled interaction with AI systems as a speech or association issue.

Key point: The post advocates for legal recognition of individual rights to decline AI-mediated interactions, raising questions about consent frameworks, liability for non-consensual AI deployment, and potential constitutional dimensions of mandatory AI engagement.

Read post →

February 2026 US Tech Policy Roundup

Tech Policy Press

Commentary

This is a monthly roundup aggregating US tech policy developments from February 2026. The title and format suggest coverage spanning platform regulation, Section 230 litigation, AI policy, and First Amendment issues—all core newsletter topics. Monthly roundups from Tech Policy Press historically synthesize court filings, regulatory actions, and legislative developments across the newsletter's three pillars, making them useful for identifying developments that may have been missed in primary source monitoring.

Key point: A monthly policy roundup that likely aggregates multiple developments across Section 230, First Amendment platform regulation, and AI liability during February 2026; individual items will require review of the full text to assess substantive relevance.

Read post →

Wyoming’s GRANITE Act Hints at Global Speech Battle to Come

Tech Policy Press

Commentary

This post discusses Wyoming's GRANITE Act, a state law regulating technology platforms' speech practices, and positions it within the broader context of international speech regulation battles. The post likely analyzes the Act's constitutional implications under the First Amendment framework established in Moody v. NetChoice, and examines how state-level platform regulation intersects with global regulatory trends. This represents a frontier question in platform speech regulation: how state laws governing content moderation or platform editorial practices will survive First Amendment scrutiny post-Moody, and how domestic constitutional constraints interact with extraterritorial regulatory pressures.

Key point: Wyoming's GRANITE Act represents an emerging state-level approach to platform speech regulation that implicates both the Moody v. NetChoice First Amendment framework and the growing tension between domestic constitutional protections and international regulatory models.

Read post →

Anthropomorphism Is Breaking Our Ability to Judge AI

Tech Policy Press

Commentary

This post addresses how anthropomorphic design choices in AI chatbot systems distort legal and regulatory judgments about AI liability, directly engaging with the central doctrinal question raised in Garcia v. Character Technologies: whether AI systems designed to simulate human intimacy and emotional connection constitute "products" subject to traditional product liability frameworks, or whether their outputs should be treated as protected speech immunizing developers from tort liability. The anthropomorphism question is critical to determining whether design features that encourage users to perceive AI as sentient companions constitute actionable design defects, and whether courts should apply consumer protection and product safety frameworks or First Amendment scrutiny to AI chatbot architecture.

Key point: The post argues that anthropomorphic AI design choices systematically undermine the ability of courts, regulators, and the public to correctly assess AI developer liability by obscuring whether harm arises from a defective product or from protected expressive content — the core unresolved question in the emerging AI tort litigation wave.

Read post →

Techdirt

Don’t Ban Kids From Using Chatbots

Techdirt

Commentary

This post analyzes proposed federal and state legislation (Senator Hawley's GUARD Act, Virginia, Oklahoma, and California bills) that would prohibit minors from accessing AI chatbots capable of generating conversational content. The author argues these age-gating mandates are content-based restrictions on minors' First Amendment right to receive information, subject to strict scrutiny, and would be struck down as unconstitutionally overbroad because less restrictive means (parental controls, safeguards against sexually explicit content) are available. The analysis directly engages the intersection of AI regulation, minors' First Amendment rights established in *Packingham* and related precedent, and the constitutional limits on age-verification mandates for expressive AI systems—an emerging question flagged in the high-priority tracking areas.

Key point: Federal and state proposals to categorically ban minors from accessing AI chatbots constitute content-based speech restrictions that would fail strict scrutiny under the First Amendment because they sweep far beyond unprotected obscene-to-minors content and are not narrowly tailored when less restrictive alternatives exist.

Read post →

Sources: CourtListener API  ·  All 13 federal circuit RSS feeds  ·  All 50 state supreme courts + intermediate appellate courts (8 states) via Justia  ·  Eric Goldman  ·  Techdirt
 Generated automatically. Next edition in approximately 3–4 days. 

Unsubscribe