NetChoice v. Ellison
Issue: In *NetChoice v. Ellison*, plaintiff NetChoice argues that a Minnesota law requiring social media platforms to display a state-authored mental-health warning to every user at the start of every session — with no option for users to permanently dismiss it — unconstitutionally compels private speech in violation of the First Amendment. The case turns on whether the more demanding strict-scrutiny standard applies, or whether the government can defend the mandate under the more permissive *Zauderer* framework, which permits rational-basis review for purely factual disclosures in commercial advertising contexts. The question is legally significant because no Supreme Court ruling has definitively settled whether *Zauderer* can reach beyond its original advertising context to cover a warning displayed across a general-purpose speech platform.
Why It Matters: Minnesota's law is among the first state social-media warning-label statutes positioned to take effect following the wave of legislation enacted between 2023 and 2025, meaning a ruling here — even at the preliminary injunction stage — will carry significant persuasive weight in the dozen or more similar cases still pending in federal courts. A preliminary injunction granted by the District of Minnesota would further solidify the pattern of judicial resistance to state-mandated mental-health warnings, while a decision allowing enforcement could fracture that emerging consensus and accelerate the path to Supreme Court review. The case also gives a federal court an early opportunity to address the question *NIFLA* deliberately left open — whether *Zauderer* survives outside the commercial-advertising context at all — in a setting where the regulated entity is simultaneously a commercial service and a major forum for protected speech.
View on CourtListener →
Commonwealth v. Meta Platforms, Inc.
Issue: *Commonwealth v. Meta Platforms, Inc.* asks whether Section 230 of the Communications Decency Act bars Massachusetts consumer protection and public nuisance claims against Meta arising from Instagram's deliberate engineering of features—including infinite scroll, autoplay, intermittent variable-reward notifications, and ephemeral content—designed to exploit adolescent neurological vulnerabilities. The question is non-obvious because Meta's algorithmic and design choices are intertwined with the platform's publication of third-party content, and federal courts have divided sharply on whether claims targeting such features are shielded as inherent to a publisher's role or survive as challenges to a platform's independent engineering decisions.
Why It Matters: This ruling introduces a structurally distinct analytical framework—requiring both a dissemination element and a content element to trigger Section 230 immunity—that most federal courts have not articulated at this level of precision, and it squarely holds that addictive-design features are content-neutral as a matter of law because their alleged harm is independent of what any third party posts. By explicitly criticizing the N.D. Cal. MDL decisions and flagging the pending Ninth Circuit appeal in *California v. Meta Platforms* as presenting the same issues, the SJC openly anticipates a federal-state conflict that could fragment the national legal landscape for every state attorney general pursuing analogous claims. Significant questions remain open on remand, including Meta's dormant Commerce Clause, First Amendment, and other preemption defenses—any of which could independently limit or defeat the claims—and the opinion leaves unresolved where the line falls for features that curate or rank third-party content rather than merely delivering it through an engineered format.
View on CourtListener →
X. AI LLC v. Weiser
Why It Matters: The complaint is worth watching because it advances a theory — that a state disparate-outcome liability regime is constitutionally equivalent to commanded racial classification — that, if accepted, would create significant friction with decades of federal disparate-impact jurisprudence under Title VII, ECOA, and the Fair Housing Act, frameworks the federal government itself administers. Count Two presents the stronger and more doctrinally grounded question: whether an explicit statutory authorization for race- or sex-conscious AI adjustments can survive strict or intermediate scrutiny absent the specific evidentiary findings *Croson* and its progeny demand, and that question is likely to survive early motion practice. The case is also a leading indicator of how the federal government intends to use constitutional litigation — rather than preemption doctrine — as a tool to contest state AI regulation, a strategic choice with broad implications for the emerging field of algorithmic governance. How the district court treats the *SFFA* "zero-sum" importation in a non-admissions context may become the most consequential doctrinal development to emerge from this litigation.
View on CourtListener →
Why It Matters: This is among the first direct constitutional challenges to a state AI-regulation statute, and the court's treatment of xAI's compelled-speech theory will signal how far *303 Creative* and *Moody v. NetChoice* extend into the emerging AI regulatory space. The case puts in direct tension two competing post-*NIFLA* frameworks: the state's likely characterization of SB24-205 as conduct-based consumer protection, and xAI's characterization of algorithmic curation as protected editorial judgment—a question with implications for every AI company subject to state AI laws modeled on Colorado's. The Dormant Commerce Clause and vagueness claims, if successful, could invalidate "doing business in state" AI compliance triggers more broadly and constrain how states may delegate definitional authority to regulators in technology statutes. Colorado is not alone—similar legislation is advancing in other states—so the outcome here is likely to be watched as a template for or against constitutional challenges to the AI regulatory wave.
View on CourtListener →
Doe/Podslurp v. Department of Homeland Security
Issue: In *Doe/Podslurp v. Department of Homeland Security*, an anonymous X/Twitter user argues that a federal administrative summons issued under 19 U.S.C. § 1509 — a customs-enforcement statute — cannot lawfully be used to compel a social media platform to reveal the identity of someone who posted criticism of a federal agent involved in a high-profile shooting death. The question is non-obvious because § 1509 grants DHS broad summons authority over records related to "laws administered by CBP and ICE," which DHS may argue extends to post-9/11 immigration enforcement functions, while Movant contends the statute was never designed to reach domestic political speech. The case also asks whether the First Amendment's protection of anonymous speech, and the Supreme Court's 2023 reformulation of the true-threats doctrine in *Counterman v. Colorado*, independently bar the government from unmasking the speaker before any charge is filed.
Why It Matters: This filing raises what appears to be a first-impression challenge to DHS's use of § 1509 as an identity-unmasking tool directed at domestic political speech, a practice a 2017 OIG report suggests has occurred systematically but that no court has squarely addressed. If the court reaches the First Amendment question, it would be among the first to apply *Counterman v. Colorado*'s 2023 subjective-recklessness requirement in the context of a government-initiated administrative summons — before any criminal charge — rather than in an ongoing prosecution, a doctrinal gap of genuine significance. The case also tests whether the *Florida Star*/*Smith v. Daily Mail* line of cases, which bars civil liability for truthful publication of lawfully obtained public information, constrains government investigative authority at the summons stage, a question those decisions did not resolve. A ruling quashing the summons could meaningfully limit the government's ability to use customs-era administrative tools as first-step instruments for identifying anonymous online critics of federal officials.
View on CourtListener →
Anthropic, PBC v. United States Department of War, et al.
Issue: In *Anthropic, PBC v. United States Department of War, et al.*, the defendant-appellants argue that the Ninth Circuit should hold its interlocutory appeal in abeyance pending the D.C. Circuit's resolution of a parallel challenge to the same supply chain risk designations — raising the question of whether one circuit's expedited review of overlapping statutory questions justifies suspending an independent appellate proceeding in a sister circuit. The question is non-obvious because the two proceedings rest on distinct statutory authorities (10 U.S.C. § 3252 and 41 U.S.C. § 4713), the district court's injunction also covers social-media conduct not before the D.C. Circuit, and Anthropic has pressed constitutional claims that would survive any purely statutory ruling in the government's favor.
Why It Matters: The government is asking the Ninth Circuit to pause and let the D.C. Circuit go first — a tactically sensible request if the government anticipates a favorable ruling there that could undermine Anthropic's position in both forums. The practical stakes are asymmetric: abeyance would delay any Ninth Circuit ruling while the existing preliminary injunction remains nominally in place, but the government is simultaneously arguing in Washington that no injunction should exist at all. The motion's most contestable claim — that a favorable D.C. Circuit ruling on § 4713 would practically dissolve the § 3252 injunction — is legally underdeveloped and gives Anthropic a clear target in opposition, since the two statutes are independent grants of authority and the district court's injunction rests on additional constitutional grounds the D.C. Circuit will not reach. More broadly, the case surfaces an open and consequential question: when the same executive action is challenged simultaneously in multiple circuits under distinct legal frameworks, what weight — if any — should one circuit give to a sister circuit's expedited schedule? If the Ninth Circuit denies abeyance and the circuits diverge, pressure for en banc or Supreme Court review of the underlying designation authority would intensify quickly.
View on CourtListener →
Doe v. Perplexity AI, Inc.
Why It Matters: *Doe v. Perplexity AI* is significant because Perplexity's business model — generating direct, synthesized answer-engine responses rather than hosting third-party content — places it at the frontier of the unresolved question of whether Section 230 immunizes AI-generated output or whether the AI developer is itself the "information content provider" stripped of immunity. The case also implicates the *Garcia v. Character Technologies* question of whether AI-generated outputs constitute protected speech at the pleading stage, and it may help define the duty-of-care standard for AI answer engines that represent their outputs as factually accurate.
View on CourtListener →
Why It Matters: This case implicates the unresolved question of whether Section 230 immunizes AI-generated search output or whether Perplexity, as the system generating the content, is itself the information content provider and thus unprotected. Given Perplexity's model of synthesizing and presenting AI-generated answers rather than merely hosting third-party content, the case may produce significant doctrine on the ICP status of generative AI search engines and the applicability of product liability and speech-tort theories to AI answer engines.
View on CourtListener →
State of Texas v. Snap Inc.
Issue: Whether Snap may remove to federal court under the federal officer removal statute, and whether the First Amendment and Section 230 constitute colorable federal defenses against Texas DTPA and SCOPE Act claims targeting Snapchat's content ratings, safety disclosures, and parental control obligations.
Why It Matters: This case presents a significant intersection of First Amendment compelled-speech doctrine and state child-safety platform regulation, directly implicating the *Moody v. NetChoice* framework as applied to disclosure and content-rating mandates. The explicit invocation of Section 230 as a colorable federal defense to state consumer protection claims targeting platform safety representations also tracks the growing debate over whether Section 230 and First Amendment defenses can preempt state AG enforcement actions aimed at platform design and content policies.
View on CourtListener →
Chicken Soup for the Soul, LLC v. Anthropic PBC
Issue: Whether the unauthorized downloading and reproduction of copyrighted books from shadow-library repositories (including LibGen, Z-Library, Books3/The Pile, and Anna's Archive) to train and optimize commercial large language models constitutes willful copyright infringement under the Copyright Act, actionable by the copyright owner against multiple AI developers including Anthropic, Google, OpenAI, Meta, xAI, Perplexity, Apple, and NVIDIA.
Why It Matters: This complaint is notable for framing industry-wide AI training practices as a coordinated, cascading pattern of willful infringement rather than isolated conduct, and for the plaintiff's deliberate rejection of class-action treatment as a mechanism it characterizes as systematically undervaluing individual copyright claims against AI developers. If litigated to verdict, it could produce the first jury-assessed statutory damages award — potentially at the willful-infringement ceiling — against multiple major AI companies for training-data copyright claims, establishing a damages benchmark that would significantly complicate the class settlement framework currently emerging in related litigation.
View on CourtListener →
Beltran v. Meta Platforms, Inc.
Issue: Whether Meta Platforms, Inc., Sama, and Luxottica violated the federal Wiretap Act (ECPA), California's Invasion of Privacy Act, and multiple state consumer protection statutes by capturing, transmitting, and routing to third-party human annotators the private audiovisual recordings of Meta AI Glasses users without their informed consent, while affirmatively marketing the device as "designed for privacy" and "built for your privacy."
Why It Matters: This complaint presents an early test of civil liability exposure for AI hardware developers whose training-data pipelines involve undisclosed human review of sensitive user-generated recordings, potentially establishing that wiretapping and consumer protection statutes apply to wearable AI devices that funnel private audiovisual data to offshore annotators without adequate disclosure. The case may also signal growing judicial and legislative scrutiny of the intersection between AI training data collection practices and informed-consent requirements under both federal and state privacy law.
View on CourtListener →
NetChoice, LLC v. Bonta
Issue: Whether California's Age-Appropriate Design Code Act (CAADCA), Cal. Civ. Code §§ 1798.99.28–1798.99.40, facially violates the First Amendment through its coverage definition, age estimation requirement, data use restrictions, and dark patterns prohibition, as evaluated under the *Moody v. NetChoice* standard for facial challenges.
Why It Matters: The decision reinforces that First Amendment facial challengers—including sophisticated litigants like NetChoice—bear a demanding burden under *Moody* to build a record mapping a law's full set of applications before courts can measure unconstitutional uses against the statute's legitimate sweep, effectively raising the evidentiary threshold for pre-enforcement facial injunctions against online child-safety laws. The ruling also signals that states retain meaningful room to enact children's digital privacy legislation, at least where challengers cannot demonstrate facial invalidity across a substantial majority of the law's applications.
View on CourtListener →
Amazon.com Services, LLC v. Perplexity AI, Inc.
Issue: In *Amazon.com Services, LLC v. Perplexity AI, Inc.*, the ACLU, ACLU of Northern California, and Knight First Amendment Institute argue that the Computer Fraud and Abuse Act does not reach an AI-powered browser that accesses platform data on behalf of authenticated, consenting users. The brief presses the non-obvious question of whether a platform's unilateral cease-and-desist letter can convert user-delegated access into criminal unauthorized access — and whether any CFAA construction that permits platforms to define their own liability triggers by sending demand letters would unconstitutionally chill automated journalism and public-interest research.
Why It Matters: This brief pushes the Ninth Circuit toward a significant doctrinal extension of *hiQ Labs* — moving that decision's public-data logic into the contested terrain of authenticated, user-delegated AI agent access, a question no circuit has cleanly resolved. If the court accepts the user-authorization-as-delegation framework, it would effectively insulate a broad class of AI browsing and research tools from CFAA liability so long as they operate with a user's credentials and consent. The brief's treatment of *Facebook v. Power Ventures* is the argument's most vulnerable point: that decision specifically permitted CFAA liability to attach after an individualized cease-and-desist, and Amazon's stronger theory — that Perplexity was never independently authorized in the first place — maps more naturally onto *Power Ventures* than amici acknowledge. The constitutional avoidance thread is nonetheless significant: even if the textual argument fails, a ruling that endorses the chilling-effect analysis could constrain how broadly any CFAA holding is written. The case is worth watching as an early test of how appellate courts will apply *Van Buren*'s gates-up/down framework to AI agents acting on behalf of human users.
View on CourtListener →
Canady v. Meta Platforms, Inc.
Issue: Whether Meta Platforms and Luxottica violated the federal Wiretap Act (18 U.S.C. § 2511(1)(a)), the California Invasion of Privacy Act, the California UCL and CLRA, and New York GBL §§ 349 & 350 by covertly capturing audiovisual recordings through AI-enabled smart glasses and transmitting them to third-party human reviewers without users' knowledge or consent, contrary to Defendants' affirmative privacy representations.
Why It Matters: This complaint represents an early consumer class action theory applying federal wiretap law and state consumer protection statutes to AI-enabled wearable hardware. It tests whether affirmative privacy marketing claims create actionable liability when a device's actual data-collection practices—including undisclosed human review of intimate recordings for AI training—materially diverge from those representations, and it may signal how courts will assess deceptive-advertising and interception claims in the consumer AI hardware context.
View on CourtListener →
Fricker v. Fireflies.AI Corp.
Issue: Whether Fireflies.AI Corp. violated §§ 15(a) and 15(b) of the Illinois Biometric Information Privacy Act (BIPA), 740 ILCS 14/1 et seq., by automatically collecting and retaining voiceprints of virtual meeting participants who never consented to or contracted with the AI transcription service, without publishing a biometric data retention policy or obtaining written informed consent prior to collection.
Why It Matters: This case raises a potentially significant question about AI transcription services' BIPA obligations toward non-consenting third-party participants — individuals who never interacted with the platform but whose biometric data was nonetheless captured through another user's account — which could broaden the class of plaintiffs who may assert BIPA claims against AI-enabled data collection tools well beyond the contracting user base. If the court adopts Plaintiff's theory, it would signal that AI meeting assistants must obtain affirmative consent not only from subscribing account holders but from every meeting participant whose voice is processed for speaker identification, substantially increasing compliance burdens for the rapidly growing AI productivity-tool sector.
View on CourtListener →
Anthropic PBC v. United States Department of War
Why It Matters: This brief pushes the D.C. Circuit toward a significant and unresolved doctrinal question: whether the First Amendment protects not just a developer's written governance documents — which fit comfortably within existing editorial-judgment precedent — but also the design choices embedded in an AI system itself. The retaliation theory, grounded in publicly documented government hostility toward Anthropic's expressed values, is the brief's most legally orthodox argument and tracks the *Vullo* playbook closely enough to warrant serious merits attention. If the D.C. Circuit reaches the AI-expression question, whatever it says will carry substantial weight in future disputes over government leverage over AI developers' product decisions — a dynamic that extends well beyond the procurement context.
View on CourtListener →
Why It Matters: This case tests whether courts will apply standard APA arbitrary-and-capricious review — including its requirement that agencies follow their own statutory sequence and engage with contrary factual evidence — to national-security procurement decisions that agencies have historically shielded from meaningful judicial scrutiny. The procedural-inversion argument, if accepted, would establish that even the § 4713 emergency carve-out has real limits when the record reflects self-induced urgency, a holding with broad implications for how agencies invoke national-security exigencies to bypass procedural requirements. The First Amendment retaliation theory is the brief's most novel and contested contribution: if the D.C. Circuit reaches it, the case could clarify whether *Vullo*'s government-coercion framework extends to procurement exclusions where agency officials have publicly disparaged a contractor's expressive advocacy, a question with significant consequences for AI companies whose public policy positions increasingly put them in tension with government clients.
View on CourtListener →
Why It Matters: This brief is worth watching primarily because of its unconstitutional conditions framing: by grounding the First Amendment claim in the government-wide scope of the ban rather than the original contract dispute, TPAF gives the D.C. Circuit a doctrinal hook — rooted in *Alliance for Open Society* rather than the more government-favorable *Rust v. Sullivan* — that does not require the court to resolve whether an AI company's values statements and its product functionality are legally separable. That question is genuinely open: no court has squarely addressed whether a national-security procurement statute can support a cross-agency blacklist when the designated "risk" is a contractor's public advocacy about permissible uses of its own technology. The statutory misapplication argument, while creative, turns on whether courts will read § 4713's supply-chain-risk authority as limited to intentional adversarial actors — a reading the government can contest — making the First Amendment theory the stronger vehicle for Petitioner's relief.
View on CourtListener →COALITION FOR INDEPENDENT TECHNOLOGY RESEARCH v. RUBIO
Why It Matters: This case sits at the frontier of a rapidly developing conflict over whether the government may use investigative or regulatory pressure to punish researchers and civil society groups for influencing how platforms moderate content — a question the Supreme Court skirted rather than resolved in *Murthy v. Missouri* last term. A ruling granting even interim relief could constrain the current administration's ability to deploy such pressure against academics and NGOs who study or critique platform content decisions, making the preliminary injunction proceeding consequential well beyond the parties before the court. EFF's brief also implicitly surfaces two unresolved doctrinal questions: whether civil society actors engaged in advocacy-to-intermediaries hold cognizable First Amendment retaliation claims, and whether *Moody*'s recognition of platform editorial rights generates derivative injury for third parties whose work informs that editorial discretion. The brief's most significant vulnerability is its failure to engage *Murthy*, which erected substantial standing and traceability barriers that the Coalition plaintiffs must clear and that the government is virtually certain to invoke in opposition.
View on CourtListener →
Why It Matters: No court has clearly resolved what constitutional status attaches to the ecosystem of civil society intermediaries — researchers, NGOs, platform accountability groups — when the government uses administrative tools, funding threats, or public condemnation to pressure or penalize them for their work on platform governance. If a court credits EFF's framing, even in dictum, it could establish a meaningful precedential foothold limiting the government's ability to chill independent technology research through means short of direct censorship. The case also sits at a relatively uncharted intersection of APA Section 705 stay doctrine and First Amendment injury, and could generate useful law on what constitutes irreparable harm in speech-chilling contexts under the APA. The brief is most significant not for the questions it answers, but for the ones it forces a federal court to confront directly.
View on CourtListener →