X.AI LLC v. Weiser
United States of America's Complaint in Intervention,… — Attachment 17
Issue: In *X.AI LLC v. Weiser*, the United States argues that Colorado's Senate Bill 24-205 — an AI consumer-protection law taking effect June 30, 2026 — violates the Fourteenth Amendment's Equal Protection Clause on two independent grounds: first, that its disparate-outcome liability framework leaves AI developers no viable compliance path other than sorting outputs by race, sex, or religion; and second, that the statute's explicit exemption permitting AI adjustments to "increase diversity" or "redress historical discrimination" authorizes race- and sex-conscious action that cannot survive heightened constitutional scrutiny. The case raises the unresolved question of whether equal-protection doctrine developed in university admissions and government contracting can be extended to regulate how states structure liability for algorithmic systems operating across billions of outputs and heterogeneous domains.
On April 24, 2026, the United States filed a Complaint in Intervention in the District of Colorado, intervening as a plaintiff pursuant to 42 U.S.C. § 2000h-2 upon certification by the Acting Attorney General. The complaint is an initiating pleading — not a ruling — and asserts two counts, both grounded in the Equal Protection Clause: compelled discrimination (Count One) and authorized discrimination (Count Two). On Count One, the United States relies on *Peterson v. City of Greenville* for the proposition that state regulatory pressure producing demographic recalibration is functionally equivalent to commanded racial classification, and imports *SFFA v. Harvard*'s "zero-sum" reasoning to argue that correcting a statistical disparity favoring one group necessarily penalizes another. On Count Two, the United States argues that the statute's diversity and historical-remediation exemption facially resembles the race-conscious programs invalidated in *SFFA*, *Croson*, and *Parents Involved*, and that Colorado's reliance on generalized statistical assertions — without findings of specific past intentional discrimination — cannot supply the "strong basis in evidence" those cases require. The United States seeks a declaratory judgment that SB24-205 violates the Equal Protection Clause and a preliminary and permanent injunction barring the Colorado Attorney General from enforcing it.
The complaint is worth watching because it advances a theory — that a state disparate-outcome liability regime is constitutionally equivalent to commanded racial classification — that, if accepted, would create significant friction with decades of federal disparate-impact jurisprudence under Title VII, ECOA, and the Fair Housing Act, frameworks the federal government itself administers. Count Two presents the stronger and more doctrinally grounded question: whether an explicit statutory authorization for race- or sex-conscious AI adjustments can survive strict or intermediate scrutiny absent the specific evidentiary findings *Croson* and its progeny demand, and that question is likely to survive early motion practice. The case is also a leading indicator of how the federal government intends to use constitutional litigation — rather than preemption doctrine — as a tool to contest state AI regulation, a strategic choice with broad implications for the emerging field of algorithmic governance. How the district court treats the *SFFA* "zero-sum" importation in a non-admissions context may become the most consequential doctrinal development to emerge from this litigation.
Complaint
Issue: In *X.AI LLC v. Weiser*, xAI argues that Colorado SB24-205—a state statute regulating "high-risk" AI systems that takes effect June 30, 2026—unconstitutionally compels the company to redesign its Grok AI to reflect Colorado's preferred positions on diversity and equity, restricts protected editorial decisions about training data and model outputs, and extraterritorially burdens AI development conducted entirely outside Colorado. The case raises the unsettled question of whether a state anti-discrimination mandate applied to AI development constitutes compelled expressive speech under the First Amendment or a permissible conduct regulation with only incidental effects on speech—a distinction courts have not resolved in the AI context.
X.AI LLC filed this complaint on April 9, 2026, in the U.S. District Court for the District of Colorado, naming Colorado Attorney General Philip J. Weiser as defendant in a pre-enforcement challenge to SB24-205 before the statute's June 30, 2026 effective date. The complaint asserts four constitutional claims: compelled speech under the First Amendment, extraterritoriality under the Dormant Commerce Clause, void-for-vagueness under the Due Process Clause, and equal protection. xAI's core theory is that every training-data selection and fine-tuning decision embedded in Grok is protected expressive conduct, and that requiring "algorithmic discrimination" mitigation forces xAI to adopt the state's ideological preferences, relying principally on *303 Creative v. Elenis* (2023) and *Moody v. NetChoice* (2024). The complaint also contends that statutory terms such as "high-risk AI system" and "algorithmic discrimination" are left to future Attorney General rulemaking, creating unconstitutional vagueness that is compounded by $20,000-per-violation exposure. xAI seeks a declaratory judgment of unconstitutionality and preliminary and permanent injunctive relief barring enforcement.
This is among the first direct constitutional challenges to a state AI-regulation statute, and the court's treatment of xAI's compelled-speech theory will signal how far *303 Creative* and *Moody v. NetChoice* extend into the emerging AI regulatory space. The case puts two competing post-*NIFLA* frameworks in direct tension: the state's likely characterization of SB24-205 as conduct-based consumer protection, and xAI's characterization of algorithmic curation as protected editorial judgment — a question with implications for every AI company subject to state AI laws modeled on Colorado's. The Dormant Commerce Clause and vagueness claims, if successful, could invalidate "doing business in state" AI compliance triggers more broadly and constrain how states may delegate definitional authority to regulators in technology statutes. Colorado is not alone — similar legislation is advancing in other states — so the outcome here is likely to be watched as a template for or against constitutional challenges to the AI regulatory wave.