Section 230

IN RE: SOCIAL MEDIA ADOLESCENT ADDICTION/PERSONAL INJURY PRODUCTS LIABILITY LITIGATION

🏛 District Court, N.D. California · 16 filings
2022-10-06 Other Section 230

Reply Brief — Attachment 2990

Issue: In *In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation*, Meta Platforms and Instagram argue that an opposing damages expert should be excluded at the Daubert stage because his "Bad Experience Violations" methodology impermissibly counts harms arising from third-party content—conduct Meta contends Section 230 immunizes—and because his core extrapolation projects results from an 11-day, largely non-U.S. internal survey across six years without any statistical validation. The case raises the non-obvious question of whether Section 230 immunity can operate not merely as a defense to liability at the pleading or summary judgment stage, but as a freestanding basis to exclude an expert's quantification methodology under FRE 702.

Meta and Instagram filed this reply brief on April 24, 2026, in support of their pending motion to exclude or strike the expert opinions of Carl Saba, a damages witness offered by the California Attorney General and three co-plaintiff states in MDL No. 3047. The brief responds to the AGs' opposition and seeks full exclusion of four of Saba's opinions (Ops. 2–5) under FRE 702 and *Daubert v. Merrell Dow Pharmaceuticals*. Meta argues that Saba's "Bad Experience Violations" count necessarily attributes liability to Meta for third-party user content, placing the methodology beyond the reach of any cognizable legal theory under Section 230. Meta separately contends that Saba's central empirical foundation—an 11-day internal company survey called BEEF, predominantly reflecting non-U.S. data—cannot reliably support a six-year, nationwide harm projection under *General Electric Co. v. Joiner*, and that the Ninth Circuit's 2025 decision in *Engilis v. Monsanto Co.* forecloses the AGs' argument that such deficiencies go only to weight. Finally, Meta argues that Saba's time-spent thresholds were set by counsel rather than derived through any independent expert methodology, and that his disgorgement figure lacks both a causal nexus to specific wrongdoing and a statutory basis under the applicable consumer protection laws.
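
To give a rough sense of the scale of the analytical leap Meta is attacking (an arithmetic illustration, not a figure taken from the brief): a six-year projection window spans roughly 2,191 days, so naively scaling counts observed in an 11-day sample to that window implies a multiplier of about 199, before accounting for the survey's predominantly non-U.S. composition:

$$\frac{6 \times 365.25 \text{ days}}{11 \text{ days}} \approx 199$$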

The Section 230 argument is the most doctrinally ambitious piece of this filing: if accepted, it would establish that Section 230 immunity can collapse the Daubert admissibility inquiry—barring an expert from quantifying harm attributable to third-party content even when the underlying claims have survived dismissal. That would mark a significant procedural extension of immunity doctrine well beyond its traditional deployment at the pleading stage, and courts in this MDL have already drawn lines that complicate Meta's position. The BEEF-survey extrapolation challenge is the brief's strongest technical argument, representing a clean application of *Joiner*'s analytical-leap standard to a fact pattern—counsel-selected, geographically limited, temporally narrow survey data projected across years—that is difficult to rehabilitate through rebuttal alone. More broadly, this filing is worth watching because the expert exclusion fight will shape what the jury-facing damages case looks like in one of the first state AG consumer protection trials to proceed in this MDL, and a successful Daubert challenge here could effectively cap the states' ability to quantify violations at scale.

2022-10-06 Opposition to Motion for Summary Judgment Section 230 First Amendment AI Liability

EXHIBITS re 2480 Brief, Partially Unsealed Meta Exhibits…

Issue: In *In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation*, the Plaintiffs' Steering Committee argues that Meta and co-defendants designed their platforms with features — including infinite scroll, autoplay, algorithmic notification timing, and gamification mechanics — that were unreasonably dangerous for minor users, and that commercially feasible safer alternatives existed and were knowingly bypassed. The legal question is whether that evidence is sufficient to create genuine disputes of material fact on design defect and feasibility, precluding summary judgment in Defendants' favor on core products liability claims.

Filed on April 13, 2026, as Amended Exhibit 989 to the Plaintiffs' Omnibus Opposition to Defendants' Motions for Summary Judgment (Dkt. 2480), this exhibit is the expert report of Tim Estes, dated May 16, 2025. Estes, who founded the child-safety platform AngelQ, offers opinions that Defendants' platforms were defectively designed through deliberate deployment of compulsive-engagement mechanics targeting minors, relying in part on Defendants' own internal documents to establish contemporaneous knowledge of foreseeable harm. He further argues that meaningful age verification and parental control mechanisms — including credit card checks, government ID scanning, and federated identity systems — were commercially available and in use by comparable platforms such as Xbox, Apple, and Google Family Link well before Defendants implemented them. The report contends that Defendants' eventual parental controls were opt-in, structurally ineffective, and arrived only after harm was already documented, characterizing their inadequacy not as a missed opportunity but as itself a design defect. Estes cites COPPA, the 2023 U.S. Surgeon General's Advisory, and the Kids Online Safety Act as reinforcing the regulatory and normative baseline against which Defendants' design choices should be measured.

This report represents a significant moment in the effort to establish that products liability design defect doctrine applies to social media platform architecture — a theory that, if credited at summary judgment, would move the litigation past the threshold question of legal viability and into full merits adjudication. The feasibility argument is particularly consequential: by grounding safer alternative design in real-world commercial comparators that predated the alleged harm period, Plaintiffs aim to foreclose any claim of technological impossibility as a matter of law, converting feasibility into a jury question. Two open doctrinal questions hang over the report's reception: whether courts will apply a minor-specific risk-utility standard for engagement features that serve adult users while foreseeably harming children, and whether COPPA compliance functions as a regulatory floor or a safe harbor that displaces common law claims — neither of which has been definitively resolved in this MDL. The report's individual causation gap and its use of Estes's own platform as a feasibility comparator are predictable pressure points that Defendants will likely press in both Daubert proceedings and reply briefing.

2022-10-06 Other Section 230

STATEMENT OF RECENT DECISION pursuant to Civil Local… — Attachment 2940

Issue: In *In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation*, the State Attorneys General of California, Colorado, Kentucky, and New Jersey argue that Section 230 of the Communications Decency Act does not immunize Meta from state consumer protection claims premised on Meta's own deceptive conduct and product design choices — the central defense Meta is pressing in its pending Motion for Summary Judgment. The question is non-obvious because Section 230(e)(3) expressly preempts inconsistent state laws, and federal courts, including the Ninth Circuit, have historically construed that immunity broadly against claims that would treat platforms as publishers of third-party content.

Filed on April 13, 2026 — two days before oral argument on Meta's Motion for Summary Judgment — this is a Statement of Recent Decision submitted under Civil Local Rule 7-3(d)(2), a procedural mechanism that allows a party to bring a newly issued judicial opinion to the court's attention without extended argument. The filing attaches as Exhibit A the Massachusetts Supreme Judicial Court's April 10, 2026 decision in *Commonwealth v. Meta Platforms, Inc.*, No. SJC-13747, which the State AGs characterize as holding that Section 230 does not bar state consumer protection claims against Meta. The notice does not brief the underlying legal arguments; it presents the Massachusetts ruling as directly on-point authority supporting the AGs' opposition to Meta's summary judgment motion. The filing makes no attempt to reconcile the Massachusetts holding with controlling Ninth Circuit precedent, and it does not address whether the Massachusetts consumer protection statute is materially similar to the statutes at issue in this MDL.

Meta's core defense in this MDL is that Section 230 shields it from state liability for harms caused by its platforms — a defense that, if accepted at summary judgment, could end the case before trial. The State AGs are pointing to a brand-new ruling from Massachusetts's highest court as evidence that courts are increasingly unwilling to let Section 230 block consumer protection claims about how Meta designed and marketed its products, and the eve-of-argument timing is plainly strategic. Whether the filing moves the needle depends entirely on whether the MDL court finds the Massachusetts reasoning persuasive under Ninth Circuit law — a question this notice conspicuously declines to answer. More broadly, the filing adds one more data point to an emerging question in the courts: whether state attorneys general suing in their sovereign enforcement capacity occupy a distinct doctrinal position under Section 230 that is not yet resolved by existing federal precedent.

2022-10-06 Appellate Opinion Section 230

STATEMENT OF RECENT DECISION pursuant to Civil Local…

Issue: *Commonwealth v. Meta Platforms, Inc.* asks whether Section 230 of the Communications Decency Act immunizes Meta from state-law claims targeting the deliberate design of addiction-inducing platform features — infinite scroll, autoplay, and intermittent-reward notifications — engineered to exploit adolescent neurology, and from claims based on Meta's own affirmative misrepresentations about Instagram's safety. The question is non-obvious because Meta argued these design features are inseparable from its role in curating and amplifying third-party content, which courts including the MDL district court had previously accepted as a basis for immunity.

The Massachusetts Supreme Judicial Court, on direct appellate review of an interlocutory order, unanimously affirmed the Superior Court's denial of Meta's motion to dismiss on Section 230 grounds. The SJC held that Section 230 immunity requires satisfying two elements: a dissemination element (the claim must be premised on the defendant circulating third-party content to others) and a content element (liability must turn on the specific harmful substance of that third-party content). The Commonwealth's design-feature claims failed the content element because addictive architecture operates identically regardless of what any user posts — it is content-indifferent — meaning neither element was met. Claims based on Meta's own marketing misrepresentations and deliberate age-verification failures were held categorically outside Section 230 because the statute addresses liability for another's content, not a defendant's own speech. The Court explicitly criticized the MDL district court's contrary 2023 and 2024 rulings as failing to engage with Section 230's common-law foundations or congressional purpose. Plaintiff States filed the opinion in this MDL on April 13, 2026, as a Statement of Recent Decision in connection with pending dispositive motion Doc. 2779.

This opinion, from the highest court of Massachusetts, establishes the most analytically rigorous framework to date for limiting Section 230 immunity in platform-design cases, grounding a formal two-element test in a careful reconstruction of common-law publisher liability that competing courts will find difficult to dismiss as result-oriented. It directly and by name repudiates the MDL district court's Section 230 rulings, creating an explicit record of contrary authority as the Ninth Circuit considers an appeal of those very rulings argued in January 2026. For the AG plaintiffs in this MDL, the opinion supplies both doctrinal ammunition — a ready-made analytical framework — and a high-court imprimatur for the proposition that content-indifferent design claims fall entirely outside Section 230's scope. The Court left open whether the design-defect framing alone would independently defeat immunity, and flagged, without deciding, the possibility that Meta's push-notification system makes Meta an information content provider, preserving additional avenues for future plaintiffs.

2022-10-06 Other Section 230 First Amendment

JOINT CASE MANAGEMENT STATEMENT for April Case… — Attachment 2

Issue: In *In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation*, defendants Meta and YouTube argue that Section 230 of the Communications Decency Act immunizes virtually every platform feature plaintiffs allege caused harm to adolescents — including recommendation algorithms, autoplay, infinite scroll, and engagement-maximizing notifications — on the theory that these constitute protected "publishing" decisions over third-party content rather than independent product design choices. Defendants' proposed jury instruction also asserts that failure-to-warn claims are equally immunized, treating a platform's silence about its own design-generated harms as equivalent to an editorial decision about user-generated content — a position no circuit has cleanly endorsed.

At the federal MDL pretrial stage in the Northern District of California, the Plaintiffs' Steering Committee filed a Joint Case Management Statement for the April 2026 Case Management Conference, attaching as Exhibit 2 a redline of Defendants' Revised Proposed Jury Instruction #18, titled "Protection for Publishing and Expressive Activity." The instruction is defendants' revised attempt to secure court adoption of a broad Section 230 immunity charge after a prior version was rejected at the March 18, 2026 Pretrial Conference (ECF No. 2837). The proposed instruction enumerates sixteen categories of protected platform conduct and structures the only viable path to liability — an undefined "Non-Protected Conduct" predicate, a negligence finding, and a substantial-factor causation showing — as a compound standard whose elements must all be satisfied. A case-specific passage further instructs jurors to consider evidence of third-party bullying solely to explain a plaintiff's continued platform use, preemptively limiting how causation arguments based on harmful content exposure may be presented. No case citations appear within the instruction text; the Section 230 framework is invoked implicitly through the instruction's definitional structure.

The platforms are asking the court to tell jurors, as a settled legal matter, that nearly everything plaintiffs challenge — recommendation algorithms, autoplay, infinite scroll, engagement notifications — is legally protected activity that cannot give rise to liability, effectively resolving the most contested open question in Section 230 law inside a jury trial rather than through a dispositive motion. The Supreme Court's 2023 *Gonzalez v. Google* decision deliberately left unresolved whether algorithmic amplification constitutes "publishing," meaning whatever the court decides about this instruction could become the most significant judicial statement on that question to emerge from this MDL. The court's prior rejection of an earlier version signals meaningful skepticism, and if the court issues a written ruling explaining why it again rejects or substantially rewrites the instruction, that order — not the instruction itself — may carry the greatest precedential weight for how future social media injury plaintiffs are permitted to frame their claims.

2022-10-06 Other Section 230 First Amendment

PRETRIAL ORDER NO. 2 RE: MOTIONS IN LIMINE by Judge… — Attachment 2898

Issue: This pretrial order addresses whether defendants in a social media products liability MDL may introduce expert testimony characterizing adolescent harm claims as unsupported or overstated, and whether plaintiffs may rely on lay witness accounts of exposure to inappropriate content as circumstantial evidence of harm where expert testimony alone does not close the causation loop. The questions are non-obvious because they sit at the intersection of FRE 702 gatekeeping, the design-defect theory that has so far allowed these claims to survive § 230, and the practical reality that exclusion of defense experts at the bellwether stage locks in the plaintiff's causal narrative for the jury. The stakes are amplified by the MDL's bellwether structure, in which evidentiary rulings here will likely shape how parallel cases are tried or settled across hundreds of consolidated actions.

Judge Yvonne Gonzalez Rogers issued Pretrial Order No. 2 on March 30, 2026, resolving ten motions in limine filed by both sides in advance of the bellwether trial in *Breathitt County School District v. Meta Platforms Inc. et al.* The court excluded both of defendants' affirmative experts: Dr. Hutt was barred for offering impermissible legal-conclusion advocacy rather than methodologically grounded opinion, and Dr. Hampton was excluded for circular, ipse dixit reasoning underlying his "no evidence of harm" and "moral panic" framing. Plaintiff's core defect theories — CSAM-reporting failures and content-filter labeling defects — survived defendants' challenge and will be presented to the jury, while plaintiff's lay testimony regarding exposure to inappropriate images was held sufficient to support a permissible inferential harm argument. The court denied both sides' overbroad motions on content moderation evidence and indicated a limiting instruction would follow, denied most sealing requests, and deferred one ruling on financial mismanagement evidence pending submission of audit materials.

The categorical exclusion of both defense experts creates a materially asymmetric evidentiary posture at trial: defendants enter without credentialed methodological opposition to the foundational claim that social media causes adolescent harm, while plaintiffs' specific design-defect theories proceed intact. The court's acceptance of circumstantial lay testimony as sufficient to support an inferential harm argument is a notable departure from the more demanding causation standards applied in other complex products liability contexts — such as pharmaceutical MDLs — and may prove contentious on appeal or in parallel proceedings where defense experts have survived Daubert scrutiny. The circumscribed admission of foreign regulatory evidence bearing on defendants' knowledge and feasible alternative design opens a significant avenue for plaintiffs across the MDL to introduce EU and UK regulatory findings without triggering foreign-law instructions, and the deferred financial mismanagement ruling leaves open a question that could bear directly on punitive damages framing in downstream bellwether cases.

2022-10-06 Other Section 230 First Amendment AI Liability

Exhibit List DEFENDANTS PRELIMINARY EXHIBIT LIST… — Attachment 2851

Issue: Insufficient text to determine. (The document is a pretrial exhibit list, not an opinion, order, or brief addressing a specific legal question under §230, First Amendment doctrine, or AI/product liability theory.)

Defendants Meta Platforms, Inc. and Instagram, LLC filed a preliminary exhibit list on March 16, 2026, in the Breathitt County Board of Education bellwether trial within the MDL. The list discloses 235+ exhibits Defendants may use at trial, including Breathitt County School District financial statements, school board minutes, student handbooks, behavior and discipline data, technology plans, digital citizenship curricula, and COVID-related school communications. The filing notes that the list excludes impeachment or rebuttal documents and deposition transcripts, and that a final updated list is due April 20, 2026.

As a pretrial exhibit list rather than a ruling or substantive motion, this document does not advance legal doctrine; however, the categories of exhibits—particularly school financial records, pre-existing behavioral data, and district technology and digital-citizenship plans—signal that Defendants intend to contest causation and damages by attributing student mental-health and behavioral issues to pre-existing institutional, socioeconomic, and pandemic-related factors rather than to platform design.

2022-10-06 Other Section 230 First Amendment AI Liability

Witness List by Meta Platforms, Inc. DEFENDANTS… — Attachment 2856

Issue: Whether Meta, Snap, TikTok, and YouTube are civilly liable under product design defect and related theories for harm allegedly caused to adolescent students in the Breathitt County School District, where defendants contest both causation and the adequacy of plaintiff's abatement damages model.

In this bellwether trial within the In re Social Media Adolescent Addiction MDL, defendants filed a corrected preliminary witness list on March 17, 2026, pursuant to a pretrial scheduling order, disclosing witnesses they may call live or by deposition designation in their case in chief at the Breathitt County trial. The list identifies Breathitt County school employees — several of whom served as Rule 30(b)(6) designees — whom defendants intend to examine on alternative causes of harm and the district's failure to mitigate, as well as platform employees from Meta, Snap, TikTok, and YouTube who will testify about each platform's safety policies, well-being tools, and internal research. Defendants also disclosed a substantial expert roster spanning psychiatry, epidemiology, education policy, marketing, economics, and platform design to challenge plaintiff's causation theory, damages model, and abatement cost estimates.

This witness list signals that defendants' trial strategy will center on contesting general and specific causation through scientific experts while affirmatively presenting evidence of platform safety efforts, positioning the case as a significant test of whether product liability theories can survive against social media platforms when defendants offer robust alternative-cause and reasonable-design defenses in the school-district plaintiff context.

2022-10-06 Other First Amendment Section 230

Witness List by Plaintiffs' Steering Committee… — Attachment 2848

Issue: Whether Meta Platforms, TikTok/ByteDance, Google/YouTube, and Snap are civilly liable under a products liability theory — including defective design and failure to warn — for harms a school district allegedly suffered as a result of the addictive and engagement-optimizing design features of their social media platforms.

This is a plaintiff's preliminary trial witness list filed by the Breathitt County Board of Education in the Social Media Adolescent Addiction MDL before Judge Yvonne Gonzalez Rogers in the Northern District of California. The school district plaintiff designated 47 witnesses — including lay witnesses such as school administrators, counselors, and finance personnel; fact witnesses consisting of current and former employees of Meta, TikTok/ByteDance, Google/YouTube, and Snap; and expert witnesses spanning addiction medicine, neuroscience, computer science, public health, economics, and forensic accounting. The witnesses are expected to support the district's claims by addressing platform design choices prioritizing engagement over safety, algorithmic recommendation systems, failure to implement age verification, failure to warn of addictive platform features, quantified costs and diverted staff time the district incurred, and, if reached, punitive damages through financial testimony on defendants' wealth and ability to pay.

This witness list signals that the school district bellwether trial in the Social Media MDL is advancing toward trial on a products liability theory that characterizes engagement-optimizing algorithms and addictive design features as actionable defects — a framing that, if successful, could establish a roadmap for institutional plaintiffs to recover costs attributable to platform design independent of Section 230 immunity arguments previously litigated in the MDL.

2022-10-06 Other Section 230 First Amendment

ORDER GRANTING IN PART AND DENYING IN PART RULE 702… — Attachment 2857

Issue: Whether expert general causation opinions offered to show that specific platform design features cause compulsive use and mental health harms in adolescents must be excluded under Federal Rule of Evidence 702 because the experts fail to disentangle actionable design defects from content and conduct immunized by Section 230 and the First Amendment.

In this MDL products liability action brought by school districts and state attorneys general against Meta, Google, ByteDance, and Snapchat, defendants moved under Rule 702 to exclude thirteen plaintiffs' general causation experts on multiple grounds, including methodological unreliability, lack of qualifications, improper reliance on internal company documents, and—critically—failure to isolate the causal effects of design features the court had previously deemed actionable from features and content barred by Section 230 or the First Amendment. The court denied the Section 230/First Amendment exclusion argument across all twelve experts to whom it applied, holding that expert testimony need not independently establish every element of the plaintiff's case to be admissible, and that the experts' reports did in fact address the specific actionable design defects identified at the motion-to-dismiss stage. The court granted the motions in part, however, striking discrete opinions in which experts—lacking relevant expertise—opined that defendants consciously prioritized profit and engagement over user wellbeing, finding those opinions went beyond each witness's disclosed area of expertise.

This ruling advances the theory that product-design claims targeting social media platforms' compulsive-use-inducing features can survive both Section 230 immunity and First Amendment limits at the expert-admissibility stage, so long as expert opinions are tethered to the specific design defects the court has deemed actionable rather than to third-party content or protected publishing decisions—a framework that could shape how plaintiffs structure expert testimony in future platform-liability litigation.

2022-10-06 Other Section 230 First Amendment AI Liability

Exhibit List PLAINTIFF'S PRELIMINARY EXHIBIT LIST…

Issue: Whether Meta's internal research, design decisions, and communications regarding adolescent users' mental health and well-being are admissible at trial to support plaintiffs' product liability claims arising from alleged addiction-causing features of Facebook and Instagram.

This document is Plaintiff's Preliminary Trial Exhibit List filed on March 16, 2026, in the multidistrict litigation consolidating personal injury claims against Meta and other social media defendants. The list identifies hundreds of proposed trial exhibits drawn from Meta's internal documents, including PowerPoint presentations, emails, internal research studies, and message summaries, covering topics such as teen mental health, problematic use research, social comparison effects, suicide and self-injury content, and Meta's internal awareness of risks to adolescent users. The exhibit list also includes congressional hearing transcripts, coroner reports, and video excerpts of statements by Meta executives and early investors. No court ruling is reflected in this document; it is a pretrial filing identifying evidence plaintiffs intend to introduce at trial.

The breadth and specificity of the exhibit list signal that plaintiffs intend to prove at trial that Meta possessed extensive internal knowledge of harms its platforms caused to adolescent users, which could be significant for establishing the knowledge and design-defect elements of product liability claims that courts in this MDL have allowed to proceed notwithstanding Section 230 immunity arguments.

2022-10-06 Other Section 230 First Amendment AI Liability

Proposed Jury Instructions — Attachment 2837

Issue: Whether §230 of the Communications Decency Act and the First Amendment immunize social media platform defendants from liability for specific algorithmic and design features—such as infinite scroll, content recommendation algorithms, notification clustering, and autoplay—when a school district plaintiff alleges those features caused compulsive platform use and resultant mental health harms to its students.

The Breathitt County School District brought negligence and public nuisance claims against Meta, Snap, TikTok, and YouTube in MDL No. 3047, alleging that the defendants designed their platforms to foster compulsive use in minors, causing the district to expend significant resources addressing the effects on its schools. With trial set for June 15, 2026, the parties filed competing proposed jury instructions; the central dispute is concentrated in Instruction No. 18, where defendants proposed a detailed "Protected Conduct" list—encompassing algorithmic recommendations, infinite scroll, notifications, autoplay, and similar features—that the jury would be barred from using as a basis for liability under §230 and the First Amendment, while plaintiff objected and proposed a narrower instruction. The defendants' proposed instruction reflects prior court rulings categorizing specific platform features as protected, and confines plaintiff's recoverable claims to a limited "Non-Protected Conduct" list including age verification processes, parental controls, account deletion processes, and appearance-altering filters.

This document is significant because it reveals how §230 and First Amendment protections will be operationalized at the jury-instruction level in the first bellwether trial of a major social media addiction MDL, effectively showing which platform design features a court has already ruled immune from tort liability. The outcome could establish a concrete, feature-by-feature framework for distinguishing actionable product design claims from immunized publishing decisions, a framework that other courts and litigants could adopt or contest in future platform liability litigation.

2022-10-06 Other Section 230 First Amendment AI Liability

MOTION to Exclude and/or Strike Expert Testimony of Carl… — Attachment 2845

Issue: Whether expert testimony calculating statutory penalty "violations" under state consumer protection laws by counting teen users' encounters with third-party "bad experiences" on Instagram must be excluded under Federal Rule of Evidence 702 where that methodology treats Meta as a publisher of third-party content immunized under 47 U.S.C. § 230(c)(1).

In this MDL consolidating state attorneys general consumer protection claims against Meta, defendants moved under Federal Rule of Evidence 702 to exclude the testimony of plaintiffs' damages expert Carl Saba, a financial consultant who calculated statutory violations using two primary methods: extrapolating results of an 11-day internal Meta survey (the "BEEF Survey") over six years to count teen encounters with third-party harmful content, and counting, month by month, instances of teen usage exceeding 30 minutes per day. Meta argued that Saba's "Bad Experience Violations" opinion is legally impermissible under Section 230 because it attributes liability to Meta for third-party content that Meta would have had to actively vet to avoid, that both violation-counting methodologies are incorrect as a matter of law under the four lead states' consumer protection statutes (which require violations to be tied to wrongful acts or exposed/injured consumers), and that Saba's methodology was in large part designed by counsel rather than grounded in his own expert analysis. Meta further sought exclusion of Saba's disgorgement opinions as lacking the required causal nexus between Meta's advertising profits and the alleged misconduct, and as impermissible under California, Kentucky, and New Jersey law, which permit only restitutionary disgorgement or none at all.

The motion presents a significant question about whether Section 230 immunity can be invoked not only to defeat substantive liability claims but also to exclude expert damages methodologies that treat a platform's publication of third-party content as the predicate "violation" for penalty calculation purposes, potentially extending §230's reach into the evidentiary phase of litigation. If the court grants exclusion on this ground, it would signal that plaintiffs in platform-liability cases must carefully disaggregate algorithmic and design conduct from publishing conduct even at the damages-quantification stage.

2022-10-06 Other Section 230 First Amendment

Reply Brief — Attachment 2839

Issue: Whether §230 of the Communications Decency Act and the First Amendment require exclusion, under FRE 403, of evidence concerning third-party content and protected publishing features (including recommendation algorithms, autoplay, notifications, and Snap Streaks) that plaintiff seeks to admit at trial to support failure-to-warn and product-defect claims against social media platforms for alleged adolescent addiction injuries.

In this MDL bellwether trial (Breathitt County Board of Education v. Meta et al.), defendants Snap, Meta, YouTube/Google, TikTok, and related entities filed a motion in limine seeking to exclude evidence of third-party content and §230-protected platform features; this document is defendants' reply brief in support of that motion, filed March 11, 2026, ahead of a March 18 hearing. Defendants argue that plaintiff's failure-to-warn theory impermissibly attempts to premise liability on protected publishing functions, contrary to the Ninth Circuit's holdings in *Doe v. Grindr* (9th Cir. 2025) and *Estate of Bride v. Yolo Technologies* (9th Cir. 2024), which require that any actionable harm be "independent of the site's publishing function." Defendants further contend that plaintiff's claimed "actionable features" rationale for admitting the challenged evidence is pretextual, and that the evidence presents a serious risk of unfair prejudice under FRE 403 because a jury would likely draw a direct causal inference from protected conduct rather than from any non-protected design defect.

This reply brief illustrates how the §230 immunity question is migrating from the pleadings and summary judgment stages into trial-management rulings, testing whether the court's prior "feature-by-feature" liability framework can be operationalized as an evidentiary filter; the outcome could establish a replicable in limine standard for separating protected editorial/publishing conduct from actionable product-design claims in platform-liability litigation.

2022-10-06 Other Section 230 First Amendment

Administrative Motion to File Under Seal - Plaintiff's… — Attachment 3

Issue: Whether Section 230 and the First Amendment bar admission of evidence regarding YouTube's and Meta's platform design features—including "likes," recommendations, age verification, autoplay, shorts, notifications, and negative third-party content—in a minor plaintiff's negligent design trial premised on addiction rather than third-party content.

In this bellwether trial within the coordinated California state-court proceeding (JCCP 5255), Google/YouTube moved in limine to exclude all evidence of allegedly negligent design features on Section 230 and First Amendment grounds, relying in part on the Ninth Circuit's 2025 decision in *NetChoice v. Bonta*, which applied strict scrutiny to a statute restricting platforms from displaying like-counts to minors. Judge Kuhl denied the motion in its entirety, holding that where plaintiffs' theory is that content-agnostic design features (such as the "like" mechanism, notifications, autoplay, and data-driven algorithmic profiling) made the platform addictive independent of any specific third-party content, neither Section 230 nor the First Amendment shields defendants from liability or bars the underlying evidence. The court further held that even features for which liability is precluded under Section 230—such as age verification defaults—remain admissible as contextual evidence explaining why other design choices were allegedly negligent, and that negative third-party content is admissible to illustrate the degree of a plaintiff's addiction rather than as a basis for publisher liability.

This ruling advances a significant and recurring distinction in platform liability litigation: that Section 230 and the First Amendment operate as liability bars tied to *content-based* claims, not as blanket evidentiary shields against design-defect theories premised on addiction-inducing, content-agnostic features, potentially signaling that state-court juries will hear extensive evidence about algorithmic architecture even where direct liability for that architecture is nominally cabined by prior rulings.

2022-10-06 Other Section 230 First Amendment

Administrative Motion to File Under Seal - Plaintiff's… — Attachment 2830

Issue: Whether evidence of content and platform features is subject to exclusion at trial under §230 of the Communications Decency Act and the First Amendment in a products liability action brought by a school board against Meta and other social media defendants.

This is a temporary sealing motion filed by Plaintiff Breathitt County Board of Education on March 9, 2026, in the MDL proceeding before the Northern District of California, seeking to seal its opposition to Defendants' Motion in Limine #1, which moves to exclude evidence of content and features allegedly protected by §230 and the First Amendment. The substance of the underlying opposition brief is not included in the filed document, as the motion relates solely to the administrative sealing request. No ruling on the merits of the evidentiary dispute is reflected in this filing.

Insufficient text to determine the precise arguments or the court's reasoning, but the existence of a motion in limine framing §230 and the First Amendment as evidentiary shields — rather than pleading-stage defenses — signals that defendants are pursuing these protections through trial to limit what a jury may consider regarding platform content and design features.
