
They Knew: What the OpenAI Trial Revealed About AI's Self-Governance Problem

9 min read
Emily Chen, AI Ethics Specialist & Future of Work Analyst

On April 30, Elon Musk sat in a California federal courtroom and answered the most important question in AI governance. Asked whether xAI trained its Grok model using OpenAI’s outputs, he paused and said: “Partly.” He then added, for the record, that it was “a general practice among AI companies.”

That one word — partly — has done more to clarify the state of AI transparency than any model card, safety whitepaper, or responsible use framework published in the past three years. And the week wasn’t finished.

[Illustration: a floor-to-ceiling panel of frosted glass with a single hairline crack, a gavel resting where the crack begins.]
The transparency barrier held — until litigation cracked it open.

By May 4, the personal journals of Greg Brockman, OpenAI’s president, were being read aloud in court. One entry, written while OpenAI was privately planning its for-profit pivot, assessed the move clearly: “To convert to a b-corp without him. That’d be pretty morally bankrupt.” Another, six days after the founding team had asked Musk for nonprofit funding, read: “We’ve been thinking that maybe we should just flip to a for-profit. Making money for us sounds great and all.”

Brockman is not the villain of this story. That framing misses the point. What his journals reveal is something more disturbing than misconduct: a thoughtful person, genuinely wrestling with ethics in real time, who recognized the moral dimensions of a decision and made it anyway — not because he stopped caring, but because the structure he was operating in made the alternative nearly impossible.

That is the actual AI governance problem. And the courtroom gave us the clearest demonstration of it we have ever seen.

The Distillation Admission and What It Means

“Distillation” in this context means systematically querying a rival company’s AI model outputs — through their public APIs, their chatbots, their interfaces — and training your own model on the results. It is, as Musk acknowledged under oath, “a general practice among AI companies.” It is not clearly illegal. It almost certainly violates terms of service. And the only enforcement mechanism that has ever produced a public disclosure of the practice is a lawsuit.
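For readers who want the mechanics made concrete, here is a minimal, purely illustrative sketch of what the practice looks like in code. The endpoint, credential, model names, and response format are placeholders invented for this example, not a description of any lab’s actual pipeline:

```python
# Hypothetical sketch of distillation-by-API as described above: query a rival
# ("teacher") model through its public interface and save its answers as
# training data for your own ("student") model. All names are placeholders.
import json
import requests

TEACHER_API_URL = "https://api.example-rival.com/v1/chat"  # placeholder endpoint
API_KEY = "YOUR_KEY_HERE"                                  # placeholder credential

prompts = [
    "Explain transformer attention in two sentences.",
    "Write a Python function that reverses a linked list.",
    # ...in practice, thousands or millions more, often generated automatically
]

with open("student_training_data.jsonl", "w") as out:
    for prompt in prompts:
        # Ask the teacher model the question through its ordinary public API.
        resp = requests.post(
            TEACHER_API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"model": "rival-frontier-model", "prompt": prompt},
            timeout=30,
        )
        answer = resp.json().get("output", "")

        # Store the pair. This file later becomes supervised fine-tuning data
        # for the student model, which learns to imitate the teacher's outputs.
        out.write(json.dumps({"prompt": prompt, "completion": answer}) + "\n")
```

The unsettling part is how ordinary it is: nothing in the sketch requires access to the rival’s weights or training data, only its public interface, which is why terms of service, rather than any technical barrier, are the main line of defense.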

The implications run deeper than the immediate controversy. We have spent the past three years building an elaborate voluntary transparency architecture — model cards that describe training data, safety commitments that constrain use cases, responsible use frameworks that specify prohibited applications. All of it assumes that we know what our models were trained on. If distillation is, as Musk testified, “general practice,” then the training lineages of the most consequential AI systems in the world are partially opaque even to the companies that built them.

The Frontier Model Forum — the industry consortium formed by OpenAI, Anthropic, Google, and Microsoft to advance AI safety — has reportedly been working to combat distillation of American models by Chinese AI labs. The irony, now on the record in open court, is that the same American labs were doing it to each other.

Distillation alone does not make any model unsafe. But it does make the entire transparency infrastructure unreliable. You cannot disclose what you trained on if your training involved systematically mining outputs from systems whose own training you did not audit.

The Structural Trap Stuart Russell Named

Musk’s only expert witness at trial was Stuart Russell, the UC Berkeley AI safety researcher who co-authored the field’s foundational textbook and signed the March 2023 open letter calling for a six-month pause on training the most powerful AI systems — as did Musk, who was simultaneously launching xAI.

Russell was called to establish that the race toward AGI is genuinely dangerous. He was partially successful. But the most important thing he said was not an argument in favor of Musk’s lawsuit. It was a description of the trap that catches every actor in the race: “Each company individually feels it needs to be in this race. That means they can’t stop and solve the safety problem, which I think some of their employees would like to do, but the overall company policy is preventing them.”

This applies to xAI.

It applies to the company whose founder signed the March 2023 pause letter while planning the launch that would directly compete with OpenAI. It applies to every major AI lab whose senior researchers have published thoughtful papers on AI alignment while also shipping increasingly capable systems on aggressive timelines.

The governance implication is stark: if the problem is structural — if individual company decisions are shaped by competitive dynamics that no single company controls — then individual company commitments cannot solve it. “We will act responsibly” is a sincere statement that is also, on its own, insufficient.

The Transparency Numbers Say the Same Thing

The same week, Stanford HAI published the 2026 edition of its annual AI Index report. The finding that received the least attention was the one that mattered most.

The Foundation Model Transparency Index — which measures how much labs actually disclose about their models’ training data, parameter counts, risk assessments, and compute usage — dropped from 58 to 40 on a 100-point scale. That is the steepest single-year decline in the index’s history. The most powerful models deployed in 2026 are, by measurable criteria, the least transparent models the industry has produced.

The correlation is not accidental. As capabilities increase, competitive value increases, and the incentive to disclose decreases. We know that training xAI’s Grok 4 produced 72,816 tons of CO₂-equivalent emissions — not because xAI disclosed it, but because Stanford researchers measured it. We know Greg Brockman believes OpenAI is “80% of the way to AGI” — not from any official company statement, but because he said it in court, under oath, on May 4.

That last data point deserves emphasis. The single most significant statement about AGI progress made by an OpenAI executive this year was made under legal compulsion. In a world where 88% of enterprises use AI systems built by companies whose training practices are partly opaque, whose CO₂ footprints are partly unmeasured, and whose internal beliefs about AGI timelines are partly confidential, the word partly begins to feel like the defining adjective of the entire sector.

What the Governance Architecture Actually Produces

I have written previously about how voluntary governance commitments give way under economic or national security pressure — as Anthropic discovered when the Pentagon labeled it a supply chain risk after it refused to remove safety guardrails. I have written about how the behavioral constraints embedded in frontier AI systems are implemented as text, enforced by organizational culture, and audited only by the labs that wrote them.

This week adds a third panel to that picture: even when individuals in those labs recognize that their decisions raise serious ethical concerns — “pretty morally bankrupt” is not an ambiguous phrase — the structural dynamics of the competitive environment make alternative choices nearly impossible to sustain.

The Pentagon did not pause while the Anthropic legal battle remained unresolved. On May 1, 2026, it announced new AI agreements with Nvidia, Microsoft, Amazon Web Services, and Reflection AI, targeting deployment at the highest security classification levels — Impact Level 6 and 7 — in pursuit of its stated goal of becoming “an AI-first fighting force.” The governance dispute with one vendor did not slow the procurement timeline. It diversified the vendor list.

Meanwhile, the sharpest AI governance framework produced in the United States this week came not from Congress, not from the White House AI Office, and not from any of the industry’s voluntary governance initiatives. It came from the Academy of Motion Picture Arts and Sciences, which ruled that performances must be “demonstrably performed by humans with their consent” and screenplays must be “human-authored” to qualify for Oscar eligibility. The Oscars have clearer rules for AI use than any federal law currently on the books.

The Accountability Gap the Trial Exposed

None of what the courtroom revealed this week is evidence that the people leading AI development are uniquely bad actors. The journals, the testimony, and the admissions point in the opposite direction: these are people who understood the stakes, took them seriously, felt the weight of the trade-offs, and were still unable to override the structural forces that determine how fast you have to ship, how much you have to raise, and what governance constraints you can actually sustain when a government customer or a competitor makes the alternative economically prohibitive.

That is precisely the governance gap that external regulation is supposed to fill. External governance does not exist to catch villains. It exists because structural competitive pressure can make good people make decisions they themselves, in private, recognize as ethically compromised.

The trial produced something valuable and accidental: a week of forced disclosure. Under oath, Musk acknowledged his lab’s training practices. Under cross-examination, Brockman’s private doubts became public record. Under the pressure of litigation, OpenAI’s in-house beliefs about AGI timelines left the boardroom and entered the court record.

None of this would have happened under the current voluntary governance framework. None of it would have been required.

The AI industry built a transparency architecture that assumes good faith operates independent of competitive incentives. Brockman’s journals suggest the people inside the industry have always understood that assumption is fragile. “Pretty morally bankrupt” is not the language of someone who doesn’t know better. It is the language of someone who knows better and cannot stop.

Until external governance creates consequences for partly — for partial disclosure, partial compliance, partial transparency — the industry will keep answering accountability questions the same way Musk did: honestly, on the stand, only when the subpoena leaves no other option.

