Musk Admits That xAI Trained Grok on OpenAI Models, Then Called It Standard Practice

Elon Musk confirmed on the witness stand in Oakland on April 30, 2026, that xAI used OpenAI's models to help train Grok, describing model distillation as "standard practice" across the industry.

Key Takeaways

  • Musk confirmed under cross-examination that xAI used OpenAI models “partly” to train Grok, calling it standard industry practice to “use other AIs to validate your AI.”
  • The admission came during cross-examination by OpenAI’s lead attorney William Savitt at the federal courthouse in Oakland on April 30, 2026.
  • Musk separately ranked the world’s leading AI companies on the stand, placing Anthropic first, followed by OpenAI, Google, and Chinese open-source models, and described xAI as a much smaller company by comparison.
  • OpenAI did not respond to a request for comment on Musk’s admission at the time of publication.

Elon Musk, testifying for a second day at the federal courthouse in Oakland on April 30, 2026, confirmed under cross-examination by OpenAI attorney William Savitt what the AI industry had long suspected but never heard stated openly in court.

Asked directly whether xAI used distillation on OpenAI models to train Grok, Musk said it was common practice among AI companies. When pressed on whether that meant yes for xAI specifically, he replied with a single word: “Partly.”

He added that using other AIs to validate your AI is standard practice. The statement came on day four of the Musk v. Altman trial, in which he is suing OpenAI over its shift from a nonprofit lab to a for-profit company.

What Musk’s Admission Actually Means

The admission carries legal and strategic weight. Model distillation, training smaller models on the outputs of larger ones, is not illegal under US law, but it often breaches the terms of service that OpenAI, Anthropic, and Google set for access to their products.
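In rough outline, distillation works in two steps: query the larger "teacher" model to build a synthetic dataset, then fit a smaller "student" model to reproduce the teacher's outputs. A minimal toy sketch, where the teacher is a stand-in function rather than a real model API:

```python
# Toy sketch of model distillation. The "teacher" here is a hypothetical
# stand-in function; in practice it would be API calls to a larger model.

def teacher(x):
    # Stand-in for a large model: a fixed mapping used only for illustration.
    return 2.0 * x + 1.0

# Step 1: query the teacher to build a synthetic training set.
prompts = [0.0, 1.0, 2.0, 3.0]
dataset = [(x, teacher(x)) for x in prompts]

# Step 2: fit a small student model (a line, w*x + b) to the teacher's
# outputs by per-sample gradient descent on squared error.
w, b = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    for x, y in dataset:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

print(round(w, 2), round(b, 2))  # student parameters approach the teacher's
```

The student never sees the teacher's internals, only its outputs, which is why terms-of-service clauses, not copyright or trade-secret law, are usually the lever against it.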

OpenAI and Anthropic have both moved aggressively against distillation in 2026. OpenAI told the US House Select Committee on the Chinese Communist Party in February that it had strengthened its defences after detecting organised distillation efforts.

Separately, Anthropic accused multiple Chinese AI projects, including DeepSeek, of extracting data from its Claude system at scale.

Bloomberg previously noted that OpenAI, Anthropic, and Google have coordinated through the Frontier Model Forum to detect and block suspicious mass-querying linked to distillation. 

The irony of Musk’s admission is pointed: the CEO of xAI, a company that is among the loudest critics of OpenAI’s practices, has now acknowledged that its own system relied on outputs from OpenAI models during development.

Importantly, this is the same activity that OpenAI, Google, and Anthropic, along with the US government itself, have flagged as a major risk when carried out by Chinese firms.

Musk’s AI Rankings and What They Reveal About xAI

The distillation admission was not the only key moment in Musk’s April 30 testimony. As TechCrunch reported, he was asked about his earlier claim that xAI would soon surpass nearly all companies besides Google. 

He instead ranked the leading AI firms as Anthropic first, followed by OpenAI, then Google, with Chinese open-source models next, describing xAI as much smaller, with only a few hundred employees.

Musk built xAI in 2023 specifically to compete with OpenAI, merged it with his social network X in 2025, and subsequently folded it into SpaceX.

That gap, with xAI sitting several levels below the leading AI firms by Musk’s own ranking, helps explain why distillation from more advanced models may have been appealing during its early development.

The Trial’s Broader Context

As The Verge confirmed, Musk’s full testimony on April 30 covered considerably more ground than the distillation exchange. 

He acknowledged knowing about early for-profit discussions within OpenAI but said he was reassured by Sam Altman, who sat watching from the courtroom, that the company would remain a nonprofit. 

OpenAI’s legal team argued that OpenAI’s for-profit structure was originally proposed by Musk in 2017, and that his lawsuit reflects competitive jealousy over his 2018 departure from OpenAI’s board. 

Judge Yvonne Gonzalez Rogers, for her part, noted the irony directly from the bench: addressing a moment in Musk’s testimony about the safety risks of AI, she said it was “ironic” for Musk to raise AI safety concerns while building a company in the same field.

Musk was dismissed after more than two hours of cross-examination. The next witnesses scheduled are OpenAI President Greg Brockman and AI safety expert Stuart Russell. The trial resumes Monday.

Source: Elon Musk testifies that xAI trained Grok on OpenAI models

Fawad Malik

Fawad Malik is a digital marketing professional and technology writer with over 15 years of industry experience. He specializes in SEO, SaaS, AI, consumer technology, internet services, and content strategy. He is the Founder and CEO of WebTech Solutions, a digital agency focused on helping businesses grow through modern online strategies. Through NogenTech, Fawad shares practical insights on internet technology, WiFi, apps, AI tools, digital trends, and the latest tech updates for readers worldwide.
