🚀 Explore this trending post from TechCrunch 📖
Elon Musk appeared in federal court in California on Wednesday to argue that Sam Altman and his co-founders “stole a charity.” He left after admitting, under oath, that Tesla is not currently pursuing artificial general intelligence (AGI) — a direct contradiction of a post he published just weeks ago.
It was that kind of day for Musk.
His lawsuit challenging OpenAI’s structure alleges that Altman and the other founders tricked him into backing a nonprofit, then converted the frontier lab into a for-profit enterprise, allowing Altman to take control of the organization.
After Musk’s several hours of testimony, the case appears likely to turn on how much weight jurors and Judge Yvonne Gonzalez Rogers give to the distinction between OpenAI investors whose potential profits are capped and those whose are not.
According to Musk’s account, when he co-founded the lab with Altman, Ilya Sutskever, Greg Brockman and others, he trusted them to build AI for humanity, but over time he became suspicious of their motives, finally concluding that they were “looting the nonprofit.”
OpenAI’s lawyer, William Savitt, sought to complicate this story during cross-examination, trying to show that Musk supported a variety of efforts to shift OpenAI toward profitability so it could raise the money needed to compete with companies like Google — including a potential merger with Tesla.
Musk testified that he discussed turning the lab into a for-profit company as early as 2016, and that in 2017 he explored creating a for-profit arm of OpenAI in which he would own a majority stake and control the company. When those plans collapsed, he stopped making regular donations to OpenAI, though he continued to pay for its office space through 2020.
Musk insisted that there is a fundamental difference between investors whose profits are capped and those whose profits are unlimited. Microsoft’s first major investments in OpenAI capped the software giant’s returns, but those restrictions have been rolled back over the years — changes Musk says ultimately prompted him to file this lawsuit.
Savitt sought to show that Altman and Shivon Zilis — Musk’s longtime adviser and the mother of four of his children — had consulted Musk about subsequent fundraising efforts, and that he did not object. Zilis was also on OpenAI’s board of directors when it approved some of those transactions.
The questioning extended to Tesla’s AI ambitions. Musk was asked about Tesla’s efforts to develop competing AI technologies, and found himself, not for the first time, on the wrong side of one of his own posts. “We are not pursuing artificial general intelligence at this time,” Musk told the court. (Tesla shareholders may want to take note.)
Musk was also asked about a post in which he claimed to have invested $100 million in OpenAI, rather than the roughly $38 million actually recorded. He said his reputation and network made up the difference.
Savitt brought up emails in which Musk supported efforts by Tesla and his brain-interface company, Neuralink, to poach employees from OpenAI while he was still on OpenAI’s board of directors. Another exchange focused on his efforts to hire OpenAI leaders after he left the board in 2018, including Andrej Karpathy, who departed OpenAI to lead Tesla’s self-driving program. Musk was also asked about a conversation in which Zilis suggested he hire Sutskever at Tesla.
Perhaps the day’s most consequential topic, however, was harm prevention. Part of Musk’s case rests on the idea that OpenAI’s conversion into a conventional company endangers society because it weakens the lab’s focus on safety. In response, Savitt pressed Musk to acknowledge that all AI companies, including his own, carry this risk.
Judge Gonzalez Rogers halted this line of questioning, but in remarks to the lawyers after testimony ended, she made clear that it could resume, within limits. When Musk’s lawyers raised questions about ChatGPT’s role in the Tumbler Ridge shooting — an incident earlier this year in Canada in which a person went on a killing spree after intense conversations with a chatbot — she indicated that she did not want to hear about scandals attributed to AI models, but that xAI’s and OpenAI’s approaches to safety were fair game.
Musk returns Thursday for another round of hostile questioning. Also expected to testify: his family office manager, Jared Birchall; AI safety expert Stuart Russell; and OpenAI president Greg Brockman.
Correction: A previous version of this story incorrectly contained details about the Tumbler Ridge shooting due to an editing error. It has been updated.
