OpenAI Co-Founders Resign Amid Elon Musk's Explosive Legal Battle

Web3

August 6, 2024 8:36 PM

In Brief:
OpenAI co-founders John Schulman and Greg Brockman announced their departures following Elon Musk's new lawsuit against the company.
Schulman is joining rival AI firm Anthropic to focus on AI alignment, while Brockman is taking an extended sabbatical.

Leadership Exodus at OpenAI

OpenAI experienced a significant shake-up as two co-founders, John Schulman and Greg Brockman, announced their departures a day after Elon Musk filed a new lawsuit against the company.

On Aug. 5, Musk filed a legal action accusing OpenAI CEO Sam Altman and Brockman of misleading him into co-founding the organization and of abandoning its original non-profit mission.

The billionaire’s lawsuit attracted significant attention because he had withdrawn a similar suit less than two months earlier.

Why They Left

Brockman, who served as the company’s President, stated that he was taking an extended sabbatical until the end of the year.

He emphasized the need to recharge after nine years with OpenAI, noting that the mission to develop safe artificial general intelligence (AGI) is ongoing.

Schulman, for his part, said he was leaving the AI company for its rival, Anthropic, to focus more on AI alignment, the field concerned with ensuring AI systems behave in ways that are beneficial rather than harmful to humans.

He added that he also wanted to return to more hands-on technical work. Schulman wrote,

“I’ve decided to pursue this goal at Anthropic, where I believe I can gain new perspectives and do research alongside people deeply engaged with the topics I’m most interested in.”

Focus on AI Safety Practices

Schulman’s exit leaves the company with just two co-founders active in day-to-day work: CEO Altman and Wojciech Zaremba, who leads language and code generation, while Brockman is away on sabbatical.

Meanwhile, Schulman’s departure has renewed scrutiny of OpenAI’s AI safety practices. Critics argue that the company’s emphasis on product development has overshadowed safety concerns.

This criticism follows the disbandment of OpenAI’s superalignment team, which was dedicated to controlling advanced AI systems. Last month, US lawmakers wrote to Altman seeking confirmation that OpenAI would honor its pledge to allocate 20% of its computing resources to AI safety research.

In response, Altman reiterated the firm’s commitment to allocating at least 20% of its computing resources to safety efforts and added,

“Our team has been working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations. Excited for this!”

Disclaimer: Backdoor provides informational content only; it is not offered or intended to be used as legal, tax, investment, financial, or other advice. Investments in digital assets involve risk, and past performance does not guarantee future results. We recommend conducting your own research before making any investment decisions.