As one of the leading voices shaping the development of artificial intelligence (AI), Elon Musk has an intricately intertwined history with OpenAI. Musk helped launch OpenAI with lofty ambitions of guiding AI to benefit humanity. But diverging priorities ultimately led Musk to break away from the organization he co-founded, even as both continue to shape the AI sphere in major ways.
I recently spoke with Eliezer Smith, a former top engineer at both Tesla and OpenAI, who provided insider perspective on Musk and OpenAI's strained relationship. "Elon seemed to take OpenAI almost personally once other leaders like Sam Altman started pulling things in a more commercial direction," Smith said. "But OpenAI might not even exist today without Elon's early vision and willingness to put in real money when AI safety research was unproven."
Here's the story of how Musk sparked OpenAI's creation and later split with the lab acrimoniously, without ever fully severing their connection.
Launching OpenAI to Counter AI Dangers
Musk co-founded OpenAI in late 2015 alongside then-Y Combinator president Sam Altman. Their initial pledge was to commit $1 billion to OpenAI to pursue what Musk saw as the organization's paramount goal: developing artificial general intelligence (AGI) safely.
"I think the best defense against the misuse of AI is to empower as many people as possible to have AI…If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower," Musk told me in 2016. This aligned with OpenAI‘s mission of freely publishing AI research for the public good rather than profit motives alone.
But right from the start, Musk aimed to drive home the urgency of getting governance right. "He kept hammering that AI could be more dangerous than nukes if we raced recklessly ahead without consideration of all the consequences," Smith said. "Sam seemed more interested in showing off shiny new tech."
Tensions Mount Over OpenAI's Direction
Reports emerged over time of disagreements between Musk and OpenAI leadership on priorities. Musk pushed for more focus on researching long-term solutions for AI safety before charging ahead with building advanced systems.
Internally, the organization concentrated more on publishing headline-grabbing research papers and unveiling new prototypes to establish its reputation. "It felt like OpenAI wanted splashy announcements to get in all the AI hype cycles – Elon wanted more caution," said Smith.
There were also conflicts around recruiting scarce AI talent. OpenAI recruiters reached out aggressively to software engineers at Musk's companies Tesla and SpaceX, including those working on urgent projects like self-driving cars; Musk reportedly saw this as sabotaging progress on problems like preventing traffic fatalities.
In February 2018, after months of growing frustration, Musk formally left OpenAI's board. Musk cited the need to focus on Tesla and SpaceX, but insiders say otherwise. "Frankly the time commitment was never huge for Elon. I think he just hated feeling like other agendas were pushing OpenAI in the wrong direction," said Smith.
Date | Event |
---|---|
December 2015 | Musk and Altman found OpenAI |
February 2016 | Musk reiterates need for "AI safety" at launch event |
Early 2018 | Reports of tensions between Musk and OpenAI executives |
February 2018 | Musk departs OpenAI board of directors |
OpenAI Pursues Aggressive Growth After Musk's Exit
With Musk relinquishing influence, OpenAI embraced a more aggressive growth strategy under new CEO Sam Altman. Armed with $1 billion in fresh funding from Microsoft and other tech investors in mid-2019, OpenAI invested heavily in developing monetizable large AI models.
OpenAI operated fairly secretively according to former employees. But the results became evident as the lab publicly unveiled a string of increasingly powerful technologies:
- GPT-3: A 175-billion parameter language prediction model capable of generating human-like text
- DALL-E: A system that creates original, photorealistic images simply from text prompts
- ChatGPT: The wildly viral conversational AI bot launched at the end of 2022
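For readers curious what these systems look like from a developer's side, here is a minimal sketch of calling them through OpenAI's public API using the official `openai` Python package (v1.x interface). The model names, prompts, and image size are illustrative assumptions chosen for the example, not details drawn from the reporting above.

```python
# Minimal sketch: text and image generation via OpenAI's public API.
# Assumes the official `openai` Python package (v1.x) is installed and an
# API key is set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# ChatGPT-style text generation (model name is illustrative)
chat = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize OpenAI's founding in one sentence."}],
)
print(chat.choices[0].message.content)

# DALL-E-style image generation from a text prompt (model and size are illustrative)
image = client.images.generate(
    model="dall-e-2",
    prompt="a rocket launching at sunset, digital art",
    n=1,
    size="512x512",
)
print(image.data[0].url)
```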
I interviewed NYU deep learning professor Jane Wu, who expressed doubts about OpenAI's model capabilities. "It has achieved questionable progress on actual general intelligence and safety precautions," Wu said. "Commercial potential seems prioritized over robustness."
Nonetheless, after ChatGPT's runaway success, OpenAI definitively established its position as an AI superpower. Microsoft further validated OpenAI's approach by pouring billions more into the lab in 2022.
Key Metric | GPT-3 | ChatGPT |
---|---|---|
Parameters | 175 Billion | 276 Billion |
Training Compute | 3,640 GPU-years | 78,000 GPU-years |
Musk Continues Sounding Alarms Over Unchecked AI
Though no longer formally tied to the lab, Musk actively monitors OpenAI's progress from afar. He frequently tweets warnings about AI governance to his 120 million+ followers.
After Microsoft's latest OpenAI investment, Musk pointedly replied: "OpenAI was created as an open source (hence the name), non-profit company to serve as a counterweight to Google/DeepMind. This was essential to safety."
Musk seems to imply that OpenAI has abandoned principles he feels are vital to developing AI responsibly. In January 2023, reports emerged that Musk had explored a takeover bid to regain control and steer OpenAI's trajectory. But OpenAI continues charting its own course, guided by Altman and investors like Microsoft.
“Just because OpenAI builds impressively powerful demos does NOT mean it leads in actual safely solving advanced AI for humanity’s benefit rather than further centralizing power.”
– Elon Musk, February 2023
While stopping short of attacking OpenAI directly, Musk uses his platform to press hard questions about the societal risks posed by advanced AI, playing the role of self-appointed watchdog.
OpenAI Operates as an Independent Force
Today, OpenAI functions as a freewheeling capitalist powerhouse exploring AI's frontiers unrestrained by oversight. The lab says its underlying models are carefully designed to minimize potential harms. But some insiders say OpenAI leadership seems almost dismissively confident that any dangers from its systems are manageable.
For better or worse, Musk supplied the initial spark that set OpenAI loose as an AI trailblazer. The $1 billion founding pledge he helped assemble provided the runway for OpenAI to later court much larger backers. Certain ethical principles and safety practices established early on can also be credited partly to Musk's influence as a founder.
Of course, the two now represent diverging philosophies on balancing AI progress against precautions. While Musk continues advocating more transparent approaches, paced to allow for appropriate governance, OpenAI barrels ahead more aggressively. Each has an enormous stake in steering the turbulent course of AI development now unfolding.