Sam Altman’s ouster from OpenAI and his prompt hiring by Microsoft—the company that has invested $13 billion in OpenAI—have consequences beyond one company, industry or country.
Whether he works at Microsoft, returns to OpenAI or pursues a third, unknown path, the events of the last week have only consolidated Altman’s power. As a result, he is likely to continue helping shape the direction of global AI governance.
ChatGPT’s success and Altman’s resultant celebrity gave him easy access to world leaders, making him the de facto ambassador for a version of “safe AI development” that suited his personal preferences as well as the financial interests of the company he led. The support he has received from Microsoft, big names in Silicon Valley and the more than 500 OpenAI employees who are calling for the board’s replacement and Altman’s reinstatement suggests that no matter where he works, Altman could become an even more dominant voice in the global conversation about AI development and risk.
Altman’s firing on Friday was reportedly (at least partially) attributable to internal differences between “doomers” and “boomers”—those who focus on mitigating the “existential” risks of AI and those who prefer to maximize its development and commercialization, respectively. The distinction is oversimplified but broadly applicable: the former group (which includes some members of OpenAI’s board as well as chief scientist and co-founder Ilya Sutskever), often in furtherance of effective altruism, believes AI could plausibly destroy humanity and sees it as their duty to work against that outcome. The boomers, meanwhile, are more driven by the “normal” concerns of tech companies: being first and making money.
Altman has displayed sympathy for both ideologies. But his recent decisions could have given the board ample reason to believe his focus had shifted from preventing the worst-case scenario to developing and monetizing AI products as quickly as possible (earlier this month, for example, Altman announced new consumer products at DevDay, the company’s first developer conference).
Altman’s exit from OpenAI is the result of disagreements, within one company, over how to manage artificial intelligence research and the egos and considerations driving it. But it is also representative of broader debates about AI “safety” (a nebulous term). OpenAI is distinct in its acceptance of several seemingly contradictory assumptions: first, that creating artificial general intelligence (AI that outperforms humans in various contexts) is possible; second, that the project should be relentlessly pursued; third, that it could, maybe, kill us all; and fourth, that we should at least try to institute long-term guardrails to help avoid our AI-enabled extinction.
That is a long list of assumptions to carry. OpenAI’s founders and employees have been thinking about these possible trajectories and responsibilities for years. Presidents and prime ministers, however, have only recently begun thinking through whether and how to regulate AI, though they have been making up for lost time domestically and diplomatically—largely based on input from a handful of tech CEOs, Altman first among them. In front of Congress and in meetings with leaders around the world, he has made the case for regulating AI in a way that would suit him and the for-profit arm of OpenAI (the company is unusually structured such that the research nonprofit it started as oversees a for-profit arm, which releases products like ChatGPT).
In their moves toward regulating AI, governments have embraced Altman’s philosophy to varying degrees. The EU’s AI Act, which has been in progress since 2021, prompted Altman to threaten to pull ChatGPT from the continent (though he later walked back that threat). The U.S. has likely been most susceptible to Altman’s arguments; Congress warmly received his testimony over the summer and has proven vulnerable to industry influence.
China, operating outside Silicon Valley’s pressures, has released regulations targeting deepfakes and algorithmic recommendations, among other areas. However, China’s participation in international AI dialogues with counterparts more attentive to Altman’s theories means Beijing must at least be aware of the sway he holds in the field and, possibly, over the terms of international agreements as a result.
Last month, China announced a Global AI Governance Initiative, largely as an appeal to developing countries, and less than two weeks later, representatives from China attended the U.K.’s inaugural AI Safety Summit, which convened officials from over 20 countries. The day before he was terminated, Altman spoke to leaders from Asia at the Asia-Pacific Economic Cooperation summit in San Francisco.
Throughout much of the last year, as international AI governance efforts ramped up, Altman was making the case for regulation to audiences in China, the U.K., the EU, the U.S. and elsewhere, tailoring his message slightly to the audience of the day. When addressing a Chinese audience, for example, he called for AI collaboration between the U.S. and China—a prospect he has eerily framed as necessary given the “stakes.”
Throughout his tenure leading OpenAI, Altman appeared determined to set precedents and reach breakthroughs while raking in profits and cementing himself as an ethical leader in a field with existential salience. Despite the cognitive dissonance apparent in these concurrent aims, one takeaway is evident: OpenAI’s board learned that people in the field, including Altman’s former employees—individuals who likely hold diverse stances on best practices for creating and controlling AI—can largely agree that a company so significant to the future of the field and the world should not be susceptible to a poorly executed coup.
That consensus means Altman is unlikely to lose access to presidents and parliaments anytime soon.