OpenAI CEO Sam Altman over the weekend called for enhanced collaboration between the U.S. and China on artificial intelligence development. Without mentioning that his company’s products, like ChatGPT, are not available in China, he argued that China should be a major player in ensuring the safety of global AI development and rollout.
“With the emergence of the increasingly powerful AI systems, the stakes for global cooperation have never been higher,” he said in the keynote address for a conference hosted by the Beijing Academy of Artificial Intelligence, sounding more like someone leading an advocacy group on responsible tech than what he is: the CEO of a company responsible for shepherding that emergence.
Altman’s call for U.S.-China collaboration on “mitigating risk” is only the latest (and, given the state of U.S.-China technological competition, possibly the most hazardous) incident in his quest to convince the world to regulate his industry. Unlike other tech leaders, he has been eager to meet with policymakers around the world, not just in the United States but also in South America, Africa, Europe and Asia, in an effort to encourage and influence the development of AI regulations. Presumably, he is advocating for rules that would benefit OpenAI’s business interests.
He was also one of hundreds to sign a recent one-line statement released by the Center for AI Safety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
A targetless “should” statement, the line is both benign and passive. Translated into the active voice, it would read something like: “Someone in the world should regulate AI such that it cannot realize its full, perilous potential. And please let me keep making money while appearing to do the right thing.”
Perhaps, then, Altman’s appeals to China’s AI community had something to do with the fact that Beijing takes AI regulation seriously (China has enacted rules on deepfakes and generative AI, for example)—and Chinese companies do not pose strong immediate threats to Altman’s business interests. For all the talk of the U.S.-China “AI arms race,” the fact is Chinese companies lag their American peers (OpenAI foremost among them) in advanced AI like large language models by years, not months, according to a recent analysis published in Foreign Affairs.
Yet U.S. lawmakers frequently invoke the perception that China is on the brink of usurping American AI leadership to argue against regulation. This tactic may not be analytically sound but it is politically successful. As the authors of the Foreign Affairs article point out, “If anything, regulation is the area where the United States most risks falling behind in AI.”
Last month, Altman testified to the Senate Judiciary Subcommittee, along with Christina Montgomery, chief privacy and trust officer at IBM, and Gary Marcus, professor emeritus at New York University. After explaining OpenAI’s internal and self-imposed safety assessment practices, Altman said: “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”
Notably, he did not specify which government or governments he hopes will step up to the plate. But for all his apparent support of regulation, he recently threatened to “cease operating” in Europe if OpenAI finds it too difficult to comply with the EU’s AI Act. Meanwhile, Italy temporarily blocked ChatGPT in March over privacy concerns.
Still, in his Senate testimony, Altman claimed to be interested in international standards. A pessimistic read might be that he hopes the United States’ contributions to any such global standards would yield a relatively laissez-faire approach over the course of negotiations and compromise. He told Congress that companies like his can “partner” with governments on establishing and updating safety requirements and “examining opportunities for global coordination.” Such a globally minded attitude may also help OpenAI poach some of China’s AI talent, or help the company gain access to the Chinese market.
The Biden Administration, on the other hand, appears intent on cooperating exclusively with U.S. allies. Last month’s G7 gathering featured discussion of values-based technological coordination, and a more recent meeting in Sweden of the U.S.-EU Trade and Technology Council plainly explained the reasoning behind transatlantic technology cooperation, including on developing AI standards: “Our nearly 800 million citizens share common values and principles that directly support the largest economic relationship in the world.”
But within its own territory, the EU has pursued far more aggressive and decisive AI regulation than the United States. Partially due to the well-circulated, well-received and largely unsubstantiated idea that China is nearly on par with America in all forms of innovation, many lawmakers are likely to balk at U.S. legislation similar to Europe’s AI Act.
Others will advocate for stronger consumer protections and clear industry standards. The debate was on display in the lead-up to the Sweden meeting, as aides within the Biden Administration argued over how far to take the proposed U.S.-EU standards given the strategic import of the geopolitical contest with China.
This fight has the potential to pit China hawks against big-tech hawks. Many Republicans are both. Regulating companies based in San Francisco and run by Stanford dropouts could represent a huge win in a Republican political environment that is, as already evidenced by the burgeoning presidential primary race, a contest over who can appear the most anti-woke and least elite. But competing with China and, more recently, employing all available tools to stifle China’s competitive advantages, is also a key strategy for winning electoral support.
As lawmakers work through the politics of possible AI regulation, players like Altman will continue speaking to all governments that will listen, attempting to shift and shape regulations so they align with company mission statements and bottom lines. Conveniently, in Altman’s case, the two are not mutually exclusive.