The leaders of America’s biggest AI-focused companies—including Meta, Amazon, Google, Microsoft, Anthropic, Inflection AI, and OpenAI—met with President Biden on Friday, committing to continue to do what they were already doing to make AI safe, per their own skewed definitions of safety.
The meeting can be considered symptomatic of AI executives’ dominance in the conversation about the dangers posed by their own products. That dominance has resulted in the proliferation of ominous scenarios centered on hypothetical, dire long-term outcomes of the technology.
Nitasha Tiku described the phenomenon in a recent article in The Washington Post: “In recent years, Silicon Valley has become enthralled by a distinct vision of how super-intelligence might go awry,” she writes. “In these scenarios, AI isn’t necessarily sentient. Instead, it becomes fixated on a goal — even a mundane one, such as making paper clips — and triggers human extinction to optimize its task.”
Tech billionaires’ fixation on these tales and threats is bound up with their faith in longtermism. The idea can be understood as philosophy’s answer to procrastination: it encourages the art of making short-term trade-offs for the vague if noble purpose of securing humanity’s long-term well-being.
Conveniently, this concept allows them to continue developing profitable technological applications while claiming paternalistic, karmic excellence: they are not only bestowing life-optimizing tech on the masses, they are also single-handedly ensuring the tech they make doesn’t go rogue and kill us all.
The White House may just be buying this narrative; President Biden said on Friday that the gathered executives are critical for making sure AI develops “with responsibility and safety by design.”
Adhering to these ideals allows the creators of mass AI applications to control the narratives surrounding “AI risk” and its inverse, “AI safety.” By coining and propagating terms such as “alignment,” which refers to the degree to which artificial intelligence adheres to its human handlers’ intentions, they sideline less sexy terms like “copyright infringement” and “sexist and racist algorithms,” which describe AI’s current practices, if not its wildest possible future destination.
Meanwhile, in a country with no national regulations on data privacy, let alone artificial intelligence, the most immediate challenges posed by AI are not “risks.” They are facts.
By focusing on grandiose and hypothetical threats, AI executives can continue to hand out user data, ignore the dangers of AI-generated mis- and disinformation, and engage in other heedless behaviors—while sternly committing to “AI safety” alongside the President of the United States.
“AI safety” has a great ring to it. And it’s vague enough that everyone can be using the same words and mean different things. “Safety” in AI could be employed to convey that people feel their jobs are secure despite large language models’ capacity to perform some of their functions. It could mean a person of color feels confident that algorithms aren’t being used to filter them out of job or mortgage applicant pools. It could amount to public trust that facial recognition won’t be used as evidence in arrests—or it could simply mean that users trust the information they get from ChatGPT.
But the discussion about creating “safety” in AI ignores the fact that the stuff it is made of—data—remains entirely unregulated in the United States. In congressional testimony in May, OpenAI CEO Sam Altman implored lawmakers to regulate the technology that has enriched him, implicitly suggesting they ask him how they should go about doing so.
Conveniently, he can claim that his company, like those of his peers who attended Friday’s photo-op, already has a team in charge of managing AI risk (OpenAI’s trust and safety lead stepped down on Friday) that crafts safety standards which can be systematically factored into its business model.
Since AI companies were founded in a country that protects innovation and innovators above all else, they took it upon themselves to design theories and standards around safety that comported with their own moral and financial outlooks. That such unregulated waters were there waiting for them to wade into is the underlying problem.
The AI “threat” posed by China’s domestic development is only likely to intensify Washington, D.C.’s preference for stepping out of the way of anything perceived as innovation. In this view, winning the all-encompassing U.S.-China competition depends on it. That narrative omits the fact that China has curbed its own AI companies, at times to the detriment of profit-making or “innovation.”
While Beijing is regulating technology largely (though not entirely) due to its interest in controlling popular opinion and behavior, the fact remains that the average U.S. citizen is more vulnerable to encountering an unmarked deepfake than her Chinese counterpart. Ironically, synthetic images and text arguably stand to make a bigger difference in democratic countries, where information informs public opinion and public opinion dictates election outcomes.
It is better than nothing that both the White House and tech CEOs are sufficiently invested in the delivery (and optics) of secure artificial intelligence that they are willing to meet and come up with voluntary measures which, while still unspecified, sound generally positive. And it remains to be seen whether the companies will come up with more concrete guidelines in the coming days and weeks; Anthropic announced it would soon share “more specific plans concerning cybersecurity, red teaming, and responsible scaling.”
But trying to make AI “safe” without data privacy regulations would be like trying to regulate the drinking of wine in restaurants while ignoring the commercial process of turning grapes into wine. You could make sure the glass is the right shape, but you would have no way of knowing whether its contents actually contained any alcohol. They might even be poisonous.