“The age of AI has begun,” Bill Gates declared this March, reflecting on an OpenAI demonstration of feats such as acing an AP Bio exam and giving a thoughtful, touching answer when asked what it would do if it were the father of a sick child.
At the same time, tech giants like Microsoft and Google have been locked in a race to develop AI tech, integrate it into their existing ecosystems and dominate the market. In February, Microsoft CEO Satya Nadella challenged Sundar Pichai of Google to “come out and dance” on the AI battlefield.
For businesses, it’s a challenge to keep up. On the one hand, AI promises to streamline workflows, automate tedious tasks and increase overall productivity. On the other, the AI sphere is fast-paced, with new tools constantly appearing. Where should businesses place their bets to stay ahead of the curve?
And now, many tech experts are backpedaling. Leaders like Apple co-founder Steve Wozniak and Tesla’s Elon Musk, alongside 1,300 other industry experts, professors and AI luminaries, signed an open letter calling for a six-month pause in AI development.
At the same time, the “godfather of AI,” Geoffrey Hinton, resigned as one of Google’s lead AI researchers and warned in The New York Times of the dangers of the technology he’d helped create.
Even Sam Altman, CEO of ChatGPT maker OpenAI, joined the chorus of warning voices during a congressional hearing.
But what are these warnings about? Why do tech experts say that AI could actually pose a threat to businesses — and even humanity?
Here is a closer look at their warnings.
Uncertain liability
To begin with, there is a very business-focused concern: liability.
While AIs have developed amazing capabilities, they are far from faultless. ChatGPT, for instance, famously invented scientific references in a paper it helped write.
Consequently, the question of liability arises. If a business uses AI to complete a task and gives a client erroneous information, who is liable for damages? The business? The AI provider?
None of that is clear right now, and traditional business insurance generally does not cover AI-related liabilities.
Regulators and insurers are struggling to catch up. Only recently did the EU draft a framework to regulate AI liability.
Related: Rein in the AI Revolution Through the Power of Legal Liability
Large-scale data theft
Another concern is linked to unauthorized data use and cybersecurity threats. AI systems frequently store and handle large amounts of sensitive information, much of it collected in legal gray areas.
This could make them attractive targets for cyberattacks.
“In the absence of robust privacy regulations (US) or adequate, timely enforcement of existing laws (EU), businesses have a tendency to collect as much data as they possibly can,” explained Merve Hickok, Chair and Research Director at the Center for AI and Digital Policy, in an interview with The Cyber Express.
“AI systems tend to connect previously disparate datasets,” Hickok continued. “This means that data breaches can result in exposure of more granular data and can create even more serious harm.”
Misinformation
Next up, bad actors are turning to AI to generate misinformation. Not only can this have serious ramifications for political figures, especially with an election year looming; it can also cause direct damage to businesses.
Whether targeted or accidental, misinformation is already rampant online. AI will likely drive up the volume and make it harder to spot.
Think of AI-generated photos of business leaders, audio mimicking a politician’s voice or artificial news anchors announcing convincing economic news. Business decisions triggered by such fake information could have disastrous consequences.
Related: Pope Francis Didn’t Really Wear A White Puffer Coat. But It Won’t Be the Last Time You’re Fooled By an AI-Generated Image.
Demotivated and less creative team members
Entrepreneurs are also debating how AI will affect the psyche of individual members of the workforce.
“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” the open letter asks.
According to Matt Cronin, the U.S. Department of Justice’s National Security & Cybercrime Coordinator, the answer is a clear “No.” Such a large-scale replacement would devastate the motivation and creativity of people in the workforce.
“Mastering a domain and deeply understanding a topic takes significant time and effort,” he writes in The Hill. “For the first time in history, an entire generation can skip this process and still progress in school and work. However, reliance on generative AI comes with a hidden price. You are not truly learning — at least not in a way that meaningfully benefits you.”
Ultimately, widespread AI use may erode team members’ competence, including their critical thinking skills.
Related: AI Can Replace (Some) Jobs — But It Can’t Replace Human Connection. Here’s Why.
Economic and political instability
It is not yet clear what economic shifts widespread AI adoption will cause, but they will likely be large and fast. After all, a recent Goldman Sachs estimate projected that two-thirds of current occupations could be partially or fully automated, with unclear ramifications for individual businesses.
According to experts’ more pessimistic outlooks, AI could also incite political instability. This could range from election tampering to truly apocalyptic scenarios.
In an op-ed in Time Magazine, decision theorist Eliezer Yudkowsky called for a general halt to AI development. He and others argue that we are unprepared for powerful AIs and that unfettered development could lead to catastrophe.
Conclusion
AI tools hold immense potential to increase businesses’ productivity and level up their success.
However, it’s crucial to be aware of the dangers AI systems pose, dangers flagged not just by doomsayers and techno-skeptics but by the very people who developed these technologies.
That awareness will help infuse businesses’ approach to AI with the caution critical to successful adoption.