Artificial intelligence (AI) is now thoroughly ingrained in our lives. In fact, it has been for some time – just mostly below the surface.
Every time in the last decade that we’ve searched the internet or browsed Netflix for movies, we’ve used AI. It’s there when we use navigation apps to get us from A to B, and when we use camera filters to give ourselves a smooth complexion or rabbit ears.
But it’s the emergence of “generative” AI over the past few years that’s really made it clear just how radically this technology is going to transform the world.
Put very simply, generative AI is AI that’s capable of creating new things – written text, pictures, video or even computer code – based on patterns it has learned from the examples it was trained on.
Some fun examples have emerged – who could forget deepfake Tom Cruise or the Pope in a puffer jacket?
The wave of tech-driven innovation culminated in late 2022 with the release of ChatGPT and image-creation tools like Stable Diffusion and Midjourney. These put the power of generative AI in the hands of absolutely anybody, even those with little or no technical expertise.
This has led to a fairly significant dilemma. The world is already affected by misinformation and fake news, and use of technology to spread slander and malicious falsehoods is increasingly commonplace. With everyone able to use this technology – which will no doubt become even more sophisticated as time goes on – how will we ever know if anything we see or hear is real again?
The Misinformation Era
“A lie can travel halfway around the world while the truth is still putting on its shoes.” No one seems entirely sure who first came up with this quote, but one thing that’s certain is that it’s never been truer than it is today!
Thanks to ongoing waves of technological progress – from the internet to social media to deepfakes to generative AI – it’s becoming increasingly difficult to know for certain whether what we’re seeing with our own eyes is real.
Presidential elections in the US, the global Covid-19 pandemic, the UK Brexit referendum, Russia’s invasion of Ukraine – all events of global significance. And all marked by concerted attempts to influence the way they played out via the spread of targeted disinformation campaigns.
Deepfakes are certainly one of the most concerning products of the generative AI revolution. It’s now very easy to make it appear as if anyone is saying or doing anything, even things they would never be likely to do in real life. This has ranged from attempts to ridicule or embarrass politicians to the creation of non-consensual pornography featuring celebrities and “revenge porn” targeting private individuals. There have also been instances where faked voices have been used by scammers to trick people into parting with cash by making them believe their loved ones are in trouble and need help.
Deepfakes and other “creative” uses of generative AI don’t even have to be malicious to be potentially misleading and reality-bending. Since the tools fell into the hands of the public, hundreds of AI-generated songs that were never sung by human voices have flooded social media – you can hear an AI-cloned Kurt Cobain covering Blur’s Song 2 (entirely unofficial), while the Beatles used AI to restore John Lennon’s vocals from an old demo for their officially endorsed “final record”.
Considering the potential consequences, it’s clearly critical that we fully understand the ethical implications of this society-wide transformation. How can we navigate these challenges without stifling the undoubted opportunities for positive change and progress that AI also brings?
Trust and Regulation
Being unable to distinguish between reality and AI-generated fantasies or lies could be disastrous, both for events of political significance and for our personal relationships.
Trust is essential in many areas of life – we need to trust our elected leaders, we need to trust our friends and loved ones, and we need to trust the developers of AI tools that use our data to make decisions that affect our lives.
Perhaps most importantly of all, we need to be able to trust what we see and hear with our own eyes in order to be able to make decisions about who else we can trust.
Regulators have an important role to play in determining whether AI will be trusted. We can expect to see laws come into force with the aim of preventing the technology from being used to deceive and mislead, in order to preserve public trust. One example is the set of rules that came into force in China in January 2023, prohibiting the use of deepfakes or any AI technology in ways that might disrupt the economy or national security. The rules also prohibit deepfake content featuring real, living people without their consent, and oblige creators of synthetic content to make clear that it isn’t real.
There is always the danger, however, that such measures might stifle some of the potential for innovation that AI brings. So, while China’s approach to tackling this challenge – enforcement and regulation – is one possible solution to the problem, other jurisdictions may prefer to adopt more organic approaches.
Other Safeguards and Solutions
While regulation is likely to play an important role in society’s response to the rise of the unreal, other methods will probably be just as important.
Overarching them all, perhaps, is the concept of digital literacy. By developing (and providing others with) the skills to critically evaluate the digital content that we see, we foster a society that is more resilient against misinformation. Would it really have been possible for Kurt Cobain to have covered Song 2? Is Joe Biden really likely to have sung Baby Shark after introducing it as “our national anthem”?
Human fact-checking initiatives are likely to provide another line of defense. These involve teams of trained experts who use rigorous methods to separate fact from fiction and who operate (as far as possible) free of political or other bias. They will become increasingly essential as the technology of misinformation evolves and grows more sophisticated.
And, of course, technology itself is likely to have a big part to play. Software-based solutions for detecting when content has been created or altered by AI are already available, covering both the text churned out by tools like ChatGPT and the video, images and audio used to create deepfakes.
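To give a flavour of how such detectors are typically used, here is a minimal, purely illustrative sketch in Python. It assumes the open-source Hugging Face transformers library and uses an off-the-shelf GPT-2 output detector model as a stand-in example – the model name, the sample text and the idea that a single score settles the matter are all assumptions for illustration, not a reference to any specific product mentioned above.

```python
# A minimal sketch of AI-generated-text detection, assuming the Hugging Face
# "transformers" library is installed. The model below is used purely as an
# example; real-world detectors vary widely in accuracy and their output
# should be treated as a signal, not proof.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

sample = "The moon landing was staged in a Hollywood basement, sources say."
result = detector(sample)[0]  # e.g. {'label': 'Fake', 'score': 0.87}

print(f"Label: {result['label']}, confidence: {result['score']:.2f}")
```

In practice, tools like this are most useful when combined with the other safeguards discussed here – a low "confidence" score is a prompt for human fact-checking, not a verdict in itself.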
While regulation in some form is a certainty, these organic methods of combating the rise of disinformation can also be effective, with the added bonus that they are not so likely to disrupt genuine attempts to use generative AI for positive ends.
The Benefits and Dangers of AI
However we – as individuals and as a society – decide to tackle the potential for AI-enabled misinformation, it’s clear that we face tricky challenges on the road ahead. By adopting a comprehensive strategy – encompassing critical thinking, fact-checking and technological solutions – we are most likely to mitigate the dangers while paving the way for the good work that can be done.
Undoubtedly, navigating the challenges explored here is a tricky balancing act – but it’s one that we have every incentive to pull off. However it plays out, the next five years – as society adapts to the existence of this world-changing technology and comes to terms with its implications – will be critical. This means that no one who is involved in AI in any way – whether as a user, a creator, a legislator or a beneficiary – can afford to ignore the big questions at the heart of this issue.
To stay on top of the latest on new and emerging business and tech trends, make sure to subscribe to my newsletter, follow me on Twitter, LinkedIn, and YouTube, and check out my books ‘Future Skills: The 20 Skills And Competencies Everyone Needs To Succeed In A Digital World’ and ‘The Future Internet: How the Metaverse, Web 3.0, and Blockchain Will Transform Business and Society’.