AGI (or Artificial General Intelligence) is something everyone should know about and think about. This was true even before the recent OpenAI drama brought the issue into the limelight, with rumors speculating that the turmoil may have been caused by disagreements over safety concerns regarding a breakthrough on AGI. Whether or not that is true (we may never know), AGI remains a serious matter. In this article, we discuss what AGI is, or could be, what it means to all of us, and what, if anything, the average person can do about it.
What is Artificial General Intelligence?
As expected for such a complex and impactful topic, definitions vary:
- Wikipedia defines AGI as a machine agent that can accomplish any task that a human can perform. This includes reasoning, planning, executing, communicating, etc.
- ChatGPT defines AGI as “highly autonomous systems that have the ability to outperform humans at nearly any economically valuable work. AGI is often contrasted with narrow or specialized AI, which is designed to perform specific tasks or solve particular problems but lacks the broad cognitive abilities associated with human intelligence. The key characteristic of AGI is its capacity for generalization and adaptation across a wide range of tasks and domains. …”
Given the recent OpenAI news, it is particularly timely that OpenAI’s Chief Scientist, Ilya Sutskever, presented his perspective on AGI just a few weeks ago at TED AI. You can find his full presentation here; some takeaways:
- He described a key tenet of AGI as being potentially smarter than humans at anything and everything, with all of human knowledge to back it up.
- He also described AGI as having the ability to teach itself – thereby creating new, even potentially smarter AGIs.
We can already see distinctions even within these definitions. The first and third are far broader, covering any human endeavor, while the second is more economically targeted. Both come with benefits and risks: the risks of the broader definitions are existential, while those of the second may lean more toward massive workplace displacement and other economic impacts.
Will AGI happen in our lifetimes?
Hard to say. Experts differ on whether AGI will never happen or whether it is merely a few years away. Much of this discrepancy stems from the lack of a broadly agreed-upon, precise definition, as the examples above show.
Should we be worried?
Yes, I believe so. If nothing else, the current drama at OpenAI shows how little we know about a technology development so fundamental to humanity’s future, and how unstructured our global conversation on the topic is. Fundamental questions remain open, such as: “who will decide if AGI has been reached?”, “would the rest of us even know that it has happened or is imminent?”, “what measures will be in place to manage it?”, “how will countries around the world collaborate or fight over it?”, and so on.
Is this Skynet?
I don’t think this is the biggest cause for worry. Certain parts of the AGI definition (particularly the idea of AGIs creating future AGIs) point in this direction, and movies like Terminator paint one view of the future. But history has shown that harm from technology usually comes from intentional or accidental human misuse of it. AGI may eventually reach some form of consciousness that is independent of humans, but it seems far more likely that human-directed AI-powered weapons, misinformation, job displacement, environmental disruption, and the like will threaten our well-being before that.
What can I do?
I believe the only thing each of us can do is to stay informed and AI-literate, and to exercise our rights, voice our opinions, and apply our best judgement. The technology is transformative. What is not clear is who will decide how it will transform.
It is also worth noting that AGI is unlikely to be a binary event (not there one day and there the next). ChatGPT appeared to many people as if it came out of nowhere, but it did not: it was preceded over the last several years by GPT-2 and GPT-3, both very powerful but harder to use and far less well known. While ChatGPT (GPT-3.5 and beyond) represented a major advance, the trend was already in place. Similarly, we will see AGI coming (we already do). The question is what we will do about it before it arrives. That decision should be made by everyone. No matter what happens with OpenAI, the AGI debate and its issues are here to stay, and we will need to deal with them, ideally sooner rather than later.
Read the full article here