The introduction of ChatGPT has reignited the ‘will AI take over from humans’ debate. ChatGPT has a singular goal – to provide written responses to written questions – but the perceived jump in technology has worried many, both in the general public and amongst the world’s foremost AI researchers.
In March of this year, an open letter was signed by AI leaders (including Stuart Russell, Elon Musk, Steve Wozniak and many others) stating “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
This was followed by another open letter in May, originated by Geoffrey Hinton, an AI veteran who recently retired from Google so that he could express his own opinions about the future of AI: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Interestingly, Hinton didn’t sign the first letter as he felt a moratorium was unachievable.
Student opinions
Clearly, AI leaders have been unnerved by the implications of AI (ironically including the CEO of OpenAI, the company that created ChatGPT), but what about the next generation of users?
As recorded in my last article, as part of a developmental activity within our DBA program, four doctoral researchers and I asked undergraduate students from Amsterdam and London for their opinions about ChatGPT. Similar concerns to those expressed by AI leaders arose: “I’m nervous about how people will use it. I’m not necessarily nervous about the technology itself.”
However, many students were pragmatic about its introduction, one comparing ChatGPT to his father’s experience at college when the internet arrived. A number of students described ChatGPT as a tool comparable to a calculator: “We still do maths, just faster maths”.
Just another tool?
However, due to its novelty, we picked up a number of comments that felt as if students were justifying ChatGPT’s use to themselves as ethical (while the debate is still ongoing within universities). These included comments that ChatGPT was “similar to Google”, or that the tool “is fine to use so long as the source was referenced”. Beyond its use as a reference tool, others said that ChatGPT “gives more time for learning” and “I actually don’t see much problem using it, as its use is inevitable.”
ChatGPT’s inevitable use raises the issue of bias. Google and YouTube already reinforce what we want to hear. As one of the students asked, especially given the conversational nature of ChatGPT, “how will information be given without expressing opinions?”
However, not all students were positive about ChatGPT. Some of those who hadn’t used the technology felt that it shouldn’t be used at university, as doing so was cheating yourself or giving those who used it an unfair advantage. This raises the question: what advantages does ChatGPT have over humans?
Information vs knowledge
This is explained by Hinton in an online interview:
“GPT knows thousands of times more than any human in basic common sense knowledge. It only has about a trillion connection strengths in its artificial neural networks, and we have around a hundred trillion connection strengths in the brain. So with a hundredth as much storage capacity it knows a thousand times more than us. That strongly suggests that it has a better way of getting information into the connections.”
Hinton seems to suggest that, at least in terms of information processing, ChatGPT indeed has an advantage over humans. But can the pattern-forming mind of AI really equate to knowledge, or is this just information processing? Hinton continues: “Brains can’t exchange digital information, so AIs can learn from each other – brains can’t. These guys [AI systems] can communicate at trillions of bits per second and we can communicate at hundreds of bits per second via sentences. It’s why ChatGPT can learn thousands of times more than you can.”
Considering that this still feels like more efficient data processing, should we worry about ChatGPT? As one of the students commented, as humans “We digest a lot of information, but this is different to knowledge”. Hinton suggests that the rate at which AI can acquire, process and share information means that it can ‘learn thousands of times more’ than humans, but is this really ‘learning’ in the same way that humans learn? Hinton himself states that ChatGPT has a different way of getting information into its connections (he says ‘better’), but the nuances and approaches that humans apply to tackle a problem have not yet been fully understood, let alone fully realised.
Whilst, as mentioned above, many AI leaders are concerned about the speed of AI development, at the end of the day ChatGPT has a lot of information, but it doesn’t have the human skill of knowledge.
Yet.
Many thanks to the students in Amsterdam and London who agreed to be interviewed. Also thanks to my fellow doctoral researchers on the DBA program at Warwick Business School: Chun-Kit Tang, Dean Al-Sened, Derrick Chang and Laura Sapa. The results included here from our exploratory study are solely my interpretation and do not represent the views of my fellow researchers, WBS or the professors who coordinated the developmental activity, Davide Nicolini and Nick Llewellyn.