To the much-discussed hallucinations of Large Language Models (LLMs), their instability, and their lack of interpretability, we can now add their vulnerability to automated adversarial attacks that force them to produce harmful content. But these are the least of their weaknesses. The limited depth of their understanding of the human world is a much more important hurdle for LLMs to overcome before they become “superintelligence.”
According to ChatGPT creator OpenAI, however, this “most impactful technology humanity has ever invented” could “arrive this decade” and could “lead to human extinction.”
We heard years ago, too, that humanlike AI was just around the corner. At the time Terry Winograd entered graduate school in the late 1960s, most [intelligent and knowledgeable humans] “believed that it wouldn’t be long before machines would see, hear, speak, move, and otherwise perform humanlike tasks,” wrote John Markoff in Machines of Loving Grace.
For his PhD dissertation at MIT, Winograd developed SHRDLU, a computer program capable of carrying on conversations about a made-up world, a “microworld” consisting of toy blocks and a “gripper” for moving them about. It responded to commands in English to manipulate the blocks with its gripper, and it “understood” questions about the state of its world.
In his history of artificial intelligence, Nils Nilsson says that SHRDLU’s impressive performance made some NLP researchers “optimistic about future success.” But “Winograd soon abandoned this line of research in favor of pursuing work devoted to the interaction of computers and people. Perhaps because he had first-hand experience of how much knowledge was required for successful language understanding in something so simple as the blocks world, he despaired of ever giving computers enough knowledge to duplicate the full range of human verbal competence.”
Later, in 2006, Winograd saw statistical language understanding, neural networks and machine learning as new developments moving the AI field in the direction of his preferred approach to human-computer interaction. “Enlightened trial and error outperforms the planning of the flawless intellect” is at the center of this approach, he wrote, acknowledging “the limitations of knowing and modeling the complexities of the real human world.”
Further triumphs of the statistical analysis approach to AI, particularly in the domain of image identification in 2012, made most intelligent and knowledgeable humans believe that human-like machine intelligence, or even “superintelligence,” was, again, just around the corner.
Most, but not all. One popular tool used by those questioning the intelligence of the new AI became known as the Winograd Schema Challenge. This was the 2010 brainchild of Hector Levesque at the University of Toronto, based on an example of semantic ambiguity mentioned in Winograd’s 1972 “Understanding Natural Language.” To overcome the limitations of the “Turing Test,” Levesque suggested a multiple-choice test that requires the correct resolution of an ambiguous pronoun in a statement. For example:
The trophy doesn’t fit in the brown suitcase because it is too large. What is too large?
A. The trophy
B. The suitcase
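For readers curious about the mechanics, here is a minimal sketch of how such a schema can be put to a language model as a two-way choice. It is only an illustration: the ask_model function is a hypothetical stand-in for whatever chat or completion API one happens to use, and the scoring is deliberately naive.

```python
# A Winograd schema as a data structure: a sentence with an ambiguous pronoun
# and exactly two candidate referents, only one of which is correct.
schema = {
    "sentence": "The trophy doesn't fit in the brown suitcase because it is too large.",
    "question": "What is too large?",
    "candidates": ["The trophy", "The suitcase"],
    "answer": "The trophy",
}

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in: replace with a call to a real chat/completion API."""
    raise NotImplementedError("plug in your own model call here")

def run_schema(item: dict) -> bool:
    """Pose the schema as a multiple-choice question and check the model's pick."""
    prompt = (
        f"{item['sentence']}\n{item['question']}\n"
        f"A. {item['candidates'][0]}\nB. {item['candidates'][1]}\n"
        "Answer with A or B only."
    )
    reply = ask_model(prompt).strip().upper()
    chosen = item["candidates"][0] if reply.startswith("A") else item["candidates"][1]
    return chosen == item["answer"]
```

Averaging run_schema over a few hundred such items is, in essence, all the challenge asks; the hard part is supposed to be the commonsense knowledge needed to answer them.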
“The challenge attracted a fair amount of favorable interest from both the research community and the popular science press. The naturalness of the problem made it a fair one for AI systems; the complexity of the inferences involved seemed to put it far beyond the range of what was then current technology,” wrote the authors of “The Defeat of the Winograd Schema Challenge” (January 2023).
Indeed, at the “first and last running of the Winograd Schema Challenge,” which took place in 2016, the most successful of the six AI programs that entered it achieved a score of 58% on the test collection, only slightly better than chance. At the inaugural O’Reilly AI conference the same year, while some talked about “super-human” driverless cars that were just around the corner, others, including deep learning pioneer Yann LeCun, mentioned the Winograd Schema as an unsolved challenge in testing a machine’s knowledge of how the world works.
Just a year later, however, natural language processing (NLP) moved forward when Google researchers published “Attention Is All You Need.” The paper proposed the Transformer (the T in ChatGPT), a new “network architecture” that considerably improved the (machine) learning of dependencies among words, even when they are far apart in the text. The result of this breakthrough, in addition to unleashing the avalanche of generative AI, was the defeat of the Winograd Schema Challenge, with AI going from “random guessing to human performance in a very short time span” and achieving a score of 90.1% in 2019.
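The mechanism behind that improvement is “self-attention”: every word in a passage weighs its relationship to every other word, however far apart the two sit. Below is a minimal NumPy sketch of the scaled dot-product attention described in the paper; the toy dimensions and random inputs are mine, for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, per 'Attention Is All You Need'."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of every token to every other token
    weights = softmax(scores, axis=-1)   # each row sums to 1: how strongly a token attends to the others
    return weights @ V                   # blend of representations, weighted by attention

# Toy example: 5 tokens with 8-dimensional representations (sizes chosen arbitrarily).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))
output = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention: Q = K = V
print(output.shape)  # (5, 8)
```

Because every pair of positions gets a weight, the distance between two related words no longer matters the way it did in earlier sequence models.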
The large language model GPT-3 achieved a similar score (88.3%) in 2020. Eager to see the triumph of LLMs for myself, I ran a few Winograd schema sentences by the July 20, 2023, version of ChatGPT. It gave me the correct answers to a couple of them. But then I presented it with “The sack of potatoes had been placed below the bag of flour, so it had to be moved first. What had to be moved first?” and got this in response:
In the given sentence, “it” refers to the sack of potatoes. The sentence “The sack of potatoes had been placed below the bag of flour, so it had to be moved first” means that the sack of potatoes had to be moved first before accessing or moving the bag of flour, which was placed on top of it.
When I switched the position of the sack of potatoes from “below” to “above,” I got this in response:
Apologies for any confusion in my previous response. In the given sentence, “it” refers to the bag of flour. The correct interpretation of the sentence is:
“The sack of potatoes had been placed above the bag of flour, so it had to be moved first.”
This means that the bag of flour had to be moved first because it was below the sack of potatoes, and you needed to move it out of the way before accessing the sack of potatoes placed above it.
It appears that the great leap forward for AI, which some of its prominent developers say will soon lead to a superintelligence that could destroy humanity, still lacks, at least in this case, a child-like understanding of space and motion and of what’s involved in moving and manipulating physical objects.
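Anyone can rerun this kind of probe programmatically rather than through the chat window. The sketch below assumes the OpenAI Python client and its chat completions endpoint; the model name and the exact prompt wording are simply my reconstruction of the exchange above, not the original transcript.

```python
# Requires: pip install openai, and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def probe(sentence: str) -> str:
    """Ask the model which object had to be moved first in a Winograd-style sentence."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name; any chat model can be substituted
        messages=[{"role": "user", "content": f"{sentence} What had to be moved first?"}],
    )
    return response.choices[0].message.content

below = "The sack of potatoes had been placed below the bag of flour, so it had to be moved first."
above = "The sack of potatoes had been placed above the bag of flour, so it had to be moved first."

# Flipping a single word ("below" -> "above") flips the correct referent of "it";
# comparing the two answers shows whether the model tracks that change.
print(probe(below))
print(probe(above))
```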
Sometimes LLMs appear to understand the world and how to operate in it, and sometimes they fail to do so. Sometimes their language skills lead to accurate translations and sometimes to complete failures. But what about their impressive performance on the Winograd Schema Challenge?
In the paper documenting the demise of the Winograd Schema Challenge, the researchers who had worked for about ten years to promote it as a test of machine intelligence attribute the human-like achievements of LLMs to flaws in the test’s “formulation and execution.”
In addition, they reported that in the early 2010s, “when there were few signs that AI systems could succeed at the Winograd Schema Challenge, there were those who warned that it might be a less robust test than its creators hoped and believed. They argued that… success might come without solving the underlying problem that had motivated the challenge.”
The proponents of the Winograd Schema now admit that solving it “is not a surrogate for the ability to do commonsense reasoning, let alone for intelligence. The difficulty of using success at a specific task as a surrogate for intelligence is one that continues to this day, even as the tasks computers can successfully perform significantly increase in complexity.”
Or what Yehoshua Bar-Hillel called, seventy years ago (!), the fallacy of the first step: the distance from being unable to do something to doing it badly is usually much shorter than the distance from doing something badly to doing it correctly.
Or what Terry Winograd said in 2004, according to Nilsson: “There are fundamental gulfs between the way that SHRDLU and its kin operate, and whatever it is that goes on in our brains.”
Winograd thought it would take decades or more to realize the initial ambitions of NLP researchers. “In the meantime,” he said, “AI took on much more doable goals of working in less ambitious niches, or accepting less-than-human results (as in translation).”
Not anymore. The ambition of today’s AI developers is focused on getting to artificial general intelligence (AGI): recreating the full spectrum of human intelligence in machines while fixing all of its flaws.
After attending the O’Reilly AI conference in 2016, I wrote:
Maybe, just maybe, our minds are not computers and computers do not resemble our brains? And maybe, just maybe, if we finally abandon the futile pursuit of replicating “human-level AI” in computers, we will find many additional—albeit “narrow”—applications of computers to enrich and improve our lives?
See also:
Lessons Learned From Computer Conversations And Taming AI 70 Years Ago
Demonstrating Why AI Can’t Do High-Quality Translation