In 1997, when IBM’s supercomputer ‘Deep Blue’ defeated world chess champion Garry Kasparov, it marked a turning point for human civilisation. The world felt that artificial intelligence (AI) could outsmart human intelligence. But that was only the beginning. Today, any modern chess engine can easily defeat the best human player.
In the quarter century since, AI has moved into every aspect of our lives and lifestyles, not just chess. Siri and Alexa have been a part of our lives for about a decade now. Google’s popular search engine is another AI application. However, the recent breakthrough of OpenAI’s chatbot, ChatGPT, has shaken the comfortable co-existence of AI and human intelligence.
AI quickly widened its scope. GPT-4 is much more potent than its predecessor. Investments in AI development and research are rising steadily. In a tweet, author Yuval Noah Harari stated: “The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated artificial intelligence of computers might only serve to empower the natural stupidity of humans.”
This raises the question of how “artificial” AI really is. It is designed by humans, programmed by humans, and the algorithms that drive it are also laid out by humans.
Moreover, how “intelligent” are AI bots? Examined closely, their ‘intellect’ is shallow by nature: they can only tackle the specific problems they are designed for. Additionally, the nature of their intelligence differs greatly from ours. Consider Moravec’s paradox, articulated in the 1980s by robotics expert Hans Moravec of Carnegie Mellon University and his colleagues. In 1988, Moravec noted that “it is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”
Kate Crawford, a University of Southern California professor and a Microsoft researcher, examined what it takes to create AI in her 2021 book Atlas of AI. She believes the name is deceptive because AI is neither intelligent nor artificial. Beyond the algorithms written by human programmers, vast amounts of natural resources, energy, and human labour go into creating AI. Nor is it intelligent in the sense that humans are intelligent: it requires extensive human training to operate, and the statistical logic by which it creates meaning is entirely different. “Since the very beginning of AI back in 1956, we’ve made this terrible error, a sort of original sin of the field, to believe that minds are like computers, and vice versa. We assume these things are an analog to human intelligence, and nothing could be further from the truth,” says Crawford.
In a New York Times article, Noam Chomsky and his co-authors said that ChatGPT and similar systems are “a lumbering statistical engine for pattern-matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question.” Human intelligence, on the other hand, “seeks not to infer correlations among data points but to create explanations.”
GPT-3, ChatGPT’s predecessor, shocked the entire world by writing an op-ed in The Guardian. Back in 2020, American tech entrepreneur Kevin Lacker asked, “How many rainbows does it take to jump from Hawaii to seventeen?” while administering a Turing test to GPT-3. GPT-3 responded, “Two.” Thus, despite AI’s remarkable ability to write in the same style as humans, it still lacks a common-sense understanding of how the physical and social worlds function. Whatever its level of development, AI will always have this kind of weakness. According to Chomsky and his co-authors, “ChatGPT and similar programmes are incapable of distinguishing the possible from the impossible.” Here, human intelligence would defeat AI by a margin of 10 goals!
In his 2018 book, Intelligence is not Artificial, Italian-American author Piero Scaruffi wrote that most of the “intelligence” of our machines is due to the environment that humans structure for them. He wrote, “AI is just computational mathematics applied to automation.” Scaruffi observed that “[H]umanity is at risk because it is increasingly forced to coexist with very stupid machines in these vast algorithmic bureaucracies. The risk is that we will end up creating not superhuman technology but subhuman societies.” Sounds rather like what Yuval Noah Harari tweeted more recently, doesn’t it?
As a result, it appears that many relevant experts are quite unwilling to declare AI to be “artificial” or “intelligent,” at least by the yardstick of human intelligence. The term “artificial intelligence,” it seems, is merely a name for these kinds of machines, not a description of their capabilities.
(The writer is a Professor of Statistics, Indian Statistical Institute, Kolkata)