Since the launch of the ChatGPT prototype in November 2022, there has been massive interest in its application across industries, including customer service.
While OpenAI’s ChatGPT does seem to represent a massive leap forward and is continually improving, Elerian AI CTO Alfredo Gemma disagrees that it’s the breakthrough that will change everything.
Artificial general intelligence (AGI), the ability of an intelligent agent to understand and learn any intellectual task that a human being can, still requires a Deep Learning (DL) architecture that can generalise effectively in order to work.
Gemma says: “Large Language Models (LLM), such as the one powering ChatGPT, remember everything up to the point at which their training stopped. The question then becomes whether the system is capable after that of the human-like generalisation capabilities needed to achieve an AGI – and the likely answer is no.”
Why ChatGPT will never replace a human
Human intelligence can be considered a combination of specialised intelligences (linguistic, emotional, logical-mathematical, spatial, bodily-kinesthetic, musical, etc.), each of which leverages memory in a particular way. The ability to generalise knowledge is a fundamental aspect of human intelligence: humans can extend and apply knowledge acquired in one context to other contexts when they identify similarities. Generalisation is only possible if one can identify the context, and context is only accessed through memory. Memory is therefore a requirement for intelligence.
To generalise, an intelligent system must be able to instantly repurpose its existing cognitive building blocks to perceive completely new objects or patterns without having to learn them, that is, without having to create new building blocks specifically for them.
ChatGPT lacks the human ability to generalise
In the end, the real problem with LLMs like ChatGPT is a structural one, rooted in the underlying design of the neural networks: the Deep Learning (DL) architecture. The biggest problem with DL is its inherent inability to generalise effectively. Without generalisation, edge cases become an insurmountable problem, as the autonomous vehicle industry found out after investing more than $100 billion and still failing to produce a fully self-driving car.
Gemma continues: “Some in the AI community insist that DL’s failure to generalise can be circumvented by scaling (as is done when LLMs are created), but this is not true. Scaling is precisely what researchers in the self-driving car sector have been doing, and it does not work. The cost and the many long years it would take to accumulate enough data become untenable because corner cases are infinite.”
ChatGPT did my college assignment
While there are life-and-death cases where artificial intelligence makes the ‘wrong’ decision, there are also instances of college students using the new technology to write their university papers. Indeed, ChatGPT appears able to write articles on various topics. But on closer inspection, the results, while often well written, are in almost all cases inaccurate. Every version of a given article, even when prompted multiple times, contained errors that the chatbot couldn’t identify when engaged in conversation. ChatGPT is also prone to fabricating answers when its knowledge doesn’t cover a request, even when you’re not asking it to write an article.
In conclusion
Gemma concludes: “Bottom line, cracking generalised perception and the DL architecture needed to achieve that is still an open problem and would be a monumental achievement. For now, ChatGPT is exciting but not exactly a massive game-changer.”