By Zuzanna Stamirowska, CEO & Co-Founder of Pathway

Dangerous, catastrophic, revolutionary. These are just some of the words media outlets have used to describe Artificial Intelligence (AI). In many stories, AI is depicted as a completely self-sufficient, self-teaching technology. But in reality, it is subject to the rules built into its design. And to fully reap the benefits of AI, we must first address its current limitations.

Limited to a moment in time

Most AI systems, including large language models like ChatGPT, are trained on static data uploads. This means that, unlike humans, machines are not in a continuous state of learning and cannot iteratively ‘unlearn’ information they were previously taught when it is found to be false or inaccurate, or when it becomes outdated. Essentially, their intelligence is stuck at a moment in time.

This limits the accuracy of AI systems: inaccurate or biased information cannot be updated or retaught. This presents a significant risk at a social level, since implicit biases in data sets become harder to address, and such biases have been shown on countless occasions to result in negative outcomes for certain groups.

At an enterprise level, it risks misleading results and the leaking of sensitive information. It has also at times reduced confidence in machine learning systems and stalled the adoption of enterprise AI use cases that rely on real-time data to make decisions, such as those in manufacturing, financial services and logistics.

The power of learning to forget

One reason that real-time learning has been so difficult to implement for AI systems is the complexity of designing streaming workflows. To date, this has resulted in highly specialised teams focusing on data streaming use cases, who are typically not integrated into the wider data team. The two teams are often quite literally coding in different languages, so integrating the workflows has been almost impossible.

However, this no longer has to be the case. It is now possible to design streaming workflows with the same code logic as in batch. This radically democratises the design of streaming workflows for developers and eases the process of putting LLM pipelines into production. Thanks to this breakthrough capability to combine batch and streaming logic in the same workflow, AI systems can now be continuously trained or updated with new streaming data, with revisions made to individual data points without requiring a full batch data upload.
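As a rough illustration of what sharing logic between batch and streaming can look like, here is a minimal, framework-agnostic Python sketch. The Reading record, enrich function and live_feed source are purely hypothetical names for this example, not the API of any particular tool; the point is only that the same transformation is reused in both modes.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator


@dataclass
class Reading:
    sensor_id: str
    value: float


def enrich(record: Reading) -> dict:
    # The core transformation: identical whether we run in batch or streaming mode.
    return {
        "sensor_id": record.sensor_id,
        "value": record.value,
        "alert": record.value > 100.0,
    }


def run_batch(records: Iterable[Reading]) -> list[dict]:
    # Batch mode: process a finite, static upload in one pass.
    return [enrich(r) for r in records]


def run_streaming(source: Iterator[Reading]) -> Iterator[dict]:
    # Streaming mode: the same enrich logic applied as records arrive.
    for record in source:
        yield enrich(record)


if __name__ == "__main__":
    batch = [Reading("a", 42.0), Reading("b", 120.0)]
    print(run_batch(batch))

    def live_feed() -> Iterator[Reading]:
        # Stand-in for a real message queue or change-data-capture source.
        yield Reading("c", 99.0)
        yield Reading("d", 150.0)

    for row in run_streaming(live_feed()):
        print(row)
```

Because the transformation is written once, the same pipeline definition can be validated on historical batch data and then pointed at a live source, which is the essence of unifying the two workflows.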

This can be compared to updating the value of a single cell in an Excel spreadsheet: the whole document is not reprocessed, only the cells that depend on it. With inaccurate source information seamlessly corrected to improve system outputs, this approach will enable the next generation of enterprise AI applications. It in turn supports the development of true real-time systems for resource management, observability and monitoring, predictive maintenance, anomaly detection, and strategic decision-making.
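To make the spreadsheet analogy concrete, here is a toy Python sketch of incremental recomputation, assuming a simple dependency graph; the IncrementalGraph class and its methods are illustrative only, not a production engine. Correcting one input reruns only the values that depend on it, rather than the whole pipeline.

```python
from collections import defaultdict


class IncrementalGraph:
    """Toy dependency graph: changing one input recomputes only its dependents,
    much like editing a single spreadsheet cell."""

    def __init__(self):
        self.values = {}                     # node name -> current value
        self.formulas = {}                   # node name -> (function, input names)
        self.dependents = defaultdict(set)   # input name -> nodes that read it

    def set_input(self, name, value):
        # Update a raw input and propagate the change downstream only.
        self.values[name] = value
        self._recompute_dependents(name)

    def define(self, name, fn, inputs):
        # Register a derived value, like a formula cell.
        self.formulas[name] = (fn, inputs)
        for dep in inputs:
            self.dependents[dep].add(name)
        self._recompute(name)

    def _recompute(self, name):
        fn, inputs = self.formulas[name]
        self.values[name] = fn(*(self.values[i] for i in inputs))
        self._recompute_dependents(name)

    def _recompute_dependents(self, name):
        for node in self.dependents[name]:
            self._recompute(node)


if __name__ == "__main__":
    g = IncrementalGraph()
    g.set_input("a", 10)
    g.set_input("b", 5)
    g.define("total", lambda a, b: a + b, ["a", "b"])
    g.define("doubled", lambda t: t * 2, ["total"])
    print(g.values["doubled"])  # 30

    # Correcting one source value reruns only the cells that depend on it.
    g.set_input("a", 12)
    print(g.values["doubled"])  # 34
```

Real streaming engines apply the same principle at scale: a corrected or retracted data point flows through the dependency graph, so the system can effectively "unlearn" it without a full reprocessing run.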

Harnessing the potential of real-time unlearning

The biggest misconception about intelligence is that it is measured by how much you know and how much you learn. But AI systems remind us of the value of context and curiosity in our own knowledge, which allow us to correct our misconceptions and improve our accuracy and understanding. We are now at the point where AI systems will be able to learn to forget information that is false, biased or out of date. And the potential that holds for LLMs and enterprise AI applications is massive.
