As businesses continue to experiment with AI-powered technologies, the most common use case across industries is likely one that essentially pre-dates the explosion of interest in generative AI: the humble chatbot. Anyone who has visited a business website over the last decade will have encountered a chatbot, particularly around customer service. What most people don’t realize, however, is that chatbots have existed in some form for decades.

In this article, I’ll show just how far we’ve come and where technological advancement is taking the chatbot. 

Let’s talk, computers

The earliest version of a chatbot, ELIZA, was released in 1966. A simple rules-based program, ELIZA was humanity’s first successful attempt at conversing with computers. From an interface perspective, it wasn’t so dissimilar to how we interact with chatbots now: users would type a question (like the prompts of today) and then receive a response. A key difference from today’s models was that the responses were pre-programmed, and the chatbot would select its response based on keywords that matched the prompt. Despite this, ELIZA marked a significant leap in the relationship between human beings and machines. Arguably, its keyword-lookup approach even prefigured the similarity-based retrieval that vector databases perform today.
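To make that mechanism concrete, here is a minimal Python sketch of a rules-based chatbot of this kind. It is illustrative only, not Weizenbaum’s actual DOCTOR script, which used a more elaborate keyword-ranking and sentence-transformation scheme:

    # A toy, ELIZA-style rules-based chatbot: keywords map to canned responses.
    # Illustrative sketch only, not the original ELIZA program.
    RULES = {
        "mother": "Tell me more about your family.",
        "sad": "I am sorry to hear you are sad.",
        "always": "Can you think of a specific example?",
    }
    DEFAULT = "Please go on."

    def respond(user_input: str) -> str:
        words = user_input.lower().split()
        for keyword, reply in RULES.items():
            if keyword in words:
                return reply
        return DEFAULT  # fall back when no keyword matches

    print(respond("I am sad about my mother"))  # -> "Tell me more about your family."

Every response the program can ever give is written out in advance; the only “intelligence” is the lookup.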

The next leap forward came in the 1980s with the arrival of Jabberwacky, which set its sights on voice interaction. Today, with voice assistants and voice-activated apps on smart devices, that capability is practically synonymous with chatbots, but at the time it again shifted how we perceived our relationship with technology. Crucially, however, Jabberwacky was still rules-based, relying on pattern matching to provide mostly pre-determined responses.

A.L.I.C.E., which stands for Artificial Linguistic Internet Computer Entity (also stylized as ‘Alice’), moved things on in the 1990s, as responses to prompts could now be added to its database and inform future responses. However, the principle remained very much the same, except that the metaphysical question of what constitutes “learning” came into play. Could the fact that Alice was using previous responses to create new responses be classified as learning? From a technological perspective, the answer was no, but a more philosophical door had been opened.

While there were many developments across the decades that followed ELIZA, enabling more varied and complex interactions, the architecture and technology underpinning chatbots remained largely the same until the advent of language modeling and natural language processing (NLP).

The data-driven era

Two significant factors driving the advancement of models are the rapid increase in compute power and the availability of data, driven respectively by the development of GPUs and the internet. 

The arrival of large language models precipitated a shift from rules-based interactions to far more data-driven ones, with the ability to deliver more varied responses. ChatGPT, released in 2022, was built on a model from the GPT-3.5 series and turned a text-completion model into a conversational one through a technique called supervised fine-tuning (followed by reinforcement learning from human feedback). In supervised fine-tuning, text-completion models are fed conversational examples, which eventually enables them to learn how to interact in a more conversational manner. This is broadly how chatbots work today. The biggest difference between today’s models and their earlier counterparts is that they are trained on huge amounts of real data, so there is no need to pre-program responses.
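As a rough illustration of what supervised fine-tuning data looks like, the sketch below converts chat-style examples into the plain prompt-and-completion text a text-completion model is trained on. The exact chat template varies by model; this format is purely illustrative:

    # Illustrative only: turn chat-style examples into prompt/completion pairs
    # for supervised fine-tuning of a text-completion model. Real systems use
    # model-specific chat templates; this shows the idea, not a production format.
    conversations = [
        [
            {"role": "user", "content": "What is a vector database?"},
            {"role": "assistant", "content": "A database that indexes embeddings for similarity search."},
        ],
    ]

    def to_training_example(conversation):
        prompt_lines = []
        for turn in conversation[:-1]:
            prompt_lines.append(f"{turn['role'].capitalize()}: {turn['content']}")
        prompt_lines.append("Assistant:")
        # The model is trained to continue the prompt with the assistant's reply.
        return {"prompt": "\n".join(prompt_lines), "completion": " " + conversation[-1]["content"]}

    for conv in conversations:
        print(to_training_example(conv))

After enough examples in this shape, the model learns that text framed as a conversation should be continued as a conversation.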

Another crucial contributory factor in the advancement of chatbots, and of data science and AI more widely, has been the development and growth of open-source machine learning libraries such as PyTorch and TensorFlow. These libraries significantly lowered the barrier to entry and made models more accessible than ever, meaning businesses today can quickly develop their own chatbots or other NLP applications.
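As a taste of how little code these libraries require, here is a minimal PyTorch sketch of a bag-of-words intent classifier of the kind a simple customer-service bot might use to route queries. The vocabulary, intents, and single training step are made up for illustration:

    # Minimal PyTorch sketch: a bag-of-words intent classifier.
    # Vocabulary, intents, and training data are illustrative only.
    import torch
    import torch.nn as nn

    vocab = {"refund": 0, "order": 1, "broken": 2, "track": 3}  # toy vocabulary
    intents = ["billing", "support"]

    def featurize(text: str) -> torch.Tensor:
        vec = torch.zeros(len(vocab))
        for word in text.lower().split():
            if word in vocab:
                vec[vocab[word]] += 1.0  # count occurrences of known words
        return vec

    model = nn.Linear(len(vocab), len(intents))  # one linear layer suffices for a demo
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    # One illustrative training step on a single labeled example.
    x = featurize("I want a refund for my order").unsqueeze(0)
    y = torch.tensor([0])  # label: "billing"
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"loss after one step: {loss.item():.3f}")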

Now, the main barrier to heavier-duty, upscaled use cases for chatbots is cost. Take customer service chatbots as an example: they typically run 24/7 on an organization’s website, and each interaction consumes GPU resources, so costs can quickly spiral. This is why it is often much more cost-effective to power chatbots with smaller models, as models with more parameters incur higher GPU usage and, therefore, higher costs.
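A back-of-envelope calculation shows why model size matters at this scale. Every number below is a hypothetical placeholder, not real vendor pricing:

    # Hypothetical cost comparison. All numbers are invented to illustrate
    # scaling, not taken from any real vendor's price list.
    interactions_per_day = 10_000
    tokens_per_interaction = 500

    cost_per_1k_tokens = {"small_model": 0.0005, "large_model": 0.01}  # hypothetical USD

    for model_name, price in cost_per_1k_tokens.items():
        daily_tokens = interactions_per_day * tokens_per_interaction
        daily_cost = daily_tokens / 1000 * price
        print(f"{model_name}: ~${daily_cost:,.2f}/day, ~${daily_cost * 365:,.0f}/year")

Under these invented figures, a twenty-fold difference in per-token cost turns a few dollars a day into tens of thousands of dollars a year.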

A multi-modal future

The most significant development in the history of chatbots, and one that will continue to unlock use cases and greater efficiency, is the advent of multi-modal models. Where once we could only converse with chatbots through text and speech, we can now combine modalities. We can write text prompts that generate imagery, video, and audio, and we can also support those modalities with text, such as captioning pictures or transcribing audio.
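For a sense of how accessible these capabilities have become, the sketch below uses the open-source Hugging Face transformers pipelines for image captioning and speech transcription. The file paths are placeholders, and the default models the library downloads may change over time:

    # Illustrative multi-modal sketch using Hugging Face transformers pipelines.
    # File paths are placeholders; the library downloads default models on first use.
    from transformers import pipeline

    captioner = pipeline("image-to-text")                    # describe an image in text
    transcriber = pipeline("automatic-speech-recognition")   # turn audio into text

    print(captioner("photo_of_product.jpg"))   # e.g. [{"generated_text": "..."}]
    print(transcriber("support_call.wav"))     # e.g. {"text": "..."}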

These capabilities unleash a new era of creative and practical possibilities, from using proprietary information to create internal reports or presentations to producing more bespoke marketing materials. With the introduction of retrieval-augmented generation (RAG) architectures, chatbots can now also draw on proprietary data across an organization’s systems, enabling more powerful enterprise use cases, such as internal Q&A chatbots that answer questions specific to the user’s organization, or more advanced enterprise search and discovery.
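Here is a minimal sketch of the retrieval step at the heart of a RAG system, using the open-source sentence-transformers library for embeddings. The documents and question are invented for illustration, and a real system would send the assembled prompt to an LLM:

    # Minimal RAG retrieval sketch: embed documents, find the one most similar
    # to the question, and assemble a grounded prompt. Documents are invented.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    documents = [
        "Our refund policy allows returns within 30 days of purchase.",
        "Support hours are 9am to 5pm, Monday through Friday.",
    ]
    question = "When can I return an item?"

    model = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = model.encode(documents, normalize_embeddings=True)
    q_vec = model.encode(question, normalize_embeddings=True)

    # With normalized vectors, the dot product is cosine similarity.
    scores = doc_vecs @ q_vec
    best = documents[int(np.argmax(scores))]

    prompt = f"Answer using this context:\n{best}\n\nQuestion: {question}"
    print(prompt)  # a real system would pass this prompt to an LLM

A production system swaps the in-memory similarity search for a vector database, but the principle, retrieve first, then generate, is the same.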

As chatbots and their underlying architectures continue to evolve, so too will the complementary technologies that augment them. In 2025 and beyond, RAG systems and AI agents will continue to deliver significant efficiency gains for organizations across industries. When combined with multi-modal models, the innovative potential of chatbots seems limitless.