
Navigating the Generative AI Wave: How to Enhance Your Chatbots Without Starting From Scratch

In this article:

Why boil the ocean?

Pre-build new intents in minutes

Intent Boosting

Advanced entity extraction

Search content on fallback (RAG)

In summary

 

Why boil the ocean?

In today’s rapidly evolving tech landscape, it’s easy to feel overwhelmed by the constant stream of new Generative AI models and “AI-powered” tools. Every day brings a fresh wave of AI-enhanced applications, including updates to existing business tools with AI-driven “boosts” or “plug-ins.”

While the excitement and potential of Generative AI present businesses with incredible opportunities to do more with their data, the big question is: where do you begin? Most companies don’t have the luxury of dedicating resources to building an entire AI and Machine Learning team to tackle such a vast and intricate field.

At EBM, as specialists in Conversational AI, we’ve witnessed this shift firsthand. Our clients are asking how they can harness the power of Generative AI to enhance their chatbots. But instead of scrapping all their work and diving headfirst into an entirely Gen AI-focused approach (which isn’t ideal for many reasons!), why not take advantage of this new technology within the framework of their existing, trusted, and integrated chatbots?

The solution? Selectively incorporating Generative AI to improve customer experiences—safely and efficiently—within the confines of the chatbots they’ve already invested in.

Here are four key areas where our clients are using Generative AI to supercharge their current chatbots—without throwing out the foundation they’ve built.

Pre-build new intents in minutes

If you’ve ever worked with chatbot platforms, you know that one of the most time-consuming tasks is training your bot to understand user queries. Building up your bot’s training data—and continuously refining it as customers ask questions in new and unexpected ways—is crucial for enhancing its ability to “understand.”

Once your bot is trained and hitting a reasonable level of accuracy, traditional intent model training becomes quick and cost-effective. But let’s rewind for a moment: how do you even create new intents when you don’t have a neatly curated set of training data to start with?

Enter generative AI. With a generative model, you can create those crucial 20 or so initial training phrases from just a couple of examples. This allows you to swiftly build out your intent models, even without pre-existing data. Of course, human oversight is still essential for reviewing and refining the dataset, particularly for nuances only subject-matter experts can spot. But the time saved in creating that initial intent model? Game-changing.
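
To make this concrete, here’s a minimal sketch of the idea using the OpenAI Python SDK as one possible backend. The model name, prompt wording, and intent names are all illustrative assumptions, not how any particular platform implements it:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def generate_training_phrases(intent: str, seeds: list[str], count: int = 20) -> list[str]:
    """Hypothetical helper: ask an LLM to propose paraphrases for an intent
    from a couple of seed examples. Output still needs human review."""
    prompt = (
        "You are helping build training data for a chatbot intent classifier.\n"
        f"Intent: {intent}\n"
        "Seed examples:\n" + "\n".join(f"- {s}" for s in seeds) + "\n"
        f"Write {count} new, varied ways a real customer might phrase this, "
        "one per line, with no numbering."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model will do
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content
    return [line.strip("-• ").strip() for line in text.splitlines() if line.strip()]

# Two seed phrases in, ~20 candidate phrases out, ready for human curation
candidates = generate_training_phrases(
    "book_haircut",
    ["Can I book a haircut?", "I need a trim sometime this week"],
)
```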

If you’re using the EBM platform, you’ve been ahead of the curve. We’ve had this feature for over a year, enabling chatbot developers to automatically generate a user-defined number of training examples from a single input, and even curate the results before adding them to the intent model.

Intent Boosting

A common challenge for chatbots is correctly interpreting a customer’s query. Misunderstandings often occur when your intent model lacks sufficient training data, or when a user’s question is phrased in an unusual way or contains spelling mistakes, causing the intent model to return scores too low to trigger the right response or action.

Enter generative AI. If your chatbot struggles to understand a customer’s query, why not leverage generative AI to interpret the question and guide the customer towards the correct answer? By incorporating this feature, combined with some prompt engineering, you can quickly add a fallback mechanism to boost your chatbot’s understanding with minimal effort.
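
As a rough sketch of what such a fallback might look like: the confidence threshold, intent names, and model below are all assumptions, and the score itself comes from your existing NLU engine:

```python
from openai import OpenAI

client = OpenAI()

KNOWN_INTENTS = ["book_appointment", "cancel_appointment", "opening_hours", "pricing"]
CONFIDENCE_THRESHOLD = 0.6  # assumption: tune against your own intent model's scores

def resolve_intent(user_query: str, nlu_intent: str, nlu_score: float) -> str:
    """Trust the traditional intent model when it is confident;
    otherwise ask an LLM to map the query onto a known intent."""
    if nlu_score >= CONFIDENCE_THRESHOLD:
        return nlu_intent
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        messages=[{
            "role": "user",
            "content": (
                "A customer wrote the message below, possibly with typos or "
                "unusual phrasing. Pick the single best match from these "
                f"intents: {', '.join(KNOWN_INTENTS)}. Reply with the intent "
                f"name only, or 'none' if nothing fits.\n\nMessage: {user_query}"
            ),
        }],
    )
    candidate = response.choices[0].message.content.strip().lower()
    return candidate if candidate in KNOWN_INTENTS else "fallback"
```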

Plus, you’ll gather valuable new training data to further refine your chatbot’s intent model!

Advanced entity extraction

Extracting specific entities from a user’s input is a standard feature in chatbot platforms, playing a crucial role in understanding and normalising key values within a query. For example, if a user asks:

“Can you fit me in for a haircut on Monday, please?”

From this query, it’s clear the user wants a hair appointment on the coming Monday. Entity extraction allows you to pull out and normalise key details like the service (“haircut”) and the specific date of that Monday. With this structured data, you can easily call an API to check available time slots for the service. Sounds straightforward, right?
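
For the query above, the extractor’s output might look something like this (the field names and normalised date are purely illustrative):

```python
# Illustrative structured output from a traditional entity extractor;
# the field names and date are made up for the example
extracted = {
    "intent": "book_appointment",
    "entities": {
        "service": "haircut",
        "date": "2025-03-10",  # "Monday" normalised to the next Monday's ISO date
    },
}
```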

But what happens when the query is less structured?

“My hair desperately needs trimming, and I’ve only got either Tuesday or Wednesday next week free. Can you help?”

Training a traditional intent model to recognise, extract, and normalise this type of input can be quite a challenge! This is where generative AI shines. With the help of generative AI, paired with a new feature in large language models called “Function Calling,” you can understand the intent and extract the necessary service details and dates—perfectly suited for your backend API.
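
Here’s a hedged sketch of that flow using the OpenAI function-calling (tools) API as one example; the check_availability schema is a made-up stand-in for whatever your real booking endpoint expects:

```python
import json
from openai import OpenAI

client = OpenAI()

# Illustrative tool schema: the name and fields are assumptions, shaped to
# match a hypothetical backend booking API
tools = [{
    "type": "function",
    "function": {
        "name": "check_availability",
        "description": "Check open time slots for a service on candidate dates.",
        "parameters": {
            "type": "object",
            "properties": {
                "service": {"type": "string", "description": "e.g. 'haircut'"},
                "dates": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Candidate dates in ISO format (YYYY-MM-DD)",
                },
            },
            "required": ["service", "dates"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any model with function-calling support
    messages=[{
        "role": "user",
        "content": "My hair desperately needs trimming, and I've only got "
                   "either Tuesday or Wednesday next week free. Can you help?",
    }],
    tools=tools,
)

# The model emits structured arguments ready for your existing backend API,
# e.g. {"service": "haircut", "dates": ["2025-03-11", "2025-03-12"]}
call = response.choices[0].message.tool_calls[0]
arguments = json.loads(call.function.arguments)
```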

The solution? Let your traditional intent model handle the simpler queries and entities, while generative AI takes care of the more complex inputs, all while preserving your existing conversational flows and API integrations.

Elevate your chatbot’s intelligence by building on the valuable work already done with your backend systems, and take user experience to the next level.

Search content on fallback (RAG)

Imagine your chatbot is already up and running, fully integrated with your backend systems, answering common questions, and completing tasks. All is going smoothly. Then someone suggests a brilliant idea: why not have the chatbot help employees navigate the vast HR handbook or guide engineers through processes detailed in hundreds of documents? While these are excellent use cases for a multi-channel chatbot, the challenge of scanning all those documents, predicting potential questions, training the chatbot, and curating responses is monumental—practically impossible when you’re dealing with hundreds or even thousands of documents. And what if the document gets updated? You’d have to start the whole process again!

So, what’s the solution? Fine-tuning a dedicated LLM (large language model) on all these documents might seem like the way forward, but it’s a costly and complex task, requiring significant expertise and processing power. And if the documents change? Time to retrain the model.

While that’s an option some businesses might take, there’s a much more efficient approach: Retrieval Augmented Generation (or RAG). This method combines two powerful technologies—semantic search and LLMs. Here’s how it works:

Imagine an employee asks:

“How many holidays do I get each year if I’m based in Spain and have been at the company for two years?”

With semantic search, the system scans your documents and retrieves snippets that best match the query. While useful, these snippets would still need to be manually read and understood.

Enter Generative AI. Instead of stopping at semantic search, you feed the search results to an LLM, which interprets the information and generates a tailored answer.

But what happens when the documents change? Simple! The semantic search is automatically updated, ensuring the LLM always has access to the latest data.

Breaking it down:

  1. An LLM generates the answer, using data retrieved from semantic search. This is the magic of Retrieval Augmented Generation.
  2. It’s fast, efficient, and accessible through a single API call—perfect for integrating into your existing chatbot. If the chatbot struggles to understand a query, simply give the user the option to consult your RAG system.
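
As a minimal sketch of that single call, assuming a search_documents helper that stands in for whatever semantic search layer (embeddings plus a vector store) you already have; the model and prompt are likewise assumptions:

```python
from openai import OpenAI

client = OpenAI()

def search_documents(query: str, top_k: int = 3) -> list[dict]:
    """Hypothetical stand-in for your semantic search layer. In practice this
    is an embeddings + vector-store lookup over your indexed documents."""
    # Stubbed so the sketch is self-contained; real snippets come from the index
    return [{"text": "Annual leave entitlement for employees based in Spain ...",
             "source": "HR handbook", "page": 12}][:top_k]

def answer_on_fallback(user_query: str) -> str:
    """Retrieve matching snippets, then let the LLM compose a cited answer."""
    snippets = search_documents(user_query)
    context = "\n\n".join(
        f"[{s['source']}, p.{s['page']}]\n{s['text']}" for s in snippets
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        messages=[
            {"role": "system", "content": (
                "Answer using ONLY the provided snippets, citing the source "
                "reference for each claim. If the answer isn't in the "
                "snippets, say you don't know."
            )},
            {"role": "user", "content": f"Snippets:\n{context}\n\nQuestion: {user_query}"},
        ],
    )
    return response.choices[0].message.content
```

Constraining the model to the retrieved snippets, and passing the source references through, is exactly what enables the safeguards described below.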

Now, you might be thinking, “But what if Generative AI gets the answer wrong?” Yes, there’s always a risk of AI “hallucination,” where the model generates inaccurate responses. However, using RAG offers key advantages:

  • Semantic search constrains the content the LLM uses to generate answers, significantly reducing the risk of hallucination (though not eliminating it entirely).
  • Content snippets often come with document references, like the page number and location of the original snippet. Why not include these references in your chatbot’s responses?
  • Tag AI-generated responses, so users are aware they’re interacting with AI. If they’re unsure, they can always consult the source documents for verification.

This approach is already being used by some of our customers to supercharge their existing chatbots. It provides a seamless experience for users, while also offering a unified data source to help your business understand customer and employee needs, guiding future decisions.

Ready to take your chatbot to the next level? RAG might just be the game-changer you’ve been looking for.

In summary

There are countless examples of chatbots delivering real value, whether they’re assisting anonymous visitors on your website, helping logged-in customers with account-specific queries, or guiding employees on things like holiday entitlements. So, why abandon all that hard work to start fresh with a fully AI-generated chatbot—and take on the risks that come with it?

By blending generative AI with your existing chatbot platform, you can harness the power and advanced capabilities of AI while still benefiting from the structure and reliability of your current system. This hybrid approach enhances your bot’s understanding and functionality, all within the safe, controlled environment of your platform’s guardrails.
