Deterministic AI and Generative AI in the Contact Center

At Enterprise Connect 2024, I saw the latest updates around AI in enterprise communications. It was an interesting contrast to my 2023 visit to the same conference. While the hype was more or less the same, there has definitely been progress in turning that hype into reality. This was especially true around Deterministic AI and Generative AI in the contact center. 

As you might guess from the name, the annual Enterprise Connect conference is focused on large enterprises. Our team at WebRTC.ventures works with enterprise companies, but we also build video and communications apps for startups and mid-market companies who have different needs and experiences. I say this just to note that these observations are skewed toward the larger enterprises out there. For smaller organizations, many of these same ideas apply. But you’ll need to be more focused on how you invest in AI to ensure you get an appropriate return.

That being said, here are some of my takeaways from the conference.

A push for uniting enterprise data with AI

One trend discussed by a couple of speakers was how AI will drive the unification of back office and front office processes. Front office operations like contact centers are among the early adopters of AI because of the benefits it promises: reducing call volume, enabling more customer self-service, and helping human agents resolve customer issues more quickly.

But it’s hard for a voicebot to properly serve a customer if it doesn’t have access to the right information. The LLM behind that bot needs some access to backend systems and corporate data in order to find what it needs. For this reason, large vendors like Zoom were at the conference pitching the unification of back office and front office solutions. Naturally, it’s a self-serving pitch: buy their solutions so that you have a single provider across back office and front office, and their various AI tools can then be the glue between those systems for you.

This opens up a set of risks: the data your LLMs are trained on or given access to needs to be relevant and useful, but it also needs proper security around it to prevent the bot from releasing sensitive data to a customer. This will doubtless keep many consultants and prompt engineers busy as these systems grow!
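To make that risk concrete, here is a minimal Python sketch of one common pattern: the voicebot backend pulls a customer record from a backend system, strips sensitive fields, and only then hands the remaining context to the LLM prompt. The endpoint, field names, and helper functions are assumptions for illustration, not any particular vendor’s API.

```python
import requests

# Fields we never want to surface to the customer-facing bot.
SENSITIVE_FIELDS = {"ssn", "card_number", "internal_notes"}

def fetch_customer_record(customer_id: str) -> dict:
    # Hypothetical internal CRM endpoint; swap in your real back office system.
    resp = requests.get(
        f"https://crm.example.internal/customers/{customer_id}", timeout=5
    )
    resp.raise_for_status()
    return resp.json()

def build_grounded_prompt(customer_id: str, question: str) -> str:
    record = fetch_customer_record(customer_id)
    # Redact sensitive data *before* it ever reaches the LLM prompt.
    safe_context = {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}
    return (
        "You are a contact center assistant. Answer using only this customer context:\n"
        f"{safe_context}\n\n"
        f"Customer question: {question}"
    )
```

The key design choice in this sketch is that redaction happens in your own backend code, not in the prompt instructions, so the sensitive fields are never available for the model to leak in the first place.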

The most immediate value of AI is still internal

While AI in customer-facing scenarios gets a lot of attention, it’s also clear that the lower-risk and most immediately valuable implementations of AI in the enterprise are still around making internal tasks easier. A meeting transcript that provides action items and separate notes for each attendee is a good example. This can save a lot of time, and the impact is pretty minimal if something goes wrong.

Even on the customer-facing side, a call transcript and sentiment analysis can be useful while still being low-risk when mistakes are made. Agents can typically edit those transcripts to correct errors. Even Agent Assist applications can be ignored when the intended help for the human agents misses the mark.

Deterministic AI can handle the low-hanging fruit

One session I enjoyed was a case study about Marriott Vacations using Amelia.ai in their contact center. They estimated a savings of $1-$2 on each call – just by having the bot authenticate the customer instead of a human agent verifying the caller. That’s an AI implementation that sounded very deterministic to me since it’s based on predictable questions and answers, such as “please tell me your birthday” or “what is your name?”

ChatGPT explains the difference between deterministic and generative AI: “Deterministic AI focuses on producing consistent and predictable outcomes based on predefined rules or algorithms. Generative AI specializes in creating new and often unpredictable content by learning from data and incorporating randomness or variability in its processes.”

The presenter pointed out that deterministic paths are great for easy questions. But they also made the interesting point that generative AI can still help us along a deterministic path. For instance, a purely deterministic path would struggle in this situation:

  • Bot: What is your birthday?
  • Customer: I just want to check my balance
  • Bot: I don’t understand, what is your birthday?

A bot with some generative AI behind it might handle unexpected responses better and be able to respond in a more human way, such as:

  • Bot: What is your birthday?
  • Customer: I just want to check my balance
  • Bot: I understand. However, before I can give you the statement balance, I need to verify your identity. Knowing your birthday is one of the questions I need to ask to verify your identity. What is your birthday?

Focusing initially on deterministic paths can really help your business get quick efficiency gains. But you shouldn’t stop there: adding in some generative AI to help make the responses more human results in a more pleasant customer experience. 
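To illustrate that hybrid approach, here is a rough Python sketch: the happy path stays deterministic (a scripted birthday check), and a generative model is only invoked when the caller goes off script. The llm_reply helper is a stand-in assumption for whatever generative AI API you actually use; it returns a canned string here so the sketch runs on its own.

```python
import re

def looks_like_birthday(text: str) -> bool:
    # Simple deterministic check: accept things like "03/14/1985" or "March 14 1985".
    return bool(re.search(r"\d{1,2}[/ -]\d{1,2}[/ -]\d{2,4}", text)) or bool(
        re.search(
            r"(january|february|march|april|may|june|july|august|"
            r"september|october|november|december)",
            text,
            re.I,
        )
    )

def llm_reply(system_goal: str, customer_text: str) -> str:
    # Stand-in for a real generative AI call; in production this would send
    # system_goal and customer_text to your LLM vendor. Canned here so the
    # sketch runs end to end.
    return (
        "I understand. Before I can help with that, I need to verify your "
        "identity. What is your birthday?"
    )

def ask_for_birthday(customer_text: str) -> str:
    if looks_like_birthday(customer_text):
        # Stay on the cheap, predictable deterministic path.
        return "Thanks, let me verify that."
    # Off-script response: let the generative model acknowledge the request
    # and steer the caller back to identity verification.
    return llm_reply(
        system_goal="Politely explain that identity verification (birthday) comes first.",
        customer_text=customer_text,
    )

if __name__ == "__main__":
    print(ask_for_birthday("I just want to check my balance"))
```

The point of the sketch is the ordering: the deterministic check runs first and costs essentially nothing, and the generative model is only consulted for the unexpected turns in the conversation.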

The time to try GenAI is now

In one of the keynote fireside talks, Kevin Shatzkampf of Google Cloud’s Conversational AI Engineering team said (paraphrasing here): 

“The right time to jump into GenAI, or the worst time, is always based on your risk tolerance. There will be no perfect time to jump in where the pace of change has paused. The rate of change will continue to grow!”

In other words, you’re wasting time if you’re waiting for GenAI to stabilize so much that you can pick an architecture and strategy and stick with it for the foreseeable future. You need to accept the fact that you will incur technical debt and you will need to keep revisiting and revising your GenAI implementations and vendor choices. But, the potential value is so high that you need to at least get started.

Stay focused on customer value

Although you do need to “play” with AI a bit to understand it and see the potential it offers your business and your customers, multiple speakers emphasized the importance of staying focused on how it will provide customer and business value.

You wouldn’t put out an RFP that says, “give me an open-ended budget to build the shiniest thing with the latest tech”. (Or if you do, please send it to me!)

Instead, you need to identify the problems you are trying to solve, such as reducing call times, improving containment rates, or increasing NPS. Then look for specific areas where generative AI and LLMs can add value and help solve those problems.

Contact our team to build your intelligent communication application

At WebRTC.ventures, we are experts in building custom video and voice applications, including many “click to call” style implementations. We have helped integrate WebRTC communications into many corporate and customer contact implementations, and we frequently work with partners such as Symbl.ai to integrate intelligence into those applications for things like call summaries, analytics, and more.

Our team can apply a decade of experience with WebRTC to your unique situation, and help you assess, build, test, deploy and manage a unique communications solution. Contact us today!
