I recently attended the AWS Summit New York. I had a very busy few days of meetings with clients, partners, and prospects. The Summit itself was also extremely interesting, and the big topic for me was Generative AI in the Contact Center.

In this blog post, I’ll talk a bit more about the concepts behind Generative AI for the Contact Center, and how it relates to our integration work at WebRTC.ventures. To start, you may want to watch this short update video that I recorded from NYC, WebRTC.ventures Visits AWS Summit NYC 2023.

Generative AI overview

Generative AI has been a regular topic of conversation both professionally and personally for so many of us in 2023. In the most general sense, Generative AI refers to any AI algorithm that can create new content. This includes the creation of images based on text prompts, such as DALL-E from OpenAI or MidJourney. It also includes the creation of text and conversational content from ChatGPT, also created by OpenAI, which now powers Microsoft’s Bing search.

It’s commonplace now for people to talk about having conversations with ChatGPT, or about using it to write content, as I did earlier this year in this blog post about ChatGPT and the Future of WebRTC. People are experimenting with DALL-E to create everything from images for blog posts to tattoo art.

These same tools can be used to help ideate and write marketing content, among many other use cases. The Harvard Business Review suggested in a July 2023 article that Generative AI can be very valuable in product ideation for a business. Its greatest potential is “to assist humans in their efforts to create hitherto unimaginable solutions” by assisting with idea evaluation and refinement and by challenging expert bias.

Using Generative AI in the Contact Center

How can Generative AI be used in the contact center specifically? Generative AI can be combined with chatbots and human agents to create workflows that provide more efficient and valuable customer support. 

Image credit: Created using DALL-E with the prompt, “A customer service agent working on their laptop, sitting on a lounge chair next to the pool, in the cartoon style of Futurama.”

A traditional chatbot is trained on a fixed domain of standard customer questions, and it attempts to answer those questions by pointing customers to the proper resource. It may or may not be very effective at answering those questions in a natural way, and may even end up simply pointing the customer to an FAQ article. When the chatbot cannot resolve the customer’s question, the call or text chat is escalated to a human agent.

How Generative AI Improves Chatbots

When a chatbot is combined with Generative AI, it’s more likely to understand the customer’s question, and thus to provide a more natural and direct response. A simplistic example might look like this:

Simplistic Chatbot
Customer: I would like to cancel my account, but the annual fee was just charged to my card. Can I get that refunded when I cancel the account?

Simplistic Chatbot: I understand that you want to cancel your account, is that correct?

Customer: Yes, but I also want to know if the annual fee will be refunded?

Simplistic Chatbot: To view our cancellation policies, please click here.  Was that helpful?

Customer: Not really, I don’t see my scenario covered there.

Simplistic Chatbot: I’m sorry I wasn’t more helpful, would you like me to transfer you to an agent?

Customer: Yes, please. (Sigh)
Chatbot using Generative AI

Customer: I would like to cancel my account, but the annual fee was just charged to my card. Can I get that refunded when I cancel the account?

Chatbot using Generative AI: According to our cancellation policies, we will automatically refund your annual fee if you cancel within 10 days of the charge. You can read our full policies by clicking here. When was your annual fee charged?

Customer: It was two weeks ago.

Chatbot using Generative AI: I understand. I’m afraid that two weeks exceeds our policy of 10 days. Would you like me to transfer you to an agent for further assistance?

Customer: Yes, please. Thanks!

In both of these fictional examples, the customer is ultimately transferred to a human agent for final resolution of their request. However, in the Generative AI example, the customer is likely to be more satisfied with the interaction because they were given a direct answer to their question. Using a chatbot powered by a Generative AI Large Language Model (LLM) improves the conversational ability of the chatbot. Because it better understands the question being asked, the LLM can provide a more direct answer: it understands that two weeks is greater than 10 days, and it knows enough about the cancellation policy to answer the customer’s question directly.
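To make that idea concrete, here is a minimal sketch in Python of how a chatbot can ground its answer in the actual cancellation policy by passing the policy text to the model along with the customer’s question. The call_llm parameter is a placeholder for whichever LLM you use (a SageMaker endpoint, OpenAI’s API, etc.), and the policy text and prompt wording are illustrative assumptions rather than any specific product’s API.

```python
# Minimal sketch of "grounding" a chatbot answer in company policy.
# CANCELLATION_POLICY and the prompt wording are illustrative assumptions;
# call_llm is a placeholder for whatever LLM endpoint you actually use.

CANCELLATION_POLICY = (
    "Annual fees are automatically refunded if the account is cancelled "
    "within 10 days of the charge. After 10 days, refunds require "
    "approval by a customer service agent."
)

def answer_customer(question: str, call_llm) -> str:
    # Pass the policy in as context so the model answers from the
    # company's own rules instead of inventing a policy.
    prompt = (
        "You are a customer support assistant. Answer only using the "
        "policy below. If the policy does not cover the question, "
        "offer to transfer the customer to a human agent.\n\n"
        f"Policy:\n{CANCELLATION_POLICY}\n\n"
        f"Customer question: {question}"
    )
    return call_llm(prompt)
```

Because the model is told to answer only from the supplied policy, it is far less likely to invent a refund rule the company never offered.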

Notice that in this concocted example, there is still value in a human agent. Offering to transfer the customer to a human agent creates an opportunity to upsell the customer: perhaps a discount will encourage them to keep the account another year? There is also an opportunity to delight the customer: “We’re sorry to see you go, but I see you have been a loyal customer for a long time. I can make an exception and give you a full refund.”

Benefits of Generative AI in the Contact Center

Image credit: Created using DALL-E with the prompt, “A smart robot in a call center, answering multiple phone calls at once, drawn in a Simpsons cartoon style.”

Speakers at the AWS Summit NYC talked about the value that Generative AI can bring to the contact center, listing such benefits as:

  1. More self service answers
  2. More direct answers to customer questions
  3. Less need to send customers an excerpt from FAQs or a link to read
  4. Faster call resolution
  5. Agent Assist, aka providing agents with additional information about the customer and their history with the company, the product they are asking about, or available upsells
  6. Insights/Analytics generated across calls

After attending Enterprise Connect 2023 earlier this year, I blogged about Agent Assist and stated, “ChatGPT is not ready for the enterprise.” I said this because ChatGPT and Generative AI still face issues with hallucinations and potentially giving out false information. These sorts of erroneous results could be damaging to customers (in a literal sense if they are hurt by bad advice), as well as damaging to the corporate brand. Enterprises do not want that risk.

However, things are changing fast. There are new ways to limit the knowledge that an LLM has access to, in order to ensure it is providing information that companies are comfortable sharing with their customers.

Architecting Generative AI for your Contact Center

How can these applications be built? There are a number of AWS services to consider, several of which were discussed at the AWS Summit NYC, including the following:

  • Amazon SageMaker. Build, train and deploy Machine Learning models on AWS. Providing support for many leading ML models and tools, SageMaker makes it easier for data scientists and ML engineers to deploy high-performance and low-cost ML at scale.
  • Amazon SageMaker JumpStart. A hub which provides many pre-trained ML models, which can be customized to your use case and data.
  • Amazon Kendra. Enterprise Search powered by Machine Learning. This allows you to search across structured and unstructured data and provide unified answers. By setting up your LLM to pull its answers from Kendra’s search results, you can improve the ability of a chatbot to provide useful and intelligent answers to customers’ questions.
  • Amazon Lex. Provides the ability to build conversational AI chatbots using the same conversational AI technology that powers Amazon Alexa.
  • Amazon Chime SDK. Build real-time communication capabilities into your application. When a chatbot needs to transfer a customer to a human agent, that can be enabled with the Amazon Chime SDK using Video or Voice, directly in the browser or in your mobile application. 

For an in-depth explanation of how to combine many of the above services to add Generative AI to your conversational solution, you might find this post on the AWS Machine Learning blog helpful: Exploring Generative AI in conversational experiences: An Introduction with Amazon Lex, Langchain, and SageMaker Jumpstart.
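As a rough illustration of that pattern, the sketch below uses boto3 to retrieve relevant passages from a Kendra index and pass them, together with the customer’s question, to an LLM hosted on a SageMaker endpoint. The index ID, the endpoint name, and the request/response payload shapes are placeholders; the exact format depends on the model you deploy from SageMaker JumpStart.

```python
import json
import boto3

kendra = boto3.client("kendra")
sagemaker_runtime = boto3.client("sagemaker-runtime")

KENDRA_INDEX_ID = "YOUR-KENDRA-INDEX-ID"           # placeholder
LLM_ENDPOINT_NAME = "your-jumpstart-llm-endpoint"  # placeholder

def answer_with_company_knowledge(question: str):
    # 1. Retrieve relevant passages from your own documents via Kendra.
    results = kendra.retrieve(IndexId=KENDRA_INDEX_ID, QueryText=question)
    passages = [item["Content"] for item in results.get("ResultItems", [])][:3]

    # 2. Build a prompt that restricts the LLM to the retrieved passages.
    prompt = (
        "Answer the customer's question using only the context below. "
        "If the context is not sufficient, say so and offer a human agent.\n\n"
        "Context:\n" + "\n---\n".join(passages)
        + f"\n\nQuestion: {question}\nAnswer:"
    )

    # 3. Invoke the LLM hosted on a SageMaker endpoint. The request and
    #    response formats vary by model, so adjust them for the model
    #    you deployed from SageMaker JumpStart.
    response = sagemaker_runtime.invoke_endpoint(
        EndpointName=LLM_ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"inputs": prompt}),
    )
    return json.loads(response["Body"].read())
```

This is essentially the retrieval pattern described in the AWS post above: Kendra narrows the knowledge available to the model to your own content, which directly addresses the hallucination concern raised earlier.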

Additional services to consider for your application

In addition to AWS services, there are many other third party services which you can use to build your application. For example, we work regularly with Symbl.ai, which provides conversation understanding and Generative AI technologies that can be used to add features like transcripts and conversation summaries to your voice or video calls with customers. We talked about this in a blog post on Enhancing Customer Service Experiences with Vonage and Symbl.ai and on a joint webinar. Vonage also offers their AI Studio for conversational AI, which we covered in this episode of WebRTC Live.

For intelligent chatbots, an application can also be based on NLX.ai, a conversational AI platform built to be ready for the enterprise. It is already being used by large brands such as Red Bull and Copa Airlines. Using a complete platform like NLX has the advantage of already being GDPR and HIPAA compliant, and of providing security features like end-to-end encryption and masking of sensitive information such as PHI/PII.

Many of these platforms can also integrate with other LLMs. For instance, NLX has an OpenAI integration for ChatGPT. As NLX notes, combining these LLMs with their existing chatbot functionality will “drive even better intent detection accuracy with minimal training (or zero-shot learning) thanks to the massive data set that GPT-3 was trained on – a key driver of successful automation with high customer satisfaction.” NLX’s integration also provides guardrails that give the enterprise more control over content, reducing the risk of ChatGPT going off the rails when replying to a customer.

Integrating Generative AI with transfer to Human Agents

Image credit: Created using DALL-E with the prompt “A happy robot handing a rotary telephone to a human, drawn in a cyberpunk cartoon style”

As our fictional example at the beginning of this blog post showed, there are situations where customers will still need to be transferred to a human agent for final resolution of their concerns. This might be due to exceptional customer service cases, or simply to create upsell opportunities. 

Whatever the reason, it’s important that a custom integration is built to enable this transition. A chatbot should never just provide a message like “call this number for assistance” or provide a link to a meeting tool. The context of the conversation up to that point needs to be passed along to the human agent (and saved in a CRM!). The conversation should also be continued in whatever location it started (web browser or mobile app chat, perhaps).

The human agent may enter the conversation via text chat, or an integrated Voice or Video call can be initiated programmatically or after prompting the customer to click a link to start a call. These calls can be integrated directly into the contact center workflow so that they are monitored and assigned properly, like any other call into the contact center.
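As a rough backend sketch of that handoff, the code below uses the Amazon Chime SDK Meetings API via boto3 to create a meeting and attendees for the customer and the agent, while carrying the chat transcript along. The notify_agent and save_to_crm helpers, the region, and the ID formats are hypothetical placeholders for your own contact center and CRM integration, not a specific product’s API.

```python
import uuid
import boto3

chime = boto3.client("chime-sdk-meetings")

def notify_agent(agent_id, meeting, attendee, transcript):
    """Hypothetical helper: push the call and chat context to the agent's desktop."""

def save_to_crm(customer_id, transcript):
    """Hypothetical helper: store the conversation history in your CRM."""

def escalate_to_agent(customer_id: str, agent_id: str, transcript: str) -> dict:
    # Create a Chime SDK meeting that the browser or mobile app can join
    # for a voice/video call with the human agent.
    meeting = chime.create_meeting(
        ClientRequestToken=str(uuid.uuid4()),
        MediaRegion="us-east-1",
        ExternalMeetingId=f"support-{customer_id}",
    )["Meeting"]

    # One attendee for the customer, one for the human agent.
    customer = chime.create_attendee(
        MeetingId=meeting["MeetingId"], ExternalUserId=customer_id
    )["Attendee"]
    agent = chime.create_attendee(
        MeetingId=meeting["MeetingId"], ExternalUserId=agent_id
    )["Attendee"]

    # Hand off the chatbot transcript so the agent has context,
    # and save it to the CRM so nothing is lost.
    notify_agent(agent_id, meeting, agent, transcript)
    save_to_crm(customer_id, transcript)

    # The client app uses this meeting and attendee info with the Chime SDK
    # to join the call in the same place the chat conversation started.
    return {"meeting": meeting, "customerAttendee": customer}
```

The returned meeting and attendee objects are what the front end needs to join the call with the Chime SDK in the browser or mobile app, so the conversation continues where it started.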

Our team at WebRTC.ventures can build this functionality into your customer contact center, as well as work with existing integrations. For example, NLX offers an integration for the Amazon Chime SDK, for which we are a Systems Integration Partner. Our expertise in both platforms allows us to help you transition customers from chatbots to human agents as effectively as possible.

Other uses of Generative AI in the Enterprise

I’ve focused on Contact Center use cases in this blog post, but Generative AI can of course be used in many other ways. A few examples also discussed at the AWS Summit NYC were:

  • Coding assistants to help developers be more productive and provide a form of intelligent autocomplete
  • Data dashboards that can provide relevant information in real-time, for instance providing sports statistics during live sports broadcasts
  • Marketing and product content development

Meet our team or contact us to learn more!

Image credit: Created using DALL-E with the prompt “Hobbits using a laptop in Hobbiton”

Our team is often at conferences like AWS Summit NYC, networking with our peers and industry partners, as well as meeting with existing and potential clients. We’d be happy to meet with you to discuss anything you found interesting in this post. Here are a few events we’ll be at in the coming months:

Our team of experts at WebRTC.ventures can help you build Voice, Video, and Generative AI into your contact center or other enterprise applications, using any of the technologies and APIs listed in this blog post.

We have a wide range of skills across our team of developers, designers, and DevOps experts. We bring strong partnerships to the table to make sure that your custom conversational AI solution can be deployed and scaled quickly, and we provide ongoing management and support of that application. Contact us today to learn more!
