On the November 13, 2024 episode of WebRTC Live, host Arin Sime welcomed Luca Pradovera, Lead Solutions Architect at SignalWire, for a discussion on call performance analysis in contact centers and other high-volume communication environments. Key insights and episode highlights can be found below the video.

Bonus Content

  • Our regular monthly industry chat with Tsahi Levent-Levi. This month’s topic: “Twilio Video: Back from the dead?”

Watch Episode 96!

Key Insights

Mean Opinion Score (MOS) is a crucial metric for measuring call quality, but it is inherently subjective. As Luca explains, MOS is based on feedback from groups of testers who rate call quality, so it reflects personal perception rather than an objective standard. “A lot of industry lives and dies around these kinds of metrics, but it’s hard to figure out what it is. It’s a measure of voice call quality. The important thing is it’s a subjective measurement, meaning that it’s actually supposed to be obtained by interviewing people.”
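While true MOS comes from human listeners, operations teams often approximate it from network stats via the ITU-T G.107 E-model, which maps an R-factor to a 1.0–4.5 score. Below is a minimal Python sketch; the simplified latency/jitter/loss R-factor formula is a common rule-of-thumb approximation, not a method described in the episode:

```python
def r_to_mos(r):
    """Map an E-model R-factor to an estimated MOS (ITU-T G.107 mapping)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

def estimate_mos(latency_ms, jitter_ms, loss_pct):
    """Rough objective MOS estimate from per-call network stats.

    The weightings below (jitter counted twice, 2.5 R-points per percent
    of packet loss) are an illustrative simplification of the E-model.
    """
    effective_latency = latency_ms + 2 * jitter_ms + 10
    if effective_latency < 160:
        r = 93.2 - effective_latency / 40
    else:
        r = 93.2 - (effective_latency - 120) / 10
    r -= 2.5 * loss_pct
    return round(r_to_mos(r), 2)
```

A clean call (20 ms latency, 5 ms jitter, no loss) scores near the 4.5 ceiling, while a lossy, high-latency call drops sharply, which is why loss shows up so clearly in objective scoring.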

Train your agents to recognize issues; it saves both time and money. Properly training your agents is essential for call management and quality assurance. Luca explains, “It’s very important to train your agents to recognize issues. That’s actually been our first line of defense. Please make sure people know what the common issues look like. For example, yesterday, I had to debug a batch of calls where the bad quality was literally a low cell phone signal on the customer side. They were calling someone who didn’t have a good signal. It was clearly easy to tell from the recordings, but the agents hadn’t been trained to recognize that kind of occurrence.”

Define your acceptance criteria to streamline call quality management. Without clear criteria, reviewing large volumes of calls becomes overwhelming and unproductive, and it becomes difficult to identify and address key issues effectively. Luca explains, “The other important thing to define with your customers, with your partners, with whoever you’re working with is the acceptance criteria. It’s extremely important to define what constitutes acceptance criteria. Why? Because you’re talking thousands and thousands of calls, millions of calls. We have customers who’ve done a million calls last month. If you do a million calls in a month and you go deep on every call that you think failed, first of all, you’re going to drive someone mad, really very mad, and second, you’re going to drive yourself mad because there’s literally no way you can address that. So you need acceptance criteria; then you can still go see what happened, but it’s very important to have acceptance criteria.”

Episode Highlights

The main challenge in call quality management is dealing with incomplete or inaccurate data.

It becomes incredibly difficult to effectively manage call quality when the information you’re working with is unreliable or misleading. Luca explains, “This is the main problem. Everybody lies at all levels. Agents will lie. Customers, don’t get me started on customers. They will literally say anything. They will say the agent didn’t tell them that their subscription was expiring, but they couldn’t understand what’s being said, even if we can prove that’s not true. Carriers will lie. Sorry, hope I don’t get fired for this. But carriers will say, nope, we had no problems at all during yesterday’s campaign. Then you go look at your network stats and you have 50% packet loss towards one of their systems. But of course, that’s not… so everybody lies. How do we solve that? And this is, I like to start with a little laugh, but then this is serious. People have incomplete information, so they end up either saying what they see or saying what they think they should be seeing, which is also interesting. So finding the problem takes a few steps.”
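One way to fact-check a carrier’s “no problems at all” is to aggregate your own RTP stats per carrier and flag any destination with abnormal packet loss. A minimal Python sketch; the `(carrier, packets_sent, packets_lost)` input shape and the 5% threshold are assumptions for illustration, not the tooling discussed in the episode:

```python
from collections import defaultdict

def loss_by_carrier(samples):
    """Aggregate packet-loss ratio per carrier from your own measurements.

    `samples` is an iterable of (carrier, packets_sent, packets_lost)
    tuples, e.g. one tuple per call leg.
    """
    sent = defaultdict(int)
    lost = defaultdict(int)
    for carrier, s, l in samples:
        sent[carrier] += s
        lost[carrier] += l
    return {c: lost[c] / sent[c] for c in sent}

def suspect_carriers(samples, threshold=0.05):
    """Return carriers whose aggregate loss exceeds the threshold."""
    return [c for c, ratio in loss_by_carrier(samples).items() if ratio > threshold]
```

With data like this in hand, “we had no problems” becomes a testable claim: a carrier showing 50% loss towards one of its systems stands out immediately.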

How to measure call quality at scale?

Establishing clear acceptance criteria is important for better identifying and solving quality issues. Luca explains his criterion, “My definition, which has been working very well in the industry, is the agent couldn’t perform their job. That is the premium definition. Why? For two reasons. One, it’s relatively simple to picture yourself in the agent’s shoes, whether you’re a technical person or a manager person. And second, it resonates very well with leadership. Agents couldn’t perform their job. We found out that 2% of agents could not perform their job. So you literally lost that amount of money. So it’s very easy. You always need to look for methods like this in my opinion. Again, this is not solely about regulations. We’re talking about quality. So I try to remind people that the important thing is how you define a bad call. We have done it as ‘agent couldn’t perform their job.’ So again, the call didn’t connect, or it connected and it was very bad quality, dropped halfway, whatever. All of those are ‘agent couldn’t perform their job.’”
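Luca’s criterion reduces to a simple predicate over call records: a call fails acceptance only if the agent couldn’t perform their job, whether because it never connected, dropped partway, or was too poor in quality. A hedged Python sketch, where the record fields and the quality threshold are illustrative assumptions rather than SignalWire’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class CallRecord:
    connected: bool       # did the call ever connect?
    dropped_early: bool   # did it drop before the interaction finished?
    mos: float            # estimated quality score, 1.0-4.5

def agent_could_not_work(call, mos_floor=3.0):
    """Acceptance criterion from the episode: the call counts as failed
    only when the agent couldn't perform their job."""
    return (not call.connected) or call.dropped_early or call.mos < mos_floor

def failure_rate(calls):
    """Fraction of calls failing the criterion, e.g. the 2% figure
    Luca cites for leadership reporting."""
    failed = sum(agent_could_not_work(c) for c in calls)
    return failed / len(calls)
```

Because the output is a single rate, it translates directly into the leadership framing Luca describes: a 2% failure rate is 2% of agent time, and revenue, lost.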

Rely only on objective data.

Objective data is essential for managing high-volume communication systems effectively. Relying too heavily on human interpretation and manual checks is both impractical and inefficient.

Luca explains, “Everybody lies, so have your own data. Don’t even trust yourself because you will lie after you see the data and say, ‘Oh no, this doesn’t mean that.’ But it does. Your server is bad. Go check your firewall. It is exactly what you’re seeing. Go check your firewall. And it’s really important to have objective data because, if nothing else, aside from the commercial aspect of looking better for your customers, which is certainly what you want to do as a company, it’s literally the only way to survive in this business. When you’re running 25,000 calls a day, there is no way you’re going to be able to check all of them. You need a quantitative approach. Otherwise, it’s just not going to work.”
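At 25,000 calls a day, a quantitative approach usually means triage: let the metrics pick the small handful of calls worth a human listen. An illustrative Python sketch; the `(call_id, mos)` shape is an assumption, not a described workflow:

```python
import heapq

def worst_calls(calls, n=20):
    """Surface only the n lowest-quality calls for manual review.

    `calls` is an assumed list of (call_id, mos) pairs; heapq.nsmallest
    avoids sorting the whole day's traffic just to find the bottom n.
    """
    return heapq.nsmallest(n, calls, key=lambda c: c[1])
```

The point is the workflow, not the function: aggregate everything objectively, then spend scarce human attention only where the data says it matters.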


Up Next! WebRTC Live Episode 97

The Changing WebRTC Landscape

with guest host Mariana Lopez, COO of WebRTC.ventures

Wednesday, December 11, 2024 at 12:30 pm Eastern. 

REGISTER FOR WEBRTC LIVE EPISODE 97
