Improving quality in WebRTC applications is an ongoing task; it doesn’t stop when you deploy your application. To support maintenance, debugging, and continuous improvement, observability needs to be baked in from the beginning.
On March 25, 2026, our guest was Balázs Kreith, a Senior Software Engineer at Riverside.fm, lead developer of the open source ObserveRTC project, and a veteran of WebRTC teams at Whereby and callstats.io. Balázs talks about Quality of Service, Quality of Experience, and shares common pitfalls from real-world experience. He also offers practical guidance on how to build real-time communication applications that hold up under real-world conditions.
The episode also features Arin Sime and Tsahi Levent-Levi’s Monthly WebRTC Industry Chat. This month, they discussed the rise of the WebRTC micro app. Watch on YouTube.
Watch Episode 111: Improving End-to-End Quality with WebRTC Observability
Episode highlights and key insights below.
Key Insights
⚡ Observability reveals the full picture of call quality. To truly understand what’s happening, you need to go beyond isolated metrics and capture the interaction between system performance and user behavior. It’s not just about how the media flows, but how user actions influence that flow in real time. Observability bridges this gap by connecting technical data with actual user experience. Balázs explains, “In real-time communication, it is much more data you need to be analyzing because the media should flow and then the user is actually also behaving. There are clicks and then how certain clicks, how certain components they are initiating on your web pages will affect the media flowing and all of their components.”
⚡ QoS vs QoE: what’s the difference? It’s easy to rely on metrics, but they don’t always reflect what users actually feel. You can have “bad” QoS and still have a good experience, and vice versa. What matters is how the user remembers the call. As Balázs explains: “Quality of experience is a perceived quality, what we’re perceiving right now and this is how you will remember this meeting, so if there are freezing, if there are low quality on your end, you will remember it, it’s like it was not so good meeting […] But the other thing is that, that’s your perception of what happened. And then there is another one. How did the whole media flow and how the whole conference went from the quality of service perspective?”
⚡ Call quality metrics are often misleading. Measuring call quality isn’t as simple as assigning a single score, because user perception is nuanced and unfolds over time. Balázs explains, “Approximating the perceived quality in one score is challenging. As I said in the beginning, that it’s like if the first five minutes was very bad, but the last 55 minutes, then it’s not a bad call in perceived quality if you have all of the aggregation.” Arin also adds, “Quality of experience is definitely a continuum there throughout the call.”
Episode Highlights
Building observability early keeps you out of the dark
WebRTC applications operate in highly dynamic environments, where network conditions, user behavior, and system performance are constantly interacting. Without observability from the start, teams are forced to troubleshoot issues after the fact, with limited data and little clarity on where things went wrong.
As Balázs explains, “Monitoring is crucial as far as I can remember back in the whole time when I’m working with WebRTC, we all the time started analyzing what went wrong when something went wrong, obviously, and the less monitoring data you have, the more you are in the dark. That’s true for everything and especially in our profession, real-time communication when there are so many factors, and in order to see that it was not our fault, you need to have really clear boundaries.”
QoS metrics don’t always reflect user experience
A single quality-of-service score can misrepresent how users actually perceive a call. Even if the first few minutes are poor, a strong performance for the remainder can lead to a positive quality of experience. As Balázs explains, “You have a one-hour meeting and then in the beginning of the meeting there was very bad quality, let’s say lots of freezing and so on and all over, like it was five minutes and then the last 50, 55 minutes went good. Your perception of the quality, especially because in the last half you didn’t have any kind of trouble, it was good. But if you are again aggregating this quality in a quality of service metric, then it says that, hey, this meeting was bad. This is also not true. So the challenge in many layers is how to aggregate, what kind of metrics to see, what kind of thresholds, and so on and so forth. This is also very interesting in many ways.”
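To make the aggregation pitfall concrete, here is a minimal sketch (not from the episode; the function names and the MOS-like 1–5 scores are illustrative assumptions) comparing a whole-call average of per-minute quality scores with a sliding-window view that preserves when the degradation happened:

```python
# Sketch: why averaging per-minute quality scores can misrepresent a call.
# Scores are hypothetical MOS-like values on a 1-5 scale (5 = excellent).

def average(scores):
    return sum(scores) / len(scores)

def worst_window(scores, window=5):
    """Lowest average over any sliding window, exposing localized degradation."""
    return min(
        average(scores[i:i + window])
        for i in range(len(scores) - window + 1)
    )

# A 60-minute call: the first 5 minutes are bad, the remaining 55 are good.
call = [1.5] * 5 + [4.5] * 55

overall = average(call)        # 4.25 -> the whole-call score looks "good"
low_point = worst_window(call) # 1.5  -> the bad opening is still visible
```

The same data supports two opposite verdicts depending on how it is aggregated, which is why a dashboard showing only whole-call averages can disagree with both the user's memory of a bad start and, in the reverse scenario, report a "bad" score for a call the user remembers as fine.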
Connectivity is half the battle in real-time calls
Even the most advanced quality-of-service metrics can’t tell the full story. A large share of call problems comes down to underlying factors, like connectivity, that disrupt the experience before media quality even becomes relevant. As Balázs explains, “To be fair, it’s like two things that I think that I’m still until today when I got some kind of tickets to analyzing because some problem got in the call, then 50% of the call is usually connectivity. So we can talk about nice quality of service metrics. We can say that how the flow went and how beautiful the media appears, and resolution and everything. But to be honest, 50% of the time it’s always connectivity issue. We need to take care of it. And then we need to dig out what happened. And then it’s also, it’s really quality of experience issue.”
Up Next! WebRTC Live #112
How Experienced Teams Debug and Monitor WebRTC in Production
Wednesday, April 22, 2026 at 12:30 pm Eastern
