A common issue we solve for clients here at WebRTC.ventures is how to get better quality video in WebRTC applications. An understandable question follows: if we can get 2K, 4K, and above on YouTube videos, why can’t we easily get the same in a WebRTC application?

Let’s take a look at three factors that may limit the quality of your video application, and how to solve each one: Encoding/Decoding, Network Stability, and Limited or Variable Bandwidth.

Encoding/Decoding

Video must be encoded on the sender side and decoded on the receiving side. It’s critical to compress this video, especially in situations where bandwidth is limited and we don’t have time to buffer our packets. Let’s begin!

Encoding

Encoding requires far more resources than decoding, so it becomes a problem when the encoding device can’t keep up. Newer codecs can compress data more efficiently, but today they still demand more CPU. For example: encoding full HD video at 30fps and 10Mbps with VP8 uses about 20% CPU, while the same stream encoded with AV1 uses 32%.1

The main reason to consider newer codecs like AV1 is their better compression. AV1 represents the biggest jump in video quality and performance in a decade. It can deliver 4K streams at bitrates similar to those of H.264 1080P video, a roughly 30% reduction in bitrate cost while keeping the same quality.
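
As a minimal sketch of what codec selection looks like in practice, the standard setCodecPreferences() API lets you ask the browser to negotiate AV1 when it is available, falling back to its defaults otherwise (the function name here is just illustrative):

```typescript
// Sketch: prefer AV1 on outgoing video transceivers when the browser
// supports it, keeping the default codec order otherwise.
function preferAV1(pc: RTCPeerConnection): void {
  const capabilities = RTCRtpSender.getCapabilities("video");
  if (!capabilities) return; // capabilities unavailable in this browser

  const av1 = capabilities.codecs.filter(
    (c) => c.mimeType.toLowerCase() === "video/av1"
  );
  if (av1.length === 0) return; // AV1 not supported; keep defaults

  const others = capabilities.codecs.filter(
    (c) => c.mimeType.toLowerCase() !== "video/av1"
  );

  for (const transceiver of pc.getTransceivers()) {
    if (transceiver.sender.track?.kind === "video") {
      // Codecs listed first are preferred during SDP negotiation.
      transceiver.setCodecPreferences([...av1, ...others]);
    }
  }
}
```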

Newer, more powerful machines will provide the resources necessary to encode AV1 WebRTC video efficiently. Once these computers are commonly available, likely within the next year or so, more and more people will be able to encode 4K video streams as easily as they encode HD today, at similar bandwidth. But we are not there yet!

Once video is encoded, we can send it more efficiently through the internet.

1 Measured with an 11th-generation Intel i7 (2022)

Decoding

On the other end, users receive data in a format that must be decoded for display. Video decoding is less computationally intensive and far better optimized today. That’s one of the reasons services like YouTube can offer 4K video without problems. Video streaming services also have the help of buffers: because they don’t need to stream live, they can buffer data in advance, which provides higher video quality with lower bandwidth requirements.

Network stability

Network instability creates three main issues. 

Latency

This is the delay between a video being sent and received on the other side. Latency of more than about one second makes communication difficult.

Packet Loss

Packet loss occurs when there is network congestion. If a public network is heavily used, for example, packets can be dropped, causing video to drop out or produce odd artifacts like green frames.

Jitter

Jitter is variation in how packets are received. When the usual rhythm (the gaps between packets) or even the order of packet arrival is disturbed, video quality will be compromised. This usually manifests as frozen video.
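
All three of these issues can be observed from the client with the standard getStats() API. Here is a minimal sketch that samples round-trip time (a proxy for latency), packet loss, and jitter for incoming video; the field names come from the WebRTC statistics spec:

```typescript
// Sketch: sample round-trip time, packet loss, and jitter for incoming
// video using the standard WebRTC statistics API.
async function sampleNetworkHealth(pc: RTCPeerConnection): Promise<void> {
  const stats = await pc.getStats();

  stats.forEach((report) => {
    if (report.type === "candidate-pair" && report.state === "succeeded") {
      // Round-trip time in seconds; one-way latency is roughly half of this.
      console.log("RTT (s):", report.currentRoundTripTime);
    }
    if (report.type === "inbound-rtp" && report.kind === "video") {
      console.log("Packets lost:", report.packetsLost);
      console.log("Jitter (s):", report.jitter);
    }
  });
}

// Poll periodically and feed the numbers into your own monitoring:
// setInterval(() => sampleNetworkHealth(pc), 5000);
```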

Ways to Solve Network Stability Issues

Control your network

The ideal scenario is one where you can implement QoS mechanisms to prioritize important traffic, ensuring smooth communication for essential services and managing bandwidth flows. In the real world, unfortunately, you rarely control the network your users are on.

Points of presence (PoPs) that are closer to the user

This moves the bulk of the video traffic onto your private network instead of hopping across the public internet. Not all companies have the infrastructure to do this; instead, you can use a CPaaS that does it for you. There are also specific services, like AWS Global Accelerator or Cloudflare Tunnel, that can help.

For audio, leverage Opus FEC

Transmitting audio with Opus forward error correction (FEC) sacrifices some extra bandwidth to send redundant copies of audio data, but it lets the receiver reconstruct lost packets and improves overall audio quality.
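
Browsers negotiate this through SDP. As a minimal, illustrative sketch (SDP munging is fragile, and the exact lines depend on your negotiated session), you could ensure the Opus fmtp line carries useinbandfec=1 before setting the local description:

```typescript
// Sketch: enable Opus in-band FEC by adding useinbandfec=1 to the Opus
// fmtp line(s). Treat this as illustrative rather than production-ready.
function enableOpusFec(sdp: string): string {
  const lines = sdp.split("\r\n");

  // Find the payload type(s) mapped to Opus, e.g. "a=rtpmap:111 opus/48000/2".
  const opusPayloadTypes = lines
    .filter((l) => l.startsWith("a=rtpmap:") && l.toLowerCase().includes("opus/48000"))
    .map((l) => l.slice("a=rtpmap:".length).split(" ")[0]);

  return lines
    .map((line) => {
      for (const pt of opusPayloadTypes) {
        if (line.startsWith(`a=fmtp:${pt} `) && !line.includes("useinbandfec")) {
          return `${line};useinbandfec=1`; // append to the existing fmtp line
        }
      }
      return line;
    })
    .join("\r\n");
}

// Usage: const offer = await pc.createOffer();
//        await pc.setLocalDescription({ type: "offer", sdp: enableOpusFec(offer.sdp!) });
```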

Variable or Limited Bandwidth

Variable or limited bandwidth is the third factor that degrades video quality. This is due to the way data is transmitted over a network. When video content is streamed over the internet or any network, it is divided into small packets of data. These packets are then sent from the server to the user’s device in a sequential manner.

If the available bandwidth is limited, the data rate will be reduced. This means that fewer data packets can be sent, leading to slower video streaming and frequent pauses to buffer data.

If the bandwidth is variable, the video quality will fluctuate throughout the streaming session.

Ways to Solve Network Bandwidth Issues

Dominant speaker identification

Choose a specific video to receive, and optimize that one video instead of many. A great example of this is detailed by my colleague Marcell Silva in his post, Active Speaker Detection with the Amazon Chime SDK.
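
As a loose illustration of the idea (real systems, including the Chime SDK approach linked above, smooth audio levels over time and often run this logic server-side), a client could pick the loudest participant from the audio levels each receiver reports:

```typescript
// Sketch: a naive dominant-speaker pick using the most recent audio levels
// reported by each audio receiver.
function findDominantSpeaker(pc: RTCPeerConnection): MediaStreamTrack | null {
  let loudestTrack: MediaStreamTrack | null = null;
  let loudestLevel = 0;

  for (const receiver of pc.getReceivers()) {
    if (receiver.track.kind !== "audio") continue;
    // getSynchronizationSources() reports the audio level (0..1) of recent packets.
    for (const source of receiver.getSynchronizationSources()) {
      const level = source.audioLevel ?? 0;
      if (level > loudestLevel) {
        loudestLevel = level;
        loudestTrack = receiver.track;
      }
    }
  }
  return loudestTrack; // prioritize the video that matches this participant
}
```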

Adaptive bitrate based on bandwidth or resolution

In this technique, you reduce the quality of the sender’s video based on the receiver’s available bandwidth or required resolution.
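
For example, the standard RTCRtpSender.setParameters() API can cap the outgoing bitrate and optionally downscale resolution when a receiver reports limited bandwidth. A minimal sketch (the function name and values are illustrative):

```typescript
// Sketch: cap the sender's outgoing bitrate and optionally downscale
// the resolution of the first (or only) encoding.
async function adaptSenderQuality(
  sender: RTCRtpSender,
  maxBitrateBps: number,
  scaleDownBy: number = 1
): Promise<void> {
  const params = sender.getParameters();
  if (!params.encodings || params.encodings.length === 0) return; // nothing to adjust yet

  params.encodings[0].maxBitrate = maxBitrateBps;          // e.g. 300_000 for ~300 kbps
  params.encodings[0].scaleResolutionDownBy = scaleDownBy; // e.g. 2 halves width/height
  await sender.setParameters(params);
}

// Usage: await adaptSenderQuality(videoSender, 300_000, 2);
```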

Simulcast or SVC (Scalable Video Coding)

These techniques allow you to distribute different video qualities to different types of users. Simulcast sends several independent encodings of the same video, while SVC uses a single, layered encoding whose quality can be changed dynamically, optimizing real-time communication for varying bandwidths.
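
For simulcast, a minimal sketch using the standard addTransceiver() API might publish three layers for an SFU to choose from (the rid labels and bitrates below are arbitrary examples):

```typescript
// Sketch: publish three simulcast layers of one video track; an SFU can
// then forward the layer that best matches each receiver's bandwidth.
function addSimulcastVideo(pc: RTCPeerConnection, track: MediaStreamTrack): void {
  pc.addTransceiver(track, {
    direction: "sendonly",
    sendEncodings: [
      { rid: "q", scaleResolutionDownBy: 4, maxBitrate: 150_000 },  // low
      { rid: "h", scaleResolutionDownBy: 2, maxBitrate: 500_000 },  // medium
      { rid: "f", maxBitrate: 2_500_000 },                          // full
    ],
  });
}
```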

Let’s have our experts take a look under the hood of your live video application.

Is your application lagging at scale? Need to improve call quality? Considering a new media server or CPaaS provider? Trust our expert WebRTC team (who’s seen it all before!) to carefully examine your specific challenges, analyze your application/architecture, and recommend a game plan to solve your issues and maximize your potential. Contact our team for a WebRTC Assessment today!

Watch the Video!

This content comes from WebRTC Live Episode 78: Three Things We’ve Learned Building Video Apps. You can watch just this portion of the panel discussion below.
