Each month, our CEO Arin Sime sits down with long-time WebRTC industry authority Tsahi Levent‑Levi of BlogGeek.me to discuss the latest real-time communications trends and challenges. In their November 2025 session, Arin asked Tsahi five pointed questions and gave him 90 seconds per answer, pushing for clear calls on where real‑time video is heading in the next year and beyond.
In this post, you’ll get a readable summary of those five predictions, including:
- Whether developers will complain more about WebRTC or MOQ
- If AV1 will finally become the dominant video codec in 2026
- Whether edge computing for WebRTC is hype or real progress
- The dark horse tech Tsahi thinks could disrupt WebRTC
- Which current “best practices” might soon be seen as anti‑patterns
Let’s get into Tsahi’s answers and what they mean for anyone betting on WebRTC in 2026. You can also watch “Five WebRTC Predictions for 2026” on YouTube.
Prediction 1: WebRTC or MOQ – Who Gets More Complaints in 2026?
Arin opened with a fun one: in 2026, what will developers complain about more, WebRTC or MOQ?
Tsahi did not hesitate. “WebRTC.”
Why WebRTC Will Draw More Fire Than MOQ
Today, MOQ is still for early adopters. The people using it now are:
- Enthusiastic experimenters
- Fans who are happy with the direction
- A few sharp critics who can point to gaps, but are still engaged
As Tsahi put it, the current MOQ crowd is made up of people who “would love it” and “would die for it.” You might hear the odd skeptical voice, like someone saying it is not taking the right path, but that is a small minority.
There is also a key detail: MOQ standards and tooling are not fully settled yet. They are not closed, and changes are expected. That gives users a different mindset. If something breaks or changes, they accept it as part of the process. In Tsahi’s words, as long as it is not production ready and the standard is still changing, why complain?
WebRTC is in a very different phase:
- It is mature and widely deployed
- It runs in real production environments
- Many teams rely on it for business‑critical systems
That makes expectations much higher. When things fail or underperform, the complaints are louder.
As MOQ adoption grows past the current fans and into broader engineering teams, more developers will hit real‑world friction. At that point, complaints will grow. But in 2026, Tsahi expects WebRTC to remain the primary target of developer frustration, simply because of its reach and age.
Prediction 2: Will AV1 Be the Dominant Video Codec in 2026?
The second question was about video codecs: will AV1 become the dominant codec in 2026?
Again, Tsahi’s answer was clear: “No.”
He added a timeline: if AV1 becomes dominant, it will probably be around 2028, not 2026.
Why AV1 Will Not Dominate WebRTC 2026
AV1 is already in use, and many people like it. It can offer better compression and quality in some scenarios. The problem is not interest, it is CPU cost.
Right now, AV1:
- Uses too much CPU for many live use cases
- Fits only where processing budgets and hardware allow
- Is not a default choice for most products
At the same time, VP8 still “just works.” It is widely supported, stable, and easier to run. If a product already uses VP8, switching to AV1 is not a small step. It takes real engineering effort, testing, and tuning.
Tsahi’s logic is simple:
- In two years, VP8 will still just work.
- Old systems will keep VP8 or H.264 if they already run well.
- Many services still have not fully adopted VP9, even though it has been around for years.
Based on this pattern, he expects AV1 to follow a similar path in the medium term: used where it makes sense, but not dominant across WebRTC in 2026.
Here is a quick snapshot of how Tsahi framed the current codec mix:
| Codec | Where it fits today | Main pros | Main cons |
| --- | --- | --- | --- |
| VP8 | Many WebRTC services, old and new | Stable, well understood, “just works” | Less efficient than newer codecs |
| H.264 | Legacy systems, cameras, interop cases | Broad device support | Licensing, older format |
| VP9 | Select services, experiments | Better compression than VP8 | Adoption still limited |
| AV1 | New projects, controlled environments | Strong compression potential | High CPU cost, not plug‑and‑play yet |
For WebRTC 2026, Tsahi expects AV1 usage to grow, but VP8 and H.264 to remain the workhorses for many services.
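In a browser, this kind of codec mix is negotiated per connection, typically with `RTCRtpTransceiver.setCodecPreferences()`. As a rough sketch of the “AV1 where it makes sense, VP8 as the default workhorse” approach, here is some illustrative ordering logic; the CPU‑headroom flag is an assumed heuristic, not something from the talk:

```javascript
// Reorder a codec capability list so the preferred codec comes first.
// `codecs` mimics the shape of RTCRtpSender.getCapabilities("video").codecs.
// The hasAv1CpuHeadroom flag stands in for whatever device/load check
// a real application would run; it is a made-up heuristic.
function pickCodecOrder(codecs, { hasAv1CpuHeadroom }) {
  const preferred = hasAv1CpuHeadroom ? "video/AV1" : "video/VP8";
  return [
    ...codecs.filter((c) => c.mimeType === preferred),
    ...codecs.filter((c) => c.mimeType !== preferred),
  ];
}

// Example capability list, similar in shape to what browsers report.
const caps = [
  { mimeType: "video/VP8" },
  { mimeType: "video/H264" },
  { mimeType: "video/AV1" },
];

const lowPower = pickCodecOrder(caps, { hasAv1CpuHeadroom: false });
const highPower = pickCodecOrder(caps, { hasAv1CpuHeadroom: true });
// In a real app, the reordered list would then be applied with:
// transceiver.setCodecPreferences(ordered);
```

The point of the sketch is that switching codecs is a negotiation-time decision you can make per device and per call, which is exactly why a blanket “move everything to AV1” rollout is rarely how adoption happens.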
Prediction 3: Edge Computing for WebRTC – Hype or Real Progress?
Next up was a trend that gets a lot of attention: edge computing for WebRTC.
Arin framed it as WebRTC running “on the edge,” for example in:
- IoT devices and sensors
- Security and surveillance cameras
- Wearables and AR‑enabled factory workers
- Drones and remote equipment monitoring
So is 2026 the year edge computing goes big? Tsahi’s answer: not really.
Edge Computing: Slow Growth and Niche Use Cases
Tsahi’s view is that edge WebRTC will keep growing slowly, similar to how it has grown so far. In his words, “nothing really interesting or exciting” will change compared to recent years.
The biggest reason is legacy and inertia, especially in surveillance and security.
Most surveillance cameras today:
- Still encode video in H.264
- Stream over RTSP
- Have a large installed base, and they work well enough
Why do they stay that way? Again, because “it just works.”
If a service provider wants to integrate those cameras into a WebRTC application, the complexity shifts to the server side. They take RTSP streams from the cameras, then convert or gateway them into WebRTC for browsers or apps. This lets camera vendors keep their current stack, while cloud and service vendors take on the integration overhead.
From the camera maker’s point of view, there is a clear question: Why invest in putting WebRTC directly into the camera, if customers can work around it on their servers?
Because of that, Tsahi expects:
- WebRTC on the edge to appear mainly in new devices, not old ones
- Adoption to start in specific niches where WebRTC brings strong value
He gave drones as one example where WebRTC on the device makes more sense than in basic surveillance cameras. Drones often need low latency, bidirectional control, and flexible signaling, which fit WebRTC better.
So for WebRTC 2026, edge use will be present but limited. It will grow out of necessity in a few focused verticals, not as a sweeping trend across all IoT and camera hardware.
Prediction 4: The Dark Horse Tech That Could Disrupt WebRTC
When asked to name a dark horse technology that could disrupt WebRTC in an unexpected way, Tsahi again did not hesitate.
His pick: MOQ.
He tied this to a broader stack of web technologies:
- WebCodecs
- WebTransport
- WebAssembly
Tsahi suggested that combining these, and using MOQ in place of WebTransport, gives you a strong enabler for new types of real‑time experiences.
What Makes MOQ Interesting Here
By mixing these building blocks, developers can:
- Build media pipelines that do not have to follow the full WebRTC model
- Differentiate in ways that are harder with vanilla WebRTC
- Experiment with features where WebRTC is more rigid
This is not something mainstream developers will jump on overnight. Tsahi expects “crazy startups” to be among the first to push this model, along with large, very competitive vendors.
He mentioned a few large vendors by name as possible early adopters: companies that can afford to invest in custom media stacks and custom networking logic if it gives them a real edge.
There is still a question mark here. Tsahi pointed out that Zoom already walked a similar path, then moved away from it by the end of 2024. Whether they, or others, will circle back to this approach in 2026 or 2027 is unknown.
Even so, for him, MOQ combined with WebCodecs, WebTransport, and WebAssembly is the most likely dark horse that could disrupt how we think about WebRTC in the coming years.
Prediction 5: Current WebRTC “Best Practices” That May Become Anti‑Patterns
The final question was about WebRTC practices that are popular today, but may soon look like bad ideas or even anti‑patterns.
Tsahi named two main areas:
- Simulcast used too broadly
- SVC (Scalable Video Coding) as a never‑ending promise
He also linked this to a deeper question: how much should we keep chasing new codecs and higher resolutions, when AI‑based upscaling is improving so quickly?
Simulcast: Great Tool, Misused in Common Cases
Simulcast is often recommended as a must‑use feature for group calls. It sends multiple versions of the same video at different bitrates and resolutions, so the media server can pick the best one for each participant.
Tsahi agrees simulcast is a powerful tool, but adds a twist: “Simulcast is great, but don’t use simulcast” in all situations.
Here is the core problem:
- Most calls in many services are still one‑on‑one, even if the system supports groups.
- If two people are talking in what is technically a group room, simulcast still kicks in.
- In that simple case, simulcast only burns extra bandwidth and CPU, without adding real value.
His point is that we need to refine the best practice:
- Use simulcast where it helps, such as real multi‑party sessions.
- Skip it for 1:1 calls, or very small group calls with simple layouts.
- Treat it as a conditional tool, not a universal default.
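In browser WebRTC, simulcast is requested through the `sendEncodings` option of `addTransceiver()`. A minimal sketch of the conditional approach above might look like this; the participant threshold and the bitrate/resolution numbers are illustrative assumptions, not figures from the talk:

```javascript
// Decide which encodings to request based on room size.
// Thresholds and bitrates here are illustrative defaults only.
function encodingsFor(participantCount) {
  if (participantCount <= 2) {
    // 1:1 call: a single encoding avoids the extra CPU and uplink
    // cost of simulcast layers that nobody will consume.
    return [{ rid: "f", maxBitrate: 1_500_000 }];
  }
  // Real multi-party session: three simulcast layers the SFU can
  // pick from independently for each subscriber.
  return [
    { rid: "q", scaleResolutionDownBy: 4, maxBitrate: 150_000 },
    { rid: "h", scaleResolutionDownBy: 2, maxBitrate: 500_000 },
    { rid: "f", maxBitrate: 1_500_000 },
  ];
}

const oneOnOne = encodingsFor(2);
const groupCall = encodingsFor(6);
// In a real app:
// pc.addTransceiver(track, {
//   direction: "sendonly",
//   sendEncodings: encodingsFor(roomSize),
// });
```

One wrinkle worth noting: renegotiating from one encoding to three when a third participant joins takes extra signaling work, which is part of why many teams just leave simulcast on everywhere, and part of why Tsahi calls that habit out.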
For WebRTC 2026, this kind of overuse could start to look like an anti‑pattern, as more teams measure costs and optimize carefully.
SVC: The Long‑Promised “Holy Grail” That Never Quite Lands
Scalable Video Coding (SVC) has been talked about as a kind of “holy grail” for about a decade, especially since VP9 entered the picture. In theory, SVC lets you encode video once, with multiple quality layers inside, and then adapt smoothly based on network conditions and device power.
In practice, Tsahi’s view is blunt: it still is not there.
Questions he raises:
- Will SVC finally arrive in a practical, widespread way with AV1?
- Or will it remain that best practice that everyone keeps talking about, but few really use?
He leans toward skepticism, at least in the near term. SVC may stay as a “great idea on paper” that rarely becomes a core part of actual deployments, especially while simpler approaches work well enough.
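For context, browsers that implement the WebRTC‑SVC extension expose SVC through a `scalabilityMode` string on each encoding, such as "L1T2" (one spatial layer, two temporal layers) or "L3T3_KEY". A small helper that builds these mode strings, purely as an illustration of the naming scheme:

```javascript
// Build a WebRTC-SVC scalabilityMode string such as "L1T2" or "L3T3".
// "L" counts spatial layers, "T" counts temporal layers; the "_KEY"
// suffix marks K-SVC modes, where spatial layer switching happens
// only on keyframes.
function scalabilityMode(spatial, temporal, keyOnly = false) {
  if (spatial < 1 || spatial > 3 || temporal < 1 || temporal > 3) {
    throw new RangeError("layers must be between 1 and 3");
  }
  return `L${spatial}T${temporal}${keyOnly && spatial > 1 ? "_KEY" : ""}`;
}

const temporalOnly = scalabilityMode(1, 2);
const fullSvc = scalabilityMode(3, 3, true);
// In a real app this would go into sendEncodings, e.g.:
// { rid: "f", scalabilityMode: scalabilityMode(3, 3) }
```

The API surface exists, which is part of what keeps the “holy grail” talk alive; Tsahi’s skepticism is about how rarely these modes end up carrying production traffic, not about whether they are specified.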
Are We Chasing the Wrong Things: Codecs and 4K vs. AI Upscaling?
From simulcast and SVC, Tsahi moved to a broader reflection that matters a lot for WebRTC 2026 and beyond.
For years, the industry has:
- Chased the next best video codec
- Pushed for higher resolutions, like 4K
Yet in real calls, we often:
- Talk about HD, but actually send VGA or 720p
- Work in small windows on a laptop, where 4K adds little to perceived quality
Tsahi gave a personal example from his own recorded videos, many of which are in portrait mode:
- Should he record in 1080p or 720p?
- Does he really want to store 1 GB for 5 minutes of content, when 200–300 MB might be enough?
Those are offline videos, where quality expectations are higher than a live call. Even there, the trade‑off is not obvious.
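The file-size side of that trade‑off is easy to check with back‑of‑the‑envelope math: size in bytes is roughly bitrate times duration divided by eight. A quick sketch, with bitrates chosen for illustration rather than taken from Tsahi’s figures:

```javascript
// Rough recording size in megabytes for a given average video bitrate.
// size_MB = bitrate_Mbps * duration_seconds / 8  (bits -> bytes)
function recordingSizeMB(bitrateMbps, durationSeconds) {
  return (bitrateMbps * durationSeconds) / 8;
}

const fiveMinutes = 5 * 60;
// It takes roughly a 27 Mbps stream to fill 1 GB in five minutes,
// which is high-bitrate 1080p/4K territory.
const heavy = recordingSizeMB(27, fiveMinutes); // 1012.5 MB
// A more typical 720p recording bitrate lands in the range he mentions.
const modest = recordingSizeMB(6, fiveMinutes); // 225 MB
```

Run in reverse, the same formula shows why the question matters: the gap between the two files is purely a bitrate choice, and AI upscaling on the receiver side is a bet that the smaller one can be made to look close enough.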
Then he pointed to a recent feature from YouTube, which introduced AI‑based upscaling he referred to as “hyper resolution.” With that feature, you can upload a lower resolution video, and YouTube will upscale it using AI, so viewers see something closer to HD.
This leads to a big question: If AI can upscale so well, do we still need to always send and store full HD or 4K?
Tsahi framed the choice like this:
- Should we invest effort in better video compression on the sender side?
- Or should we invest in better AI on the receiver side to improve whatever comes in?
He admitted he does not yet know what the winning best practice will look like in a year or two. And that uncertainty is exactly why some of today’s habits might soon feel like poor choices.
Here is a simple comparison of the trade‑offs he hinted at:
| Approach | Pros | Cons |
| --- | --- | --- |
| Store high resolution | Maximum original quality preserved | Large files, higher storage and cost |
| AI upscale on playback | Smaller files, flexible playback | Extra processing, quality can vary |
For teams planning around WebRTC 2026, this means some long‑held assumptions may need a fresh look. The “always higher resolution, always newer codec” mindset may fade, replaced by more balanced choices that factor in AI and real user needs.
Wrapping Up: What WebRTC 2026 Might Really Look Like
Tsahi’s rapid‑fire session with Arin painted a picture of WebRTC 2026 that is more about steady change than dramatic shifts.
Key takeaways:
- More complaints about WebRTC than MOQ, because WebRTC is older and everywhere, while MOQ is still with early adopters.
- AV1 will not be the dominant codec in 2026, and may need until around 2028 to reach that status, while VP8 and H.264 stay strong.
- Edge WebRTC will grow slowly, with real use in niches like drones and new devices, while legacy cameras stick with H.264 and RTSP.
- MOQ plus WebCodecs, WebTransport, and WebAssembly is Tsahi’s dark horse stack that could disrupt traditional WebRTC in the longer term.
- Simulcast overuse and SVC hype may turn into recognized anti‑patterns, especially as AI upscaling changes how we think about resolution and codecs.
The real test will come as 2026 unfolds. Which prediction will age best, and which will surprise us? Either way, these questions are a helpful guide for anyone planning their next move in WebRTC.
WebRTC.ventures has been building real-time video applications for over a decade. No matter how WebRTC evolves in 2026, our team is here to help you design, build, and scale what comes next.
You can be among the first to see Arin and Tsahi’s chat each month on WebRTC Live, the monthly webinar with WebRTC industry guests brought to you by WebRTC.ventures. Catch up on past episodes through the Monthly WebRTC Industry Chat playlist on YouTube.
If you want to follow more of Tsahi’s thinking, his blog BlogGeek.me is packed with deep WebRTC content and hands-on insights. It also has a helpful WebRTC glossary, as evidenced by its use throughout this post!
Further Reading on the WebRTC.ventures blog:
- MOQ Protocol Explained: Unifying Real-Time and Scalable Streaming
- Why WebRTC Remains Deceptively Complex in 2025
- How to Get Started with WebRTC
Further Reading on Tsahi’s Blog:
