
Many WebRTC applications struggle with outdated or inappropriate media server infrastructure, which limits their ability to scale effectively and support powerful AI features. Alfred Gonzalez, Senior WebRTC Engineer at WebRTC.ventures, walks us through the considerations, options, and steps to successfully migrate to another media server. He’ll then show…

Host Arin Sime was live from the RTC.ON 2025 conference in Krakow, Poland, with short discussions with three of the event speakers. Read our conference wrap-up: WebRTC.ventures Visits RTC.ON 2025. Key insights and episode highlights below. Watch Episode 105! Key Insights: ⚡ MoQ is the next-generation foundation for…

Large Language Models (LLMs) have dominated conversations about AI integration in WebRTC, particularly for voice-based features like transcription, summarization, and intent detection. But there’s an emerging layer that many outside research circles are missing: Vision Language Models (VLMs). Unlike LLMs, which work with…

Adding Voice AI to WebRTC applications presents unique technical challenges and user experience considerations. How do you architect systems that handle real-time audio processing, maintain conversational context, and deliver natural, responsive interactions? And how do you design interfaces that adapt to the dynamic nature of AI-powered communication?