In a world that craves dynamic interactions and immersive experiences, the realm of technology keeps pushing the boundaries of what’s possible. Imagine a platform that combines the magic of real-time audio transmission with animated VR avatars. 

We’re pioneers in harnessing the power of WebRTC to create extraordinary connections. In this blog post, you’ll learn how to transmit audio via WebRTC and receive animated video talking portraits, otherwise known as “avatars”.

WebRTC Audio Becomes a VR Avatar

At the heart of this unique and engaging experience is a symphony of technologies orchestrated to redefine how we can communicate and interact online.

The backend WebRTC functionality is powered by the aiortc library, which enables seamless server-side real-time communication to relay streams between users over the web. The primary focus is integrating the audio stream with an animated VRM avatar to enhance the interactive experience.

Key Technologies and Modules

There are three important components to this mix:

  1. WebRTC with aiortc. WebRTC is a powerful technology that facilitates real-time communication. With the aiortc library, developers can build WebRTC applications in Python. This library empowers the project to establish audio streams and enables synchronized communication.
  2. Three.js. As a prominent 3D graphics library, Three.js renders interactive 3D computer graphics in a web browser. It offers a wide range of capabilities, making it an ideal choice for creating immersive visual experiences.
  3. vrm-three. The vrm-three library loads VRM 3D avatars, digital representations of individuals. It integrates seamlessly with Three.js, providing the foundation for the animated talking portraits.
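Before any avatar can animate, the browser has to send its microphone audio to the aiortc backend. The sketch below shows one plausible way to do that; the `/offer` signaling endpoint and the `connectAudio`/`describeSdp` helper names are assumptions for illustration, not details from this project.

```javascript
// Hypothetical browser-side sketch: capture the microphone and negotiate
// a session with an aiortc backend over a JSON "/offer" endpoint.
async function connectAudio(signalingUrl = "/offer") {
  const pc = new RTCPeerConnection();

  // Capture microphone audio and add its tracks to the connection
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  mic.getTracks().forEach((track) => pc.addTrack(track, mic));

  // Create an SDP offer and send it to the server for an answer
  await pc.setLocalDescription(await pc.createOffer());
  const response = await fetch(signalingUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(describeSdp(pc.localDescription)),
  });
  await pc.setRemoteDescription(await response.json());
  return pc;
}

// Pure helper: keep only the fields the JSON signaling body needs
function describeSdp(description) {
  return { sdp: description.sdp, type: description.type };
}
```

aiortc’s examples use this same offer/answer-over-HTTP pattern, which is why a simple `fetch` is enough here: no dedicated signaling server is required for a demo.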

Making Magic Happen

We use the WebRTC API to listen for newly received audio tracks. When a track arrives, we pass it to a custom VRMAvatar.js module that handles the animation.

import VRMAvatar from "./VRMAvatar.js";
// Instantiate the VRMAvatar class with the path to a VRM model
const myVRM = new VRMAvatar('./models/VRM1_Constraint_Twist_Sample.vrm');
// Display the avatar and start the animation, driven by the audio stream
myVRM.initializeVoiceToAvatar(audioStream); // audioStream is a MediaStream
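Where does `audioStream` come from? One way to wire it up is a small listener on the peer connection’s `track` event. This is a sketch, assuming `pc` is an established RTCPeerConnection and `myVRM` is the instance from the snippet above; `attachAudioTracks` is a hypothetical helper, not part of the project’s code.

```javascript
// Hypothetical helper: invoke a callback for each incoming audio track,
// ignoring any video tracks on the same connection.
function attachAudioTracks(pc, onAudioTrack) {
  pc.addEventListener("track", (event) => {
    if (event.track.kind === "audio") onAudioTrack(event.track);
  });
}

// Usage in the browser (sketch):
// attachAudioTracks(pc, (track) =>
//   myVRM.initializeVoiceToAvatar(new MediaStream([track])));
```

Wrapping the single remote track in a `new MediaStream([track])` gives `initializeVoiceToAvatar` the MediaStream it expects.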

There are three steps here: 

  1. Initialize VRMAvatar. The star of the show, VRMAvatar, steps onto the stage. The VRMAvatar class is the key to bringing VR avatars to life.
  2. Start Animation. The animation loop begins, and behold as your chosen VRM avatar springs to life!
  3. Symphony of Synchronization. The initializeVoiceToAvatar function links the audio stream to the avatar’s mouth movements.
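The synchronization step can be sketched with the Web Audio API: an AnalyserNode samples the incoming stream, and the signal’s loudness drives a mouth-open weight each frame. This is a guess at one reasonable implementation, not the project’s actual VRMAvatar.js; the RMS-to-openness mapping, the `gain` value, and the use of the VRM "aa" expression are all assumptions.

```javascript
// Pure helper: map time-domain byte samples (0..255, centred at 128)
// to a mouth-open weight in [0, 1] via the signal's RMS amplitude.
function mouthOpenness(samples, gain = 4) {
  let sumSquares = 0;
  for (const s of samples) {
    const centred = (s - 128) / 128; // normalise to [-1, 1]
    sumSquares += centred * centred;
  }
  const rms = Math.sqrt(sumSquares / samples.length);
  return Math.min(1, rms * gain); // louder audio opens the mouth wider
}

// Browser-side wiring (sketch): analyse the stream and apply the weight
// to the avatar's "aa" mouth expression on every animation frame.
function animateMouth(vrm, audioStream) {
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  ctx.createMediaStreamSource(audioStream).connect(analyser);
  const samples = new Uint8Array(analyser.fftSize);
  (function tick() {
    analyser.getByteTimeDomainData(samples);
    vrm.expressionManager.setValue("aa", mouthOpenness(samples));
    requestAnimationFrame(tick);
  })();
}
```

Silence maps to a closed mouth (weight 0) and loud speech saturates at 1, so the avatar’s lips track the speaker’s volume frame by frame.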

Now, a Demo!

View the demo project on GitHub and watch the demo below.

Unlocking Possibilities: Where WebRTC Shines

The potential of this project spans realms:

  • Elevated Virtual Meetings. Imagine virtual meetings where every participant has a voice and an animated presence, fostering a connection that goes beyond the screen.
  • Gaming and Live Streaming. From the realm of entertainment to the world of gaming, the fusion of real-time audio and animated avatars opens new dimensions of engagement.
  • Education Redefined. Virtual classrooms take a leap forward, as interactive avatars create a classroom where learning is an immersive adventure.

Your Gateway to the Extraordinary

We’re not just WebRTC experts: we’re architects of experiences that bridge technology and humanity. If you need help building extraordinary new experiences or improving your WebRTC application, reach out to us.

We don’t just build applications, we build better connections!
