WebRTC applications can run into problems on Kubernetes because of the many layers of NAT, which often block UDP and RTP media traffic. STUNner is a WebRTC media gateway built to fix these issues by acting as a reliable STUN and TURN server in Kubernetes environments.

In this blog post, we’ll show you how to set up STUNner as a standalone STUN/TURN server on Amazon EKS (Elastic Kubernetes Service), with support for both UDP and TLS. You’ll learn how to use STUNner to set traffic rules, add authentication, and take advantage of features that make it easier to run WebRTC on Kubernetes.

This STUNner setup guide is great for DevOps teams and developers working on real-time applications who want more control over their media infrastructure, whether you’re replacing third-party TURN services or building a secure, scalable platform on AWS.

Prerequisites for Setting Up STUNner as a Standalone STUN/TURN Server 

Before diving into the STUNner setup, ensure your environment meets these requirements:

Infrastructure Requirements

  1. A load balancer controller for your cloud provider (the AWS Load Balancer Controller in this tutorial)
  2. Domain name with DNS management access (Amazon Route 53 recommended) 

Access & Permissions

  1. Write access to Kubernetes namespaces for resource deployment
  2. Ability to install cert-manager for TLS certificate generation

Required Tools

  1. kubectl
  2. Helm CLI
  3. Patience and troubleshooting skills for setup debugging!

Note: While this tutorial uses AWS services, the concepts apply to other cloud providers with minor modifications.

Step 1: Install STUNner Control Plane

The STUNner control plane is installed via a Helm chart and can be customized using a values file. Copy the following values and save them to a values.yaml file:

stunnerGatewayOperator:
  enabled: true
  deployment:
    name: stunner-gateway-operator
    podLabels: {}
    tolerations:
      - key: "webrtcventures"
        operator: "Equal"
        value: "allow"
        effect: "NoSchedule"
    nodeSelector:
      kubernetes.io/os: linux
    container:
      manager:
        resources:
          limits:
            cpu: 1000m
            memory: 512Mi
          requests:
            cpu: 250m
            memory: 128Mi
  dataplane:
    mode: managed
    spec:
      replicas: 1
      tolerations:
        - key: "webrtcventures"
          operator: "Equal"
          value: "allow"
          effect: "NoSchedule"
      resources:
        limits:
          cpu: 2
          memory: 512Mi
        requests:
          cpu: 500m
          memory: 128Mi

stunnerAuthService:
  enabled: true
  deployment:
    replicas: 1
    nodeSelector:
      kubernetes.io/os: linux
    tolerations:
      - key: "webrtcventures"
        operator: "Equal"
        value: "allow"
        effect: "NoSchedule"
    container:
      authService:
        resources:
          limits:
            cpu: 200m
            memory: 128Mi
          requests:
            cpu: 10m
            memory: 64Mi

With the file saved, install the STUNner control plane using the command below. Note that -f values.yaml should point to the values.yaml file created previously.

helm repo add stunner https://l7mp.io/stunner    # add repo
helm repo update stunner                         # update repo
helm upgrade --install stunner-gateway-operator \
  stunner/stunner-gateway-operator \
  --create-namespace \
  --namespace=stunner-system \
  -f values.yaml                                 # install
kubectl get pods -n stunner-system               # verify the installation

These commands install the STUNner control plane in a single namespace, consisting of a gateway operator and an authentication service. The control plane manages the creation of the dataplane.
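
If the installation succeeded, the verification command should show the gateway operator and the authentication service pods in a Running state. A rough sketch of the expected output (pod names and suffixes will vary):

NAME                                        READY   STATUS    RESTARTS   AGE
stunner-auth-xxxxxxxxxx-xxxxx               1/1     Running   0          1m
stunner-gateway-operator-xxxxxxxxxx-xxxxx   1/1     Running   0          1m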

Step 2: Set up cert-manager for TLS

Next, install a certificate manager in your cluster. This reference setup is based on Amazon EKS (Elastic Kubernetes Service).

First, create the IAM role that cert-manager’s service account will use to give its pods permission to manage Route 53 TXT records for the ACME DNS-01 challenge. The role uses the following policies:

1. CertManager IAM assume role policy reference:

    Replace AWS_ACCOUNT_ID, AWS_REGION, and EKS_CLUSTER_OIDC_ID with your account’s specific values. If the ExplicitSelfRoleAssumption block causes an error during initial role creation, remove it from the assume role policy. You can add it back after the role is created, as this block references the role itself (replace all values inside <<VALUE>> with your own).

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "ec2.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        },
        {
          "Effect": "Allow",
          "Principal": {
            "Federated":
              "arn:aws:iam::account-id:oidc-provider/oidc.eks.<<AWS_REGION>>.amazonaws.com/id/<<EKS_CLUSTER_OIDC_ID>>"
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
            "StringEquals": {
              "oidc.eks.<<AWS_REGION>>.amazonaws.com/id/<<EKS_CLUSTER_OIDC_ID>>:aud":
                "sts.amazonaws.com",
              "oidc.eks.<<AWS_REGION>>.amazonaws.com/id/<<EKS_CLUSTER_OIDC_ID>>:sub": [
                "system:serviceaccount:cert-manager:cert-manager",
                "system:serviceaccount:cert-manager:cert-manager-cainjector",
                "system:serviceaccount:cert-manager:cert-manager-webhook"
              ]
            }
          }
        },
        {
          "Sid": "ExplicitSelfRoleAssumption",
          "Effect": "Allow",
          "Action": "sts:AssumeRole",
          "Principal": {
            "AWS":
              "arn:aws:iam::<<AWS_ACCOUNT_ID>>:role/<<CURRENT_ROLE_NAME>>"
          },
          "Condition": {
            "ArnLike": {
              "aws:PrincipalArn":
                "arn:aws:iam::<<AWS_ACCOUNT_ID>>:role/<<CURRENT_ROLE_NAME>>"
            }
          }
        }
      ]
    }
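
    To create the role from the CLI, save the trust policy above to a file and run something like the following (the role and file names here are illustrative):

    aws iam create-role \
      --role-name cert-manager-dns01 \
      --assume-role-policy-document file://certmanager-trust-policy.json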

2. CertManager role policy reference:

    Assuming you are using Route 53, replace ROUTE53_ZONE_ID with the hosted zone ID from your account (replace all values inside <<ROUTE53_ZONE_ID>> with your own).

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "route53:GetChange",
          "Resource": "arn:aws:route53:::change/*"
        },
        {
          "Effect": "Allow",
          "Action": [
            "route53:ChangeResourceRecordSets",
            "route53:ListResourceRecordSets"
          ],
          "Resource": "arn:aws:route53:::hostedzone/<<ROUT53_ZONE_ID>>"
        },
        {
          "Effect": "Allow",
          "Action": "route53:ListHostedZonesByName",
          "Resource": "*"
        }
      ]
    }
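
    Then attach the permissions policy above to the role; a sketch using the same illustrative names:

    aws iam put-role-policy \
      --role-name cert-manager-dns01 \
      --policy-name cert-manager-route53 \
      --policy-document file://certmanager-role-policy.json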

Once you have the IAM role ready, define the certmanager-values.yaml file that will be used to install the cert-manager custom resource definitions and its control plane resources (service account, webhook service, and certificate authority injector).

Create the file (certmanager-values.yaml) and add the following values for the Helm chart, replacing SERVICE_ACCOUNT_CERTMANAGER_ROLE_ARN with the ARN of the role you created (replace all values inside <<SERVICE_ACCOUNT_CERTMANAGER_ROLE_ARN>> with your own).

crds:
  enabled: true

serviceAccount:
  create: true
  automountServiceAccountToken: true
  annotations:
    eks.amazonaws.com/role-arn: <<SERVICE_ACCOUNT_CERTMANAGER_ROLE_ARN>>
  name: ""
# nodeSelector:
#   group: general-pool
# tolerations:
#   - operator: "Exists"


webhook:
  serviceAccount:
    create: true
    automountServiceAccountToken: true
    annotations:
      eks.amazonaws.com/role-arn: <<SERVICE_ACCOUNT_CERTMANAGER_ROLE_ARN>>
    name: ""
  # nodeSelector:
  #   group: general-pool
  # tolerations:
  #   - operator: "Exists"


cainjector:
  serviceAccount:
    create: true
    automountServiceAccountToken: true
    annotations:
      eks.amazonaws.com/role-arn: <<SERVICE_ACCOUNT_CERTMANAGER_ROLE_ARN>>
    name: ""
  # nodeSelector:
  #   group: general-pool
  # tolerations:
  #   - operator: "Exists"


acmesolver:
  nodeSelector:
    group: general-pool
  # tolerations:
  #   - operator: "Exists"


startupapicheck:
  serviceAccount:
    create: true
    automountServiceAccountToken: true
    annotations:
      helm.sh/hook: post-install
      helm.sh/hook-weight: "-5"
      helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
      eks.amazonaws.com/role-arn: <<SERVICE_ACCOUNT_CERTMANAGER_ROLE_ARN>>
    name: ""
  # nodeSelector:
  #   group: general-pool
  # tolerations:
  #   - operator: "Exists"

Next, install the cert-manager control plane by running the following commands:

helm repo add jetstack https://charts.jetstack.io
helm repo update jetstack
helm upgrade --install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.15.3 \
  -f certmanager-values.yaml
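
Before moving on, verify that the cert-manager pods are up (pod name suffixes will vary):

kubectl get pods -n cert-manager
# NAME                                       READY   STATUS    RESTARTS   AGE
# cert-manager-xxxxxxxxxx-xxxxx              1/1     Running   0          1m
# cert-manager-cainjector-xxxxxxxxxx-xxxxx   1/1     Running   0          1m
# cert-manager-webhook-xxxxxxxxxx-xxxxx      1/1     Running   0          1m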

Then, create the certmanager-certificate.yaml file with the following content (replace all values inside <<VALID_EMAIL_ADDRESS>>, <<ROUTE53HOSTEDZONENAME>>, <<ROUTE53HOSTEDZONEID>>, <<AWS_REGION>>, <<SERVICE_ACCOUNT_CERTMANAGER_ROLE_ARN>>, and <<turn.dev.blog.example.com>> with your own).

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: webrtcventures-issuer
spec:
  acme:
    # For testing, point this at the Let's Encrypt staging server instead:
    # server: "https://acme-staging-v02.api.letsencrypt.org/directory"
    server: "https://acme-v02.api.letsencrypt.org/directory"
    email: "<<VALID_EMAIL_ADDRESS>>"
    privateKeySecretRef:
      name: webrtcventures-issuer-account-key
    solvers:
      # https://cert-manager.io/docs/tutorials/acme/dns-validation/
      - selector:
          dnsZones:
            - <<ROUTE53HOSTEDZONENAME>> # i.e. example.com
        dns01:
          route53:
            region: "<<AWS_REGION>>"
            hostedZoneID: "<<ROUTE53HOSTEDZONEID>>"
            role: <<SERVICE_ACCOUNT_CERTMANAGER_ROLE_ARN>>
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: webrtcventures-certificate
  namespace: applications
spec:
  secretName: webrtcventures-certificate
  # subject:
  #   organizations:
  #     - WebRTCVentures
  commonName: "<<turn.dev.blog.example.com>>"
  dnsNames:
    - <<turn.dev.blog.example.com>>
    - "<<*.dev.blog.example.com>>"
  issuerRef:
    kind: ClusterIssuer # Issuer
    name: webrtcventures-issuer
    group: cert-manager.io # default
After a few minutes, a new certificate will be issued and will appear among the secrets resources. If you can’t find a secret named webrtcventures-certificate, check the cert-manager pod logs for errors.

You can also check the certificate status directly using the following command:

kubectl get certificates -n applications
# Status should show "Ready: True"
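
If the certificate stays in a not-ready state, these commands are useful for inspecting the issuance flow (resource names as defined above):

kubectl describe certificate webrtcventures-certificate -n applications
kubectl logs -n cert-manager deploy/cert-manager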

Note: You can also test this with a self-signed certificate using cert-manager. Reference this documentation.

Step 3: Configure UDP and TLS listeners

Next, install the Gateway configuration resources that will be used by the control plane to deploy the required STUNner resources.

The following configuration requires a Kubernetes secret that holds the TURN credentials. The values passed below are base64 encoded. To encode your values, run:

echo -n example-username | base64
echo -n example-password | base64
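
Alternatively, you can let kubectl do the encoding and render the full Secret manifest for you (the secret name matches the one used in the configuration below):

kubectl create secret generic webrtcventures-stunner-auth-secret \
  -n applications \
  --from-literal=type=static \
  --from-literal=username=example-username \
  --from-literal=password=example-password \
  --dry-run=client -o yaml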

The route resources (UDPRoute and TCPRoute) must reference an existing service that exposes the port you specify under:

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
...
  rules:
    - backendRefs:
        - name: my-application
          namespace: applications
          port: 7882 # port used by the app's service for tcp/tls
Create a file named stunner-resources-config.yaml and add the following configuration (replace all values inside <<VALUE>> with your own):

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: webrtcventures-application-gateway
  namespace: applications
spec:
  controllerName: "stunner.l7mp.io/gateway-operator"
  parametersRef:
    group: "stunner.l7mp.io"
    kind: GatewayConfig
    name: webrtcventures-application-gateway
    namespace: applications
  description: "STUNner is a WebRTC ingress gateway for Kubernetes"
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: webrtcventures-stunner-auth-secret
  namespace: applications
data:
  type: <<VALUE>> # static => c3RhdGlj
  username: <<VALUE>> # example => ZXhhbXBsZQ==
  password: <<VALUE>> # example => ZXhhbXBsZQ==
---
apiVersion: stunner.l7mp.io/v1
kind: GatewayConfig
metadata:
  name: webrtcventures-application-gateway
  namespace: applications
spec:
  logLevel: "all:DEBUG,turn:INFO"
  realm: stunner.l7mp.io
  authRef:
    name: webrtcventures-stunner-auth-secret
    namespace: applications
---
# apiVersion: gateway.networking.k8s.io/v1
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: webrtcventures-application-gateway
  namespace: applications
  annotations:
    stunner.l7mp.io/enable-mixed-protocol-lb: "true"
    stunner.l7mp.io/targetport: '{"tls-listener":44321}'
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    # service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    # service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-2:773530277564:certificate/6868fb97-e845-4d7e-918f-8fcb1cd72c06"
    # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    # service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS13-1-2-2021-06"
spec:
  gatewayClassName: webrtcventures-application-gateway
  listeners:
    - name: udp-listener
      port: 3478
      protocol: TURN-UDP
      allowedRoutes:
        namespaces:
          from: All
    - name: tls-listener
      port: 443
      hostname: "<<turn.dev.blog.example.com>>" # your TURN domain
      protocol: TURN-TLS
      tls:
        mode: Terminate
        certificateRefs:
          - kind: Secret
            namespace: applications
            name: webrtcventures-certificate
      allowedRoutes:
        namespaces:
          from: All
        kinds:
          - kind: TCPRoute
---
# If you see "udproute-controller Validating backend: not found",
# check that:
# 1. The backend service ("my-application") exists in the referenced namespace
# 2. The "applications" namespace exists
# 3. Port 7881 is exposed on the referenced backend service
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: UDPRoute
metadata:
  name: webrtcventures-application-udp-route
  namespace: applications
spec:
  parentRefs:
    - name: webrtcventures-application-gateway
  rules:
    - backendRefs:
        - name: my-application
          namespace: applications
          port: 7881 # port used by the app's service for udp
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: webrtcventures-application-tcp-route
  namespace: applications
spec:
  parentRefs:
    - name: webrtcventures-application-gateway
  rules:
    - backendRefs:
        - name: my-application
          namespace: applications
          port: 7882 # port used by the app's service for tcp/tls
Then, deploy the STUNner resources by running the following commands:

kubectl create ns applications   # skip if it already exists
kubectl apply -n applications -f stunner-resources-config.yaml

This will provision the STUNner resources, including a LoadBalancer service configured using the annotations passed to the Gateway resource created above.

Lastly, obtain the service’s external IP, which will be an AWS Network Load Balancer DNS name. Map this DNS name to a subdomain using a CNAME record, or use it directly. In this tutorial, it is mapped to turn.dev.blog.webrtc.software.
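
Once the CNAME record has propagated, a quick sanity check (replace the domain with your own):

dig +short CNAME turn.dev.blog.webrtc.software
# should print the NLB DNS name, e.g.
# k8s-stunner-webrtcve-xxx6545-xxxx.elb.us-east-2.amazonaws.com.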

Be sure to perform an ICE test to confirm that the TURN server is running correctly with both UDP and TLS. Check the testing section below for a sample test.

Step 4: Test your WebRTC app connectivity

With the above resources created, and assuming you’ve configured a cloud provider-compatible load balancer controller such as the AWS Load Balancer Controller we are using, a Kubernetes service should have been created.

This, in turn, should have triggered the creation of an internet-facing Network Load Balancer. To get the load balancer hostname/IP, run:

export HOSTNAME=$(kubectl get service \
  webrtcventures-application-gateway -n applications \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

echo $HOSTNAME

You should get an output similar to the following:

k8s-stunner-webrtcve-xxx6545-xxxx.elb.us-east-2.amazonaws.com

1. Navigate to the Trickle ICE page to test whether the TURN server is working correctly. In the STUN or TURN URI input, configure the server using the output of the previous command as the hostname. You can add both a STUN and a TURN URI. Use either the load balancer URL directly or map it to a custom domain (e.g., turn.dev.blog.webrtc.software) by creating a CNAME record.

   stun:k8s-stunner-webrtcve-xxx6545-xxxx.elb.us-east-2.amazonaws.com

   turn:k8s-stunner-webrtcve-xxx6545-xxxx.elb.us-east-2.amazonaws.com
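
   To exercise the TLS listener as well, you can add a turns: URI pointing at your custom domain (this assumes the CNAME record and the certificate from Step 2 are in place):

   turns:turn.dev.blog.webrtc.software:443?transport=tcp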

2. For credentials (TURN username and TURN password), use the values set in the Secret referenced by the GatewayConfig resource. In our case, we’ll use the following for both the TURN and STUN configuration.

   userName: "admin-stunner"
   password: "b4KSUjMSnKNo"

3. Finally, click the Add and then the Gather candidates buttons. The example below uses a domain name, but you can still use the NLB DNS name obtained from the AWS ELB console or from the service details in the cluster.

Trickle ICE test interface showing STUN/TURN server configuration with multiple ICE candidates listed in a table, displaying host, server reflexive, and relay candidates with their IP addresses and ports.
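
If you prefer testing from the command line, coturn’s turnutils_uclient utility can also exercise the TURN server; a minimal sketch, assuming the coturn tools are installed and using the credentials above:

turnutils_uclient -u admin-stunner -w b4KSUjMSnKNo \
  -p 3478 turn.dev.blog.webrtc.software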

WebRTC DevOps Solutions

Congratulations! You now have a fully functional STUNner TURN server running on Kubernetes with both UDP and TLS support! This WebRTC STUN/TURN server setup provides the foundation for deploying real-time applications that can handle complex networking scenarios in cloud environments.

While this tutorial covers the basics of setting up STUNner on Kubernetes, production WebRTC deployments often require additional considerations around scaling, monitoring, security, and infrastructure optimization.

WebRTC.ventures specializes in WebRTC DevOps solutions including:

• Production-ready Kubernetes deployments for WebRTC platforms
• Auto-scaling configurations for media servers and TURN infrastructure
• Multi-region deployments
• Performance monitoring and optimization
• Security hardening and compliance frameworks
• CI/CD pipeline setup for WebRTC applications

Whether you’re looking to optimize your existing WebRTC infrastructure or need help architecting a scalable solution from scratch, the WebRTC.ventures DevOps team has the expertise to get you there. We’re also available on a staff augmentation basis. Contact us today!
