l7mp / stunner

A Kubernetes media gateway for WebRTC. Contact: [email protected]

Home Page: https://l7mp.io

License: MIT License


stunner's Introduction

STUNner

Note: This page documents the latest development version of STUNner. See the documentation for the stable version here.

STUNner: A Kubernetes media gateway for WebRTC

Ever wondered how to deploy your WebRTC infrastructure into the cloud? Frightened away by the complexities of Kubernetes container networking, and the surprising ways in which it may interact with your UDP/RTP media? Read through the endless stream of Stack Overflow questions asking how to scale WebRTC services with Kubernetes, just to get (mostly) insufficient answers? Want to safely connect your users behind a NAT, without relying on expensive third-party TURN services?

Worry no more! STUNner allows you to deploy any WebRTC service into Kubernetes, smoothly integrating it into the cloud-native ecosystem. STUNner exposes a standards-compliant STUN/TURN gateway for clients to access your virtualized WebRTC infrastructure running in Kubernetes, maintaining full browser compatibility and requiring minimal or no modification to your existing WebRTC codebase. STUNner supports the Kubernetes Gateway API so you can configure it in the familiar YAML-engineering style via Kubernetes manifests.

Table of Contents

  1. Description
  2. Features
  3. Getting started
  4. Tutorials
  5. Documentation
  6. Caveats
  7. Milestones

Description

Currently WebRTC lacks a virtualization story: there is no easy way to deploy a WebRTC media service into Kubernetes to benefit from the resiliency, scalability, and high availability features we have come to expect from modern network services. Worse yet, the entire industry relies on a handful of public STUN servers and hosted TURN services to connect clients behind a NAT/firewall, which may create a useless dependency on externally operated services, introduce a performance bottleneck, raise security concerns, and come with a non-trivial price tag.

The main goal of STUNner is to allow anyone to deploy their own WebRTC infrastructure into Kubernetes, without relying on any external service other than the cloud-provider's standard hosted Kubernetes offering. STUNner can act as a standalone STUN/TURN server that WebRTC clients and media servers can use as a scalable NAT traversal facility (headless model), or it can act as a gateway for ingesting WebRTC media traffic into the Kubernetes cluster by exposing a public-facing STUN/TURN server that WebRTC clients can connect to (media-plane model). This makes it possible to deploy WebRTC application servers and media servers into ordinary Kubernetes pods, taking advantage of the full cloud native feature set to manage, scale, monitor and troubleshoot the WebRTC infrastructure like any other Kubernetes workload.

STUNner media-plane deployment architecture

Don't worry about the performance implications of processing all your media through a TURN server: STUNner is written in Go so it is extremely fast, it is co-located with your media server pool so you don't pay the round-trip time to a far-away public STUN/TURN server, and STUNner can be easily scaled up if needed just like any other "normal" Kubernetes service.

Features

Kubernetes has been designed and optimized for the typical HTTP/TCP Web workload, which makes streaming workloads, and especially UDP/RTP based WebRTC media, feel like second-class citizens. STUNner aims to change this state of affairs by exposing a single public STUN/TURN server port for ingesting all media traffic into a Kubernetes cluster in a controlled and standards-compliant way.

  • Seamless integration with Kubernetes. STUNner can be deployed into any Kubernetes cluster, even into restricted ones like GKE Autopilot, using a single command. Manage your HTTP/HTTPS application servers with your favorite service mesh, and STUNner takes care of all UDP/RTP media. STUNner implements the Kubernetes Gateway API so you configure it in exactly the same way as the rest of your workload through easy-to-use YAML manifests.

  • Expose a WebRTC media server on a single external UDP port. Get rid of the Kubernetes hacks, like privileged pods and hostNetwork/hostPort services, typically recommended as a prerequisite to containerizing your WebRTC media plane. Using STUNner a WebRTC deployment needs only two public-facing ports, one HTTPS port for signaling and a single UDP port for all your media.

  • No reliance on external services for NAT traversal. Can't afford a hosted TURN service for client-side NAT traversal? Can't get decent audio/video quality because the third-party TURN service poses a bottleneck? STUNner can be deployed into the same cluster as the rest of your WebRTC infrastructure, and any WebRTC client can connect to it directly without the use of any external STUN/TURN service whatsoever, apart from STUNner itself.

  • Easily scale your WebRTC infrastructure. Tired of manually provisioning your WebRTC media servers? STUNner lets you deploy the entire WebRTC infrastructure into ordinary Kubernetes pods, thus scaling the media plane is as easy as issuing a kubectl scale command (see the sketch after this list). Or you can use the built-in Kubernetes horizontal autoscaler to automatically resize your workload based on demand.

  • Secure perimeter defense. No need to open thousands of UDP/TCP ports on your media server for potentially malicious access; with STUNner all media is received through a single ingress port that you can tightly monitor and control.

  • Simple code and extremely small size. Written in pure Go using the battle-tested pion/webrtc framework, STUNner is just a couple of hundred lines of fully open-source code. The server is extremely lightweight: the typical STUNner container image size is only 15 Mbytes.
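
As a rough illustration of the scaling point above, resizing the media plane boils down to standard kubectl commands; the Deployment name and namespace below are hypothetical placeholders for your own media server workload:

# Scale the (hypothetical) media server Deployment by hand:
kubectl -n media scale deployment my-media-server --replicas=5

# Or let the Horizontal Pod Autoscaler size the pool based on CPU load:
kubectl -n media autoscale deployment my-media-server --cpu-percent=80 --min=2 --max=10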

Getting Started

STUNner comes with a Helm chart to fire up a fully functional STUNner-based WebRTC media gateway in minutes. Note that the default installation does not contain an application server or a media server: STUNner is not a WebRTC service, it is merely an enabler for you to deploy your own WebRTC infrastructure into Kubernetes. Once installed, STUNner makes sure that your media servers are readily reachable by WebRTC clients, despite running with a private IP address inside a Kubernetes pod. See the tutorials for some ideas on how to deploy an actual WebRTC application behind STUNner.

With a minimal understanding of WebRTC and Kubernetes, deploying STUNner should take less than 5 minutes.

Installation

The simplest way to deploy STUNner is through Helm. STUNner configuration parameters are available for customization as Helm Values.

helm repo add stunner https://l7mp.io/stunner
helm repo update
helm install stunner-gateway-operator stunner/stunner-gateway-operator --create-namespace \
    --namespace=stunner-system

Find out more about the charts in the STUNner-helm repository.

Configuration

The standard way to interact with STUNner is via the Kubernetes Gateway API. This is akin to the way you configure all Kubernetes workloads: specify your intent in YAML files and issue a kubectl apply, and the STUNner gateway operator will automatically create the STUNner dataplane (that is, the stunnerd pods that implement the STUN/TURN service) and download the new configuration to the dataplane pods.

It is generally a good idea to maintain the STUNner configuration in a separate Kubernetes namespace. Below we will use the stunner namespace; create it with kubectl create namespace stunner if it does not exist.
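
For example, on a fresh cluster:

# Create the namespace that will hold the STUNner configuration.
kubectl create namespace stunner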

  1. Given a fresh STUNner install, the first step is to register STUNner with the Kubernetes Gateway API. This amounts to creating a GatewayClass, which serves as the root level configuration for your STUNner deployment.

    Each GatewayClass must specify a controller that will manage the Gateway objects created under the class hierarchy. This must be set to stunner.l7mp.io/gateway-operator in order for STUNner to pick up the GatewayClass. In addition, a GatewayClass can refer to further implementation-specific configuration via a reference called parametersRef; in our case, this will be a GatewayConfig object to be specified next.

    kubectl apply -f - <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: GatewayClass
    metadata:
      name: stunner-gatewayclass
    spec:
      controllerName: "stunner.l7mp.io/gateway-operator"
      parametersRef:
        group: "stunner.l7mp.io"
        kind: GatewayConfig
        name: stunner-gatewayconfig
        namespace: stunner
      description: "STUNner is a WebRTC media gateway for Kubernetes"
    EOF
  2. The next step is to set some general configuration for STUNner, most importantly the STUN/TURN authentication credentials. This requires loading a GatewayConfig custom resource into Kubernetes.

    The below example sets the authentication realm to stunner.l7mp.io and directs STUNner to take the TURN authentication credentials from the Kubernetes Secret called stunner-auth-secret in the stunner namespace.

    kubectl apply -f - <<EOF
    apiVersion: stunner.l7mp.io/v1
    kind: GatewayConfig
    metadata:
      name: stunner-gatewayconfig
      namespace: stunner
    spec:
      realm: stunner.l7mp.io
      authRef: 
        name: stunner-auth-secret
        namespace: stunner
    EOF

    The below Secret configures the static authentication mechanism for STUNner with the username/password pair user-1/pass-1.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: stunner-auth-secret
      namespace: stunner
    type: Opaque
    stringData:
      type: static
      username: user-1
      password: pass-1
    EOF

    Note that these steps are required only once per STUNner installation.

  3. At this point, we are ready to expose STUNner to clients! This occurs by loading a Gateway resource into Kubernetes.

    In the below example, we open a STUN/TURN listener service on UDP port 3478. STUNner will automatically create the STUN/TURN server that will run the Gateway and expose it on a public IP address and port. Clients can then connect to this listener and, once authenticated, STUNner will forward client connections to an arbitrary service backend inside the cluster. Make sure to set the gatewayClassName to the name of the above GatewayClass; this is how STUNner knows to associate the Gateway with the settings from the GatewayConfig (e.g., the STUN/TURN credentials).

    kubectl apply -f - <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: udp-gateway
      namespace: stunner
    spec:
      gatewayClassName: stunner-gatewayclass
      listeners:
        - name: udp-listener
          port: 3478
          protocol: TURN-UDP
    EOF
  4. The final step is to tell STUNner what to do with the client connections received on the Gateway. This occurs by attaching a UDPRoute resource to the Gateway by setting the parentRef to the Gateway's name and specifying the target service in the backendRef.

    The below UDPRoute will configure STUNner to route client connections received on the Gateway called udp-gateway to the WebRTC media server pool identified by the Kubernetes service media-plane in the default namespace.

    kubectl apply -f - <<EOF
    apiVersion: stunner.l7mp.io/v1
    kind: UDPRoute
    metadata:
      name: media-plane
      namespace: stunner
    spec:
      parentRefs:
        - name: udp-gateway
      rules:
        - backendRefs:
            - name: media-plane
              namespace: default
    EOF

Note that STUNner deviates somewhat from the way Kubernetes handles ports in Services. In Kubernetes each Service is associated with one or more protocol-port pairs, and connections via the Service can be made only to these specific protocol-port pairs. WebRTC media servers, however, usually open lots of different ports, typically one per client connection, and it would be cumbersome to create a separate backend Service and UDPRoute for each port. To simplify this, STUNner ignores the protocol and port specified in the backend Service and allows connections to the backend pods via any protocol-port pair. A single backend Service is therefore enough for STUNner to reach any port exposed on a WebRTC media server.
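
As an example, a single ClusterIP Service selecting the media server pods is all STUNner needs, even if the media servers allocate their UDP ports dynamically; the port listed in the Service is essentially a formality. Below is a minimal sketch, assuming the media server pods carry the hypothetical label app: media-server:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: media-plane
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: media-server    # hypothetical label on your media server pods
  ports:
    - name: placeholder  # STUNner ignores the protocol/port given here
      protocol: UDP
      port: 9001
EOF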

And that's all. You don't need to worry about client-side NAT traversal and WebRTC media routing because STUNner has you covered! Even better, every time you change a Gateway API resource in Kubernetes, say, you update the GatewayConfig to reset the STUN/TURN credentials or change the protocol or port in a Gateway, the STUNner gateway operator will automatically pick up your modifications and update the underlying dataplane. Kubernetes is beautiful, isn't it?

Check your config

The current STUNner dataplane configuration is always made available via the convenient stunnerctl CLI utility. The below command dumps the config of the UDP gateway in a human-readable format.

stunnerctl -n stunner config udp-gateway
Gateway: stunner/udp-gateway (loglevel: "all:INFO")
Authentication type: static, username/password: user-1/pass-1
Listeners:
  - Name: stunner/udp-gateway/udp-listener
    Protocol: TURN-UDP
    Public address:port: 34.118.88.91:3478
    Routes: [stunner/iperf-server]
    Endpoints: [10.76.1.4, 10.80.4.47]

As it turns out, STUNner has successfully assigned a public IP and port to our Gateway and set the STUN/TURN credentials based on the GatewayConfig.

Testing

We have successfully configured STUNner to route client connections to the media-plane service but at the moment there is no backend there that would respond. Below we use a simplistic UDP greeter service for testing: every time you send some input, the greeter service will respond with a heartwarming welcome message.

  1. Fire up the UDP greeter service.

    The below manifest spawns the greeter in the default namespace and wraps it in a Kubernetes Service called media-plane. Recall that this is the target Service in our UDPRoute. Note that the type of the media-plane Service is ClusterIP, which means that Kubernetes will not expose it to the outside world: the only way for clients to obtain a response is via STUNner. (A hypothetical equivalent of this manifest is sketched at the end of this section.)

    kubectl apply -f deploy/manifests/udp-greeter.yaml
  2. We also need the ClusterIP assigned by Kubernetes to the media-plane service.

    export PEER_IP=$(kubectl get svc media-plane -o jsonpath='{.spec.clusterIP}')
  3. We also need a STUN/TURN client to actually initiate a connection. STUNner comes with a handy STUN/TURN client called turncat for this purpose. Once installed, you can fire up turncat to listen on the standard input and send everything it receives to STUNner. Type any input and press Enter, and you should see a nice greeting from your cluster!

    ./turncat - k8s://stunner/udp-gateway:udp-listener udp://${PEER_IP}:9001
    Hello STUNner
    Greetings from STUNner!

Note that we haven't specified the public IP address and port: turncat is clever enough to parse the running STUNner configuration from Kubernetes directly. Just specify the special STUNner URI k8s://stunner/udp-gateway:udp-listener, identifying the namespace (stunner here) and the name for the Gateway (udp-gateway), and the listener to connect to (udp-listener), and turncat will do the heavy lifting.

Note that your actual WebRTC clients do not need to use turncat to reach the cluster: all modern Web browsers and WebRTC clients come with a STUN/TURN client built in. Here, turncat is used only to simulate what a real WebRTC client would do when trying to reach STUNner.
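
For reference, a hypothetical stand-in for the greeter manifest used in step 1 might look like the below. This is not the actual deploy/manifests/udp-greeter.yaml, just a minimal socat-based equivalent that answers every UDP packet received on port 9001 with a greeting:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: media-plane
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: media-plane
  template:
    metadata:
      labels:
        app: media-plane
    spec:
      containers:
        - name: greeter
          image: alpine/socat
          # Reply to every incoming UDP packet on port 9001 with a greeting.
          args: ["UDP-LISTEN:9001,fork", "SYSTEM:echo Greetings from STUNner"]
---
apiVersion: v1
kind: Service
metadata:
  name: media-plane
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: media-plane
  ports:
    - protocol: UDP
      port: 9001
EOF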

Reconcile

Any time you see fit, you can update the STUNner configuration through the Gateway API: STUNner will automatically reconcile the dataplane for the new configuration.

For instance, you may decide to open up your WebRTC infrastructure on TLS/TCP as well; say, because an enterprise NAT on the client network path has gone berserk and actively filters anything except TLS/443. The below steps will do just that: open another gateway on STUNner, this time on the TLS/TCP port 443, and reattach the UDPRoute to both Gateways so that no matter which protocol a client may choose the connection will be routed to the media-plane service (i.e., the UDP greeter) by STUNner.

  1. Store your TLS certificate in a Kubernetes Secret. Below we create a self-signed certificate for testing; make sure to substitute it with a valid certificate in production.

    openssl genrsa -out ca.key 2048
    openssl req -x509 -new -nodes -days 365 -key ca.key -out ca.crt -subj "/CN=yourdomain.com"
    kubectl -n stunner create secret tls tls-secret --key ca.key --cert ca.crt
  2. Add the new TLS Gateway. Notice how the tls-listener now contains a tls object that refers to the above Secret, thereby assigning the TLS certificate to use with our TURN-TLS listener.

    kubectl apply -f - <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: tls-gateway
      namespace: stunner
    spec:
      gatewayClassName: stunner-gatewayclass
      listeners:
        - name: tls-listener
          port: 443
          protocol: TURN-TLS
          tls:
            mode: Terminate
            certificateRefs:
              - kind: Secret
                namespace: stunner
                name: tls-secret
    EOF
  3. Update the UDPRoute to attach it to both Gateways.

    kubectl apply -f - <<EOF
    apiVersion: stunner.l7mp.io/v1
    kind: UDPRoute
    metadata:
      name: media-plane
      namespace: stunner
    spec:
      parentRefs:
        - name: udp-gateway
        - name: tls-gateway
      rules:
        - backendRefs:
            - name: media-plane
              namespace: default
    EOF
  4. Fire up turncat again, but this time let it connect through TLS. This is achieved by specifying the name of the TLS listener (tls-listener) in the STUNner URI. The -i command line argument (--insecure) is added to prevent turncat from rejecting our insecure self-signed TLS certificate; this will not be needed when using a real signed certificate.

    ./turncat -i -l all:INFO - k8s://stunner/tls-gateway:tls-listener udp://${PEER_IP}:9001
    [...] turncat INFO: Turncat client listening on -, TURN server: tls://10.96.55.200:443, peer: udp://10.104.175.57:9001
    [...]
    Hello STUNner
    Greetings from STUNner!

    We have set the turncat loglevel to INFO to learn that this time turncat has connected via the TURN server tls://10.96.55.200:443. And that's it: STUNner automatically routes the incoming TLS/TCP connection to the UDP greeter service, silently converting from TLS/TCP to UDP in the background and back again on return.

Configuring WebRTC clients

Real WebRTC clients will need a valid ICE server configuration to use STUNner as the TURN server. STUNner is compatible with all client-side TURN auto-discovery mechanisms. When no auto-discovery mechanism is available, clients will need to be manually configured to stream audio/video media over STUNner.

The below JavaScript snippet will direct a WebRTC client to use STUNner as the TURN server. Make sure to substitute the placeholders (like <STUNNER_PUBLIC_ADDR>) with the correct configuration from the running STUNner config; don't forget that stunnerctl is always there for you to help.

var ICE_config = {
  iceServers: [
    {
      urls: 'turn:<STUNNER_PUBLIC_ADDR>:<STUNNER_PUBLIC_PORT>?transport=udp',
      username: '<STUNNER_USERNAME>',
      credential: '<STUNNER_PASSWORD>',
    },
  ],
};
var pc = new RTCPeerConnection(ICE_config);

Note that STUNner comes with a built-in authentication service that can generate a complete ICE configuration for reaching STUNner through a standards-compliant HTTP REST API.
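
For illustration only, querying that REST API could look something like the below; the Service name, namespace, port, and query parameters are assumptions here, so consult the authentication service documentation for the actual endpoint:

# Hypothetical names: adjust the Service, namespace, and port to your installation.
kubectl -n stunner-system port-forward svc/stunner-auth 8088:8088 &
curl "http://127.0.0.1:8088/ice?service=turn&username=user-1"
# The response is expected to be a ready-to-use ICE configuration, roughly:
# {"iceServers":[{"urls":["turn:<addr>:<port>?transport=udp"],"username":"...","credential":"..."}]}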

Tutorials

The below series of tutorials demonstrates how to leverage STUNner to deploy different WebRTC applications into Kubernetes.

Basics

  • Opening a UDP tunnel via STUNner: This introductory tutorial shows how to tunnel an external connection via STUNner to a UDP service deployed into Kubernetes. The demo can be used to quickly check and benchmark a STUNner installation.

Headless deployment mode

  • Direct one to one video call via STUNner: This tutorial showcases STUNner acting as a TURN server for two WebRTC clients to establish connections between themselves, without the mediation of a media server.

Media-plane deployment model

  • One to one video call with Kurento: This tutorial shows how to use STUNner to connect WebRTC clients to a media server deployed into Kubernetes behind STUNner in the media-plane deployment model. All this happens without modifying the media server code in any way, just by adding 5-10 lines of straightforward JavaScript to configure clients to use STUNner as the TURN server.
  • Magic mirror with Kurento: This tutorial has been adapted from the Kurento magic mirror demo, deploying a basic WebRTC loopback server behind STUNner with some media processing added. In particular, the application uses computer vision and augmented reality techniques to add a funny hat on top of faces.
  • Video-conferencing with LiveKit: This tutorial helps you deploy the LiveKit WebRTC media server behind STUNner. The docs also show how to obtain a valid TLS certificate to secure your signaling connections, courtesy of the cert-manager project, nip.io and Let's Encrypt.
  • Video-conferencing with Jitsi: This tutorial helps you deploy a fully fledged Jitsi video-conferencing service into Kubernetes behind STUNner. The docs also show how to obtain a valid TLS certificate to secure your signaling connections, using cert-manager, nip.io and Let's Encrypt.
  • Video-conferencing with mediasoup: This tutorial helps you deploy the mediasoup WebRTC media server behind STUNner. The docs also show how to obtain a valid TLS certificate to secure your signaling connections, courtesy of the cert-manager project, nip.io and Let's Encrypt.
  • Cloud-gaming with Cloudretro: This tutorial lets you play Super Mario or Street Fighter in your browser, courtesy of the amazing CloudRetro project and, of course, STUNner. The demo also presents a simple multi-cluster setup, where clients can reach the game-servers in their geographical locality to minimize latency.
  • Remote desktop access with Neko: This demo showcases STUNner providing an ingress gateway service to a remote desktop application. We use neko.io to run a browser in a secure container inside the Kubernetes cluster, and stream the desktop to clients via STUNner.

Documentation

The documentation of the stable release can be found here. The documentation for the latest development release can be found here.

Caveats

STUNner is a work-in-progress. Some features are missing, others may not work as expected. The notable limitations at this point are as follows.

  • STUNner targets only a partial implementation of the Kubernetes Gateway API. In particular, only GatewayClass, Gateway and UDPRoute resources are supported. This is intended: STUNner deliberately ignores some complexity in the Gateway API and deviates from the prescribed behavior in some cases, all in the name of simplifying the configuration process. The STUNner Kubernetes gateway operator docs contain a detailed list on the differences.
  • STUNner lacks official support for IPv6. Clients and peers reachable only on IPv6 may or may not be able to connect to STUNner, depending on the version you're using. Please file a bug if you absolutely need IPv6 support.

Milestones

  • v0.9: Demo release: STUNner basic UDP/TURN connectivity + helm chart + tutorials.
  • v0.10: Dataplane: Long-term STUN/TURN credentials and STUN/TURN over TCP/TLS/DTLS in standalone mode.
  • v0.11: Control plane: Kubernetes gateway operator and dataplane reconciliation.
  • v0.12: Security: Expose TLS/DTLS settings via the Gateway API.
  • v0.13: Observability: Prometheus + Grafana dashboard.
  • v0.15: Performance: Per-allocation CPU load-balancing for UDP.
  • v0.16: Management: Managed STUNner dataplane.
  • v0.17: First release candidate: All Gateway and STUNner APIs move to v1.
  • v0.18: Stabilization: Second release candidate.
  • v0.19: The missing pieces: Third release candidate.
  • v1.0: GA

Help

STUNner development is coordinated in Discord, feel free to join.

License

Copyright 2021-2023 by its authors. Some rights reserved. See AUTHORS.

MIT License - see LICENSE for full text.

Acknowledgments

Initial code adopted from pion/stun and pion/turn.

stunner's People

Contributors

amm0nite, arthuro555, bbalint105, botl7mp, codeding, davidkornel, dklimpel, levaitamas, megzo, nmate, pamelia, rg0now, vidarhun, vitorespindola, zifeo


stunner's Issues

No timeout for inactive peers?

We have a situation in which some of the peers do not signal their disconnection (and STUNner does not appear to be handling UDP timeouts). Downstream, our application is unable to detect these disconnections (since its connection to STUNner is still live).

Is there a way to detect and/or configure peer disconnections & timeouts?

[Inquiry] general progress and roadmap of this project

Hi l7mp team!

Great work as it seems to be, we are interested in this project which makes webrtc+k8s plausible.

Just wondering what's the current status in terms of the Milestones for this project? Is there any timeline/roadmap we can follow?

Currently we run a hybrid topology for our media streaming: we host coturn servers in a public cloud, and this coturn cluster is co-located with a bunch of media servers (along with application servers) all hosted in k8s. Feature-wise it is just fine, but it surely lacks scalability and visibility.

Can't get a public IP address

Hi everyone,
First of all, I'm just starting to work with kubernetes and I have a pretty basic understanding, but I'm trying to learn, so excuse me in case my question is too basic.
I've been working on a WebRTC project which I need to scale, and after I found STUNner I thought it might be the perfect solution to my needs. I've been playing around with it, but I haven't managed to get it working yet.
I've followed the Getting started tutorial on both a minikube cluster and a real k8s cluster from my company to which I have access, but I can't get a public IP address in either of the environments.

After successfully following every step, when I execute the command stunnerctl running-config stunner/stunnerd-config, I end up with something like:

STUN/TURN authentication type:  plaintext
STUN/TURN username:             user-1
STUN/TURN password:             pass-1
Listener 1
        Name:   stunner/udp-gateway/udp-listener
        Listener:       stunner/udp-gateway/udp-listener
        Protocol:       UDP
        Public port:    30726

which doesn't have the Public IP field.
If I dump the entire running configuration I get

{
  "version": "v1alpha1",
  "admin": {
    "name": "stunner-daemon",
    "loglevel": "all:INFO",
    "healthcheck_endpoint": "http://0.0.0.0:8086"
  },
  "auth": {
    "type": "plaintext",
    "realm": "stunner.l7mp.io",
    "credentials": {
      "password": "pass-1",
      "username": "user-1"
    }
  },
  "listeners": [
    {
      "name": "stunner/udp-gateway/udp-listener",
      "protocol": "UDP",
      "public_port": 30726,
      "address": "$STUNNER_ADDR",
      "port": 3478,
      "min_relay_port": 32768,
      "max_relay_port": 65535,
      "routes": [
        "stunner/media-plane"
      ]
    }
  ],
  "clusters": [
    {
      "name": "stunner/media-plane",
      "type": "STATIC",
      "protocol": "udp"
    }
  ]
}

Which gives me no clue about what could be happening.

I think it could be related to the Gateway class not being implemented on minikube/my real cluster, but I haven't found any way to check if this is true or if it's related to something else.

As I can't get a public IP, I can't test any of the examples, which is a show-stopper for me.

Could somebody give me any cues about what might be happening?

Thanks a lot.

Publish workflow fails if separately pushed commits are too rapid

Issue: If two commits are pushed separately within a short time span, the second job in the stunner-helm repo will have a different local l7mp.io head, which will block the push.

Solution: Prevent two jobs of the same action from running at the same time. Queueing them would be a good solution.

Make a better job at documenting that STUNner ignores the port in backend Services

It seems that we have to live with this monstrosity until support for service-port ranges lands in Kubernetes. The only way we can deal with the fact that media servers use entire port ranges is to ignore the Service port in UDPRoutes. This, however, often confuses users (quite understandably).

For now, we have to do a better job of documenting the fact that we completely omit ports in UDPRoute backend Services.

Support a stable and a dev release channel in the CI/CD pipeline and the Helm charts

The current release process is not optimal in the sense that we cannot distinguish a dev release channel (which gets the cutting-edge features but may sometimes break) from the stable channel (rolled from the last major release and considered rock-solid). We already distinguish between the two in our semantic versioning scheme: all releases tagged with a new MAJOR or MINOR version are considered stable releases, while new PATCH versions are thought to be released only on the dev channel. The only problem is that this versioning scheme is not reflected in the Helm charts, so users cannot default to the stable channel: after each git tag, every new helm install will automatically install the cutting-edge version. This issue is to coordinate the work towards updating our release process so that people can opt out from getting the latest and greatest from STUNner.

Here is a plan for how this should work:

  • The user can choose the stable or the dev release channel when installing STUNner via the Helm charts. In particular, the below would choose the stable channel, which should also become the default:
    helm install stunner stunner/stunner --create-namespace --namespace=<your-namespace> --release-channel=stable
    
    While the below would be optional and choose the dev channel (unstable can be an alias on dev, I don't insist on the name here):
    helm install stunner stunner/stunner --create-namespace --namespace=<your-namespace> --release-channel=unstable
    
    Same for the stunner-gateway-operator Helm chart.
  • The helm chart would work as follows:
    • if the user chooses the dev channel we install the stunnerd:latest and stunner-gateway-operator:latest images from Docker hub,
    • if they choose the stable channel we install stunnerd:stable and stunner-gateway-operator:stable images.
  • The CI/CD pipeline on the stunner and the stunner-gateway-operator repos should be updated so that after every new git tag we do the following:
    • if only the PATCH version changes we rebuild the image and we add only the latest tag before uploading to Docker hub,
    • if the MAJOR or MINOR version changes we add both the latest and the stable tags to the image and update Docker hub.

This would make it possible to avoid rebuilding the Helm chart after each git tag.
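
A hypothetical CI step implementing the tagging rule could look like the below sketch; the image name and the version-detection heuristic (treating a .0 PATCH component as a new MAJOR/MINOR release) are illustrative only:

#!/bin/sh
# Placeholder: the freshly pushed git tag, e.g. v0.16.0 or v0.16.2.
VERSION="$(git describe --tags --abbrev=0)"

# Every tag rebuilds the image and refreshes the dev channel.
docker build -t l7mp/stunnerd:latest .
docker push l7mp/stunnerd:latest

# Promote to the stable channel only when MAJOR or MINOR changed,
# approximated here as "the PATCH component is zero".
case "$VERSION" in
  v*.*.0)
    docker tag l7mp/stunnerd:latest l7mp/stunnerd:stable
    docker push l7mp/stunnerd:stable
    ;;
esac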

Operational query on running stunner in headless mode

Currently I am a bit confused about the scaling operation of STUNner in a one-to-one call scenario. This is the setup: initially I would have one STUNner with a LoadBalancer service (public-facing IP) and a cluster Service IP for the WebRTC client within k8s. This works fine as long as there is only one STUNner pod. But once I scale the STUNner pods to 3 instances, I would assume the WebRTC connection would not establish because there is no control over which STUNner pod the LB and the cluster Service land the BIND requests on, correct?

So in this case, what should be done to scale? A naive way I can think of is to assign a new LB public address for each STUNner and use a headless Service within k8s. But this adds extra complexity: how should I ensure that both clients use the same STUNner?

Thanks in advance

Specifying the LoadBalancer

Hello,
Sorry for the dumb question, but is there a way to specify the LoadBalancer instead of creating a new one?
Or perhaps it is better if I describe my issue.
I use the Hetzner cloud with this automation tool to deploy the k8s cluster.
Whenever I deploy the stunner UDPRoute, it creates a new LB, which never gets to a healthy state. See the screenshot below.
Happy to learn how I can deal with this issue.
image

Question: Why stunner gives the POD_IP as the RELAY candidate?

I am not sure though if stunner requires a LoadBalancer service with a public IP to work. In my on-prem k8s there is no LB available, so I changed the gateway svc to be NodePort instead.

But then when I do ICE trickle, I get the POD_IP as the relay candidate, hence the WebRTC connection could not be established.

{"level":"info","ts":1660105223.5055137,"logger":"trickleCmd","caller":"cmd/trickle.go:55","msg":"ICECandidate: udp4 relay 10.42.4.87:56684 related 0.0.0.0:43267"}

I also noticed that the stunner pod has this configuration

    Environment:
      STUNNER_ADDR:   (v1:status.podIP)

So what am I doing incorrectly here? I wanted to set up a headless STUNner to just act as a TURN server for two media endpoints.

Thanks in advance

Feature: custom annotations to Stunner gateway services

In many cloud hosted Kubernetes environments LoadBalancer type services need custom annotations in order to tell the cloud provider what kind of external IP you want. A couple of examples:

To that end it would be nice if we could add custom annotations to the services created by Stunner.
My suggestion is to add a field to the Stunner GatewayConfig spec, e.g. like this:

apiVersion: stunner.l7mp.io/v1alpha1
kind: GatewayConfig
metadata:
 name: stunner-gatewayconfig
 namespace: default
spec:
 realm: stunner.l7mp.io
 authType: plaintext
 userName: "user-1"
 password: "pass-1"
 loadBalancerServiceAnnotations:
   service.beta.kubernetes.io/aws-load-balancer-type: nlb
   service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
   service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing

Then the operator can just copy the annotations under the loadBalancerServiceAnnotations key as-is to the created LoadBalancer service.

How debug problem?

Hello,

I have a running udp-gateway (LoadBalancer) and a running stunner pod.

$ kubectl get service -n stunner
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                         AGE
stunner       ClusterIP      10.245.242.16   <none>           3478/UDP                        2d1h
udp-gateway   LoadBalancer   10.245.125.7    138.68.119.xx   3478:32224/UDP,8086:32360/TCP   2d1h

$ kubectl get pod -n stunner
NAME                       READY   STATUS    RESTARTS   AGE
stunner-7ff4875b47-xb4z9   2/2     Running   0          102m

I created DNS record

A  stunner.mywebpage.com  138.68.119.xx

Now I try a connection on the page https://icetest.info/

Result of my test is

image

It looks like my STUN/TURN server is not working.

How can I find out the reason for my problem?

Stunner gateway operator `ERROR updater cannot update service` on AWS + EKS + ALB due to `"cannot upsert service \"stunner/udp-gateway\": Service \"udp-gateway\" is invalid: spec.loadBalancerClass: Invalid value: \"null\": may not change once set"`

Hey all,
I'm getting this error when provisioning Stunner on AWS + EKS + ALB.

The error seems pretty straightforward. Here's my stunner logs:

2023-10-08T19:36:50.449897717Z  ERROR   updater cannot update service   {"operation": "unchanged", "service": "{\"metadata\":{\"name\":\"udp-gateway\",\"namespace\":\"stunner\",\"creationTimestamp\":null,\"labels\":{\"stunner.l7mp.io/owned-by\":\"stunner\",\"stunner.l7mp.io/related-gateway-name\":\"udp-gateway\",\"stunner.l7mp.io/related-gateway-namespace\":\"stunner\"},\"annotations\":{\"external-dns.alpha.kubernetes.io/hostname\":\"udp.mycompany.com\",\"service.beta.kubernetes.io/aws-load-balancer-nlb-target-type\":\"ip\",\"service.beta.kubernetes.io/aws-load-balancer-scheme\":\"internet-facing\",\"service.beta.kubernetes.io/aws-load-balancer-type\":\"external\",\"stunner.l7mp.io/related-gateway-name\":\"stunner/udp-gateway\"},\"ownerReferences\":[{\"apiVersion\":\"gateway.networking.k8s.io/v1beta1\",\"kind\":\"Gateway\",\"name\":\"udp-gateway\",\"uid\":\"8112c527-66a9-455e-a030-4584d45f203f\"}]},\"spec\":{\"ports\":[{\"name\":\"udp-listener\",\"protocol\":\"UDP\",\"port\":3478,\"targetPort\":0}],\"selector\":{\"app\":\"stunner\"},\"type\":\"LoadBalancer\"},\"status\":{\"loadBalancer\":{}}}", "error": "cannot upsert service \"stunner/udp-gateway\": Service \"udp-gateway\" is invalid: spec.loadBalancerClass: Invalid value: \"null\": may not change once set"}
github.com/l7mp/stunner-gateway-operator/internal/updater.(*Updater).ProcessUpdate
        /workspace/internal/updater/updater.go:115
github.com/l7mp/stunner-gateway-operator/internal/updater.(*Updater).Start.func1
        /workspace/internal/updater/updater.go:62
  • It appears the culprit is the property loadBalancerClass: service.k8s.aws/nlb present on the LoadBalancer service that gets created based on the GatewayConfig defined below.
  • The service is automatically patched by the ALB controller due to the annotations service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip and service.beta.kubernetes.io/aws-load-balancer-type: external.
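
To verify which loadBalancerClass the provisioned Service actually carries (the namespace and Service name below match this report), one could run something along these lines:

# Print the loadBalancerClass set on the Gateway's LoadBalancer Service.
kubectl -n stunner get service udp-gateway -o jsonpath='{.spec.loadBalancerClass}{"\n"}'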

configurations

Here's my GatewayConfig:

apiVersion: stunner.l7mp.io/v1alpha1
kind: GatewayConfig
metadata:
  name: stunner-gatewayconfig
  namespace: ${kubernetes_namespace.stunner.metadata[0].name}
spec:
  realm: stunner.l7mp.io
  authType: plaintext
  userName: "user-1"
  password: "${random_password.stunner_gateway_auth_password.result}"
  loadBalancerServiceAnnotations:
    external-dns.alpha.kubernetes.io/hostname: ${local.udp_gateway_host}.${local.fqdn}
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-type: external

Here's the service that gets provisioned using the above config:

apiVersion: v1
kind: Service
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: udp.mycompany.com
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-type: external
    stunner.l7mp.io/related-gateway-name: stunner/udp-gateway
  labels:
    stunner.l7mp.io/owned-by: stunner
    stunner.l7mp.io/related-gateway-name: udp-gateway
    stunner.l7mp.io/related-gateway-namespace: stunner
  name: udp-gateway
  namespace: stunner
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 172.20.21.184
  clusterIPs:
  - 172.20.21.184
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerClass: service.k8s.aws/nlb
  ports:
  - name: udp-listener
    nodePort: 30733
    port: 3478
    protocol: UDP
  selector:
    app: stunner
  type: LoadBalancer

Now, everything appears to be working, but the error is there... I don't know the implications of it.

Rewrite `stunnerctl` in Go

This issue tracks the progress on rewriting stunnerctl in Go.

stunnerctl is a small CLI utility that simplifies the interaction with STUNner. Currently it offers a single command, stunnerctl running-config, which allows dumping a gateway hierarchy in a human-readable form. In the long run, stunnerctl will gain further features, like

  • stunnerctl version/status to get current cluster-wide STUNner version and status,
  • stunnerctl config as a fancier form of the current running-config functionality,
  • stunnerctl install to install STUNner via the CLI,
  • stunnerctl monitor/dashboard for monitoring, and
  • stunnerctl connect to control multicluster STUNner (once we implement it).

In addition, stunnerctl will need to provide the standard kubectl goodies, like support for taking Kubernetes config from KUBECONFIG, --kubeconfig, or --context.

Currently stunnerctl is a Bash script that talks to Kubernetes via kubectl and parses JSON responses using jq. Understandably, this is not really future-proof.

The goal is to rewrite stunnerctl in Go using the standard Go CLI tooling (viper, cobra, etc.).

Stunner service still in pending status

Hi,

I have similar issue as my previous #96

I installed and configured stunner, but my stunner service is still in Pending status.

$ kubectl get pods -n stunner
NAME                       READY   STATUS    RESTARTS   AGE
stunner-7ff4875b47-l9jsp   0/2     Pending   0          6m22s

I am using DOKS (DigitalOcean Kubernetes).

Is there some way to debug my stunner service?

Feature: Implementation of coturn's `use-auth-secret` TURN authentication mode in STUNner

The use of time-windowed TURN credentials requires the verification of the ephemeral TURN credentials in the TURN server on processing ALLOCATE requests. In this scheme, the TURN username is a colon-delimited combination of the expiration (UNIX) timestamp and a client id, while the password is computed from a secret key shared with the TURN server and the username by performing base64(hmac(secret key, username)).
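
For illustration, generating such an ephemeral credential pair on the application side might look like the below shell sketch; it assumes the HMAC-SHA1 variant used by coturn, and the secret and client id are placeholders:

# Placeholders: the secret shared with the TURN server and an arbitrary client id.
SECRET="my-shared-secret"
EXPIRY=$(( $(date +%s) + 3600 ))        # credential valid for one hour
USERNAME="${EXPIRY}:some-client-id"     # colon-delimited: expiry timestamp + client id
# password = base64(hmac(secret key, username))
PASSWORD=$(printf '%s' "$USERNAME" | openssl dgst -sha1 -hmac "$SECRET" -binary | base64)
echo "username=$USERNAME password=$PASSWORD"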

Currently STUNner implements only a part of the above scheme (taken verbatim from pion/turn): it assumes that the username consists of a timestamp only and errs if there is a client id in the username. This issue addresses this limitation.

Plan is to modify the longterm auth handler to implement the whole server-side authentication logic:

  • split the username by the colon delimiter
  • take the first item that looks like an integer (strconv.Atoi is successful)
  • consider this item as a UNIX timestamp defining the expiry of the timestamp
  • fail authentication if the timestamp is in the past: timestamp < time.Now().Unix()
  • otherwise take the original username, perform the HMAC and return the resultant hash

The issue also covers updating the authentication docs (/doc/AUTH.md) and adding tests.

Docs:

Architecture check for GStreamer RTP stream -> StUNner -> WebRTC Client for an IoT device 1-to-1 or 1-to-few setup

This is dizzying. I am most familiar with Kubernetes, so that is how I came across STUNner. For me, it needs to be manageable and scalable to be useful. Hence, K8s.

I have an IoT device that has MQTT as a protocol. It has video and GStreamer as the video stream provider.

Currently, I have a setup where the device would request a "session" and call an API to setup an RTMP stream from a service. The ingest URL is sent back to the device so that it can begin streaming to the event. When that is finished I have another call the device can make to locate the encoded asset via the event and provide a traditional LL-HLS stream endpoints to an end user.

This works but is certainly not going to be very cost effective or even scalable for that matter.

I want to use something that is more WebRTC based but I feel overwhelmed by the options at this moment. I have been looking into MediaSoup, Janus and others for creating a middleware service that can do a similar feat that I have achieved with RTMP.

However, I cannot understand how the recipe is supposed to go together. The streaming "work" is done on the device, so I need it to pass through to a client. Looking into it, I see that GStreamer is somewhat analogous to OBS but with more capabilities, such as producing an RTP stream. I practice with my webcam, which is why I think OBS is slightly similar, as it will ingest my RTMP stream URL to stream my webcam. I am like 2 weeks into this subject, so please bear with me.

At the moment OBS doesn't work with WebRTC, and Millicast was bought by Dolby, so that is a CPaaS option; it's hard to see how these things all work together, which makes learning difficult. I wish there was more with OBS, because it would relate so well to these use cases of having the webcam as one part and the client as the other part.

So my questions are this.

  1. Can I use GStreamer to create an RTP stream and pass it directly into STUNner,
  2. Or, do I need to use a middleware WebRTC server such as PION, Janus, or MediaSoup plus STUNner
  3. With the stream collected by STUNner (for example), can I just IP-connect a player, like HLS/DASH, to the incoming stream? For some reason I think this is where WHIP/WHEP comes in, but I do not have a good idea.
  4. If it isn't something like an IP connect, then I am assuming it is a very front-end, application-based connection through a React or Angular SPA app that will have to more or less connect to the WebRTC server. In the architecture I reviewed here, headless as an example, the APP was in the same Kubernetes namespace. Is that for a reason? Does it have to be there? Is there something I can deliver in an API to make it more like the WHIP flow or HLS flow of URI ingestion? After all, it's just a protocol.

I am completely sorry if my questions are noob 9000 status but I would like to learn more and know what track I should be on and if perhaps this is the right one.

On GKE, how to specify an IP for udp-gateway

Dear All,

I tried to deploy stunner on GKE. As per your docs, when applying the file stunner-helm/livekit-call-stunner.yaml, it will auto-create a udp-gateway service which then automatically gets an IP. But I want to specify an IP for it, as I did below, yet it still gets a different IP.

apiVersion: stunner.l7mp.io/v1alpha1
kind: GatewayConfig
metadata:
  name: stunner-gatewayconfig
  namespace: stunner-dev
spec:
  realm: stunner.l7mp.io
  authType: plaintext
  userName: "user-1"
  password: "pass-1"
  loadBalancerServiceAnnotations:
    networking.gke.io/load-balancer-type: Internal

---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: udp-gateway
  namespace: stunner-dev
spec:
  gatewayClassName: stunner-gatewayclass
  listeners:
    - name: udp-listener
      port: 3478
      protocol: TURN-UDP
  addresses:
    - type: NamedAddress
      value: my_reserved_ip4

The "my_reserved_ip4" created by command:

gcloud compute addresses create IP_ADDRESS_NAME \
    --purpose=SHARED_LOADBALANCER_VIP \
    --region=COMPUTE_REGION \
    --subnet=SUBNET \
    --project=PROJECT_ID

Could you please help?

Thanks and regards.

Cannot apply GatewayClass from README.md on K8s v1.27

Applying this GatewayClass from your README.md

kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GatewayClass
metadata:
  name: stunner-gatewayclass
spec:
  controllerName: "stunner.l7mp.io/gateway-operator"
  parametersRef:
    group: "stunner.l7mp.io"
    kind: GatewayConfig
    name: stunner-gatewayconfig
    namespace: stunner
  description: "STUNner is a WebRTC media gateway for Kubernetes"
EOF

raises a webhook error on GKE Autopilot 1.27

Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "gateway-api-alpha-deprecated-use-beta-version.gke.io": failed to call webhook: Post "https://gateway-api-alpha-deprecated-use-beta-version.kube-system.svc:443/?timeout=1s": service "gateway-api-alpha-deprecated-use-beta-version" not found

Using apiVersion: gateway.networking.k8s.io/v1beta1 works.

The same happens with Gateway.
However, UDPRoute only works with v1alpha2.

kubectl version --short

Client Version: v1.26.3
Kustomize Version: v4.5.7
Server Version: v1.27.3-gke.100

stunnerd stops responding to filesystem events

The stunnerd pod seems to hang, with the following logs:

14:09:00.748041 main.go:148: stunnerd WARNING: unhnadled notify op on config file "/etc/stunnerd/stunnerd.conf" (ignoring): CHMOD 
14:09:00.748077 main.go:133: stunnerd WARNING: config file deleted "REMOVE", disabling watcher
14:09:00.748090 main.go:138: stunnerd WARNING: could not remove config file "/etc/stunnerd/stunnerd.conf" from watcher: can't remove non-existent inotify watch for: /etc/stunnerd/stunnerd.conf
14:09:01.400101 reconcile.go:145: stunner INFO: reconciliation ready: new objects: 0, changed objects: 3, deleted objects: 0

After this point, config file updates are not picked up by stunnerd any more

Let turncat to handle FQDNs in TURN URIs

Currently turncat cannot connect to TURN servers using a TURN URI that contains the FQDN of the server, e.g.: turn://example.com.

The reason is that during startup we try to create a fake STUNner config from the given URI and when we try to validate it:

if err := c.Validate(); err != nil {

we run into an error because we assume that the address is a valid IP address:

return fmt.Errorf("invalid listener address %s in listener configuration: %q",

help - intermitent failures connecting to workers on `cloudretro` example on AWS + EKS + ALB

I've been building the cloudretro example for a while on multiple kubernetes distributions without issues!
Now I'm trying to run this example on an AWS setup (EKS + Fargate + ALB) and I'm getting some intermittent errors:

  • Sometimes I'm able to connect to the workers, and other times I have timeouts
  • I suspect it has something to do with the ICE candidates that are reported in the application - I've posted evidence of the differences below

Versions:

  • I'm using both stunner and stunner-gateway-operator versions 0.16.0 (chart and app)
  • EKS 1.27

Based on the information below, what is the culprit here?
Is there anything I can tweak server side to facilitate this discovery and ensure a successful connection on first attempt?

error attempt

The following errors were captured in the browser using the Developer tools.

  • When I first reach the coordinator, I usually get the error [rtcp] ice gathering was aborted due to timeout 2000ms.
  • I notice that only one user candidate 02895ab7-03e2-4f4a-9afe-daa99822e2d5.local gets reported, and it's not reachable (and should not be!)

Error console:

keyboard.js?v=5:128 [input] keyboard has been initialized
joystick.js?v=3:275 [input] joystick has been initialized
touch.js?v=3:304 [input] touch input has been initialized
socket.js?v=4:36 [ws] connecting to wss://home.company.com/ws?room_id=&zone=
socket.js?v=4:42 [ws] <- open connection
socket.js?v=4:43 [ws] -> setting ping interval to 2000ms
controller.js?v=8:79 [ping] <-> {http://worker.company.com:9000/echo: 9999}
rtcp.js?v=4:17 [rtcp] <- received coordinator's ICE STUN/TURN config: [{"urls":"turn:udp.company.com:3478","username":"user-1","credential":"fQvzu2pFOBxtW5Al"}]
rtcp.js?v=4:106 [rtcp] ice gathering
rtcp.js?v=4:120 [rtcp] <- iceConnectionState: checking
rtcp.js?v=4:100 [rtcp] user candidate: {"candidate":"candidate:1680066927 1 udp 2113937151 02895ab7-03e2-4f4a-9afe-daa99822e2d5.local 54853 typ host generation 0 ufrag rj2C network-cost 999","sdpMid":"0","sdpMLineIndex":0,"usernameFragment":"rj2C"}
rtcp.js?v=4:108 [rtcp] ice gathering was aborted due to timeout 2000ms

success attempt

After a couple of retries, we finally have success:

  • Notice that now there are 3 user candidates, one of them is reachable (the one with the public IP)
rtcp.js?v=4:100 [rtcp] user candidate: {"candidate":"candidate:2537811000 1 udp 2113937151 5774c39d-3ada-44b9-b95f-89a858000ac4.local 54903 typ host generation 0 ufrag sRpm network-cost 999","sdpMid":"0","sdpMLineIndex":0,"usernameFragment":"sRpm"}
rtcp.js?v=4:100 [rtcp] user candidate: {"candidate":"candidate:3567609059 1 udp 1677729535 89.180.168.100 42221 typ srflx raddr 0.0.0.0 rport 0 generation 0 ufrag sRpm network-cost 999","sdpMid":"0","sdpMLineIndex":0,"usernameFragment":"sRpm"}
rtcp.js?v=4:100 [rtcp] user candidate: {"candidate":"candidate:4028349664 1 udp 33562623 10.0.22.42 35233 typ relay raddr 

Full log of the success connection:

keyboard.js?v=5:128 [input] keyboard has been initialized
joystick.js?v=3:275 [input] joystick has been initialized
touch.js?v=3:304 [input] touch input has been initialized
socket.js?v=4:36 [ws] connecting to wss://home.company.com/ws?room_id=&zone=
socket.js?v=4:42 [ws] <- open connection
socket.js?v=4:43 [ws] -> setting ping interval to 2000ms
controller.js?v=8:79 [ping] <-> {http://worker.company.com:9000/echo: 9999}
rtcp.js?v=4:17 [rtcp] <- received coordinator's ICE STUN/TURN config: [{"urls":"turn:udp.company.com:3478","username":"user-1","credential":"fQvzu2pFOBxtW5Al"}]
rtcp.js?v=4:106 [rtcp] ice gathering
rtcp.js?v=4:120 [rtcp] <- iceConnectionState: checking
rtcp.js?v=4:100 [rtcp] user candidate: {"candidate":"candidate:2537811000 1 udp 2113937151 5774c39d-3ada-44b9-b95f-89a858000ac4.local 54903 typ host generation 0 ufrag sRpm network-cost 999","sdpMid":"0","sdpMLineIndex":0,"usernameFragment":"sRpm"}
rtcp.js?v=4:100 [rtcp] user candidate: {"candidate":"candidate:3567609059 1 udp 1677729535 89.180.168.100 42221 typ srflx raddr 0.0.0.0 rport 0 generation 0 ufrag sRpm network-cost 999","sdpMid":"0","sdpMLineIndex":0,"usernameFragment":"sRpm"}
rtcp.js?v=4:100 [rtcp] user candidate: {"candidate":"candidate:4028349664 1 udp 33562623 10.0.22.42 35233 typ relay raddr 89.180.168.178 rport 42221 generation 0 ufrag sRpm network-cost 999","sdpMid":"0","sdpMLineIndex":0,"usernameFragment":"sRpm"}
rtcp.js?v=4:113 [rtcp] ice gathering completed
rtcp.js?v=4:120 [rtcp] <- iceConnectionState: connected
rtcp.js?v=4:123 [rtcp] connected...

I appreciate any help on this matter!

How to add additional games to the `cloudretro` demo?

I would like to test additional games on the cloudretro demo... more importantly, I would like to test two player games for a personal project.

I already tried to add more NES ROMs to the image, but the cloudretro demo only loads with the Super Mario Bros game.

@bbalint105 could you shed some light on how we can add additional ROMs to the base image?

Thank you!

Use of Nodeport instead of LoadBalancer

Hello,
I'm currently trying to install stunner on my k8s cluster, but my provider doesn't support UDP LoadBalancers. Is there a way to use a NodePort when creating the UDP Gateway instead of a UDP LoadBalancer?
Thanks in advance.

Question: Route based on host?

I have tried searching the documentation, but it seems I cannot find information about how to have multiple distinct backends on the same frontend.

ie.

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: udp-gateway
  namespace: default
spec:
  gatewayClassName: stunner-gatewayclass
  listeners:
    - name: udp-listener
      port: 3478
      protocol: UDP

and then n backend services:

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: UDPRoute
metadata:
  name: iperf-server
  namespace: team1
spec:
  parentRefs:
    - name: udp-gateway
      namespace: default
  rules:
    - backendRefs:
        - name: iperf-server
          namespace: team1

and

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: UDPRoute
metadata:
  name: iperf-server
  namespace: team2
spec:
  parentRefs:
    - name: udp-gateway
      namespace: default
  rules:
    - backendRefs:
        - name: iperf-server
          namespace: team2

I don't really see anything in the gateway spec that would allow for this, and no attributes in stunner that would identify whether traffic is sent to team1's or team2's iperf-server.

Question: Can stunner be used as a pure relay server

Description:
I am deploying STUNner in k8s as a pure TURN server (with the gateway operator); all clients are outside the cluster. The clients have created a data channel successfully, but for the video stream the peer IP is not in the endpoint list (which contains STUNner's ClusterIP and pod IP), so the permission was denied. Is it possible to set the endpoint to 0.0.0.0/0?

Here is the code where the IP is compared:
https://github.com/l7mp/stunner/blob/v0.14.0/internal/object/cluster.go#L197

Because I am not familiar with TURN, is there something I have misunderstood? Any info will be helpful, thanks.

Some stunnerd daemon logs:

06:21:08.720580 cluster.go:189: stunner-cluster-sweeper/stunner-headless TRACE: Route: cluster "sweeper/stunner-headless" of type STATIC, peer IP: 0.0.1.1
06:21:08.720588 cluster.go:196: stunner-cluster-sweeper/stunner-headless TRACE: considering endpoint {"10.0.0.143" "ffffffff"}
06:21:08.720593 cluster.go:196: stunner-cluster-sweeper/stunner-headless TRACE: considering endpoint {"10.254.123.34" "ffffffff"}
06:21:08.720597 handlers.go:118: stunner-auth DEBUG: permission denied on listener "sweeper/udp-gateway/udp-listener" for client "172.19.24.29:10709" to peer 0.0.1.1: no route to endpoint
06:21:08.720601 turn.go:235: turn INFO: permission denied for client 172.19.24.29:10709 to peer 0.0.1.1
06:21:08.727499 handlers.go:37: stunner-auth INFO: plaintext auth request: username="admin" realm="stunner.l7mp.io" srcAddr=172.19.24.29:10709
06:21:08.727512 handlers.go:101: stunner-auth DEBUG: permission handler for listener "sweeper/udp-gateway/udp-listener": client "172.19.24.29:10709", peer "0.0.1.1"
06:21:08.727516 handlers.go:106: stunner-auth TRACE: considering route to cluster "sweeper/stunner-headless"
06:21:08.727520 handlers.go:108: stunner-auth TRACE: considering cluster "sweeper/stunner-headless"
06:21:08.727523 cluster.go:189: stunner-cluster-sweeper/stunner-headless TRACE: Route: cluster "sweeper/stunner-headless" of type STATIC, peer IP: 0.0.1.1
06:21:08.727528 cluster.go:196: stunner-cluster-sweeper/stunner-headless TRACE: considering endpoint {"10.0.0.143" "ffffffff"}
06:21:08.727532 cluster.go:196: stunner-cluster-sweeper/stunner-headless TRACE: considering endpoint {"10.254.123.34" "ffffffff"}
06:21:08.727536 handlers.go:118: stunner-auth DEBUG: permission denied on listener "sweeper/udp-gateway/udp-listener" for client "172.19.24.29:10709" to peer 0.0.1.1: no route to endpoint
06:21:08.727543 turn.go:235: turn INFO: permission denied for client 172.19.24.29:10709 to peer 0.0.1.1
06:21:37.009777 handlers.go:37: stunner-auth INFO: plaintext auth request: username="admin" realm="stunner.l7mp.io" srcAddr=172.19.24.29:10709
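
For reference, a minimal sketch of what an "open" relay could look like in a standalone stunnerd configuration, assuming STATIC cluster endpoints accept CIDR prefixes (check the exact field names against the reference for your STUNner version; the gateway-operator path may differ, and 0.0.0.0/0 effectively disables the peer-side access control):

{
  "version": "v1alpha1",
  "admin": { "name": "stunnerd" },
  "auth": {
    "type": "plaintext",
    "credentials": { "username": "user-1", "password": "pass-1" }
  },
  "listeners": [
    {
      "name": "udp-listener",
      "protocol": "udp",
      "address": "0.0.0.0",
      "port": 3478,
      "routes": [ "open-cluster" ]
    }
  ],
  "clusters": [
    {
      "name": "open-cluster",
      "type": "STATIC",
      "endpoints": [ "0.0.0.0/0" ]
    }
  ]
}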

Question: How to config TURN Server to Mediasoup Server?

Hello, I am a Kubernetes newbie.
I am creating a server with mediasoup and, following the STUNner instructions, I seem to have successfully created a TURN server.
I have connected the client to the TURN server but nothing works. I think that in the next step I need to connect my TURN server to my mediasoup server, right?
Any answers help, thanks in advance.

docs: How to deploy Jitsi (and potentially other examples) into DOKS

This issue aims to document how to deploy the Jitsi example into a Digital Ocean Kubernetes cluster.

Jitsi

The mentioned example/tutorial was created using GKE, which means it wasn't tested on other cloud providers. Unfortunately, DOKS is much stricter about creating load balancer services (with a public IP address). Exposing TCP ports to the public internet is easy and requires no modification; exposing UDP ports, however, requires some fine-tuning. If a load balancer uses UDP in its forwarding rules, the load balancer requires that a health check port be set that uses TCP, HTTP, or HTTPS to work properly (DOKS health check).

The most important fact is that a health check port must be exposed to the public internet just to get the load balancer up and running. This is not ideal from a security perspective, because the port is unprotected and lets anyone probe it and gather information about the health of the pods in the cluster. Unfortunate as it is, this is a must-have configuration.

In order to get a working UDP load balancer, a slightly modified GatewayConfig and Gateway must be used.
loadBalancerServiceAnnotations will be added to the created service as extra annotations. These tell the DOKS API where and how to check the health of the underlying endpoints (pods).
In addition, an extra TCP port (8086) will be exposed for health checking.

apiVersion: stunner.l7mp.io/v1alpha1
kind: GatewayConfig
metadata:
  name: stunner-gatewayconfig
  namespace: stunner
spec:
  authType: longterm
  sharedSecret: "my-shared-secret"
  loadBalancerServiceAnnotations:
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-port: "8086"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-path: "/live"
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: udp-gateway
  namespace: stunner
spec:
  gatewayClassName: stunner-gatewayclass
  listeners:
    - name: health-check
      port: 8086
      protocol: TCP
    - name: udp-listener
      port: 3478
      protocol: UDP

What about other media servers, such as LiveKit?

They haven't been tested yet, but the other examples should work the same way:

  • UDP load balancer services must have an extra health check port
  • GatewayConfig should have loadBalancerServiceAnnotations added to its config
  • Gateway must have an extra health check port

Generate static yamls on release

With each release, we should generate the static manifests from the Helm charts and place them under deploy/manifests.
Perhaps we need to extend the functionality of the existing release workflow.

Support health-check handlers in STUNner

Some K8s LoadBalancers require a health-check readiness probe in order to consider a stunnerd pod up and start to route traffic to it. This PR is to track the progress towards implementing a configurable health-check endpoint inside STUNner.

The general idea is to:

  • configure the health-check endpoint in the GatewayConfig.Spec in exactly the same way as we do for Prometheus, i.e., healthCheckEndpoint: "tcp:0.0.0.0:8666" would implement a TCP health-check handler at port 8666, while the URL "http://0.0.0.0:80/healthz" would fire up an HTTP responder.
  • implement the boilerplate to distribute the GatewayConfig.Spec.healthCheckEndpoint setting down to the stunnerd pods
  • preferably use some prefab Go library to implement the health-check handler, e.g., https://pkg.go.dev/github.com/heptiolabs/healthcheck or https://github.com/brpaz/go-healthcheck or similar
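
For illustration, a sketch of how the proposed knob could look in a GatewayConfig; the healthCheckEndpoint field name is the one proposed above, and its exact semantics are what this issue is meant to settle:

apiVersion: stunner.l7mp.io/v1alpha1
kind: GatewayConfig
metadata:
  name: stunner-gatewayconfig
  namespace: stunner
spec:
  realm: stunner.l7mp.io
  authType: plaintext
  userName: "user-1"
  password: "pass-1"
  # proposed: fire up an HTTP liveness/readiness responder on port 8086
  healthCheckEndpoint: "http://0.0.0.0:8086"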

Integrity check fails on passwords containing `$` [was: Auth server return bad url]

It returns:

{
  "iceServers":[
      {
        "username":"1695376501:user1",
        "credential":"EPddI2tMN9vtfGMhup1RYE5nSkA=",
        "urls":[":192.168.0.247:3478?transport="]
      }
  ],
  "iceTransportPolicy":"all"
}

Here is the log of the auth server:

2023-09-22T08:53:57.132824604Z	LEVEL(-2)	configmap-controller	reset ConfigMap store	{"configs": "store (1 objects): {version=\"v1alpha1\",admin:{name=\"stunner-daemon\",logLevel=\"all:INFO\",health-check=\"http://0.0.0.0:8086\"},auth:{realm=\"stunner.l7mp.io\",type=\"longterm\",shared-secret=\"<SECRET>\"},listeners=[\"stunner/owt-udp-gateway/owt-udp-listener\":{://$STUNNER_ADDR:3478?transport=<32768-65535>,public=-:-,cert/key=-/-,routes=[]}],clusters=[]}"}
2023-09-22T08:53:57.13284211Z	LEVEL(-5)	ctrl-runtime	Reconcile successful	{"controller": "configmap", "object": {"name":"stunnerd-config","namespace":"stunner"}, "namespace": "stunner", "name": "stunnerd-config", "reconcileID": "a52e5f73-3bf8-4397-b1b6-24f9a6c190f0"}
2023-09-22T08:53:57.393834523Z	LEVEL(-5)	ctrl-runtime	Reconciling	{"controller": "configmap", "object": {"name":"stunnerd-config","namespace":"stunner"}, "namespace": "stunner", "name": "stunnerd-config", "reconcileID": "89fb25cf-7e77-4f27-8260-243e2f33596c"}
2023-09-22T08:53:57.393856977Z	INFO	configmap-controller	reconciling	{"gateway-config": "stunner/stunnerd-config"}
2023-09-22T08:53:57.393954362Z	LEVEL(-2)	configmap-controller	reset ConfigMap store	{"configs": "store (1 objects): {version=\"v1alpha1\",admin:{name=\"stunner-daemon\",logLevel=\"all:INFO\",health-check=\"http://0.0.0.0:8086\"},auth:{realm=\"stunner.l7mp.io\",type=\"longterm\",shared-secret=\"<SECRET>\"},listeners=[\"stunner/owt-udp-gateway/owt-udp-listener\":{://$STUNNER_ADDR:31768?transport=<32768-65535>,public=-:31768,cert/key=-/-,routes=[]}],clusters=[]}"}
2023-09-22T08:53:57.393975489Z	LEVEL(-5)	ctrl-runtime	Reconcile successful	{"controller": "configmap", "object": {"name":"stunnerd-config","namespace":"stunner"}, "namespace": "stunner", "name": "stunnerd-config", "reconcileID": "89fb25cf-7e77-4f27-8260-243e2f33596c"}
2023-09-22T08:54:01.829351338Z	LEVEL(-5)	ctrl-runtime	Reconciling	{"controller": "configmap", "object": {"name":"stunnerd-config","namespace":"stunner"}, "namespace": "stunner", "name": "stunnerd-config", "reconcileID": "d49b5ba7-3cb6-41bf-8378-1a6a0d3eea89"}
2023-09-22T08:54:01.829441931Z	INFO	configmap-controller	reconciling	{"gateway-config": "stunner/stunnerd-config"}
2023-09-22T08:54:01.829587518Z	LEVEL(-2)	configmap-controller	reset ConfigMap store	{"configs": "store (1 objects): {version=\"v1alpha1\",admin:{name=\"stunner-daemon\",logLevel=\"all:INFO\",health-check=\"http://0.0.0.0:8086\"},auth:{realm=\"stunner.l7mp.io\",type=\"longterm\",shared-secret=\"<SECRET>\"},listeners=[\"stunner/owt-udp-gateway/owt-udp-listener\":{://192.168.0.247:3478?transport=<32768-65535>,public=192.168.0.247:3478,cert/key=-/-,routes=[]}],clusters=[]}"}
2023-09-22T08:54:01.82960411Z	LEVEL(-5)	ctrl-runtime	Reconcile successful	{"controller": "configmap", "object": {"name":"stunnerd-config","namespace":"stunner"}, "namespace": "stunner", "name": "stunnerd-config", "reconcileID": "d49b5ba7-3cb6-41bf-8378-1a6a0d3eea89"}
2023-09-22T08:54:56.920937223Z	INFO	handler	GetIceAuth: serving ICE config request	{"params": {"service":"turn","username":"user1","ttl":3600}}
2023-09-22T08:54:56.920979416Z	DEBUG	handler	getIceServerConf: serving ICE config request	{"params": {"service":"turn","username":"user1","ttl":3600}}
2023-09-22T08:54:56.920985649Z	DEBUG	handler	getIceServerConfForStunnerConf: considering Stunner config	{"stunner-config": "{version=\"v1alpha1\",admin:{name=\"stunner-daemon\",logLevel=\"all:INFO\",health-check=\"http://0.0.0.0:8086\"},auth:{realm=\"stunner.l7mp.io\",type=\"longterm\",shared-secret=\"<SECRET>\"},listeners=[\"stunner/owt-udp-gateway/owt-udp-listener\":{://192.168.0.247:3478?transport=<32768-65535>,public=192.168.0.247:3478,cert/key=-/-,routes=[]}],clusters=[]}", "params": {"service":"turn","username":"user1","ttl":3600}}
2023-09-22T08:54:56.921018655Z	DEBUG	handler	considering Listener	{"namespace": "stunner", "gateway": "owt-udp-gateway", "listener": "owt-udp-listener"}
2023-09-22T08:54:56.921031429Z	DEBUG	handler	getIceServerConfForStunnerConf: ready	{"repsonse": {"credential":"Q8ara2lUtQ8/vvKSlqAoXVW1bH8=","urls":[":192.168.0.247:3478?transport="],"username":"1695376496:user1"}}
2023-09-22T08:54:56.921098868Z	DEBUG	handler	getIceServerConf: ready	{"repsonse": {"iceServers":[{"credential":"Q8ara2lUtQ8/vvKSlqAoXVW1bH8=","urls":[":192.168.0.247:3478?transport="],"username":"1695376496:user1"}],"iceTransportPolicy":"all"}}
2023-09-22T08:54:56.921109065Z	INFO	handler	GetIceAuth: ready	{"response": {"iceServers":[{"credential":"Q8ara2lUtQ8/vvKSlqAoXVW1bH8=","urls":[":192.168.0.247:3478?transport="],"username":"1695376496:user1"}],"iceTransportPolicy":"all"}, "status": 200}
2023-09-22T08:55:01.455624663Z	INFO	handler	GetIceAuth: serving ICE config request	{"params": {"service":"turn","username":"user1","ttl":3600}}
2023-09-22T08:55:01.455658526Z	DEBUG	handler	getIceServerConf: serving ICE config request	{"params": {"service":"turn","username":"user1","ttl":3600}}
2023-09-22T08:55:01.45566384Z	DEBUG	handler	getIceServerConfForStunnerConf: considering Stunner config	{"stunner-config": "{version=\"v1alpha1\",admin:{name=\"stunner-daemon\",logLevel=\"all:INFO\",health-check=\"http://0.0.0.0:8086\"},auth:{realm=\"stunner.l7mp.io\",type=\"longterm\",shared-secret=\"<SECRET>\"},listeners=[\"stunner/owt-udp-gateway/owt-udp-listener\":{://192.168.0.247:3478?transport=<32768-65535>,public=192.168.0.247:3478,cert/key=-/-,routes=[]}],clusters=[]}", "params": {"service":"turn","username":"user1","ttl":3600}}
2023-09-22T08:55:01.455689067Z	DEBUG	handler	considering Listener	{"namespace": "stunner", "gateway": "owt-udp-gateway", "listener": "owt-udp-listener"}
2023-09-22T08:55:01.455701758Z	DEBUG	handler	getIceServerConfForStunnerConf: ready	{"repsonse": {"credential":"EPddI2tMN9vtfGMhup1RYE5nSkA=","urls":[":192.168.0.247:3478?transport="],"username":"1695376501:user1"}}
2023-09-22T08:55:01.455755771Z	DEBUG	handler	getIceServerConf: ready	{"repsonse": {"iceServers":[{"credential":"EPddI2tMN9vtfGMhup1RYE5nSkA=","urls":[":192.168.0.247:3478?transport="],"username":"1695376501:user1"}],"iceTransportPolicy":"all"}}
2023-09-22T08:55:01.455762762Z	INFO	handler	GetIceAuth: ready	{"response": {"iceServers":[{"credential":"EPddI2tMN9vtfGMhup1RYE5nSkA=","urls":[":192.168.0.247:3478?transport="],"username":"1695376501:user1"}],"iceTransportPolicy":"all"}, "status": 200}

Here is the log of the stunner pod:

03:29:56.051942 main.go:82: stunnerd INFO: watching configuration file at "/etc/stunnerd/stunnerd.conf"
03:29:56.052247 reconcile.go:113: stunner INFO: setting loglevel to "all:INFO"
03:29:56.052280 reconcile.go:141: stunner WARNING: running with no listeners
03:29:56.052395 reconcile.go:157: stunner WARNING: running with no clusters: all traffic will be dropped
03:29:56.052409 reconcile.go:177: stunner INFO: reconciliation ready: new objects: 2, changed objects: 0, deleted objects: 0, started objects: 0, restarted objects: 0
03:29:56.052423 reconcile.go:181: stunner INFO: status: READY, realm: stunner.l7mp.io, authentication: plaintext, listeners: NONE, active allocations: 0
03:29:56.055569 reconcile.go:113: stunner INFO: setting loglevel to "all:INFO"
03:29:56.055620 server.go:19: stunner INFO: listener stunner/owt-tcp-gateway/owt-tcp-listener: [tcp://10.233.74.92:3478<32768:65535>] (re)starting
03:29:56.055687 server.go:161: stunner INFO: listener stunner/owt-tcp-gateway/owt-tcp-listener: TURN server running
03:29:56.055693 reconcile.go:177: stunner INFO: reconciliation ready: new objects: 2, changed objects: 2, deleted objects: 0, started objects: 1, restarted objects: 0
03:29:56.055703 reconcile.go:181: stunner INFO: status: READY, realm: stunner.l7mp.io, authentication: longterm, listeners: stunner/owt-tcp-gateway/owt-tcp-listener: [tcp://10.233.74.92:3478<32768:65535>], active allocations: 0
08:52:45.642881 reconcile.go:113: stunner INFO: setting loglevel to "all:INFO"
08:52:45.643022 reconcile.go:177: stunner INFO: reconciliation ready: new objects: 0, changed objects: 2, deleted objects: 0, started objects: 0, restarted objects: 0
08:52:45.643051 reconcile.go:181: stunner INFO: status: READY, realm: stunner.l7mp.io, authentication: longterm, listeners: stunner/owt-tcp-gateway/owt-tcp-listener: [tcp://10.233.74.92:3478<32768:65535>], active allocations: 0
08:53:18.252190 config.go:347: watch-config WARNING: config file deleted "REMOVE", disabling watcher
08:53:20.252891 config.go:283: watch-config WARNING: waiting for config file "/etc/stunnerd/stunnerd.conf"
08:53:30.253135 config.go:283: watch-config WARNING: waiting for config file "/etc/stunnerd/stunnerd.conf"
08:53:40.252690 config.go:283: watch-config WARNING: waiting for config file "/etc/stunnerd/stunnerd.conf"
08:53:50.252410 config.go:283: watch-config WARNING: waiting for config file "/etc/stunnerd/stunnerd.conf"
08:53:57.254904 reconcile.go:113: stunner INFO: setting loglevel to "all:INFO"
08:53:57.254999 reconcile.go:157: stunner WARNING: running with no clusters: all traffic will be dropped
08:53:57.255015 server.go:19: stunner INFO: listener stunner/owt-udp-gateway/owt-udp-listener: [udp://10.233.74.92:3478<32768:65535>] (re)starting
08:53:57.255022 server.go:42: stunner INFO: setting up UDP listener socket pool at 10.233.74.92:3478 with 16 readloop threads
08:53:57.255282 server.go:161: stunner INFO: listener stunner/owt-udp-gateway/owt-udp-listener: TURN server running
08:53:57.255293 reconcile.go:177: stunner INFO: reconciliation ready: new objects: 1, changed objects: 0, deleted objects: 2, started objects: 1, restarted objects: 0
08:53:57.255301 reconcile.go:181: stunner INFO: status: READY, realm: stunner.l7mp.io, authentication: longterm, listeners: stunner/owt-udp-gateway/owt-udp-listener: [udp://10.233.74.92:3478<32768:65535>], active allocations: 0
08:53:57.394702 reconcile.go:113: stunner INFO: setting loglevel to "all:INFO"
08:53:57.394733 reconcile.go:157: stunner WARNING: running with no clusters: all traffic will be dropped
08:53:57.394739 reconcile.go:177: stunner INFO: reconciliation ready: new objects: 0, changed objects: 1, deleted objects: 0, started objects: 0, restarted objects: 0
08:53:57.394750 reconcile.go:181: stunner INFO: status: READY, realm: stunner.l7mp.io, authentication: longterm, listeners: stunner/owt-udp-gateway/owt-udp-listener: [udp://10.233.74.92:3478<32768:65535>], active allocations: 0
08:54:03.331831 reconcile.go:113: stunner INFO: setting loglevel to "all:INFO"
08:54:03.331866 reconcile.go:157: stunner WARNING: running with no clusters: all traffic will be dropped
08:54:03.331872 reconcile.go:177: stunner INFO: reconciliation ready: new objects: 0, changed objects: 1, deleted objects: 0, started objects: 0, restarted objects: 0
08:54:03.331884 reconcile.go:181: stunner INFO: status: READY, realm: stunner.l7mp.io, authentication: longterm, listeners: stunner/owt-udp-gateway/owt-udp-listener: [udp://10.233.74.92:3478<32768:65535>], active allocations: 0
08:54:03.331937 reconcile.go:113: stunner INFO: setting loglevel to "all:INFO"
08:54:03.331956 reconcile.go:157: stunner WARNING: running with no clusters: all traffic will be dropped
08:54:03.331959 reconcile.go:177: stunner INFO: reconciliation ready: new objects: 0, changed objects: 1, deleted objects: 0, started objects: 0, restarted objects: 0
08:54:03.331966 reconcile.go:181: stunner INFO: status: READY, realm: stunner.l7mp.io, authentication: longterm, listeners: stunner/owt-udp-gateway/owt-udp-listener: [udp://10.233.74.92:3478<32768:65535>], active allocations: 0
08:54:03.421273 server.go:194: turn ERROR: error when handling datagram: failed to create stun message from packet: unexpected EOF: not enough bytes to read header
08:54:03.421795 server.go:194: turn ERROR: error when handling datagram: failed to create stun message from packet: unexpected EOF: not enough bytes to read header
08:54:03.424394 server.go:194: turn ERROR: error when handling datagram: failed to create stun message from packet: unexpected EOF: not enough bytes to read header
08:54:03.426351 server.go:194: turn ERROR: error when handling datagram: failed to create stun message from packet: unexpected EOF: not enough bytes to read header
08:54:03.434381 server.go:194: turn ERROR: error when handling datagram: failed to create stun message from packet: unexpected EOF: not enough bytes to read header
08:54:03.443200 server.go:194: turn ERROR: error when handling datagram: failed to create stun message from packet: unexpected EOF: not enough bytes to read header
08:54:03.472294 server.go:194: turn ERROR: error when handling datagram: failed to create stun message from packet: unexpected EOF: not enough bytes to read header
08:54:03.513291 server.go:194: turn ERROR: error when handling datagram: failed to create stun message from packet: unexpected EOF: not enough bytes to read header
08:54:03.537337 server.go:194: turn ERROR: error when handling datagram: failed to create stun message from packet: unexpected EOF: not enough bytes to read header
08:54:03.551542 server.go:194: turn ERROR: error when handling datagram: failed to create stun message from packet: unexpected EOF: not enough bytes to read header
08:54:03.556254 server.go:194: turn ERROR: error when handling datagram: failed to create stun message from packet: unexpected EOF: not enough bytes to read header
08:54:03.731422 server.go:194: turn ERROR: error when handling datagram: failed to create stun message from packet: unexpected EOF: not enough bytes to read header
08:54:05.391543 server.go:194: turn ERROR: error when handling datagram: failed to create stun message from packet: BadFormat for message/cookie: 34353637 is invalid magic cookie (should be 2112a442)
08:54:05.757221 server.go:194: turn ERROR: error when handling datagram: failed to create stun message from packet: BadFormat for message/cookie: 34353637 is invalid magic cookie (should be 2112a442)

Here is my config:

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GatewayClass
metadata:
  name: stunner-gatewayclass
spec:
  controllerName: "stunner.l7mp.io/gateway-operator"
  parametersRef:
    group: "stunner.l7mp.io"
    kind: GatewayConfig
    name: stunner-gatewayconfig
    namespace: stunner
  description: "STUNner is a WebRTC ingress gateway for Kubernetes"

---
apiVersion: stunner.l7mp.io/v1alpha1
kind: GatewayConfig
metadata:
  name: stunner-gatewayconfig
  namespace: stunner
spec:
  realm: stunner.l7mp.io
  authType: ephemeral
  sharedSecret: 'XXXXXXXXXX'
  loadBalancerServiceAnnotations:
    kubernetes.io/elb.class: shared
    kubernetes.io/elb.id: XXXXXXXXXX
    kubernetes.io/elb.lb-algorithm: LEAST_CONNECTIONS
    kubernetes.io/elb.session-affinity-flag: 'on'
    kubernetes.io/elb.session-affinity-option: '{"type": "SOURCE_IP", "persistence_timeout": 15}'
    kubernetes.io/elb.health-check-flag: 'on'
    kubernetes.io/elb.health-check-option: '{"delay": 3, "timeout": 15, "max_retries": 3}'
    kubernetes.io/elb.enable-transparent-client-ip: "true"


---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: owt-udp-gateway
  namespace: stunner
spec:
  gatewayClassName: stunner-gatewayclass
  listeners:
    - name: owt-udp-listener
      port: 3478
      protocol: UDP
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: UDPRoute
metadata:
  name: owt-media-plane
  namespace: stunner
spec:
  parentRefs:
    - name: owt-udp-listener
  rules:
    - backendRefs:
        - name: owt-server
          namespace: default

Make Stunner react faster to Gateway API changes

Background: The current model to handle control plane changes (e.g., kubectl edit gateway my-stunner-gateway) is as follows: the gateway operator watches the Gateway API CRs and every time there is a change it renders a new Stunner configuration into a ConfigMap (usually stunnerd-config). This ConfigMap is mapped into the filesystem of the dataplane pods (stunnerd) that actively watch the config file and, whenever there is a fsnotify watch event, immediately reconcile the new config.

Problem: The time spent from updating the YAML to stunnerd picking up the new config is too much for certain use cases. The main limitation is in the kubelet: it may take more than 1min for the kubelet to map the changed stunnerd-config ConfigMap into the filesystem of the stunnerd pods. We could adjust the refresh period of kubelet to, say, 1sec, to make this faster: unfortunately many cloud providers lock down the kubelet config from users.

Plan: In the long run, we will implement a Stunner REST API that will make it possible for the operator to push new configs to the stunnerd pods over HTTP. Until this gets implemented, we will create a workaround: we will take over from the kubelet the responsibility of watching the stunnerd-config ConfigMap and mapping the new config into the filesystem of the stunnerd pods, by deploying a dedicated sidecar container next to the stunnerd pods for this purpose; see, e.g., this project. A sketch of this setup is shown below.
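
As an illustration only: a dataplane Deployment with a hypothetical config-syncer sidecar that fetches the stunnerd-config ConfigMap through the Kubernetes API and writes it into a shared emptyDir, bypassing the kubelet's ConfigMap refresh delay. The sidecar image and its flags are placeholders, not an existing component:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: stunner
  namespace: stunner
spec:
  selector:
    matchLabels:
      app: stunner
  template:
    metadata:
      labels:
        app: stunner
    spec:
      containers:
        - name: stunnerd
          image: l7mp/stunnerd:latest
          # stunnerd watches /etc/stunnerd/stunnerd.conf and reconciles on change
          volumeMounts:
            - name: stunnerd-config-volume
              mountPath: /etc/stunnerd
        - name: config-syncer                          # hypothetical sidecar
          image: example.com/configmap-syncer:latest   # placeholder image
          args:
            - "--configmap=stunner/stunnerd-config"    # placeholder flags
            - "--target-dir=/etc/stunnerd"
          volumeMounts:
            - name: stunnerd-config-volume
              mountPath: /etc/stunnerd
      volumes:
        - name: stunnerd-config-volume
          emptyDir: {}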

This issue tracks the progress on this work.

Can't install the helm chart using terraform: invalid tab character

Using Terraform to deploy the stunner Helm chart with hashicorp/helm 2.8.0.

Code


resource "helm_release" "stunner-gateway-operator" {
  name       = "stunner-gateway-operator"
  repository = "https://l7mp.io/stunner"
  chart      = "stunner-gateway-operator"
  namespace  = kubernetes_namespace.stunner.metadata.0.name

  depends_on = [
    kubernetes_daemonset.kubelet_config_ds
  ]
}

resource "helm_release" "stunner" {
  depends_on = [
    kubernetes_daemonset.kubelet_config_ds
  ]
  name       = "stunner"
  repository = "https://l7mp.io/stunner"
  chart      = "stunner"
  namespace  = kubernetes_namespace.stunner.metadata.0.name
}


Error

╷
│ Error: YAML parse error on stunner-gateway-operator/templates/stunner-gateway-operator.yaml: error converting YAML to JSON: yaml: line 85: found a tab character that violates indentation
│ 
│   with module.kubernetes-config.helm_release.stunner-gateway-operator,
│   on kubernetes-config/stunner.tf line 9, in resource "helm_release" "stunner-gateway-operator":
│    9: resource "helm_release" "stunner-gateway-operator" {
│ 

Question: Using a FQDN in `STUNNER_PUBLIC_ADDR`

Is it possible to use a domain like stunner.example.foo instead of an IP in the STUNNER_PUBLIC_ADDR variable?

With this, we could leverage DNS auto-updating tools such as external-dns for Kubernetes.

Helm chart deployment - some issues and confusing `operatorless` options

Hey!
First of all, great solution you have here...

I've been trying to deploy this using the helm chart provided, and I'm finding that some of the values are confusing...
For example, the operatorless property is somewhat confusing: if I leave operatorless at its default (which is false), the deployment fails due to a missing volume mount; after digging into the chart, I found out that this mode depends on another deployment (I think it is the gateway one).

In the end, I was not able to deploy the chart and I had to use the manifests and deploy them via kubectl.

Another thing I found is that you are creating the namespace in the chart; this causes multiple problems when the namespace already exists... Perhaps you could consider using the built-in Helm options to handle namespace creation automatically instead of declaring and creating it in the chart?

Thank you!

Panic on two gatewayclass definitions

I ran into a panic case where I defined two GatewayClasses with distinct names and parametersRefs. Is this feasible? My use case is that I would like to have two separate STUNner ingress instances, one for staging and one for production. What's the official way to achieve this?
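
For reference, the setup described would look something like the sketch below, with each GatewayClass pointing to its own GatewayConfig (names are illustrative); whether the operator supports two GatewayClasses at once is exactly what this issue is about:

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GatewayClass
metadata:
  name: stunner-gatewayclass-staging
spec:
  controllerName: "stunner.l7mp.io/gateway-operator"
  parametersRef:
    group: "stunner.l7mp.io"
    kind: GatewayConfig
    name: stunner-gatewayconfig-staging
    namespace: stunner-staging
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GatewayClass
metadata:
  name: stunner-gatewayclass-prod
spec:
  controllerName: "stunner.l7mp.io/gateway-operator"
  parametersRef:
    group: "stunner.l7mp.io"
    kind: GatewayConfig
    name: stunner-gatewayconfig-prod
    namespace: stunner-prod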

Here is the crash scene

2022-09-16T06:29:41.903883429Z  DPANIC  renderer        odd number of arguments passed as key-value pairs for logging   {"ignored key": "\"/stunner-gatewayclass-prod\", \"/stunner-gatewayclass\""}
github.com/l7mp/stunner-gateway-operator/internal/renderer.(*Renderer).Start.func1
        /workspace/internal/renderer/renderer.go:73
panic: odd number of arguments passed as key-value pairs for logging

goroutine 79 [running]:
go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc00092e180, {0xc000733400, 0x1, 0x1})
        /go/pkg/mod/go.uber.org/[email protected]/zapcore/entry.go:232 +0x446
go.uber.org/zap.(*Logger).DPanic(0x1728f71, {0x1778b51, 0x14ef6e0}, {0xc000733400, 0x1, 0x1})
        /go/pkg/mod/go.uber.org/[email protected]/logger.go:220 +0x59
github.com/go-logr/zapr.(*zapLogger).handleFields(0xc00040d1d0, 0x0, {0xc00082bec0, 0x1, 0x40af3d}, {0x0, 0x162e9a0, 0x1})
        /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:147 +0xdea
github.com/go-logr/zapr.(*zapLogger).Info(0xc00040d1d0, 0x0, {0x17b4553, 0x0}, {0xc00082bec0, 0x1, 0x1})
        /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:210 +0x8d
github.com/go-logr/logr.Logger.Info({{0x1962948, 0xc00040d1d0}, 0x2}, {0x17b4553, 0x125}, {0xc00082bec0, 0x1, 0x1})
        /go/pkg/mod/github.com/go-logr/[email protected]/logr.go:261 +0xd0
github.com/l7mp/stunner-gateway-operator/internal/renderer.(*Renderer).Render(0xc0000932c0, 0xc000302680)
        /workspace/internal/renderer/render_pipeline.go:49 +0x745
github.com/l7mp/stunner-gateway-operator/internal/renderer.(*Renderer).Start.func1()
        /workspace/internal/renderer/renderer.go:73 +0x17d
created by github.com/l7mp/stunner-gateway-operator/internal/renderer.(*Renderer).Start
        /workspace/internal/renderer/renderer.go:58 +0xb4

Thanks in advance!

Question: Test `cloudretro` with TURN TCP instead of UDP

Like the title says, I would like to know if it is possible to deploy the CloudRetro example with STUNner TCP instead of UDP.

I was able to deploy it successfully on a corporate network (behind a firewall), but UDP is very flaky, with so much packet loss that the experience is almost unplayable... I would like to give TCP a try and see if I can get an improved experience.

Could you point me to the steps I need to take to achieve such a setup?
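
For reference, a TCP TURN listener is declared the same way as the UDP one, just with protocol: TCP (a sketch; the CloudRetro client and worker ICE configuration would also have to advertise a turn:<public-ip>:3478?transport=tcp URI instead of the UDP one):

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: tcp-gateway
  namespace: stunner
spec:
  gatewayClassName: stunner-gatewayclass
  listeners:
    - name: tcp-listener
      port: 3478
      protocol: TCP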

Update examples

The docs in the examples are not up-to-date.
Currently known issues with them:

  • service name generated by the gateway-operator has changed
    • previously it has been
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: udp-gateway

->
stunner-gateway-udp-gateway-svc

  • now it's
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: udp-gateway

->
udp-gateway

Bug: Turncat cannot fetch turn URI

The turncat command cannot fetch the URI from the stunnerd-config ConfigMap.
This if clause never becomes true. The way the CLI takes the listener's name is: k8s://stunner/stunnerd-config:udp-listener.
In reality, the listener's name is not just udp-listener but stunner/udp-gateway/udp-listener.

Using stunner to achieve "deploying 1 coturn server per service" in K8s

Hi all,

I am now working on a K8s cluster service which dynamically deploys K8s services (with the related Pods, containers, ingress, etc.) via a customized, centralized Python app using the K8s Python client API. A user device can simply send a request to the cluster master's IP and a unique K8s service will be created and dedicated to serving that user device. The service and the related deployment are torn down after use.

i.e. https://<cluster_master_ip>/<unique_service_name>/.... [Lots of RESTful APIs]

Backgrounds:

  • Furthermore, the cluster_master_ip will be routed from a public domain in order to allow users to access the service from outside the cluster subnet.
  • One K8s node may host several such services.
  • No node machines are exposed to the public network, only the cluster master.

Under this circumstance, a new sub-feature will be added. We need to let the user create a WebRTC peer connection from their own Chrome browser (PC) for continuous video/image sharing towards a COTURN server. The user device is also connected to this COTURN as a WebRTC peer. The image can then be sent from the PC Chrome browser to the user device, and the application on the user device can further process the video/image for some UX.

As the COTURN server will only serve this single p2p connection, we want to embed COTURN into our runtime-created K8s service Pod, and we would like to tear down all related resources (ingress, Pod, services including COTURN) after use.

Given the above background, is it possible to access the COTURN inside that runtime-created K8s service Pod via cluster_master_ip/<unique_service_name>?

Alternatively, we could also accept several copies of COTURN inside the same node (# of COTURN = # of runtime-created K8s services) listening on different ports, but only reachable via cluster_master_ip:PORT?

Could this be done using STUNner?

Sorry for my bad English.

Thanks so much.

Stunner service External IP still pending

Hi,

I tried to install STUNner. I finished the installation and configuration, but the stunner pod is still pending.

$ kubectl get pod -n stunner
NAME                       READY   STATUS    RESTARTS   AGE
stunner-7ff4875b47-6dtzt   0/2     Pending   0          12m
$ kubectl get service -n stunner
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
stunner   ClusterIP   10.245.91.141   <none>        3478/UDP   13m

Is there some way to debug this?

Pod Can't connect the stunner server

I deployed a service in a different namespace than STUNner. STUNner prints some error messages when I access the service (screenshot omitted).
10.42.0.180 is my service pod IP and 10.42.0.181 is the stunner pod IP.

UDPRoutes from other namespaces are not getting attached

Hello,

I hit very weird problem today:

When I'm using a UDPRoute in the app's namespace (with all namespaces allowed on the listener), the route appears attached in the route's status, but the Gateway status shows 0 attached routes. My clients also cannot connect to the backend app and get permission denied from STUNner.

There's a route:

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: UDPRoute
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"gateway.networking.k8s.io/v1alpha2","kind":"UDPRoute","metadata":{"annotations":{},"labels":{"argocd.argoproj.io/instance":"janus-dev"},"name":"janus-dev","namespace":"dev"},"spec":{"parentRefs":[{"name":"stunner-config","namespace":"stunner"}],"rules":[{"backendRefs":[{"name":"janus-dev","namespace":"dev"}]}]}}
  creationTimestamp: '2023-06-30T01:11:58Z'
  generation: 1
  labels:
    argocd.argoproj.io/instance: janus-dev
  name: janus-dev
  namespace: dev
  resourceVersion: '71179887'
  uid: f86715d8-b32c-4459-ad64-b9b33951239b
spec:
  parentRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: stunner-config
      namespace: stunner
  rules:
    - backendRefs:
        - group: ''
          kind: Service
          name: janus-dev
          namespace: dev
          weight: 1
status:
  parents:
    - conditions:
        - lastTransitionTime: '2023-06-30T01:16:09Z'
          message: parent accepts the route
          observedGeneration: 1
          reason: Accepted
          status: 'True'
          type: Accepted
        - lastTransitionTime: '2023-06-30T01:16:09Z'
          message: all backend references successfully resolved
          observedGeneration: 1
          reason: ResolvedRefs
          status: 'True'
          type: ResolvedRefs
      controllerName: stunner.l7mp.io/gateway-operator
      parentRef:
        group: gateway.networking.k8s.io
        kind: Gateway
        name: stunner-config
        namespace: stunner

But when I apply a similar route in the same namespace as the Gateway, it works just fine.

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: UDPRoute
metadata:
  name: janus-dev
  namespace: stunner
spec:
  parentRefs:
    - name: stunner-config
  rules:
    - backendRefs:
        - name: janus-dev
          namespace: dev

Now traffic is passed through STUNner and the Gateway shows attachedRoutes: 1 in the listener status.
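
For reference, a Gateway listener that is meant to admit routes from other namespaces uses the standard Gateway API allowedRoutes field, as in the sketch below; whether the operator version in use honors cross-namespace attachment is the open question here:

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: stunner-config
  namespace: stunner
spec:
  gatewayClassName: stunner-gatewayclass
  listeners:
    - name: udp-listener
      port: 3478
      protocol: UDP
      allowedRoutes:
        namespaces:
          from: All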

Question: How to tell clients to connect to a different STUNner IP with the `stunner-gateway-operator`?

I have a scenario where we have an internal network where our Kubernetes exposed services are behind 1:1 NAT IPs...
This means that the LoadBalancer IP addresses that Kubernetes knows about are not the same ones that the clients use to connect to them.

For example:

  • Suppose we expose STUNner's LoadBalancer service on external-ip address 10.0.0.1:3478/udp but our clients will reach it on 10.0.20.1:3478/udp as configured on our internal firewall.

On the Kurento One2one Call example I need to be able to configure the webrtc-server frontend to expose the TURN URI to something like turn:10.0.20.1:3478?transport=UDP, which is the IP address that the client will be able to reach.

Using STUNner in standalone mode I believe I can tweak the STUNNER_PUBLIC_ADDR config to point to the correct IP address that the client can reach, but I was not able to figure this out with the stunner-gateway-operator.

Maybe you can shed some light?
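
Not an authoritative answer, but the standard Gateway API way to request a specific address is the Gateway's spec.addresses field; whether the stunner-gateway-operator uses it to override the advertised TURN address is exactly what is being asked here, so treat the following purely as an illustration:

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: udp-gateway
  namespace: stunner
spec:
  gatewayClassName: stunner-gatewayclass
  addresses:
    - type: IPAddress
      value: 10.0.20.1            # the NATed address that clients can actually reach
  listeners:
    - name: udp-listener
      port: 3478
      protocol: UDP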

Support for Auth Secret authentication instead of username/password

Hello,

I just stumbled onto your project as I'm looking into deploying Coturn to Kubernetes.
First of all thank you for your hard work and contribution.

It feels like it would not be too complicated to "migrate" from Coturn to your solution, right?

We are not using the username/password mechanism but an auth secret token instead. Is that something supported by STUNner?
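
STUNner's GatewayConfig supports a shared-secret (long-term credential) auth mode, as already used in the DOKS example earlier on this page; a minimal sketch:

apiVersion: stunner.l7mp.io/v1alpha1
kind: GatewayConfig
metadata:
  name: stunner-gatewayconfig
  namespace: stunner
spec:
  realm: stunner.l7mp.io
  authType: longterm              # time-windowed credentials derived from the shared secret
  sharedSecret: "my-shared-secret"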

Bug: Monitoring frontend fails when the server is not running

This is an uncommon issue, which can be easily reproduced with no listeners and clusters configured in the STUNner config.

In such a situation, polling the Prometheus metrics from the pod results in an empty reply similar to curl: (52) Empty reply from server.

The resulting STUNner error:

2022/10/26 09:57:57 http: panic serving 127.0.0.1:43952: runtime error: invalid memory address or nil pointer dereference
goroutine 49 [running]:
net/http.(*conn).serve.func1()
	net/http/server.go:1850 +0xbf
panic({0x8f5080, 0xd66750})
	runtime/panic.go:890 +0x262
github.com/pion/turn/v2.(*Server).AllocationCount(...)
	github.com/pion/turn/[email protected]/server.go:137
github.com/l7mp/stunner.NewStunner.func1()
	github.com/l7mp/stunner/stunner.go:85 +0x1c
github.com/prometheus/client_golang/prometheus.(*valueFunc).Write(0xc0002298c0, 0x2?)
	github.com/prometheus/[email protected]/prometheus/value.go:96 +0x27
github.com/prometheus/client_golang/prometheus.processMetric({0xa4a8e8, 0xc0002298c0}, 0x4156b0?, 0x0?, 0x0)
	github.com/prometheus/[email protected]/prometheus/registry.go:605 +0x98
github.com/prometheus/client_golang/prometheus.(*Registry).Gather(0xc000076b40)
	github.com/prometheus/[email protected]/prometheus/registry.go:499 +0x81d
github.com/prometheus/client_golang/prometheus.(*noTransactionGatherer).Gather(0x414b86?)
	github.com/prometheus/[email protected]/prometheus/registry.go:1042 +0x22
github.com/prometheus/client_golang/prometheus/promhttp.HandlerForTransactional.func1({0x7fa0145ba0a8, 0xc000316050}, 0xc000336000)
	github.com/prometheus/[email protected]/prometheus/promhttp/http.go:135 +0xfe
net/http.HandlerFunc.ServeHTTP(0x909e00?, {0x7fa0145ba0a8?, 0xc000316050?}, 0x899bde?)
	net/http/server.go:2109 +0x2f
github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerInFlight.func1({0x7fa0145ba0a8, 0xc000316050}, 0xa4c500?)
	github.com/prometheus/[email protected]/prometheus/promhttp/instrument_server.go:56 +0xd4
net/http.HandlerFunc.ServeHTTP(0xa4c558?, {0x7fa0145ba0a8?, 0xc000316050?}, 0x0?)
	net/http/server.go:2109 +0x2f
github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerCounter.func1({0xa4c558?, 0xc00034c000?}, 0xc000336000)
	github.com/prometheus/[email protected]/prometheus/promhttp/instrument_server.go:142 +0xb8
net/http.HandlerFunc.ServeHTTP(0xc0000a9af0?, {0xa4c558?, 0xc00034c000?}, 0x0?)
	net/http/server.go:2109 +0x2f
net/http.(*ServeMux).ServeHTTP(0x0?, {0xa4c558, 0xc00034c000}, 0xc000336000)
	net/http/server.go:2487 +0x149
net/http.serverHandler.ServeHTTP({0xc0003100c0?}, {0xa4c558, 0xc00034c000}, 0xc000336000)
	net/http/server.go:2947 +0x30c
net/http.(*conn).serve(0xc000314000, {0xa4cc98, 0xc0000ba7e0})
	net/http/server.go:1991 +0x607
created by net/http.(*Server).Serve
	net/http/server.go:3102 +0x4db

Milestone v1.14: Performance: Per-allocation CPU load-balancing

This issue is to plan & discuss the performance optimizations that should go into v1.14.

Problem: Currently STUNner UDP performance is limited at about 100-200 kpps per UDP listener (i.e., per UDP Gateway/listener in the Kubernetes Gateway API terminology). This is because we allocate a single net.PacketConn per UDP listener, which is then drained by a single CPU thread/go-routine. This means that all client allocations made via that listener will share the same CPU thread and there is no way to load-balance client allocations across CPUs; i.e., each listener is restricted to a single CPU. If STUNner is exposed via a single UDP listener (the most common setting) then it will be restricted to about 1200-1500 mcore.

Notes:

  • This is not a problem in Kubernetes: instead of vertical scaling (let a single STUNner instance use as many CPUs as available), Kubernetes defaults to horizontal scaling; if a single stunnerd pod is a bottleneck we simply fire up more (e.g., using HPA). In fact, the single-CPU restriction makes HPA simpler since the CPU triggers are easier to set (e.g., we have to scale out when the average CPU load approaches 1000 mcores); when the application can vertically scale to some arbitrary number of CPUs by itself we never know how to fix the CPU trigger for HPA (this is when vertical scaling interferes with horizontal scaling). Eventually we'll have as many pods as CPU cores and Kubernetes will readily load-balance client connections across our pods. This makes us wonder whether to solve the vertical scaling problem at all, since there is very little use for such a feature in Kubernetes.
  • The single-CPU restriction applies per UDP listener: if STUNner is exposed via multiple UDP TURN listeners then each listener will receive a separate CPU thread.
  • This limitation applies to UDP only: for TCP, TLS and DTLS the TURN sockets are connected back to the client and therefore a separate CPU thread/go-routine is created for each allocation.

Solution: The plan is to create a separate net.Conn for each UDP allocation, by (1) sharing the same listener server address using REUSEADDR/REUSEPORT, (2) connecting each per-allocation connection back to the client (this will turn the net.PacketConn into a connected net.Conn), and (3) firing up a separate read-loop/go-routine per each allocation/socket. Extreme care must be taken though in implementing this: if we blindly create a new socket per received UDP packet then a simple UDP portscan will DoS the TURN listener.

Plan:

  1. Move the creation of per-allocation connection creation after the client has authenticated with the server, e.g., when the TURN allocation request has been successfully processed. Note that this still allows a client with a valid credential to DoS the server, so we need to quota per-client connections.

  2. Implement per-client quotas as per RFC8656, Section 7.2., "Receiving an Allocate Request", point 10:

At any point, the server MAY choose to reject the request with a 486 (Allocation Quota Reached) error if it feels the client is trying to exceed some locally defined allocation quota. The server is free to define this allocation quota any way it wishes, but it SHOULD define it based on the username used to authenticate the request and not on the client's transport address.

  3. Expose the client quota via turn.ServerConfig. Possibly also expose a setting to let users opt in to per-allocation CPU load-balancing.

  4. Test and upstream.

Feedback appreciated.
