
livekit / livekit

End-to-end stack for WebRTC. SFU media server and SDKs.

Home Page: https://docs.livekit.io

License: Apache License 2.0

Go 99.77% Shell 0.16% Dockerfile 0.06%
golang webrtc sfu media-server video

livekit's Introduction

[Image: the LiveKit icon, the name of the repository, and some sample code in the background]

LiveKit: Real-time video, audio and data for developers

LiveKit is an open source project that provides scalable, multi-user conferencing based on WebRTC. It's designed to provide everything you need to build real-time video, audio, and data capabilities in your applications.

LiveKit's server is written in Go, using the awesome Pion WebRTC implementation.


Features

Documentation & Guides

https://docs.livekit.io

Live Demos

Ecosystem

  • Agents: build real-time multimodal AI applications with programmable backend participants
  • Egress: record or multi-stream rooms and export individual tracks
  • Ingress: ingest streams from external sources like RTMP, WHIP, HLS, or OBS Studio

SDKs & Tools

Client SDKs

Client SDKs enable your frontend to include interactive, multi-user experiences.

| Language | Repo | Declarative UI | Links |
| --- | --- | --- | --- |
| JavaScript (TypeScript) | client-sdk-js | React | docs · JS example · React example |
| Swift (iOS / macOS) | client-sdk-swift | SwiftUI | docs · example |
| Kotlin (Android) | client-sdk-android | Compose | docs · example · Compose example |
| Flutter (all platforms) | client-sdk-flutter | native | docs · example |
| Unity WebGL | client-sdk-unity-web | | docs |
| React Native (beta) | client-sdk-react-native | native | |
| Rust | client-sdk-rust | | |

Server SDKs

Server SDKs enable your backend to generate access tokens, call server APIs, and receive webhooks. In addition, the Go SDK includes client capabilities, enabling you to build automations that behave like end-users.

| Language | Repo | Docs |
| --- | --- | --- |
| Go | server-sdk-go | docs |
| JavaScript (TypeScript) | server-sdk-js | docs |
| Ruby | server-sdk-ruby | |
| Java (Kotlin) | server-sdk-kotlin | |
| Python (community) | python-sdks | |
| PHP (community) | agence104/livekit-server-sdk-php | |
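
For a taste of what the server SDKs look like, here is a minimal sketch of listing active rooms with the Go SDK. It follows the Go SDK's documented RoomService client; treat import paths and exact signatures as assumptions rather than canonical usage.

```go
package main

import (
	"context"
	"fmt"

	"github.com/livekit/protocol/livekit"
	lksdk "github.com/livekit/server-sdk-go"
)

func main() {
	// Assumes a local dev server and the placeholder devkey/secret pair
	// described in Getting Started below.
	roomClient := lksdk.NewRoomServiceClient("http://localhost:7880", "devkey", "secret")

	res, err := roomClient.ListRooms(context.Background(), &livekit.ListRoomsRequest{})
	if err != nil {
		panic(err)
	}
	for _, room := range res.Rooms {
		fmt.Println(room.Name, room.NumParticipants)
	}
}
```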

Tools

Install

Tip

We recommend installing LiveKit CLI along with the server. It lets you access server APIs, create tokens, and generate test traffic.

The following will install LiveKit's media server:

macOS

brew install livekit

Linux

curl -sSL https://get.livekit.io | bash

Windows

Download the latest release here

Getting Started

Starting LiveKit

Start LiveKit in development mode by running livekit-server --dev. It'll use a placeholder API key/secret pair.

API Key: devkey
API Secret: secret

To customize your setup for production, refer to our deployment docs.

Creating an access token

A user connecting to a LiveKit room requires an access token. Access tokens (JWT) encode the user's identity and the room permissions they've been granted. You can generate a token with our CLI:

livekit-cli create-token \
    --api-key devkey --api-secret secret \
    --join --room my-first-room --identity user1 \
    --valid-for 24h
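
The same token can also be minted from your backend. A sketch with the Go server SDK's auth package, mirroring the CLI flags above (assuming the documented AccessToken builder API):

```go
package main

import (
	"fmt"
	"time"

	"github.com/livekit/protocol/auth"
)

func main() {
	at := auth.NewAccessToken("devkey", "secret")
	at.AddGrant(&auth.VideoGrant{RoomJoin: true, Room: "my-first-room"}).
		SetIdentity("user1").
		SetValidFor(24 * time.Hour)

	token, err := at.ToJWT()
	if err != nil {
		panic(err)
	}
	fmt.Println(token) // hand this to the client that joins the room
}
```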

Test with example app

Head over to our example app and enter a generated token to connect to your LiveKit server. This app is built with our React SDK.

Once connected, your video and audio are now being published to your new LiveKit instance!

Simulating a test publisher

livekit-cli join-room \
    --url ws://localhost:7880 \
    --api-key devkey --api-secret secret \
    --room my-first-room --identity bot-user1 \
    --publish-demo

This command publishes a looped demo video to a room. Due to how the video clip was encoded (keyframes every 3s), there's a slight delay before the browser has sufficient data to begin rendering frames. This is an artifact of the simulation.

Deployment

Use LiveKit Cloud

LiveKit Cloud is the fastest and most reliable way to run LiveKit. Every project gets free monthly bandwidth and transcoding credits.

Sign up for LiveKit Cloud.

Self-host

Read our deployment docs for more information.

Building from source

Prerequisites:

  • Go 1.20+ is installed
  • GOPATH/bin is in your PATH

Then run

git clone https://github.com/livekit/livekit
cd livekit
./bootstrap.sh
mage

Contributing

We welcome your contributions toward improving LiveKit! Please join us on Slack to discuss your ideas and/or PRs.

License

LiveKit server is licensed under Apache License v2.0.


LiveKit Ecosystem
Real-time SDKs: React Components · JavaScript · iOS/macOS · Android · Flutter · React Native · Rust · Python · Unity (web) · Unity (beta)
Server APIs: Node.js · Golang · Ruby · Java/Kotlin · Python · Rust · PHP (community)
Agents Frameworks: Python · Playground
Services: LiveKit server · Egress · Ingress · SIP
Resources: Docs · Example apps · Cloud · Self-hosting · CLI

livekit's People

Contributors

akarl10 · ashellunts · bekriebel · biglittlebigben · boks1971 · cnderrauber · davidliu · davidzhao · dennwc · dsa · frostbyte73 · hn8 · im-pingo · lukasio · matkam · nbsp · ocupe · paulwe · ramakrishnachilaka · real-danm · renovate[bot] · ronnieeeeee · ryouaki · samhumeau · sean-der · shishirng · slzeroth · theomonnom · tomxiong · treyhakanson


livekit's Issues

Room with > 2 Participants Causes Issues

When running a call with > 2 people, there's some weird behavior that happens:

  1. Cannot reach a state where all callers can see each other (some callers see a subset of all publishers)
  2. Some participants lose the stream of a user they're already subscribed to when a new user calls in

I've attached a file that contains the server logs + all client side logs for a call with 3 distinct users.

Info to reproduce:

  • Server Version: 0.3.2
  • Client Version 0.1.3
  • Multi-node mode

Test was performed after a fresh reboot of both the livekit-server and the redis server.

I will publish multiple log files, each containing a distinct reproduction of the issue described above. Anytime a test is performed, I restart all services, close the browser on all clients, then start over and dump the logs after all participants call in.

Discover 2021-02-03T21_59_56.345Z - 2021-02-03T22_14_56.345Z.csv.zip

Missing the ability to lock down a node, and force new calls to be initialized on a different one

I'm currently experimenting with running LiveKit in a Kubernetes environment, and I'm load testing it heavily.

I'm using a Redis instance to balance the rooms, and it all works well until we hit CPU limits.

The scenario is:

  • We have CPU requests/limits set, which means our pods cannot use e.g. more than 3.5CPUs.
  • We have HPA enabled, so new pods come up depending on the current avg CPU usage of the pods.

New clients connect via LB, and if one of LK pods is currently running at e.g. >95% CPU, then it would be advisable for new rooms to be created on ANOTHER node, instead of the busy one. But, at the moment, I don't see a way to do that without altering the source code in some way.

So, I feel like there is no way to tell a specific node "do not accept new rooms" at runtime. This can cause a lot of chaos in autoscaling environments because already-busy nodes will keep receiving new rooms 🤔

Is this a currently missing feature?

Thanks!

High activity rooms cause `"error": "channel is full"` errors

Hi,

We had a live call with over 90 people. Things were going fine early on, but then we noticed users weren't able to publish video/audio, and suddenly some users who did publish would come through as a black screen. The error we were seeing over and over in the server looks like this:


Apr 2, 2021 @ 14:27:11.776-777  <14>1 2021-04-02T18:27:10.859405Z - - - - - 2021-04-02T18:27:10.859Z	ERROR	routing/redisrouter.go:311	error processing signal message	{"error": "channel is full"}
github.com/livekit/livekit-server/pkg/routing.(*RedisRouter).redisWorker
	/workspace/pkg/routing/redisrouter.go:311
(the error line and this stack frame repeat many times within the same second)

It looks like the channel used for the socket is filling up and once that happens, the connected user doesn't receive any new events and can't publish any new events either
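
For readers unfamiliar with the failure mode: this error is characteristic of a bounded Go channel written with a non-blocking send. An illustrative sketch (not LiveKit's actual code) of the pattern:

```go
package main

import (
	"errors"
	"fmt"
)

var errChannelFull = errors.New("channel is full")

// trySend drops the message instead of blocking when the buffer is full.
// Once the consumer falls behind, every later send fails, so the
// participant stops receiving (and effectively publishing) signal events.
func trySend(sink chan []byte, msg []byte) error {
	select {
	case sink <- msg:
		return nil
	default:
		return errChannelFull
	}
}

func main() {
	sink := make(chan []byte, 2) // deliberately tiny buffer
	for i := 0; i < 3; i++ {
		if err := trySend(sink, []byte("event")); err != nil {
			fmt.Println("dropped:", err)
		}
	}
}
```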

Recording each video and audio track seperately

Hi,
Thanks for developing livekit. It looks fantastic.
I'm building an app to perform actions based on user behaviours.
Is it possible to get the tracks of each individual participant on the server?

Thanks.

Cannot connect to insecure Websocket

I am trying to set up a server to test this and use your frontend example before I build my own, but every time I try to connect I get an error saying it cannot connect to an insecure WebSocket from HTTPS. I then set up a reverse proxy and secured the backend, but it still gives me the same issue! Any advice?

Clients cannot reconnect properly when server restarts

When the server is restarted while active rooms are in place, we leave users "stuck" in a weird state due to:

  1. clients will attempt to resume the session with ?reconnect=1
  2. server picks up the new signal connection thinking it's a new client, and sends JoinResponse
  3. clients aren't expecting a new JoinResponse and do not handle it
  4. at this point, the server has no knowledge of existing tracks, while client thinks it has reconnected to the server correctly.
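
A sketch of the kind of branch the signal handler needs so a resume attempt is distinguished from a fresh join (all names here are hypothetical; this is not the actual rtcservice.go code):

```go
package main

import (
	"net/http"
	"sync"
)

type session struct{ /* signaling connection state, published tracks, ... */ }

type signalService struct {
	mu       sync.Mutex
	sessions map[string]*session // keyed by participant identity
}

func (s *signalService) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	identity := r.URL.Query().Get("identity") // in reality derived from the access token
	reconnect := r.URL.Query().Get("reconnect") == "1"

	s.mu.Lock()
	existing := s.sessions[identity]
	s.mu.Unlock()

	switch {
	case reconnect && existing != nil:
		// Resume the prior session: reattach signaling, do NOT send JoinResponse.
	case reconnect && existing == nil:
		// State was lost (e.g. server restart). Refuse the resume so the client
		// falls back to a full reconnect instead of receiving an unexpected
		// JoinResponse while it still believes its old tracks exist.
		http.Error(w, "cannot resume session", http.StatusGone)
	default:
		// Fresh join: create session state and send JoinResponse.
	}
}
```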

Ability to restrict participants from publishing data

Currently any connected participant may publish data packets to the room. It'd be great if there's a way to limit that permission as part of the access token.

I propose we introduce a canPublishData permission in the video grant. Participants with it set to false should not be able to publish data.
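
For illustration, the proposed grant would slot into token creation like this (a fragment building on the token example in Getting Started; the CanPublishData field name follows this proposal and is an assumption):

```go
deny := false
grant := &auth.VideoGrant{
	RoomJoin:       true,
	Room:           "my-room",
	CanPublishData: &deny, // proposed: this participant may not publish data packets
}
at := auth.NewAccessToken("devkey", "secret")
at.AddGrant(grant).SetIdentity("viewer1")
```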

Custom claims in the `AccessToken`

Support for custom claims in the access token that can be forwarded alongside all of the Participant objects could allow for building new features on top of LiveKit without having to modify the codebase.

Simple example:

  • Seat ID for rendering tracks on an exact location for all participants
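
Today, the closest approximation is the participant metadata field, which is forwarded with the Participant object. A sketch using the Go auth package (the JSON payload is illustrative):

```go
at := auth.NewAccessToken("devkey", "secret")
at.AddGrant(&auth.VideoGrant{RoomJoin: true, Room: "my-room"}).
	SetIdentity("user1").
	SetMetadata(`{"seat": 12}`) // visible to other participants alongside the Participant object
```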

[Regression] Server version `>= 0.3.2` first publisher audio does not come through

Hi,

Version info:

  • live-server >= 0.3.2
  • client-sdk-js >= 0.1.0

I tested the sample app found in test/sample.ts of the client-sdk-js project and found that the first publisher's audio never comes through.

When using server version 0.3.1 and below, the first publisher's audio does come through.

Steps to reproduce:

  1. Run the server version >= 0.3.2
  2. Call in with first user
  3. Call in with second user
  4. Mute the second user using the Mute Audio button (on the second user's webpage)
  5. Second user cannot hear first user

Bridge rooms

Hi,

from the documentation:

In a multi-node setup, LiveKit can support a large number of concurrent rooms. However, there are limits to the number of participants in a room since, for now, a room must fit on a single node.

My question would be if you have plans for publishing media across rooms? The main use case probably being a single speaker being heard/seen across different rooms. I'm eagerly waiting for the roadmap, but couldn't wait to ask that question after reading through the docs!

Clients cannot reconnect when running without Redis

Describe the bug
When clients attempt to reconnect to the server, the server does not resume the connection if Redis is not used in the setup.

Server

  • Version: 0.12.0
  • Environment: local dev
  • without redis.

Client

  • SDK: JS, iOS

Audio downtrack stops sending after a new participant joins.

Describe the bug
This is a tricky one. Sometimes in a room, when a new participant joins, one of the existing participants loses one of their subscribed tracks. The track is supposedly still there, and the new participant can hear everyone. But a single person stops being able to hear one other participant while still being able to hear everyone else.

Server

  • Version: 0.10.4
  • Environment: EKS/Kubernetes

Client

  • SDK: React
  • Version: 0.3.11

To Reproduce
Steps to reproduce the behavior:

  1. two users in an audio room (russ and I)
  2. another user joins
  3. I stopped getting russ's audio, with webrtc-internals showing packetReceived/s going to 0
  4. russ and the new user could hear each other fine.

The issue happens when a new downtrack is added to the receiver. It seems that one of the existing downtracks gets dropped or somehow stops receiving data.

Manual disconnect should fire immediately

Right now it seems that when calling room.disconnect(), it waits for roughly ~7 seconds before sending the event to other clients. It seems like it's maybe waiting for connections to die or something? In practice this means when someone disconnects their room, the other clients see a frozen image for that time until the disconnect event fires and we can do removal and cleanup.

cannot get default stun server address

Describe the bug
I was trying to run the LiveKit server in Google Cloud,
but the connection failed because the srflx ICE candidate was bound to a local IP address.
When debugging the source code, GetLocalIPAddress was not called.

To Reproduce
Steps to reproduce the behavior:

  1. one client connects to the room
  2. the connection fails on the cloud deployment

Expected behavior
The STUN server argument must be passed to the GetExternalIP function.

I think this code should be inserted into the determineIP function:
var stunServers []string
if len(conf.RTC.StunServers) == 0 {
	stunServers = DEFAULT_STUN_SERVERS
} else {
	stunServers = conf.RTC.StunServers
}
ip, err := GetExternalIP(stunServers)

Trying to transmit a stream of go native image.Image

Hi there!
First, thanks to authors and contributors of this project, livekit is amazing.

Actually, I'm trying to send a stream of images (native image.Image) to my livekit room.

My first approach was to encode the images into H264 frames using this package (github.com/gen2brain/x264-go), but when starting the publishing track I got an "unable to start track, codec is not supported by remote" error, and I don't know if it depends on the other clients or something else. I'm testing this approach with your React client, but I don't know how to make it accept the H264 codec (if I'm understanding the error correctly).

Well, after some hours with this approach, I decided to switch to another codec: VP8. The only valid (and maintained) package I found is a wrapper of libvpx, but it is lower level, I think, so I'm currently reading and learning how to encode my image.Image frames with this codec, though I have doubts about my approach.

If anyone has ideas for another way to solve this problem, I would appreciate it very much.

Production Ready

Hello, I tried this and it looks great to me.
I read your online documentation; it's well documented. However, it isn't clear whether the same setup should be used for production purposes.
Can you help explain what the ideal way to run this in production would be?

Since it ships with Docker, I am thinking of using AWS ECS for this. It would be great if you could provide some context in that regard.

Thank You

Help Setting Up Production Server

Hey everyone, I am requesting some guidance in setting up a production server. I am still pretty new to Docker and Kubernetes. I currently have a DigitalOcean Kubernetes cluster running with an nginx ingress controller installed. I have read the documentation and need a little help with using a Helm chart to finish setting it up the right way. If anyone can help me, I would very much appreciate it!

read tcp 100.96.182.189:7880->100.97.78.3:34846: use of closed network connection

Seeing a few of these:

<14>1 2021-02-10T01:05:35.052605Z - - - - - 2021-02-10T01:05:35.052Z	ERROR	service/rtcservice.go:147	error reading from websocket	{"error": "read tcp 100.96.182.189:7880->100.97.78.3:34846: use of closed network connection"}

Results in the following message on the client side:

"websocket connection closed" ""

Support for multiple video codecs in the same room

When a participant has an active track published with VP8 and another subscriber has successfully received that codec, a new H264 track published to the room will fail to reach the existing subscriber.

This is due to Pion's MediaEngine making the simplifying assumption that, once a video codec has been negotiated in the session, other codecs that could support newly added tracks can be ignored.

This limitation is in Pion, and here's a PR aimed at fixing it.
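
For context, this is how multiple video codecs are registered with Pion's MediaEngine; the linked PR concerns how these registrations are honored during renegotiation, so registering both codecs up front is necessary but was not by itself sufficient. A sketch:

```go
package main

import "github.com/pion/webrtc/v3"

func main() {
	m := &webrtc.MediaEngine{}
	// Register both VP8 and H264 so either codec can carry later tracks,
	// regardless of which one the first track negotiated.
	for _, c := range []webrtc.RTPCodecParameters{
		{RTPCodecCapability: webrtc.RTPCodecCapability{MimeType: webrtc.MimeTypeVP8, ClockRate: 90000}, PayloadType: 96},
		{RTPCodecCapability: webrtc.RTPCodecCapability{MimeType: webrtc.MimeTypeH264, ClockRate: 90000}, PayloadType: 102},
	} {
		if err := m.RegisterCodec(c, webrtc.RTPCodecTypeVideo); err != nil {
			panic(err)
		}
	}
	_ = webrtc.NewAPI(webrtc.WithMediaEngine(m))
}
```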

[Feature Request] API for Broadcasting Clients

One use case I find myself having over and over is the need to bring in media sources that don't originate from browsers, such as media sources from ffmpeg or gstreamer.

In the past I've used mediasoup to bring these sources into rooms via RTP using what mediasoup calls "plain transport".

For Pion, there are examples on Github of converting rtsp sources to webrtc (https://github.com/deepch/RTSPtoWebRTC).

I know it's possible to use virtual camera devices on a stream by stream basis to bring in these sources via a browser, but that doesn't scale and doesn't work very well in a server context.

It would be really helpful for this use case if livekit had an API that made this kind of "broadcaster" functionality easier to implement inside livekit rooms.
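
With Pion, the media-plane half of such a broadcaster is already straightforward: a TrackLocalStaticSample accepts encoded frames from any source. A sketch of that half; publishing the track into a LiveKit room is exactly the piece the requested API would provide, and framesFromExternalSource is a hypothetical stand-in for an ffmpeg/gstreamer/RTSP pipeline:

```go
package main

import (
	"time"

	"github.com/pion/webrtc/v3"
	"github.com/pion/webrtc/v3/pkg/media"
)

// framesFromExternalSource is a hypothetical stand-in for a pipeline that
// yields encoded VP8 frames.
func framesFromExternalSource() <-chan []byte {
	ch := make(chan []byte)
	close(ch) // placeholder: a real source would stream frames here
	return ch
}

func main() {
	track, err := webrtc.NewTrackLocalStaticSample(
		webrtc.RTPCodecCapability{MimeType: webrtc.MimeTypeVP8},
		"video", "broadcaster",
	)
	if err != nil {
		panic(err)
	}

	// Feed encoded frames into the track; attaching this track to a LiveKit
	// room is the part the requested broadcaster API would cover.
	for frame := range framesFromExternalSource() {
		if err := track.WriteSample(media.Sample{Data: frame, Duration: 33 * time.Millisecond}); err != nil {
			panic(err)
		}
	}
}
```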

Simulcast video tracks with lower resolution will always be pending

Describe the bug
When using simulcast with low resolutions, video tracks will always be in a pending state. This causes events like mute status changes to never be recognized by the server or broadcast to clients.

Server

  • Version: [0.12.5]
  • Environment: Docker image on fly.io

Client

  • SDK: js
  • Version: 0.12.1

To Reproduce
Steps to reproduce the behavior:

  1. Start a conference with a lower video resolution (VideoPresets43.vga) and simulcast enabled
  2. Have debug logging enabled
  3. Mute/unmute the video track
  4. Note that the "mute status changed" log is never printed

Expected behavior
Tracks should not be considered pending if the total number of expected tracks are being sent, regardless of that number.

Additional context
This appears to be getting caused by a combination of how the JS Client SDK handles simulcast tracks with lower resolution and expectations of the livekit-server.

In participant.go - SetTrackMuted, if a track is considered pending the function returns early and does not change the Muted status server side or send onTrackUpdated events. https://github.com/livekit/livekit-server/blob/1bcaf9d0ea2681aadb6f60bfe42062f252903a42/pkg/rtc/participant.go#L586-L588

In participant.go - onMediaTrack, pending tracks are only removed from the pending status if the track isn't simulcasted, or if the number of established tracks is 3. https://github.com/livekit/livekit-server/blob/1bcaf9d0ea2681aadb6f60bfe42062f252903a42/pkg/rtc/participant.go#L824-L827

In the JS Client SDK, if the starting resolution is not high enough, only one simulcast track (for a total of two tracks) will be sent. https://github.com/livekit/client-sdk-js/blob/64dc4b52fc363fe34e8ac58427f233868b0d0e7e/src/room/participant/LocalParticipant.ts#L389-L417

Either the client code would need to be modified to always send two tracks, or the server code needs to drop the assumption that simulcasted tracks will always have three active encoding levels.
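
A sketch of the second option; the names are hypothetical, not the actual participant.go structures:

```go
// established reports whether a published track should leave the pending
// state: clear it once all layers the client actually intends to send have
// arrived, instead of hard-coding three simulcast layers.
func established(simulcast bool, receivedLayers, expectedLayers int) bool {
	if !simulcast {
		return true
	}
	return receivedLayers >= expectedLayers
}
```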

Publishing streams freeze and new connections unable to join (logs included)

This is the third time we've seen this issue where the publishing streams freeze up (and sometimes the audio goes in a glitchy loop that does not recover). After this initial instance of publishing streams freezing, no one is able to connect to the livekit server. (when the publisher tries to recover by reloading, they are no longer able to connect. Same with subscribers).

Also, this may or may not be related, but every time we've seen this issue, it's happened when we set a UDP port range (multi port). When we set the single port udp_port setting, we have not experienced this. (EDIT - ok, we just encountered it once on the single port setting, so maybe this has nothing to do with it. But I'll still leave the original comment in, just in case something comes to your mind)

We are on livekit-server v0.9.3. CPU and memory usage are very healthy.

There isn't a specific reproducible action that I've observed that the publisher(s) took to get to this problem.

The errors I'm seeing in the logs are below. I'm also attaching full logs around the incident in case they're helpful.

Jun 4, 2021 @ 17:06:10.803-805  <14>1 2021-06-05T00:06:10.703Z - - - - - 2021-06-05T00:06:10.703Z	ERROR	rtc/transport.go:159	could not negotiate	{"error": "InvalidModificationError: invalid proposed signaling state transition: have-local-offer->SetLocal(offer)->have-local-offer"}
Jun 4, 2021 @ 17:06:10.803-805  <14>1 2021-06-05T00:06:10.703Z - - - - - 2021-06-05T00:06:10.703Z	ERROR	rtc/transport.go:207	could not set local description	{"err": "InvalidModificationError: invalid proposed signaling state transition: have-local-offer->SetLocal(offer)->have-local-offer"}
github.com/livekit/livekit-server/pkg/rtc.(*PCTransport).CreateAndSendOffer
	/workspace/pkg/rtc/transport.go:158
github.com/livekit/livekit-server/pkg/rtc.(*PCTransport).Negotiate.func1
	/workspace/pkg/rtc/transport.go:159
(both errors and their stack frames repeat many times within the same few milliseconds)

More logs around the error: Untitled document.txt

Sometimes clients will be stuck on the `connecting to wss://SERVER_URL/rtc?access_token=` message

I'm not sure what logs to provide for this because it happens even on a fresh server boot. Here are the logs:

2021-02-18T10:17:13.589Z        INFO    server/main.go:145      configured key provider {"num_keys": 1}
2021-02-18T10:17:13.711Z        INFO    server/main.go:178      using multi-node routing via redis     {"address": "livespot-lk-redis-production:6379"}
2021-02-18T10:17:13.714Z        INFO    service/roommanager.go:88       deleting room state     {"room": "2c035db5-8c1e-47df-903f-03e245b496fc"}
2021-02-18T10:17:13.716Z        INFO    service/roommanager.go:88       deleting room state     {"room": "07fb1453-21ca-4cd8-ba51-28c5775ec56d"}
2021-02-18T10:17:13.718Z        DEBUG   routing/redisrouter.go:288      starting redisWorker    {"node": "ND_uduvfI4D"}
2021-02-18T10:17:13.718Z        INFO    service/server.go:110   starting LiveKit server {"address": ":7880", "nodeId": "ND_uduvfI4D", "version": "0.5.1"}
2021-02-18T13:29:59.805Z        INFO    service/rtcservice.go:113       new client WS connected {"connectionId": "CO_YdqJFkYLMt8M", "room": "RM_XHgJhgiy8eFS", "roomName": "8fb5bc1e-1918-4240-9e05-b1c01420152c", "name": "0xFFC80bd2A413f37E125D39C281Cc85B88dcebF20"}
2021-02-18T13:30:11.927Z        INFO    service/rtcservice.go:97        WS connection closed    {"participant": "0xFFC80bd2A413f37E125D39C281Cc85B88dcebF20", "connectionId": "CO_YdqJFkYLMt8M"}
2021-02-18T13:30:15.610Z        INFO    service/rtcservice.go:113       new client WS connected {"connectionId": "CO_C2rQZ7M839Yn", "room": "RM_XHgJhgiy8eFS", "roomName": "8fb5bc1e-1918-4240-9e05-b1c01420152c", "name": "0xFFC80bd2A413f37E125D39C281Cc85B88dcebF20"}

On the client side all we see is:

connecting to wss://SERVER_URL/rtc?access_token=

And in the network tab:

[screenshots attached]

The only solution to this that we've found is to reboot the livekit-server

Server crashed - `runtime: note: mlock workaround for kernel bug failed with errno 12`

I'm not sure how this happened, but the server crashed all of a sudden in the middle of a call:

"<14>1 2021-02-10T22:39:58.667188Z - - - - - 2021-02-10T22:39:58.667Z\tDEBUG\trtc/participant.go:460\tadded subscribedTrack\t{\"srcParticipant\": \"PA_dfwVQBAcTeugkyG35GxzcV\", \"participant\": \"0xc9e9464748852EAfC09E8c52A0e09240BAfaC25C\"}\n"
--
"<14>1 2021-02-10T22:39:58.742327Z - - - - - 2021-02-10T22:39:58.742Z\tDEBUG\trtc/participant.go:499\tstarting server negotiation\t{\"participant\": \"0xc9e9464748852EAfC09E8c52A0e09240BAfaC25C\"}\n"
"<14>1 2021-02-10T22:39:58.642056Z - - - - - 2021-02-10T22:39:58.641Z\tDEBUG\trtc/participant.go:137\tnegotiation needed\t{\"participant\": \"0xc9e9464748852EAfC09E8c52A0e09240BAfaC25C\"}\n"
"<14>1 2021-02-10T22:39:58.641924Z - - - - - 2021-02-10T22:39:58.641Z\tDEBUG\trtc/participant.go:460\tadded subscribedTrack\t{\"srcParticipant\": \"PA_dfwVQBAcTeugkyG35GxzcV\", \"participant\": \"0xc9e9464748852EAfC09E8c52A0e09240BAfaC25C\"}\n"
"<14>1 2021-02-10T22:39:58.742779Z - - - - - 2021-02-10T22:39:58.742Z\tDEBUG\trtc/participant.go:519\tsending offer to participant\t{\"participant\": \"0xc9e9464748852EAfC09E8c52A0e09240BAfaC25C\"}\n"
"<14>1 2021-02-10T22:39:58.742634Z - - - - - 2021-02-10T22:39:58.742Z\tDEBUG\trtc/participant.go:519\tsending offer to participant\t{\"participant\": \"0xFFC80bd2A413f37E125D39C281Cc85B88dcebF20\"}\n"
"<14>1 2021-02-10T22:39:58.934109Z - - - - - 2021-02-10T22:39:58.933Z\tDEBUG\trtc/participant.go:300\tsetting participant answer\t{\"participant\": \"0xFFC80bd2A413f37E125D39C281Cc85B88dcebF20\"}\n"
"<14>1 2021-02-10T22:39:58.877805Z - - - - - 2021-02-10T22:39:58.877Z\tDEBUG\trtc/participant.go:300\tsetting participant answer\t{\"participant\": \"0xc9e9464748852EAfC09E8c52A0e09240BAfaC25C\"}\n"
"<14>1 2021-02-10T22:40:23.674224Z - - - - - github.com/pion/ion-sfu/pkg/sfu.(*sequencer).getSeqNoPairs(0x0, 0xc001fbc030, 0x3, 0x11, 0x2, 0x2, 0x0)\n"
"<14>1 2021-02-10T22:40:23.674235Z - - - - - \t/go/pkg/mod/github.com/davidzhao/[email protected]/pkg/sfu/sequencer.go:117 +0x37\n"
"<14>1 2021-02-10T22:40:23.674216Z - - - - - goroutine 4321 [running]:\n"
"<14>1 2021-02-10T22:40:23.674189Z - - - - - [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xbe48b7]\n"
"<14>1 2021-02-10T22:40:23.674206Z - - - - - \n"
"<14>1 2021-02-10T22:40:23.674117Z - - - - - panic: runtime error: invalid memory address or nil pointer dereference\n"
"<14>1 2021-02-10T22:40:23.674461Z - - - - - \t/go/pkg/mod/github.com/pion/srtp/[email protected]/stream_srtcp.go:30 +0x56\n"
"<14>1 2021-02-10T22:40:23.674446Z - - - - - \t/go/pkg/mod/github.com/davidzhao/[email protected]/pkg/buffer/rtcpreader.go:22 +0x73\n"
"<14>1 2021-02-10T22:40:23.674432Z - - - - - github.com/pion/ion-sfu/pkg/buffer.(*RTCPReader).Write(0xc0029d72c0, 0xc004196000, 0x2c, 0x2000, 0x0, 0x0, 0x0)\n"
"<14>1 2021-02-10T22:40:23.674399Z - - - - - github.com/pion/ion-sfu/pkg/sfu.(*DownTrack).Bind.func1(0xc004196000, 0x2c, 0x2000)\n"
"<14>1 2021-02-10T22:40:23.674245Z - - - - - github.com/pion/ion-sfu/pkg/sfu.(*DownTrack).handleRTCP(0xc000e9ea80, 0xc004196000, 0x2c, 0x2000)\n"
"<14>1 2021-02-10T22:40:23.674469Z - - - - - github.com/pion/srtp/v2.(*SessionSRTCP).decrypt(0xc0037c94a0, 0xc004196000, 0x3a, 0x2000, 0x3a, 0x0)\n"
"<14>1 2021-02-10T22:40:23.674408Z - - - - - \t/go/pkg/mod/github.com/davidzhao/[email protected]/pkg/sfu/downtrack.go:100 +0x48\n"
"<14>1 2021-02-10T22:40:23.674386Z - - - - - \t/go/pkg/mod/github.com/davidzhao/[email protected]/pkg/sfu/downtrack.go:484 +0x6d8\n"
"<14>1 2021-02-10T22:40:23.674454Z - - - - - github.com/pion/srtp/v2.(*ReadStreamSRTCP).write(0xc001ccf1c0, 0xc004196000, 0x2c, 0x2000, 0xe25880, 0xf30b00, 0xc001ccf1c0)\n"
"<14>1 2021-02-10T22:40:23.674475Z - - - - - \t/go/pkg/mod/github.com/pion/srtp/[email protected]/session_srtcp.go:173 +0x1fa\n"
"<14>1 2021-02-10T22:40:23.674511Z - - - - - runtime: note: your Linux kernel may be buggy\n"
"<14>1 2021-02-10T22:40:23.674517Z - - - - - runtime: note: see https://golang.org/wiki/LinuxKernelSignalVectorBug\n"
"<14>1 2021-02-10T22:40:23.674490Z - - - - - \t/go/pkg/mod/github.com/pion/srtp/[email protected]/session.go:141 +0x129\n"
"<14>1 2021-02-10T22:40:23.674482Z - - - - - github.com/pion/srtp/v2.(*session).start.func1(0xc0037c94a0, 0xc0037fa7b0, 0xf30b80, 0xc0037c94a0)\n"
"<14>1 2021-02-10T22:40:23.674497Z - - - - - created by github.com/pion/srtp/v2.(*session).start\n"
"<14>1 2021-02-10T22:40:23.674504Z - - - - - \t/go/pkg/mod/github.com/pion/srtp/[email protected]/session.go:120 +0x226\n"
"<14>1 2021-02-10T22:40:23.674524Z - - - - - runtime: note: mlock workaround for kernel bug failed with errno 12\n"
"<14>1 2021-02-10T22:40:25.838367Z - - - - - 2021-02-10T22:40:25.838Z\tINFO\tserver/main.go:178\tusing multi-node routing via redis\t{\"address\": \"livespot-lk-redis-production:6379\"}\n"
"<14>1 2021-02-10T22:40:25.843262Z - - - - - 2021-02-10T22:40:25.843Z\tDEBUG\trouting/redisrouter.go:317\tstarting redisWorker\t{\"node\": \"ND_0000000000000000\"}\n"
"<14>1 2021-02-10T22:40:25.843473Z - - - - - 2021-02-10T22:40:25.843Z\tINFO\tservice/server.go:109\tstarting LiveKit server\t{\"address\": \":7880\", \"nodeId\": \"ND_0000000000000000\", \"version\": \"0.3.3\"}\n"
"<14>1 2021-02-10T22:40:25.699517Z - - - - - 2021-02-10T22:40:25.699Z\tINFO\tserver/main.go:145\tconfigured key provider\t{\"num_keys\": 1}\n"

Panic when several users are connected with simulcast enabled

Describe the bug
The livekit server throws a panic error and stops functioning after several users are connected with simulcast enabled. The error thrown is:

{
  "level": "error",
  "ts": 1630353563.8694067,
  "logger": "livekit",
  "caller": "runtime/panic.go:965",
  "msg": "recovered panic",
  "panic": "runtime error: invalid memory address or nil pointer dereference",
  "error": "runtime error: invalid memory address or nil pointer dereference",
  "stacktrace": "runtime.gopanic\n\t/usr/local/go/src/runtime/panic.go:965\nruntime.panicmem\n\t/usr/local/go/src/runtime/panic.go:212\nruntime.sigpanic\n\t/usr/local/go/src/runtime/signal_unix.go:734\ngithub.com/pion/ion-sfu/pkg/buffer.(*Buffer).GetSenderReportData\n\t/go/pkg/mod/github.com/livekit/[email protected]/pkg/buffer/buffer.go:556\ngithub.com/pion/ion-sfu/pkg/sfu.(*WebRTCReceiver).GetSenderReportTime\n\t/go/pkg/mod/github.com/livekit/[email protected]/pkg/sfu/receiver.go:414\ngithub.com/pion/ion-sfu/pkg/sfu.(*DownTrack).CreateSenderReport\n\t/go/pkg/mod/github.com/livekit/[email protected]/pkg/sfu/downtrack.go:338\ngithub.com/livekit/livekit-server/pkg/rtc.(*ParticipantImpl).downTracksRTCPWorker\n\t/workspace/pkg/rtc/participant.go:912"
}

Once this error is thrown, I will see that some video streams stop getting shared to the newest connected user. Shortly after this, signaling events stop being passed between connected users and no new users can connect or reconnect to the server.

Server

  • Version: master branch at commit 6a88bcc
  • Environment: Fly.io using a docker container

Client

  • SDK: client-sdk-js
  • Version: main branch at commit f22916c

To Reproduce
Steps to reproduce the behavior:
The crash is inconsistent, but seems to occur after 6 or 7 users connect to a session, all sharing video and audio with simulcast enabled. If simulcast is not enabled, I have not reproduced the error.

Expected behavior
The error should not occur, or the server should recover cleanly.

Additional context
I'm not sure if this affects the issue, but my video streams are all opened with low resolution streams:

resolution: {
  width: { ideal: 320 },
  height: { ideal: 240 },
}

Even though I'm not requesting high quality streams, simulcast still seems to be helpful for slower clients.

I want to optimize the intranet delay to 25-50ms, is it possible?


I ran a test on Kubernetes on the intranet. The program was deployed behind the k8s load balancer and the client machines connected using wss. With only two clients, the test showed that the video delay was very high, averaging around 180ms, even though the ping between intranet machines is within 1ms. I want to optimize the intranet delay to 25-50ms. Is it possible? Can it be achieved? If possible, tell me the approximate methods available.
thanks.

WebRTC screen sharing support

Thanks for the great project guys! I've read the whole documentation and it looks amazing.

I've got a question regarding support for WebRTC screen sharing, though (let me know if this is the wrong place to ask questions and whether I should prefer using Slack for that). Maybe I missed something, but while checking the client SDK and server SDKs I have not found a way to start WebRTC screen sharing. Is support for it planned?

At the same time, I understand that adding such a thing to the client library is [probably] not going to be super portable and might [at the beginning at least] only be supported in the SDKs for web clients, because of the way screen sharing works across different platforms (iOS, for instance, has quite a sophisticated setup that would require implementing a broadcast extension to be able to get the frames).

Runtime Error - channel closed when 2nd client joins

Describe the bug
The room closes when a second user with audience permissions joins. Right now I am not using the rooms API; instead I generate two on-demand tokens with the JS server SDK as follows:

// for the audience
const audienceToken = new AccessToken(process.env.WEBRTC_API_KEY, process.env.WEBRTC_API_SECRET, {
  identity: user1id
})
audienceToken.addGrant({ roomJoin: true, room: channelName, canPublish: false, canSubscribe: true })
const tokenAudience = audienceToken.toJwt()

// for the host
const hostToken = new AccessToken(process.env.WEBRTC_API_KEY, process.env.WEBRTC_API_SECRET, {
  identity: user2id
})
hostToken.addGrant({ roomJoin: true, room: channelName })
const token = hostToken.toJwt()

Server

  • Version: Latest
  • Environment: Hosted directly on a dedicated Linode VPS
  • Reverse proxy and SSL are set up using Caddy

Client

  • SDK: react js
  • Version: 0.4.1

To Reproduce
Steps to reproduce the behavior:

  1. one client (publisher) is connected to the room
  2. 2nd client joins
  3. See error -
2021-08-08T20:19:03.373Z	INFO	livekit	server/main.go:191	configured key provider	{"numKeys": 1}
2021-08-08T20:19:03.373Z	INFO	livekit	service/utils.go:58	using single-node routing
2021-08-08T20:19:03.374Z	INFO	livekit	service/server.go:176	starting LiveKit server	{"addr": ":7880", "nodeID": "ND_QxvnQfTh", "nodeIP": "172.105.63.149", "version": "0.11.5", "rtc.portTCP": 7881, "rtc.portUDP": 7882}
2021-08-08T20:19:24.559Z	DEBUG	livekit	service/roommanager.go:271	starting RTC session	{"room": "610d7213d9c36993088b7c6a", "nodeID": "ND_QxvnQfTh", "participant": "610d9426d9c36993088b7d45", "planB": false, "protocol": 2}
2021-08-08T20:19:24.572Z	INFO	livekit	rtc/room.go:190	new participant joined	{"pID": "PA_yKMYGpbHQNvb", "participant": "610d9426d9c36993088b7d45", "room": "610d7213d9c36993088b7c6a", "roomID": "RM_3B38sVksx4vr"}
2021-08-08T20:19:24.574Z	INFO	livekit	service/rtcservice.go:151	new client WS connected	{"connID": "610d9426d9c36993088b7d45", "roomID": "RM_3B38sVksx4vr", "room": "610d7213d9c36993088b7c6a", "participant": "610d9426d9c36993088b7d45"}
2021-08-08T20:19:24.749Z	DEBUG	livekit	rtc/participant.go:250	answering pub offer	{"state": "JOINING", "participant": "610d9426d9c36993088b7d45", "pID": "PA_yKMYGpbHQNvb"}
2021-08-08T20:19:24.750Z	DEBUG	livekit	rtc/participant.go:270	sending answer to client	{"participant": "610d9426d9c36993088b7d45", "pID": "PA_yKMYGpbHQNvb"}
2021-08-08T20:19:24.750Z	DEBUG	livekit	rtc/participant.go:655	updating participant state	{"state": "JOINED", "participant": "610d9426d9c36993088b7d45", "pID": "PA_yKMYGpbHQNvb"}
2021-08-08T20:19:24.750Z	DEBUG	livekit	rtc/room.go:163	participant state changed	{"state": "JOINED", "participant": "610d9426d9c36993088b7d45", "pID": "PA_yKMYGpbHQNvb", "oldState": "JOINING"}
2021-08-08T20:19:24.751Z	DEBUG	livekit	rtc/participant.go:636	sending ice candidates	{"participant": "610d9426d9c36993088b7d45", "pID": "PA_yKMYGpbHQNvb", "candidate": "udp4 host 172.105.63.149:7882"}
2021-08-08T20:19:24.751Z	DEBUG	livekit	rtc/participant.go:636	sending ice candidates	{"participant": "610d9426d9c36993088b7d45", "pID": "PA_yKMYGpbHQNvb", "candidate": "tcp4 host 172.105.63.149:7881"}
2021-08-08T20:19:24.838Z	DEBUG	livekit	rtc/participant.go:655	updating participant state	{"state": "ACTIVE", "participant": "610d9426d9c36993088b7d45", "pID": "PA_yKMYGpbHQNvb"}
2021-08-08T20:19:24.839Z	DEBUG	livekit	rtc/room.go:163	participant state changed	{"state": "ACTIVE", "participant": "610d9426d9c36993088b7d45", "pID": "PA_yKMYGpbHQNvb", "oldState": "JOINED"}
2021-08-08T20:19:32.215Z	DEBUG	livekit	rtc/participant.go:655	updating participant state	{"state": "DISCONNECTED", "participant": "610d9426d9c36993088b7d45", "pID": "PA_yKMYGpbHQNvb"}
2021-08-08T20:19:32.215Z	INFO	livekit	service/rtcservice.go:172	source closed connection	{"participant": "610d9426d9c36993088b7d45", "connID": "610d9426d9c36993088b7d45"}
2021-08-08T20:19:32.215Z	INFO	livekit	service/rtcservice.go:134	WS connection closed	{"participant": "610d9426d9c36993088b7d45", "connID": "610d9426d9c36993088b7d45"}
2021-08-08T20:19:32.215Z	DEBUG	livekit	service/roommanager.go:370	RTC session finishing	{"participant": "610d9426d9c36993088b7d45", "pID": "PA_yKMYGpbHQNvb", "room": "610d7213d9c36993088b7c6a", "roomID": "RM_3B38sVksx4vr"}
2021-08-08T20:19:32.216Z	DEBUG	livekit	service/roommanager.go:271	starting RTC session	{"room": "610d7213d9c36993088b7c6a", "nodeID": "ND_QxvnQfTh", "participant": "610d9426d9c36993088b7d45", "planB": false, "protocol": 2}
2021-08-08T20:19:32.216Z	INFO	livekit	rtc/room.go:190	new participant joined	{"pID": "PA_gUiNizASj6Vo", "participant": "610d9426d9c36993088b7d45", "room": "610d7213d9c36993088b7c6a", "roomID": "RM_3B38sVksx4vr"}
2021-08-08T20:19:32.216Z	INFO	livekit	rtc/participant.go:674	could not send message to participant	{"error": "channel closed", "pID": "PA_gUiNizASj6Vo", "participant": "610d9426d9c36993088b7d45", "message": "*livekit.SignalResponse_Join"}
2021-08-08T20:19:32.216Z	ERROR	livekit	service/roommanager.go:313	could not join room	{"error": "channel closed"}
github.com/livekit/livekit-server/pkg/service.(*RoomManager).StartSession
	/root/livekit-server/pkg/service/roommanager.go:313
github.com/livekit/livekit-server/pkg/routing.(*LocalRouter).StartParticipantSignal
	/root/livekit-server/pkg/routing/localrouter.go:91
github.com/livekit/livekit-server/pkg/service.(*RTCService).ServeHTTP
	/root/livekit-server/pkg/service/rtcservice.go:125
net/http.(*ServeMux).ServeHTTP
	/usr/local/go/src/net/http/server.go:2428
github.com/urfave/negroni.Wrap.func1
	/root/go/pkg/mod/github.com/urfave/[email protected]/negroni.go:46
github.com/urfave/negroni.HandlerFunc.ServeHTTP
	/root/go/pkg/mod/github.com/urfave/[email protected]/negroni.go:29
github.com/urfave/negroni.middleware.ServeHTTP
	/root/go/pkg/mod/github.com/urfave/[email protected]/negroni.go:38
net/http.HandlerFunc.ServeHTTP
	/usr/local/go/src/net/http/server.go:2049
github.com/livekit/livekit-server/pkg/service.(*APIKeyAuthMiddleware).ServeHTTP
	/root/livekit-server/pkg/service/auth.go:80
github.com/urfave/negroni.middleware.ServeHTTP
	/root/go/pkg/mod/github.com/urfave/[email protected]/negroni.go:38
github.com/urfave/negroni.(*Recovery).ServeHTTP
	/root/go/pkg/mod/github.com/urfave/[email protected]/recovery.go:193
github.com/urfave/negroni.middleware.ServeHTTP
	/root/go/pkg/mod/github.com/urfave/[email protected]/negroni.go:38
github.com/urfave/negroni.(*Negroni).ServeHTTP
	/root/go/pkg/mod/github.com/urfave/[email protected]/negroni.go:96
net/http.serverHandler.ServeHTTP
	/usr/local/go/src/net/http/server.go:2867
net/http.(*conn).serve
	/usr/local/go/src/net/http/server.go:1932
2021-08-08T20:19:32.217Z	INFO	livekit	service/rtcservice.go:151	new client WS connected	{"connID": "610d9426d9c36993088b7d45", "roomID": "RM_3B38sVksx4vr", "room": "610d7213d9c36993088b7c6a", "participant": "610d9426d9c36993088b7d45"}
2021-08-08T20:19:32.217Z	INFO	livekit	service/rtcservice.go:172	source closed connection	{"participant": "610d9426d9c36993088b7d45", "connID": "610d9426d9c36993088b7d45"}
2021-08-08T20:19:32.217Z	INFO	livekit	service/rtcservice.go:134	WS connection closed	{"participant": "610d9426d9c36993088b7d45", "connID": "610d9426d9c36993088b7d45"}
2021-08-08T20:19:49.778Z	INFO	livekit	rtc/participant.go:674	could not send message to participant	{"error": "channel closed", "pID": "PA_gUiNizASj6Vo", "participant": "610d9426d9c36993088b7d45", "message": "*livekit.SignalResponse_Leave"}
2021-08-08T20:19:49.778Z	DEBUG	livekit	rtc/participant.go:655	updating participant state	{"state": "DISCONNECTED", "participant": "610d9426d9c36993088b7d45", "pID": "PA_gUiNizASj6Vo"}
2021-08-08T20:19:49.778Z	DEBUG	livekit	service/roommanager.go:271	starting RTC session	{"room": "610d7213d9c36993088b7c6a", "nodeID": "ND_QxvnQfTh", "participant": "610d9426d9c36993088b7d45", "planB": false, "protocol": 2}
2021-08-08T20:19:49.779Z	INFO	livekit	rtc/room.go:190	new participant joined	{"pID": "PA_NXRpqZEdDuch", "participant": "610d9426d9c36993088b7d45", "room": "610d7213d9c36993088b7c6a", "roomID": "RM_3B38sVksx4vr"}
2021-08-08T20:19:49.782Z	INFO	livekit	service/rtcservice.go:151	new client WS connected	{"connID": "610d9426d9c36993088b7d45", "roomID": "RM_3B38sVksx4vr", "room": "610d7213d9c36993088b7c6a", "participant": "610d9426d9c36993088b7d45"}
2021-08-08T20:19:50.042Z	DEBUG	livekit	rtc/participant.go:250	answering pub offer	{"state": "JOINING", "participant": "610d9426d9c36993088b7d45", "pID": "PA_NXRpqZEdDuch"}
2021-08-08T20:19:50.044Z	DEBUG	livekit	rtc/participant.go:270	sending answer to client	{"participant": "610d9426d9c36993088b7d45", "pID": "PA_NXRpqZEdDuch"}
2021-08-08T20:19:50.046Z	DEBUG	livekit	rtc/participant.go:655	updating participant state	{"state": "JOINED", "participant": "610d9426d9c36993088b7d45", "pID": "PA_NXRpqZEdDuch"}
2021-08-08T20:19:50.046Z	DEBUG	livekit	rtc/participant.go:636	sending ice candidates	{"participant": "610d9426d9c36993088b7d45", "pID": "PA_NXRpqZEdDuch", "candidate": "udp4 host 172.105.63.149:7882"}
2021-08-08T20:19:50.048Z	DEBUG	livekit	rtc/room.go:163	participant state changed	{"state": "JOINED", "participant": "610d9426d9c36993088b7d45", "pID": "PA_NXRpqZEdDuch", "oldState": "JOINING"}
2021-08-08T20:19:50.048Z	DEBUG	livekit	rtc/participant.go:636	sending ice candidates	{"participant": "610d9426d9c36993088b7d45", "pID": "PA_NXRpqZEdDuch", "candidate": "tcp4 host 172.105.63.149:7881"}
2021-08-08T20:19:50.160Z	DEBUG	livekit	rtc/participant.go:655	updating participant state	{"state": "ACTIVE", "participant": "610d9426d9c36993088b7d45", "pID": "PA_NXRpqZEdDuch"}
2021-08-08T20:19:50.160Z	DEBUG	livekit	rtc/room.go:163	participant state changed	{"state": "ACTIVE", "participant": "610d9426d9c36993088b7d45", "pID": "PA_NXRpqZEdDuch", "oldState": "JOINED"}
2021-08-08T20:19:54.207Z	DEBUG	livekit	service/roommanager.go:402	add track request	{"participant": "610d9426d9c36993088b7d45", "pID": "PA_NXRpqZEdDuch", "track": "a1e326b9-48b5-4312-a55a-006fd93895c3"}
2021-08-08T20:19:54.294Z	DEBUG	livekit	rtc/participant.go:250	answering pub offer	{"state": "ACTIVE", "participant": "610d9426d9c36993088b7d45", "pID": "PA_NXRpqZEdDuch"}
2021-08-08T20:19:54.296Z	DEBUG	livekit	rtc/participant.go:270	sending answer to client	{"participant": "610d9426d9c36993088b7d45", "pID": "PA_NXRpqZEdDuch"}
2021-08-08T20:19:54.680Z	DEBUG	livekit	rtc/participant.go:709	mediaTrack added	{"participant": "610d9426d9c36993088b7d45", "pID": "PA_NXRpqZEdDuch", "track": "a1e326b9-48b5-4312-a55a-006fd93895c3", "rid": ""}
2021-08-08T20:19:54.680Z	DEBUG	livekit	rtc/mediatrack.go:308	Setting feedback	{"type": "webrtc.TypeRTCPFBGoogREMB"}
2021-08-08T20:19:54.680Z	DEBUG	livekit	rtc/mediatrack.go:308	Setting feedback	{"type": "transport-cc"}
2021-08-08T20:19:54.680Z	DEBUG	livekit	rtc/mediatrack.go:308	Setting feedback	{"type": "nack"}
2021-08-08T20:19:54.680Z	DEBUG	livekit	rtc/mediatrack.go:308	Setting feedback	{"type": "nack"}
2021-08-08T20:19:54.680Z	DEBUG	livekit	rtc/mediatrack.go:308	NewBuffer	{"MaxBitRate": 3145728}
2021-08-08T20:19:58.843Z	DEBUG	livekit	rtc/mediatrack.go:325	removing all subscribers	{"track": "TR_mwTgGu6JH6ux"}
2021-08-08T20:19:58.843Z	DEBUG	livekit	rtc/participant.go:655	updating participant state	{"state": "DISCONNECTED", "participant": "610d9426d9c36993088b7d45", "pID": "PA_NXRpqZEdDuch"}
2021-08-08T20:19:58.843Z	INFO	livekit	service/rtcservice.go:172	source closed connection	{"participant": "610d9426d9c36993088b7d45", "connID": "610d9426d9c36993088b7d45"}
2021-08-08T20:19:58.843Z	INFO	livekit	service/rtcservice.go:134	WS connection closed	{"participant": "610d9426d9c36993088b7d45", "connID": "610d9426d9c36993088b7d45"}
2021-08-08T20:19:58.843Z	DEBUG	livekit	service/roommanager.go:370	RTC session finishing	{"participant": "610d9426d9c36993088b7d45", "pID": "PA_NXRpqZEdDuch", "room": "610d7213d9c36993088b7c6a", "roomID": "RM_3B38sVksx4vr"}
2021-08-08T20:19:58.843Z	DEBUG	livekit	service/roommanager.go:271	starting RTC session	{"room": "610d7213d9c36993088b7c6a", "nodeID": "ND_QxvnQfTh", "participant": "610d9426d9c36993088b7d45", "planB": false, "protocol": 2}
2021-08-08T20:19:58.844Z	INFO	livekit	rtc/room.go:190	new participant joined	{"pID": "PA_DVEShkqH2NEC", "participant": "610d9426d9c36993088b7d45", "room": "610d7213d9c36993088b7c6a", "roomID": "RM_3B38sVksx4vr"}
2021-08-08T20:19:58.844Z	INFO	livekit	rtc/participant.go:674	could not send message to participant	{"error": "channel closed", "pID": "PA_DVEShkqH2NEC", "participant": "610d9426d9c36993088b7d45", "message": "*livekit.SignalResponse_Join"}
2021-08-08T20:19:58.844Z	ERROR	livekit	service/roommanager.go:313	could not join room	{"error": "channel closed"}
github.com/livekit/livekit-server/pkg/service.(*RoomManager).StartSession
	/root/livekit-server/pkg/service/roommanager.go:313
github.com/livekit/livekit-server/pkg/routing.(*LocalRouter).StartParticipantSignal
	/root/livekit-server/pkg/routing/localrouter.go:91
github.com/livekit/livekit-server/pkg/service.(*RTCService).ServeHTTP
	/root/livekit-server/pkg/service/rtcservice.go:125
net/http.(*ServeMux).ServeHTTP
	/usr/local/go/src/net/http/server.go:2428
github.com/urfave/negroni.Wrap.func1
	/root/go/pkg/mod/github.com/urfave/[email protected]/negroni.go:46
github.com/urfave/negroni.HandlerFunc.ServeHTTP
	/root/go/pkg/mod/github.com/urfave/[email protected]/negroni.go:29
github.com/urfave/negroni.middleware.ServeHTTP
	/root/go/pkg/mod/github.com/urfave/[email protected]/negroni.go:38
net/http.HandlerFunc.ServeHTTP
	/usr/local/go/src/net/http/server.go:2049
github.com/livekit/livekit-server/pkg/service.(*APIKeyAuthMiddleware).ServeHTTP
	/root/livekit-server/pkg/service/auth.go:80
github.com/urfave/negroni.middleware.ServeHTTP
	/root/go/pkg/mod/github.com/urfave/[email protected]/negroni.go:38
github.com/urfave/negroni.(*Recovery).ServeHTTP
	/root/go/pkg/mod/github.com/urfave/[email protected]/recovery.go:193
github.com/urfave/negroni.middleware.ServeHTTP
	/root/go/pkg/mod/github.com/urfave/[email protected]/negroni.go:38
github.com/urfave/negroni.(*Negroni).ServeHTTP
	/root/go/pkg/mod/github.com/urfave/[email protected]/negroni.go:96
net/http.serverHandler.ServeHTTP
	/usr/local/go/src/net/http/server.go:2867
net/http.(*conn).serve
	/usr/local/go/src/net/http/server.go:1932
2021-08-08T20:19:58.845Z	INFO	livekit	service/rtcservice.go:151	new client WS connected	{"connID": "610d9426d9c36993088b7d45", "roomID": "RM_3B38sVksx4vr", "room": "610d7213d9c36993088b7c6a", "participant": "610d9426d9c36993088b7d45"}
2021-08-08T20:19:58.845Z	INFO	livekit	service/rtcservice.go:172	source closed connection	{"participant": "610d9426d9c36993088b7d45", "connID": "610d9426d9c36993088b7d45"}
2021-08-08T20:19:58.845Z	INFO	livekit	service/rtcservice.go:134	WS connection closed	{"participant": "610d9426d9c36993088b7d45", "connID": "610d9426d9c36993088b7d45"}
2021-08-08T20:19:58.853Z	DEBUG	livekit	rtc/mediatrack.go:325	removing all subscribers	{"track": "TR_mwTgGu6JH6ux"}
2021-08-08T20:20:06.616Z	INFO	livekit	server/main.go:208	exit requested, shutting down	{"signal": "interrupt"}
2021-08-08T20:20:06.618Z	INFO	livekit	rtc/participant.go:674	could not send message to participant	{"error": "channel closed", "pID": "PA_DVEShkqH2NEC", "participant": "610d9426d9c36993088b7d45", "message": "*livekit.SignalResponse_Leave"}
2021-08-08T20:20:06.618Z	DEBUG	livekit	rtc/participant.go:655	updating participant state	{"state": "DISCONNECTED", "participant": "610d9426d9c36993088b7d45", "pID": "PA_DVEShkqH2NEC"}
2021-08-08T20:20:06.618Z	INFO	livekit	rtc/room.go:331	closing room	{"roomID": "RM_3B38sVksx4vr", "room": "610d7213d9c36993088b7c6a"}
2021-08-08T20:20:06.618Z	INFO	livekit	service/roommanager.go:132	deleting room state	{"room": "610d7213d9c36993088b7c6a"}
2021-08-08T20:20:06.618Z	DEBUG	livekit	rtc/room.go:163	participant state changed	{"state": "DISCONNECTED", "participant": "610d9426d9c36993088b7d45", "pID": "PA_DVEShkqH2NEC", "oldState": "JOINING"}
2021-08-08T20:20:06.618Z	INFO	livekit	service/roommanager.go:344	room closed	{"incomingStats": {"packetBytes":146491,"packetTotal":184,"nackTotal":0,"pliTotal":0,"firTotal":0}, "outgoingStats": {"packetBytes":0,"packetTotal":0,"nackTotal":0,"pliTotal":0,"firTotal":0}}

```


**Expected behavior**
The room should not close, and the audience should be able to view the livestream.

**Additional context**
Note that the order in which the clients join does not affect the outcome; the room closes regardless of whether the audience joins first or the host joins first. (The logs attached above cover the following sequence: audience joins -> host joins the room -> room closes -> the process repeats, but this time the host joins first, followed by the audience.)

Support TURN server without TLS cert

Hi, we're trying to understand why the TLS cert is required for the TURN server.

We're considering deploying on DigitalOcean with Kubernetes and their load balancer. The load balancer will terminate TLS before traffic reaches the cluster.

However, going by the Helm chart and the server source code, it seems the TLS cert is mandatory. Is there any way to run TURN without a cert?
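For reference, TURN over plain UDP needs no certificate at the protocol level; a minimal sketch with pion/turn (the realm, credentials, and IPs below are placeholders) shows a cert-less listener:

```go
package main

import (
	"log"
	"net"

	"github.com/pion/turn/v2"
)

func main() {
	// Plain UDP TURN listener; TLS ("turns://") is only needed when clients
	// must reach the relay through strict firewalls, e.g. over port 443.
	udpListener, err := net.ListenPacket("udp4", "0.0.0.0:3478")
	if err != nil {
		log.Fatal(err)
	}

	server, err := turn.NewServer(turn.ServerConfig{
		Realm: "example.com", // placeholder realm
		AuthHandler: func(username, realm string, srcAddr net.Addr) ([]byte, bool) {
			// Placeholder static credential; a real server would derive
			// credentials from its own auth flow.
			return turn.GenerateAuthKey(username, realm, "secret"), true
		},
		PacketConnConfigs: []turn.PacketConnConfig{{
			PacketConn: udpListener,
			RelayAddressGenerator: &turn.RelayAddressGeneratorStatic{
				RelayAddress: net.ParseIP("203.0.113.1"), // public IP placeholder
				Address:      "0.0.0.0",
			},
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer server.Close()
	select {} // run until the process is killed
}
```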

Support Redis ACL and/or URL

Currently we have a password field but no username. Redis 6.0+ introduced username/password pairs (ACLs), which we should support.
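A minimal sketch of the config mapping, assuming the go-redis v8 client (its `Options.Username` field supports Redis 6 ACL auth); the `RedisConfig` struct and its fields here are hypothetical:

```go
package sketch

import "github.com/go-redis/redis/v8"

// RedisConfig is a hypothetical config shape; field names are illustrative.
type RedisConfig struct {
	Address  string
	Username string // new: Redis 6.0+ ACL username
	Password string
	DB       int
}

// newRedisClient maps the config onto go-redis options.
func newRedisClient(c RedisConfig) *redis.Client {
	return redis.NewClient(&redis.Options{
		Addr:     c.Address,
		Username: c.Username, // ignored by servers without ACL support
		Password: c.Password,
		DB:       c.DB,
	})
}

// Alternatively, a single redis:// URL could be accepted and parsed:
//   opts, err := redis.ParseURL("redis://user:pass@localhost:6379/0")
```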

Recording support?

The server looks very promising, especially since it comes with both mobile and web SDKs.

However, I could not find any section about recording in the documentation. Do you plan to add support for recording and for processing recordings?

[Feature Request] metadata for rooms

Hi,

When reading through the docs, I came across this section in "Working with data":

> LiveKit participants have a metadata field that you could use to store application-specific data. For example, this could be the seat number around a table, or other state.

Maybe I misunderstood the use case here, but wouldn't something like a seat number be data that only needs to be updated on one entity, namely the room itself, rather than on each of the participants?

I found I need to display state that is room-specific, i.e. exactly the same for all participants of a room, for example the current number of participants in other rooms.
Updating room-specific state on each participant seems highly redundant, so I was wondering whether there could be a (server-side) API for changing a metadata property on the whole room: something like updateRoom in addition to the already existing updateParticipant. A rough sketch of what that could look like follows.
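A minimal sketch, assuming a hypothetical room store abstraction; none of these names exist in the LiveKit server today, they only illustrate the shape of the API:

```go
// Hypothetical sketch of a room-level metadata API; every name here is
// illustrative and not part of the existing LiveKit server.
package sketch

import (
	"context"
	"fmt"
)

// Room mirrors the idea of a room record carrying a metadata field.
type Room struct {
	Name     string
	Metadata string
}

// RoomStore abstracts persistence (e.g. Redis in a multi-node deployment).
type RoomStore interface {
	LoadRoom(ctx context.Context, name string) (*Room, error)
	StoreRoom(ctx context.Context, room *Room) error
}

// UpdateRoomMetadata sets metadata once on the room itself, instead of
// duplicating the same value onto every participant.
func UpdateRoomMetadata(ctx context.Context, store RoomStore, name, metadata string) (*Room, error) {
	room, err := store.LoadRoom(ctx, name)
	if err != nil {
		return nil, fmt.Errorf("load room: %w", err)
	}
	room.Metadata = metadata
	if err := store.StoreRoom(ctx, room); err != nil {
		return nil, fmt.Errorf("store room: %w", err)
	}
	// A real implementation would also broadcast a room-update signal so
	// connected clients can re-render room-level state.
	return room, nil
}
```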

Thanks!

Lingering connections even after a client socket is disconnected

I'm not sure what logs to include for this, because I can't pin it down to a specific error.

When there is a flood of new connections, some participants never get cleaned up when their sockets disconnect.

I ran this command to see how many connections were open on the server:

```
# cat /proc/net/tcp | wc -l
25
```

Yet one of the rooms I'm currently in reports 926 participants, and the Redis records for all the disconnected users are still there.

This causes a miscount of how many participants are actually in the call, one that cannot be corrected without manually clearing Redis or starting a new room.

Please let me know if there's something I can include to make this bug clearer. I may be able to reproduce it on the LiveKit sample if I spam the server with lots of connections all at once; a rough sketch of that flood test is below.
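A sketch of the flood test, assuming the gorilla/websocket package (the URL and token are placeholders, and each client would really need its own identity token):

```go
package main

import (
	"log"
	"sync"
	"time"

	"github.com/gorilla/websocket"
)

func main() {
	const clients = 200 // flood size; adjust as needed
	url := "ws://localhost:7880/rtc?access_token=<token>" // placeholders

	var wg sync.WaitGroup
	for i := 0; i < clients; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			conn, _, err := websocket.DefaultDialer.Dial(url, nil)
			if err != nil {
				log.Printf("client %d: dial failed: %v", n, err)
				return
			}
			// Hold the socket briefly, then drop the TCP connection without
			// a close frame to simulate an abrupt client disconnect.
			time.Sleep(2 * time.Second)
			conn.UnderlyingConn().Close()
		}(i)
	}
	wg.Wait()
	// Afterwards the room's participant count should fall back to zero;
	// in the buggy case, stale participant records remain in Redis.
}
```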

The only thing I see repeatedly in the logs when this occurs is:

```
2021-04-05T03:51:51.950Z        ERROR   routing/redisrouter.go:317      error processing signal message {"error": "channel is full"}
github.com/livekit/livekit-server/pkg/routing.(*RedisRouter).redisWorker
        /workspace/pkg/routing/redisrouter.go:317
2021-04-05T03:51:51.950Z        ERROR   routing/redisrouter.go:317      error processing signal message {"error": "channel is full"}
github.com/livekit/livekit-server/pkg/routing.(*RedisRouter).redisWorker
        /workspace/pkg/routing/redisrouter.go:317
```

Connection not upgrading to WS

I have deployed a LiveKit instance using Docker at 69.30.226.214:7880.

HTTP requests to this address work, but WebSocket connection attempts return 200 when they should return 101 Switching Protocols.

Since the WebSocket connection is never established, I am not able to test LiveKit. A minimal handshake check is sketched below.
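A sketch of the handshake check from the client side, assuming the gorilla/websocket package (the URL path and token are placeholders):

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

func main() {
	// Placeholder endpoint and token.
	url := "ws://69.30.226.214:7880/rtc?access_token=<token>"

	conn, resp, err := websocket.DefaultDialer.Dial(url, nil)
	if err != nil {
		// On a failed upgrade, resp (when non-nil) carries the raw HTTP
		// response, e.g. the unexpected 200 described above.
		if resp != nil {
			fmt.Printf("handshake failed with status %d\n", resp.StatusCode)
		}
		log.Fatal(err)
	}
	defer conn.Close()

	if resp.StatusCode != http.StatusSwitchingProtocols {
		log.Fatalf("expected 101, got %d", resp.StatusCode)
	}
	fmt.Println("WebSocket upgrade succeeded (101 Switching Protocols)")
}
```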

Unpublish leaves tracks on peer connection

Steps to reproduce:

U1 connects -> publishes (successfully)
U2 connects -> subscribes to U1 (successfully)
U1 unpublishes + republishes -> U2 unsubscribes from and resubscribes to U1

(will add more logs / data shortly)

Runtime error: Use of closed socket

Another one of these popped up today:

```
2021-02-15T02:37:49.607Z        ERROR   service/rtcservice.go:131       source closed connection       {"participant": "0x46efc301B793a0d8C2999B11d8Bad43D1b4c4E8F"}
github.com/livekit/livekit-server/pkg/service.(*RTCService).ServeHTTP.func2
        /workspace/pkg/service/rtcservice.go:131
2021-02-15T02:37:49.607Z        ERROR   service/rtcservice.go:157       error reading from websocket   {"error": "read tcp 100.96.146.52:7880->100.97.78.3:36192: use of closed network connection"}
github.com/livekit/livekit-server/pkg/service.(*RTCService).ServeHTTP
        /workspace/pkg/service/rtcservice.go:157
net/http.(*ServeMux).ServeHTTP
        /usr/local/go/src/net/http/server.go:2387
github.com/urfave/negroni.Wrap.func1
        /go/pkg/mod/github.com/urfave/[email protected]/negroni.go:46
github.com/urfave/negroni.HandlerFunc.ServeHTTP
        /go/pkg/mod/github.com/urfave/[email protected]/negroni.go:29
github.com/urfave/negroni.middleware.ServeHTTP
        /go/pkg/mod/github.com/urfave/[email protected]/negroni.go:38
net/http.HandlerFunc.ServeHTTP
        /usr/local/go/src/net/http/server.go:2012
github.com/livekit/livekit-server/pkg/service.(*APIKeyAuthMiddleware).ServeHTTP
        /workspace/pkg/service/auth.go:76
github.com/urfave/negroni.middleware.ServeHTTP
        /go/pkg/mod/github.com/urfave/[email protected]/negroni.go:38
github.com/urfave/negroni.(*Recovery).ServeHTTP
        /go/pkg/mod/github.com/urfave/[email protected]/recovery.go:193
github.com/urfave/negroni.middleware.ServeHTTP
        /go/pkg/mod/github.com/urfave/[email protected]/negroni.go:38
github.com/urfave/negroni.(*Negroni).ServeHTTP
        /go/pkg/mod/github.com/urfave/[email protected]/negroni.go:96
net/http.serverHandler.ServeHTTP
        /usr/local/go/src/net/http/server.go:2807
net/http.(*conn).serve
        /usr/local/go/src/net/http/server.go:1895
2021-02-15T02:37:49.607Z        INFO    service/rtcservice.go:95        WS connection closed    {"participant": "0x46efc301B793a0d8C2999B11d8Bad43D1b4c4E8F"}
```
