
srt-rs's Introduction

srt-rs


NOTE: THIS IS NOT PRODUCTION READY.

A pure-Rust implementation of SRT (Secure Reliable Transport), with no unsafe code.

The reference implementation is available at https://github.com/haivision/srt

Features

  • Fast (heap allocations are rare, uses async IO)
  • The full safety guarantees of Rust

What works

  • Listen server connecting
  • Client (connect) connecting
  • Rendezvous connecting
  • Receiving
  • Sending
  • Special SRT packets (partial)
  • Actual SRT (TSBPD)
  • Timestamp drift recovery (not thoroughly tested)
  • Congestion control
  • Encryption
  • Bidirectional

Thread Efficiency

The reference implementation of SRT requires 3 threads per sender and 5 threads per receiver.

With srt-rs, you can multiplex as many connections onto as many threads as you want (usually one per core) using tokio's futures scheduling. This should allow handling many more connections.
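As an illustration (generic tokio setup, not srt-rs-specific code), the whole workload can run on a fixed-size runtime with one worker per core:

use std::io;

fn main() -> io::Result<()> {
    // One worker thread per core; every connection becomes a lightweight
    // task multiplexed over these workers, instead of each connection
    // owning 3-5 OS threads as in the reference implementation.
    let runtime = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(std::thread::available_parallelism()?.get())
        .enable_all()
        .build()?;

    runtime.block_on(async {
        // accept SRT connections here and `tokio::spawn` one task per connection
    });
    Ok(())
}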

Examples

Generate and send SRT packets

cargo run --example sender

Receive SRT packets

cargo run --example receiver

Structure

This repository is structured into 5 crates:

  • srt-protocol: State machines for the SRT protocol, with no dependencies on futures or tokio. Someday, I would like this to be a no-std crate. I expect this to have frequent breaking changes.
  • srt-tokio: Tokio elements written on top of the protocol, expected to be a relatively stable API.
  • srt-transmit: An srt-live-transmit replacement written on top of srt-tokio
  • srt-c: Experimental C bindings to this crate, intended to be both API and ABI compatible with the reference implementation
  • srt-c-unittests: The unit tests from the reference implementation, run against srt-c. Many of these do not pass yet.

srt-rs's People

Contributors

belltoy, blessanabraham, cmleinz, dependabot[bot], dholroyd, fkaa, hexd0t, howlowck, k0ur05h, kijewski, lu-zero, marcantoine-arnaud, nipierre, robertream, russelltg


srt-rs's Issues

Test default CC implementation

I'm pretty sure it's wrong: way too slow.

Ways to test this: sending big files over lossy connections? Comparing against the reference UDT implementation?

It's also possible that it's not worth implementing, and this repo should be pure SRT.

Receiver fails to shut down

When testing a connected receiver by shutting down the listening sender, the receiver seems to get stuck.

...
[2021-04-02T16:24:47.399737591Z INFO  srt_protocol::protocol::receiver] SRT#5648B729: Shutdown packet received, flushing receiver...
[2021-04-02T16:24:47.900748198Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=2
[2021-04-02T16:24:48.400108071Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=3
[2021-04-02T16:24:48.900079839Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=4
[2021-04-02T16:24:49.399843113Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=5
[2021-04-02T16:24:49.900738399Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=6
[2021-04-02T16:24:50.400831478Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=7
[2021-04-02T16:24:50.900475513Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=8
[2021-04-02T16:24:51.400811724Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=9
[2021-04-02T16:24:51.900250477Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=10
[2021-04-02T16:24:52.400394786Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=11
[2021-04-02T16:24:52.900561083Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=12
[2021-04-02T16:24:53.400346057Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=13
[2021-04-02T16:24:53.900796898Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=14
[2021-04-02T16:24:54.400438398Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=15
[2021-04-02T16:24:54.900032529Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=16
[2021-04-02T16:24:54.900066650Z INFO  srt_protocol::protocol::connection] 16 exps, timeout!
[2021-04-02T16:24:55.402768898Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=17
[2021-04-02T16:24:55.900875327Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=18
[2021-04-02T16:24:56.410334415Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=19
[2021-04-02T16:24:56.905888980Z INFO  srt_protocol::protocol::connection] Exp event hit, exp count=20
...

This continues until I kill the process.

C API

A C API, made with cbindgen or something similar, would make this crate more useful.
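As an illustration, the kind of extern "C" surface that cbindgen consumes might look like this (the srt_startup/srt_cleanup names mirror the reference API; the bodies are placeholders, not existing code in this crate):

// Sketch only: two reference-API entry points exported with C linkage,
// the sort of thing cbindgen would turn into a C header.
#[no_mangle]
pub extern "C" fn srt_startup() -> i32 {
    // global initialization would go here
    0
}

#[no_mangle]
pub extern "C" fn srt_cleanup() -> i32 {
    // global teardown would go here
    0
}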

Threading

Hello,

I'm starting to use the library; what do you mean by "The reference implementation of SRT requires 3 threads per sender and 5 threads per receiver"?

Could you provide simple examples in an examples folder? A file named sender.rs could then be run with cargo run --example sender; it could send fake data and serve as a demo, as in the sketch below.
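A minimal sender along those lines might look like this (a sketch against the SrtSocketBuilder API used elsewhere on this page; the method names and defaults are assumptions, not verified against the crate):

use std::{io, time::{Duration, Instant}};

use bytes::Bytes;
use futures::prelude::*;
use srt_tokio::SrtSocketBuilder;

#[tokio::main]
async fn main() -> io::Result<()> {
    // Wait for a receiver to connect to us on :3333.
    let mut socket = SrtSocketBuilder::new_listen().local_port(3333).connect().await?;

    // Send one fake data packet per second.
    for i in 0..10u32 {
        let payload = Bytes::from(format!("fake data {}", i));
        socket.send((Instant::now(), payload)).await?;
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
    socket.close().await
}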

Write multiplex example

Hello,

I'm researching to send a stream to multiple listeners.
Is this possible with the SRT protocol?

Or do I need to send to each one on a different port?

Thanks
Marc-Antoine

Encryption

The haicrypt module. Unfortunately I think this has to be integrated into sender, and not layered in.

It's possible it could lie under the sender and receiver, sending its own handshake packets etc.

Implement pre-shared key

The encryption docs for SRT note that it can support a pre-shared KEK instead of a password. This should be relatively easy to implement.

IPv6 support

I saw that the current master has at least a TODO about it in srt-protocol/src/packet/control.rs:

                // TODO: this is probably really wrong, so fix it
                let peer_addr = if ip_buf[4..] == [0; 12][..] {
                    IpAddr::from(Ipv4Addr::new(ip_buf[3], ip_buf[2], ip_buf[1], ip_buf[0]))
                } else {
                    IpAddr::from(ip_buf)
                };

Is there anything else to be addressed?

Decreasing arrival time

My reading of the SrtSocket description is that client code can expect that the Instant value does not decrease from one yielded value to the next.

I added some logging to my application code:

    let mut last_arrival_time = None;
    while let Some((arrival_time, bytes)) = sock.try_next().await.expect("sock.try_next() failed") {
        if let Some(last_arrival_time) = last_arrival_time {
            if arrival_time < last_arrival_time {
                error!("Out of order packets. last={:?} this={:?}", last_arrival_time, arrival_time);
                continue;
            }
        }
        last_arrival_time = Some(arrival_time);
        ...
    }

and I can see that very occasionally the value does decrease: by 10 µs in this case,

[2021-04-07T11:10:03.777845477Z ERROR test::srt] Out of order packets. last=Instant { tv_sec: 4108711, tv_nsec: 501700506 } this=Instant { tv_sec: 4108711, tv_nsec: 501690506 }

by 8 µs in this case,

[2021-04-08T06:52:01.946181252Z ERROR test::srt] Out of order packets. last=Instant { tv_sec: 4179629, tv_nsec: 670029779 } this=Instant { tv_sec: 4179629, tv_nsec: 670021779 }

Is my reading of the API wrong, or does this represent a problem somewhere?

Multiplexed server quits unexpectedly

I am trying to connect multiple clients (senders) to a server (multiplexed receiver). The connection of two clients works without issues, but the server quits unexpectedly after one of the clients closes the connection.

The code looks as follows:

use std::io;

use futures::prelude::*;
use log::{error, info};
use srt_tokio::{SrtSocket, SrtSocketBuilder};

#[tokio::main]
async fn main() -> io::Result<()> {
    env_logger::init();

    let binding = SrtSocketBuilder::new_listen()
        .local_port(3333)
        .build_multiplexed()
        .await?;
    tokio::pin!(binding);

    info!("Listening on :3333");

    while let Some(Ok(connection)) = binding.next().await {
        let mut srt_socket: SrtSocket = connection.into();

        info!("New client: {:?}", srt_socket.settings());

        tokio::spawn(async move {
            while let Some((_instant, _bytes)) = srt_socket.try_next().await.unwrap_or(None) {
                // info!("Received {:?} bytes", bytes.len());
            }
            match srt_socket.close().await {
                Ok(()) => {
                    info!("Client disconnected");
                },
                Err(error) => {
                    error!("Error on client disconnect: {:?}", error);
                }
            }
        });
    }

    info!("Closing server");
    Ok(())
}

Here are the logs:

[2021-02-13T17:19:02Z INFO  live] Listening on :3333
[2021-02-13T17:19:25Z INFO  live] New client: ConnectionSettings { remote: 127.0.0.1:51988, remote_sockid: SRT#07374E6E, local_sockid: SRT#CA89564E, socket_start_time: Instant { t: 7910.6199645s }, init_send_seq_num: SeqNumber(55492652), init_recv_seq_num: SeqNumber(206182548), max_packet_size: 1500, max_flow_size: 8192, send_tsbpd_latency: 50ms, recv_tsbpd_latency: 120ms, crypto_manager: None, stream_id: Some("first") }
[2021-02-13T17:19:25Z INFO  srt_protocol::protocol::receiver] Receiving started from 127.0.0.1:51988, with latency=120ms
[2021-02-13T17:19:31Z INFO  live] New client: ConnectionSettings { remote: 127.0.0.1:51989, remote_sockid: SRT#0B96972D, local_sockid: SRT#5130BE66, socket_start_time: Instant { t: 7915.9080202s }, init_send_seq_num: SeqNumber(192230949), init_recv_seq_num: SeqNumber(250224084), max_packet_size: 1500, max_flow_size: 8192, send_tsbpd_latency: 50ms, recv_tsbpd_latency: 120ms, crypto_manager: None, stream_id: Some("second") }
[2021-02-13T17:19:31Z INFO  srt_protocol::protocol::receiver] Receiving started from 127.0.0.1:51989, with latency=120ms
[2021-02-13T17:19:40Z INFO  srt_protocol::protocol::receiver] SRT#5130BE66: Shutdown packet received, flushing receiver...
[2021-02-13T17:19:40Z INFO  live] Closing server
[2021-02-13T17:19:40Z INFO  srt_tokio::tokio::socket] SRT#CA89564E Exiting because underlying stream ended

Fix receiver flushing issue

There is a certain order of events that fails the tests (one of the reasons for the lossy test's flakiness):

  • Sender sends last packet
  • Receiver sends ACK
  • Sender sends ACK2
    • Sender is now flushed and exits
  • ACK 2 gets dropped
  • Receiver never becomes flushed, as it's waiting for the ACK2

https://russelltg.visualstudio.com/srt-rs/_build/results?buildId=269&view=logs&jobId=4b8c379e-28f4-5b8f-fa0b-784d388c1b0a&j=4b8c379e-28f4-5b8f-fa0b-784d388c1b0a&t=22d562d2-6d09-5b1d-995a-3d120df6fb13

Please do not use custom Debug

The current custom implementations make it much harder to debug, since useful information is omitted.

If you are fine with it, I can send a PR that either removes the current impls or changes them to Display.
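As an example of the proposed convention (illustrative types, not the crate's code): derive Debug so no fields are hidden, and keep the compact rendering, like the SRT#... socket ids seen in the logs above, in Display:

use std::fmt;

// Derived Debug prints every field; Display carries the short form.
#[derive(Debug)]
struct SocketId(u32);

impl fmt::Display for SocketId {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Matches the "SRT#5130BE66"-style ids in the logs.
        write!(f, "SRT#{:08X}", self.0)
    }
}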

Figure out plan for buffer sizing

Currently, the buffers are fully dynamically sized and can grow to an arbitrary size. This is a security vulnerability, as we are trusting the peer not to send wild packets. For example, if it sends a packet with a sequence number far in the future, a loss list entry is added for each missing number, which generally crashes the process due to out of memory. The buffers that need bounding (a sanity-check sketch follows the list):

  • packet history window
  • loss list
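A minimal sketch of the kind of sanity check this implies (simplified: real SRT sequence numbers are 31-bit and wrap; the 8192 bound mirrors the max_flow_size seen in the connection settings elsewhere on this page):

// Reject packets implausibly far ahead of the next expected sequence
// number, instead of allocating loss-list entries for the whole gap.
const MAX_FLOW_SIZE: u32 = 8192;

fn plausible_seq(next_expected: u32, incoming: u32) -> bool {
    // Wrapping subtraction gives the forward distance modulo 2^32.
    incoming.wrapping_sub(next_expected) <= MAX_FLOW_SIZE
}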

Example/receiver.rs doesn't work for me

Hi I'm jaehyuk from S.Korea.

Thanks to this open-source project, I've been able to progress my project easily, but recently I ran into something weird.
My application server (a streaming server using SRT and Rust) always stops at a specific time when I test with the steps below.

  1. cargo run --example receiver
  2. cargo run --example sender

After 4295-4300 seconds or so (approximately 1 hour 11 minutes), the receiver stops.
I tested over 10 times, but always got the same result.

Could somebody please help? At the least, I hope you can test the example code; don't forget that you need to stream for over 4300 seconds.

I'm looking forward to any answer.
Thanks.

// execute sender
% cargo run --example sender
    Finished dev [unoptimized + debuginfo] target(s) in 0.10s
     Running `target/debug/examples/sender`
Sent 711010 packets

// execute receiver
examples % cargo run --example receiver
    Finished dev [unoptimized + debuginfo] target(s) in 2.72s
     Running `/Users/me/test/srt-rs/target/debug/examples/receiver`
Received SystemTime { tv_sec: 1611567922, tv_nsec: 399080000 }  <<----- start point
Received SystemTime { tv_sec: 1611572217, tv_nsec: 305890000 } <<------ stop point: it took 4295 sec, then stopped.
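For what it's worth, SRT timestamps are 32-bit microsecond counters, and 2^32 µs = 4294.967296 s, which lands exactly in the reported 4295-4300 s window; the failure is plausibly the timestamp wrap-around discussed in the "Timestamp looping" issue below.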

Implement Light, Small, and Full ACK as defined by the spec

See: https://datatracker.ietf.org/doc/html/draft-sharabayko-mops-srt-00#section-3.2.3

There are several types of ACK packets:

A Full ACK control packet is sent every 10 ms and has all the fields of Figure 10.

A Lite ACK control packet includes only the Last Acknowledged Packet Sequence Number field. The Type-specific Information field should be set to 0.

A Small ACK includes the fields up to and including the Available Buffer Size field. The Type-specific Information field should be set to 0.

The sender only acknowledges the receipt of Full ACK packets (see ACKACK).

The Lite ACK and Small ACK packets are used in cases when the receiver should acknowledge received data packets more often than every 10 ms. This is usually needed at high data rates. It is up to the receiver to decide the condition and the type of ACK packet to send (Lite or Small). The recommendation is to send a Lite ACK for every 64 packets received.
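One way these three variants could be modeled (illustrative field names, not srt-protocol's actual types):

// Sketch of the three ACK variants described by the draft.
enum Ack {
    // Only the Last Acknowledged Packet Sequence Number.
    Lite { last_ack_seq: u32 },
    // Fields up to and including Available Buffer Size.
    Small { last_ack_seq: u32, rtt: u32, rtt_variance: u32, avail_buffer: u32 },
    // All fields of the full ACK; the only variant the sender answers with ACKACK.
    Full {
        last_ack_seq: u32,
        rtt: u32,
        rtt_variance: u32,
        avail_buffer: u32,
        packets_recv_rate: u32,
        est_link_capacity: u32,
        recv_rate_bytes: u32,
    },
}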

Local UDP packet drops reported in /proc/net/udp

In my testing I've been seeing relatively high rates of UDP packet drops reported via /proc/net/udp. I have the same stream being received on the same machine via RIST instead of SRT, and the socket entries for that process in /proc/net/udp report no drops; I hope this shows that the problem is not down to CPU overload etc on the machine hosting the test.

I've not debugged enough yet to show this is a problem with srt-rs, rather than a problem with my own application code blocking the event loop (for example), but I wondered if this is a known issue?

Documentation

  • Module level docs
  • Sender
  • Receiver
  • Packet
  • Integrating with other software #91

Timestamp looping

Timestamps are only 32-bit integers. Right now they don't wrap, and get_timestamp will probably overflow, which is not good.

Probably needs a Timestamp struct.
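A sketch of the wrapping arithmetic such a struct would need (illustrative, not the crate's code):

// A wrapping 32-bit microsecond timestamp; at 1 µs resolution a u32
// wraps every ~71.6 minutes, so comparisons must be done modulo 2^32.
#[derive(Clone, Copy, PartialEq, Eq)]
struct Timestamp(u32);

impl Timestamp {
    // Signed distance from `other` to `self`, correct across wrap-around.
    fn diff_us(self, other: Timestamp) -> i32 {
        self.0.wrapping_sub(other.0) as i32
    }
}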

Connection freezes in high_bandwidth

Steps to reproduce:

Set the default buffer sizes to be larger (this should be an option that we expose, but for now it isn't):

sudo sysctl -w net.core.rmem_default=8388608
sudo sysctl -w net.core.wmem_default=8388608

Run the high_bandwidth test:

cargo test --release --test=high_bandwidth -- --nocapture

And notice that after a few seconds it freezes. You may have to tweak the bandwidth by changing the RATE_MBPS constant to reproduce it (higher if it's working properly, lower if it isn't delivering the requested bandwidth).

This may or may not be reproducible outside of Linux, haven't tried.

Report missing data

If data is lost on the network, or dropped because it arrived too late, it would be great to have a way to tell where the losses lie within the data actually available.

Is this already possible? Looking at the API, I suppose it could be signaled via some kind of Err(io::Error) response?

If not possible right now, consider expanding the API so an application can know data was lost before it attempts to process the next data following the loss.
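One possible shape for such an API, making losses explicit items in the stream (purely hypothetical, not the current srt-tokio interface):

// Losses become explicit items interleaved with data, so the application
// knows about a gap before it processes the data that follows it.
enum ReceivedItem {
    Data(std::time::Instant, bytes::Bytes),
    // `count` packets were lost or dropped before the following data.
    Lost { count: u32 },
}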

Statistics reporting

If #53 wants to be more or less compatible with the reference implementation API, it might be good to support extracting the same statistics.

// Performance monitor with Byte counters for better bitrate estimation.
int srt_bstats(SRTSOCKET u, SRT_TRACEBSTATS * perf, int clear);

// Performance monitor with Byte counters and instantaneous stats instead of moving averages for Snd/Rcvbuffer sizes.
int srt_bistats(SRTSOCKET u, SRT_TRACEBSTATS * perf, int clear, int instantaneous);

With SRT_TRACEBSTATS containing the following:

Statistic | Type of Statistic | Unit of Measurement | Data Type
msTimeStamp | accumulated | ms (milliseconds) | int64_t
pktSentTotal | accumulated | packets | int64_t
pktRecvTotal | accumulated | packets | int64_t
pktSentUniqueTotal | accumulated | packets | int64_t
pktRecvUniqueTotal | accumulated | packets | int64_t
pktSndLossTotal | accumulated | packets | int32_t
pktRcvLossTotal | accumulated | packets | int32_t
pktRetransTotal | accumulated | packets | int32_t
pktRcvRetransTotal | accumulated | packets | int32_t
pktSentACKTotal | accumulated | packets | int32_t
pktRecvACKTotal | accumulated | packets | int32_t
pktSentNAKTotal | accumulated | packets | int32_t
pktRecvNAKTotal | accumulated | packets | int32_t
usSndDurationTotal | accumulated | us (microseconds) | int64_t
pktSndDropTotal | accumulated | packets | int32_t
pktRcvDropTotal | accumulated | packets | int32_t
pktRcvUndecryptTotal | accumulated | packets | int32_t
pktSndFilterExtraTotal | accumulated | packets | int32_t
pktRcvFilterExtraTotal | accumulated | packets | int32_t
pktRcvFilterSupplyTotal | accumulated | packets | int32_t
pktRcvFilterLossTotal | accumulated | packets | int32_t
byteSentTotal | accumulated | bytes | uint64_t
byteRecvTotal | accumulated | bytes | uint64_t
byteSentUniqueTotal | accumulated | bytes | uint64_t
byteRecvUniqueTotal | accumulated | bytes | uint64_t
byteRcvLossTotal | accumulated | bytes | uint64_t
byteRetransTotal | accumulated | bytes | uint64_t
byteSndDropTotal | accumulated | bytes | uint64_t
byteRcvDropTotal | accumulated | bytes | uint64_t
byteRcvUndecryptTotal | accumulated | bytes | uint64_t
pktSent | interval-based | packets | int64_t
pktRecv | interval-based | packets | int64_t
pktSentUnique | interval-based | packets | int64_t
pktRecvUnique | interval-based | packets | int64_t
pktSndLoss | interval-based | packets | int32_t
pktRcvLoss | interval-based | packets | int32_t
pktRetrans | interval-based | packets | int32_t
pktRcvRetrans | interval-based | packets | int32_t
pktSentACK | interval-based | packets | int32_t
pktRecvACK | interval-based | packets | int32_t
pktSentNAK | interval-based | packets | int32_t
pktRecvNAK | interval-based | packets | int32_t
pktSndFilterExtra | interval-based | packets | int32_t
pktRcvFilterExtra | interval-based | packets | int32_t
pktRcvFilterSupply | interval-based | packets | int32_t
pktRcvFilterLoss | interval-based | packets | int32_t
mbpsSendRate | interval-based | Mbps | double
mbpsRecvRate | interval-based | Mbps | double
usSndDuration | interval-based | us (microseconds) | int64_t
pktReorderDistance | interval-based | packets | int32_t
pktRcvBelated | interval-based | packets | int64_t
pktSndDrop | interval-based | packets | int32_t
pktRcvDrop | interval-based | packets | int32_t
pktRcvUndecrypt | interval-based | packets | int32_t
byteSent | interval-based | bytes | uint64_t
byteRecv | interval-based | bytes | uint64_t
byteSentUnique | interval-based | bytes | uint64_t
byteRecvUnique | interval-based | bytes | uint64_t
byteRcvLoss | interval-based | bytes | uint64_t
byteRetrans | interval-based | bytes | uint64_t
byteSndDrop | interval-based | bytes | uint64_t
byteRcvDrop | interval-based | bytes | uint64_t
byteRcvUndecrypt | interval-based | bytes | uint64_t
usPktSndPeriod | instantaneous | us (microseconds) | double
pktFlowWindow | instantaneous | packets | int32_t
pktCongestionWindow | instantaneous | packets | int32_t
pktFlightSize | instantaneous | packets | int32_t
msRTT | instantaneous | ms (milliseconds) | double
mbpsBandwidth | instantaneous | Mbps | double
byteAvailSndBuf | instantaneous | bytes | int32_t
byteAvailRcvBuf | instantaneous | bytes | int32_t
mbpsMaxBW | instantaneous | Mbps | double
byteMSS | instantaneous | bytes | int32_t
pktSndBuf | instantaneous | packets | int32_t
byteSndBuf | instantaneous | bytes | int32_t
msSndBuf | instantaneous | ms (milliseconds) | int32_t
msSndTsbPdDelay | instantaneous | ms (milliseconds) | int32_t
pktRcvBuf | instantaneous | packets | int32_t
byteRcvBuf | instantaneous | bytes | int32_t
msRcvBuf | instantaneous | ms (milliseconds) | int32_t
msRcvTsbPdDelay | instantaneous | ms (milliseconds) | int32_t
pktReorderTolerance | instantaneous | packets | int32_t
pktRcvAvgBelatedTime | instantaneous | ms (milliseconds) | double

Customization in stransmit-rs

Figure out how to parse flags. I'd like the format to look like:

stransmit-rs --latency 200ms --local_ip 34.25.14.13 srt://:1831 udp://127.0.0.1:1234
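A sketch of that interface using clap's derive API (flag and positional names are taken from the example invocation; this is hypothetical, not the current CLI code):

use clap::Parser;

// Hypothetical CLI definition matching the invocation above.
#[derive(Parser)]
struct Opts {
    // e.g. --latency 200ms
    #[arg(long)]
    latency: Option<String>,
    // e.g. --local_ip 34.25.14.13
    #[arg(long = "local_ip")]
    local_ip: Option<std::net::IpAddr>,
    // e.g. srt://:1831
    from: String,
    // e.g. udp://127.0.0.1:1234
    to: String,
}

fn main() {
    let opts = Opts::parse();
    println!("{} -> {}", opts.from, opts.to);
}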

Exposing streamid

Would it be possible to expose the streamid described here? I see that it's already getting parsed in the handshake, but I'm not really sure what the best way to expose it would be. AFAICT it only arrives after a listener socket has received a connection.

Message number custom struct

They're 29-bit integers, and need special attention (like sequence numbers).

This should be easy with the mod_number! macro.
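A sketch of the 29-bit wrapping arithmetic such a struct needs, i.e. the kind of code a mod_number!-style macro would generate (illustrative only):

// All arithmetic is done modulo 2^29, like sequence numbers.
const MSG_NUMBER_MASK: u32 = (1 << 29) - 1;

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct MsgNumber(u32);

impl MsgNumber {
    fn new(v: u32) -> MsgNumber {
        MsgNumber(v & MSG_NUMBER_MASK)
    }
    // Successor, wrapping at 2^29.
    fn next(self) -> MsgNumber {
        MsgNumber::new(self.0.wrapping_add(1))
    }
}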

How to get OBS Stream --> srt-rs --> Player

Ubuntu 21.04

sudo apt update; sudo apt install git build-essential cargo
mkdir -p build
cd build
git clone https://github.com/russelltg/srt-rs
cd srt-rs
cargo build --release
sudo ln -s "$PWD/target/release/srt-transmit" "/usr/local/bin/srt-transmit"

It has now been installed and is ready to use. Make sure any ports you want to use are allowed through the firewall:

sudo ufw allow 3333
sudo ufw allow 4444

Now you can run the application. The quotation marks are required; without them the shell will interpret the &:

srt-transmit "srt://:3333?latency_ms=20&autoreconnect" "srt://:4444?latency_ms=20&multiplex&autoreconnect"

Receiving SRT at 0.0.0.0:3333 and Sending SRT at 0.0.0.0:4444

OBS Settings

[screenshot: OBS stream output settings]

And I test this with ffplay

ffplay -fflags nobuffer -i srt://192.168.68.56:4444

Memory leak after 1h+ tests

I've tried to do a few tests just leaving a receiving application running, but there seems to be some change in behavior after an hour-and-a-bit, with unbounded memory usage growth leading to eventual failure.

Here are memory consumption graphs from two different runs showing the same pattern after about the same running time (the data rate of the source was around the same in both runs):

[graph: memory consumption over time for the two runs]

I assume the memory use itself is a consequence of #40 - I've raised this as a separate issue since the pattern of behavior change seems like its own problem. I'm not clear if this is correlated with the passage of time, bytes or packets - I will test further.
