russelltg / srt-rs
SRT implementation in Rust
License: Apache License 2.0
I think they should have `Instant -> Instant` signatures, returning the next wakeup time.
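For illustration, a minimal sketch of that shape, assuming a hypothetical TimerDriven trait (the trait and names are mine, not the actual srt-rs API):

    use std::time::Instant;

    /// Hypothetical trait for protocol components driven by timer events.
    trait TimerDriven {
        /// Handle a timer firing at `now`; return the next wakeup time.
        fn on_timer(&mut self, now: Instant) -> Instant;
    }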
Timestamps are only 32-bit integers. Right now they don't wrap around, and get_timestamp will probably overflow, which is not good. This probably needs a Timestamp struct.
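A sketch of what such a struct could look like, assuming microsecond timestamps and wrapping arithmetic (illustrative only, not the repo's actual type). Note that a 32-bit microsecond counter wraps after 2^32 µs, which is about 4295 seconds.

    /// 32-bit microsecond timestamp that wraps instead of overflowing.
    #[derive(Copy, Clone, PartialEq, Eq, Debug)]
    struct Timestamp(u32);

    impl Timestamp {
        fn add_micros(self, micros: u32) -> Timestamp {
            Timestamp(self.0.wrapping_add(micros))
        }

        /// Signed distance between two timestamps, correct across wraparound.
        fn diff(self, other: Timestamp) -> i32 {
            self.0.wrapping_sub(other.0) as i32
        }
    }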
Data is pushed into Receiver.packet_history_window here; however, there doesn't seem to be any code to remove items from this Vec, so we eventually run out of memory.
As detailed in
https://tools.ietf.org/html/draft-sharabayko-mops-srt-00#section-4.6
Previous partial attempt: c94d2b0
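A minimal sketch of the kind of pruning the packet_history_window issue above calls for, assuming a hypothetical prune_history helper that drops entries older than some horizon (a real fix would follow the too-late-packet-drop rules from the draft):

    use std::time::{Duration, Instant};

    /// Hypothetical helper: keep only entries newer than `horizon` so the
    /// history window stays bounded instead of growing forever.
    fn prune_history<T>(history: &mut Vec<(Instant, T)>, horizon: Duration, now: Instant) {
        history.retain(|(arrival, _)| now.duration_since(*arrival) < horizon);
    }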
srt-rs should fare much better than the reference implementation for tens of thousands of connections. At minimum, srt-rs should have equal performance, so this should be tested.
The current custom implementations make debugging much harder, since useful information is omitted. If you are fine with it, I can send a PR that either removes the current impls or changes them to Display.
Should be easy, will require thinking about how DuplexConnection sends packets.
If data is lost on the network, or dropped because it arrived too late, it would be great to have a way to tell where the losses lie within the data actually available.
Is this already possible? Looking at the API, I suppose it could be signaled via some kind of Err(io::Error) response?
If not possible right now, consider expanding the API so an application can know data was lost before it attempts to process the next data following the loss.
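One possible shape for such an API, purely as a hypothetical sketch (the enum and variant names are mine, not the current srt-rs interface):

    use bytes::Bytes;
    use std::time::Instant;

    /// Hypothetical stream item that makes loss visible to the application.
    enum DataItem {
        /// A received payload and its delivery time.
        Data(Instant, Bytes),
        /// A gap in the stream: `count` packets were lost or dropped
        /// before the next available item.
        Dropped { count: u64 },
    }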
Hi, I'm Jaehyuk from South Korea.
Thanks to this open source project I was able to make progress easily, but I recently ran into something weird.
My application (a streaming server using SRT and Rust) always stops at a specific time when I test the code below.
After 4295-4300 seconds or so (approximately 1 hour 11 minutes), the receiver stops.
I tested over 10 times and always got the same result.
Please, somebody help me.
At least I hope you would test the example code.
Don't forget: you need to stream for over 4300 seconds.
I'm looking forward to any answer.
Thanks.
(Note: 4295 seconds is almost exactly 2^32 microseconds, which suggests the 32-bit timestamp wraparound discussed above.)
// execute sender
% cargo run --example sender
Finished dev [unoptimized + debuginfo] target(s) in 0.10s
Running `target/debug/examples/sender`
Sent 711010 packets
// execute receiver
examples % cargo run --example receiver
Finished dev [unoptimized + debuginfo] target(s) in 2.72s
Running `/Users/me/test/srt-rs/target/debug/examples/receiver`
Received SystemTime { tv_sec: 1611567922, tv_nsec: 399080000 } <<----- start point
Received SystemTime { tv_sec: 1611572217, tv_nsec: 305890000 } <<------ stop point; it took 4295 sec, then stopped.
When testing a connected receiver by shutting down the listening sender, the receiver seems to get stuck.
...
[2021-04-02T16:24:47.399737591Z INFO srt_protocol::protocol::receiver] SRT#5648B729: Shutdown packet received, flushing receiver...
[2021-04-02T16:24:47.900748198Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=2
[2021-04-02T16:24:48.400108071Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=3
[2021-04-02T16:24:48.900079839Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=4
[2021-04-02T16:24:49.399843113Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=5
[2021-04-02T16:24:49.900738399Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=6
[2021-04-02T16:24:50.400831478Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=7
[2021-04-02T16:24:50.900475513Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=8
[2021-04-02T16:24:51.400811724Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=9
[2021-04-02T16:24:51.900250477Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=10
[2021-04-02T16:24:52.400394786Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=11
[2021-04-02T16:24:52.900561083Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=12
[2021-04-02T16:24:53.400346057Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=13
[2021-04-02T16:24:53.900796898Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=14
[2021-04-02T16:24:54.400438398Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=15
[2021-04-02T16:24:54.900032529Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=16
[2021-04-02T16:24:54.900066650Z INFO srt_protocol::protocol::connection] 16 exps, timeout!
[2021-04-02T16:24:55.402768898Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=17
[2021-04-02T16:24:55.900875327Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=18
[2021-04-02T16:24:56.410334415Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=19
[2021-04-02T16:24:56.905888980Z INFO srt_protocol::protocol::connection] Exp event hit, exp count=20
...
This continues until I kill the process.
When are they to be sent? Etc.
I am trying to connect multiple clients (senders) to a server (multiplexed receiver). The connection of two clients works without issues, but the server quits unexpectedly after one of the clients closes the connection.
The code looks as follows:
use futures::prelude::*;
use log::{error, info};
use srt_tokio::{SrtSocket, SrtSocketBuilder};
use std::io;

#[tokio::main]
async fn main() -> io::Result<()> {
    env_logger::init();
    let binding = SrtSocketBuilder::new_listen()
        .local_port(3333)
        .build_multiplexed()
        .await?;
    tokio::pin!(binding);
    info!("Listening on :3333");
    // NOTE: this loop exits as soon as `next()` yields `None` or `Some(Err)`.
    while let Some(Ok(connection)) = binding.next().await {
        let mut srt_socket: SrtSocket = connection.into();
        info!("New client: {:?}", srt_socket.settings());
        tokio::spawn(async move {
            while let Some((_instant, _bytes)) = srt_socket.try_next().await.unwrap_or(None) {
                // info!("Received {:?} bytes", bytes.len());
            }
            match srt_socket.close().await {
                Ok(()) => {
                    info!("Client disconnected");
                }
                Err(error) => {
                    error!("Error on client disconnect: {:?}", error);
                }
            }
        });
    }
    info!("Closing server");
    Ok(())
}
Here are the logs:
[2021-02-13T17:19:02Z INFO live] Listening on :3333
[2021-02-13T17:19:25Z INFO live] New client: ConnectionSettings { remote: 127.0.0.1:51988, remote_sockid: SRT#07374E6E, local_sockid: SRT#CA89564E, socket_start_time: Instant { t: 7910.6199645s }, init_send_seq_num: SeqNumber(55492652), init_recv_seq_num: SeqNumber(206182548), max_packet_size: 1500, max_flow_size: 8192, send_tsbpd_latency: 50ms, recv_tsbpd_latency: 120ms, crypto_manager: None, stream_id: Some("first") }
[2021-02-13T17:19:25Z INFO srt_protocol::protocol::receiver] Receiving started from 127.0.0.1:51988, with latency=120ms
[2021-02-13T17:19:31Z INFO live] New client: ConnectionSettings { remote: 127.0.0.1:51989, remote_sockid: SRT#0B96972D, local_sockid: SRT#5130BE66, socket_start_time: Instant { t: 7915.9080202s }, init_send_seq_num: SeqNumber(192230949), init_recv_seq_num: SeqNumber(250224084), max_packet_size: 1500, max_flow_size: 8192, send_tsbpd_latency: 50ms, recv_tsbpd_latency: 120ms, crypto_manager: None, stream_id: Some("second") }
[2021-02-13T17:19:31Z INFO srt_protocol::protocol::receiver] Receiving started from 127.0.0.1:51989, with latency=120ms
[2021-02-13T17:19:40Z INFO srt_protocol::protocol::receiver] SRT#5130BE66: Shutdown packet received, flushing receiver...
[2021-02-13T17:19:40Z INFO live] Closing server
[2021-02-13T17:19:40Z INFO srt_tokio::tokio::socket] SRT#CA89564E Exiting because underlying stream ended
Right now it just hashes the peer address, which is potentially wrong.
They're 29-bit integers, and need special attention (like sequence numbers). This should be easy with the `mod_number!` macro.
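For illustration, a hand-rolled sketch of 29-bit wrapping arithmetic, assuming the macro expands to something along these lines (the constants and helper are hypothetical):

    /// Message numbers occupy 29 bits, so arithmetic must wrap at 2^29.
    const MSG_NUM_BITS: u32 = 29;
    const MSG_NUM_MASK: u32 = (1 << MSG_NUM_BITS) - 1;

    /// Hypothetical helper: add `n` to a message number, modulo 2^29.
    fn msg_num_add(a: u32, n: u32) -> u32 {
        a.wrapping_add(n) & MSG_NUM_MASK
    }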
Would it be possible to expose the streamid described here? I see that it's already getting parsed in the handshake, but I'm not really sure what the best way to expose it would be. AFAICT it only arrives after a listener socket has received a connection.
Should be an easy fix: just manually implement Debug to not include salt and wrapped_keys.
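A minimal sketch of such a manual impl, assuming a struct with the fields named in the issue (the real type and its other fields may differ):

    use std::fmt;

    #[allow(dead_code)]
    struct CryptoSettings {
        salt: [u8; 16],
        wrapped_keys: Vec<u8>,
    }

    impl fmt::Debug for CryptoSettings {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            // Deliberately omit `salt` and `wrapped_keys` so secrets
            // never show up in logs.
            f.debug_struct("CryptoSettings").finish_non_exhaustive()
        }
    }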
There is a certain order of events that fails the tests (one of the reasons for lossy's flakiness) --
This probably entails setting the item in Sender to be (Bytes, Instant).
Right now they're hardcoded.
Right now not even localhost works...
I'm thinking 3 crates
See: https://datatracker.ietf.org/doc/html/draft-sharabayko-mops-srt-00#section-3.2.3
There are several types of ACK packets:

- A Full ACK control packet is sent every 10 ms and has all the fields of Figure 10.
- A Lite ACK control packet includes only the Last Acknowledged Packet Sequence Number field. The Type-specific Information field should be set to 0.
- A Small ACK includes the fields up to and including the Available Buffer Size field. The Type-specific Information field should be set to 0.

The sender only acknowledges the receipt of Full ACK packets (see ACKACK).

The Lite ACK and Small ACK packets are used in cases when the receiver should acknowledge received data packets more often than every 10 ms. This is usually needed at high data rates. It is up to the receiver to decide the condition and the type of ACK packet to send (Lite or Small). The recommendation is to send a Lite ACK for every 64 packets received.
Right now it's unused. Low priority as it's not useful for video.
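A sketch of how the three flavors might be modeled (hypothetical types; srt-protocol's actual representation may differ):

    /// Hypothetical model of the three ACK flavors from the draft.
    enum Ack {
        /// Full ACK: all fields of Figure 10; the only kind the sender
        /// acknowledges with an ACKACK.
        Full {
            last_acked: u32,
            rtt_us: u32,
            rtt_variance_us: u32,
            buffer_available: u32,
            packet_recv_rate: u32,
            est_link_capacity: u32,
            recv_rate: u32,
        },
        /// Lite ACK: only the Last Acknowledged Packet Sequence Number.
        Lite { last_acked: u32 },
        /// Small ACK: fields up to and including Available Buffer Size.
        Small {
            last_acked: u32,
            rtt_us: u32,
            rtt_variance_us: u32,
            buffer_available: u32,
        },
    }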
Ubuntu 21.04
sudo apt update; sudo apt install git build-essential cargo
mkdir -p build
cd build
git clone https://github.com/russelltg/srt-rs
cd srt-rs
cargo build --release
sudo ln -s "$PWD/target/release/srt-transmit" "/usr/local/bin/srt-transmit"
It has now been installed and is ready to use. Make sure any ports you want to use are allowed through the firewall:
sudo ufw allow 3333
sudo ufw allow 4444
Now you can run the application. QUOTATION MARKS ARE REQUIRED, or else the & will be interpreted by the shell and break everything:
srt-transmit "srt://:3333?latency_ms=20&autoreconnect" "srt://:4444?latency_ms=20&multiplex&autoreconnect"
Receiving SRT at 0.0.0.0:3333 and Sending SRT at 0.0.0.0:4444
And I test this with ffplay:
ffplay -fflags nobuffer -i srt://192.168.68.56:4444
My reading of the SrtSocket description is that client code can expect that the Instant value does not decrease from one yielded value to the next.
I added some logging to my application code
let mut last_arrival_time = None;
while let Some((arrival_time, bytes)) = sock.try_next().await.expect("sock.try_next() failed") {
    if let Some(last_arrival_time) = last_arrival_time {
        if arrival_time < last_arrival_time {
            error!("Out of order packets. last={:?} this={:?}", last_arrival_time, arrival_time);
            continue;
        }
    }
    last_arrival_time = Some(arrival_time);
    ...
}
and can see that very occasionally the value does decrease, by 10µs in this case,
[2021-04-07T11:10:03.777845477Z ERROR test::srt] Out of order packets. last=Instant { tv_sec: 4108711, tv_nsec: 501700506 } this=Instant { tv_sec: 4108711, tv_nsec: 501690506 }
by 8µs in this case,
[2021-04-08T06:52:01.946181252Z ERROR test::srt] Out of order packets. last=Instant { tv_sec: 4179629, tv_nsec: 670029779 } this=Instant { tv_sec: 4179629, tv_nsec: 670021779 }
Is my reading of the API wrong, or does this represent a problem somewhere?
There are still:
I've tried to do a few tests just leaving a receiving application running, but there seems to be some change in behavior after an hour-and-a-bit, with unbounded memory usage growth leading to eventual failure.
Here are memory consumption graphs from two different runs showing the same pattern after about the same running time (the data rate of the source would be around the same).
I assume the memory use itself is a consequence of #40 - I've raised this as a separate issue since the pattern of behavior change seems like its own problem. I'm not clear if this is correlated with the passage of time, bytes or packets - I will test further.
Keys need to be re-created using the even-odd scheme. It's partially implemented; shouldn't be too much work, but maybe tricky to get right.
Figure out how to parse flags. I'd like the format to look like:
stransmit-rs --latency 200ms --local_ip 34.25.14.13 srt://:1831 udp://127.0.0.1:1234
The haicrypt module. Unfortunately I think this has to be integrated into the sender, and not layered in. It's possible it could lie under sender and receiver, sending its own handshake packets etc.
Sender should drop packets that are too late to be delivered and then notify the receiver
@robertream assign yourself if you want it
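A minimal sketch of the "too late" test, under the assumption that each buffered packet carries its origin time and that a packet misses its deadline once origin + latency has passed (names are illustrative):

    use std::time::{Duration, Instant};

    /// Hypothetical check: a packet whose delivery deadline has already
    /// passed can be dropped from the send buffer, and the drop reported
    /// to the receiver.
    fn too_late(origin: Instant, latency: Duration, now: Instant) -> bool {
        now > origin + latency
    }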
One issue in TSBPD is knowing how to gauge when to release packets, which requires somehow syncing clocks on both sides.
Important snippets of code from the reference impl:
The ref impl can do automatic reconnect
I think if the receiver sends a shutdown, it should cancel the message. However, the mechanism for this seems complicated; it's worth researching how the reference implementation does this.
The encryption docs for SRT note that it can support a pre-shared KEK instead of a password. This should be relatively easy to implement.
A C API made with cbindgen or something would make this crate more useful
If #53 wants to be more or less compatible with the reference implementation API, it might be good to support extracting the same statistics.
// Performance monitor with Byte counters for better bitrate estimation.
int srt_bstats(SRTSOCKET u, SRT_TRACEBSTATS * perf, int clear);
// Performance monitor with Byte counters and instantaneous stats instead of moving averages for Snd/Rcvbuffer sizes.
int srt_bistats(SRTSOCKET u, SRT_TRACEBSTATS * perf, int clear, int instantaneous);
With SRT_TRACEBSTATS containing the following:
Statistic | Type of Statistic | Unit of Measurement | Available for Sender | Available for Receiver | Data Type | Completed
---|---|---|---|---|---|---
msTimeStamp | accumulated | ms (milliseconds) | ✓ | ✓ | int64_t | |
pktSentTotal | accumulated | packets | ✓ | - | int64_t | |
pktRecvTotal | accumulated | packets | - | ✓ | int64_t | |
pktSentUniqueTotal | accumulated | packets | ✓ | - | int64_t | |
pktRecvUniqueTotal | accumulated | packets | - | ✓ | int64_t | |
pktSndLossTotal | accumulated | packets | ✓ | - | int32_t | |
pktRcvLossTotal | accumulated | packets | - | ✓ | int32_t | |
pktRetransTotal | accumulated | packets | ✓ | - | int32_t | |
pktRcvRetransTotal | accumulated | packets | - | ✓ | int32_t | |
pktSentACKTotal | accumulated | packets | - | ✓ | int32_t | |
pktRecvACKTotal | accumulated | packets | ✓ | - | int32_t | |
pktSentNAKTotal | accumulated | packets | - | ✓ | int32_t | |
pktRecvNAKTotal | accumulated | packets | ✓ | - | int32_t | |
usSndDurationTotal | accumulated | us (microseconds) | ✓ | - | int64_t | |
pktSndDropTotal | accumulated | packets | ✓ | - | int32_t | |
pktRcvDropTotal | accumulated | packets | - | ✓ | int32_t | |
pktRcvUndecryptTotal | accumulated | packets | - | ✓ | int32_t | |
pktSndFilterExtraTotal | accumulated | packets | ✓ | - | int32_t | |
pktRcvFilterExtraTotal | accumulated | packets | - | ✓ | int32_t | |
pktRcvFilterSupplyTotal | accumulated | packets | - | ✓ | int32_t | |
pktRcvFilterLossTotal | accumulated | packets | - | ✓ | int32_t | |
byteSentTotal | accumulated | bytes | ✓ | - | uint64_t | |
byteRecvTotal | accumulated | bytes | - | ✓ | uint64_t | |
byteSentUniqueTotal | accumulated | bytes | ✓ | - | uint64_t | |
byteRecvUniqueTotal | accumulated | bytes | - | ✓ | uint64_t | |
byteRcvLossTotal | accumulated | bytes | - | ✓ | uint64_t | |
byteRetransTotal | accumulated | bytes | ✓ | - | uint64_t | |
byteSndDropTotal | accumulated | bytes | ✓ | - | uint64_t | |
byteRcvDropTotal | accumulated | bytes | - | ✓ | uint64_t | |
byteRcvUndecryptTotal | accumulated | bytes | - | ✓ | uint64_t | |
pktSent | interval-based | packets | ✓ | - | int64_t | |
pktRecv | interval-based | packets | - | ✓ | int64_t | |
pktSentUnique | interval-based | packets | ✓ | - | int64_t | |
pktRecvUnique | interval-based | packets | - | ✓ | int64_t | |
pktSndLoss | interval-based | packets | ✓ | - | int32_t | |
pktRcvLoss | interval-based | packets | - | ✓ | int32_t | |
pktRetrans | interval-based | packets | ✓ | - | int32_t | |
pktRcvRetrans | interval-based | packets | - | ✓ | int32_t | |
pktSentACK | interval-based | packets | - | ✓ | int32_t | |
pktRecvACK | interval-based | packets | ✓ | - | int32_t | |
pktSentNAK | interval-based | packets | - | ✓ | int32_t | |
pktRecvNAK | interval-based | packets | ✓ | - | int32_t | |
pktSndFilterExtra | interval-based | packets | ✓ | - | int32_t | |
pktRcvFilterExtra | interval-based | packets | - | ✓ | int32_t | |
pktRcvFilterSupply | interval-based | packets | - | ✓ | int32_t | |
pktRcvFilterLoss | interval-based | packets | - | ✓ | int32_t | |
mbpsSendRate | interval-based | Mbps | ✓ | - | double | |
mbpsRecvRate | interval-based | Mbps | - | ✓ | double | |
usSndDuration | interval-based | us (microseconds) | ✓ | - | int64_t | |
pktReorderDistance | interval-based | packets | - | ✓ | int32_t | |
pktRcvBelated | interval-based | packets | - | ✓ | int64_t | |
pktSndDrop | interval-based | packets | ✓ | - | int32_t | |
pktRcvDrop | interval-based | packets | - | ✓ | int32_t | |
pktRcvUndecrypt | interval-based | packets | - | ✓ | int32_t | |
byteSent | interval-based | bytes | ✓ | - | uint64_t | |
byteRecv | interval-based | bytes | - | ✓ | uint64_t | |
byteSentUnique | interval-based | bytes | ✓ | - | uint64_t | |
byteRecvUnique | interval-based | bytes | - | ✓ | uint64_t | |
byteRcvLoss | interval-based | bytes | - | ✓ | uint64_t | |
byteRetrans | interval-based | bytes | ✓ | - | uint64_t | |
byteSndDrop | interval-based | bytes | ✓ | - | uint64_t | |
byteRcvDrop | interval-based | bytes | - | ✓ | uint64_t | |
byteRcvUndecrypt | interval-based | bytes | - | ✓ | uint64_t | |
usPktSndPeriod | instantaneous | us (microseconds) | ✓ | - | double | |
pktFlowWindow | instantaneous | packets | ✓ | - | int32_t | |
pktCongestionWindow | instantaneous | packets | ✓ | - | int32_t | |
pktFlightSize | instantaneous | packets | ✓ | - | int32_t | |
msRTT | instantaneous | ms (milliseconds) | ✓ | ✓ | double | |
mbpsBandwidth | instantaneous | Mbps | ✓ | ✓ | double | |
byteAvailSndBuf | instantaneous | bytes | ✓ | - | int32_t | |
byteAvailRcvBuf | instantaneous | bytes | - | ✓ | int32_t | |
mbpsMaxBW | instantaneous | Mbps | ✓ | - | double | |
byteMSS | instantaneous | bytes | ✓ | ✓ | int32_t | |
pktSndBuf | instantaneous | packets | ✓ | - | int32_t | |
byteSndBuf | instantaneous | bytes | ✓ | - | int32_t | |
msSndBuf | instantaneous | ms (milliseconds) | ✓ | - | int32_t | |
msSndTsbPdDelay | instantaneous | ms (milliseconds) | ✓ | - | int32_t | |
pktRcvBuf | instantaneous | packets | - | ✓ | int32_t | |
byteRcvBuf | instantaneous | bytes | - | ✓ | int32_t | |
msRcvBuf | instantaneous | ms (milliseconds) | - | ✓ | int32_t | |
msRcvTsbPdDelay | instantaneous | ms (milliseconds) | - | ✓ | int32_t | |
pktReorderTolerance | instantaneous | packets | - | ✓ | int32_t | |
pktRcvAvgBelatedTime | instantaneous | ms (milliseconds) | - | ✓ | double |
I'm pretty sure it's wrong: way too slow.
Ways to test this: sending big files over lossy connections? We should compare to reference UDT.
It could also be that it's not worth implementing, and this repo should be pure SRT.
Currently, the buffers are fully dynamically sized and can grow to an arbitrary size. This is a security vulnerability, as we are trusting the peer not to send wild packets. For example, if it sends a packet for a sequence number far in the future, it adds a loss list entry for every sequence number in between, which generally crashes the process due to out-of-memory.
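A hedged sketch of the kind of bound this implies; MAX_LOSS_SPAN and loss_entries_to_add are hypothetical, and a real fix would tie the cap to the negotiated flow window:

    /// Hypothetical cap on how large a sequence gap we will materialize.
    const MAX_LOSS_SPAN: u32 = 8192;

    /// Only count loss entries for gaps that fit inside the window; anything
    /// larger is treated as a bogus packet instead of allocating `span` entries.
    fn loss_entries_to_add(expected: u32, received: u32) -> Option<u32> {
        let span = received.wrapping_sub(expected);
        if span == 0 || span > MAX_LOSS_SPAN {
            None
        } else {
            Some(span)
        }
    }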
Hello,
I'm researching how to send a stream to multiple listeners.
Is that possible with the SRT protocol?
Or do I need to send to each one on a different port?
Thanks,
Marc-Antoine
In my testing I've been seeing relatively high rates of UDP packet drops reported via /proc/net/udp. I have the same stream being received on the same machine via RIST instead of SRT, and the socket entries for that process in /proc/net/udp report no drops; I hope this shows that the problem is not down to CPU overload etc. on the machine hosting the test.
I've not debugged enough yet to show this is a problem with srt-rs, rather than a problem with my own application code blocking the event loop (for example), but I wondered if this is a known issue?
I saw that the current master has at least a TODO about it in srt-protocol/src/packet/control.rs:
// TODO: this is probably really wrong, so fix it
let peer_addr = if ip_buf[4..] == [0; 12][..] {
    IpAddr::from(Ipv4Addr::new(ip_buf[3], ip_buf[2], ip_buf[1], ip_buf[0]))
} else {
    IpAddr::from(ip_buf)
};
Is there anything else to be addressed?
Steps to reproduce:
Set the default buffer sizes to be larger (this should be an option that we expose, but for now it isn't):
sudo sysctl -w net.core.rmem_default=8388608
sudo sysctl -w net.core.wmem_default=8388608
Run the high_bandwidth test:
cargo test --release --test=high_bandwidth -- --nocapture
And notice that after a few seconds it freezes. You may have to tweak the bandwidth by changing the RATE_MBPS constant to get it to repro (higher if it's working properly, lower if it isn't delivering the requested bandwidth).
This may or may not be reproducible outside of Linux, haven't tried.
It's roughly correct now, but it needs to be RTT_0/2 + latency, which it isn't exactly. See
https://tools.ietf.org/html/draft-sharabayko-mops-srt-00#section-4.5
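As a sketch of the target calculation (the function is illustrative): the receiver should release a packet at its origin timestamp plus half the initial round-trip time plus the configured latency.

    use std::time::{Duration, Instant};

    /// Hypothetical TSBPD release time: origin + RTT_0/2 + latency.
    fn release_time(origin: Instant, initial_rtt: Duration, latency: Duration) -> Instant {
        origin + initial_rtt / 2 + latency
    }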
Hello,
What has to happen for the library to become production ready?
Hello,
I'm starting to use the library. What do you mean by "The reference implementation of SRT requires 3 threads per sender and 5 threads per receiver"?
Can you provide simple examples in an examples folder? A file named sender.rs could then be run using cargo run --example sender. It could send fake data and also serve as a demo, as sketched below.
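A sketch of what such an examples/sender.rs might look like. This assumes the srt-tokio builder API used elsewhere in these issues and a tokio 1.x runtime; treat the connection setup and timing loop as illustrative rather than a tested example:

    use bytes::Bytes;
    use futures::prelude::*;
    use srt_tokio::SrtSocketBuilder;
    use std::time::{Duration, Instant};

    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        // Connect to a local receiver; the port is arbitrary for the demo.
        let remote: std::net::SocketAddr = "127.0.0.1:3333".parse().unwrap();
        let mut socket = SrtSocketBuilder::new_connect(remote)
            .connect()
            .await?;

        let mut count: u64 = 0;
        loop {
            // Send fake data: an incrementing counter, one packet per millisecond.
            let payload = Bytes::copy_from_slice(&count.to_be_bytes());
            socket.send((Instant::now(), payload)).await?;
            count += 1;
            tokio::time::sleep(Duration::from_millis(1)).await;
        }
    }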