timonpost / laminar

A simple semi-reliable UDP protocol for multiplayer games

Rust 99.40% Makefile 0.12% Dockerfile 0.25% Shell 0.23%
gamedev networking protocol rust udp


laminar's People

Contributors

bors[bot], cbenoit, cosinep, cosmicspacedragon, daxpedda, erlend-sh, fhaynes, fraillt, futile, heliozoa, joseluis, jstnlef, kstrafe, luciofranco, luxter77, moxinilian, ncallaway, nehliin, palash25, philpax, ploppz, quant1um, realitix, smokku, speedyninja, timonpost, tom-leys, torkleyy, tw3n, xaeroxe


laminar's Issues

Re-work client counting to return a result

    pub fn count(&self) -> usize {
        match self.connections.read() {
            Ok(connections) => connections.len(),
            Err(_) => 0,
        }
    }

This should probably return a Result rather than swallow the error and return 0.
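The error could be surfaced to the caller instead of being swallowed. A minimal sketch, with simplified stand-in types (the real pool stores virtual connections, not unit values):

```rust
use std::collections::HashMap;
use std::net::SocketAddr;
use std::sync::RwLock;

// Hypothetical simplified pool; only the locking behaviour matters here.
struct ConnectionPool {
    connections: RwLock<HashMap<SocketAddr, ()>>,
}

impl ConnectionPool {
    /// Returns the connection count, or an error if the lock was poisoned.
    fn count(&self) -> Result<usize, String> {
        self.connections
            .read()
            .map(|connections| connections.len())
            .map_err(|e| format!("connection lock poisoned: {}", e))
    }
}
```

The caller can then decide whether a poisoned lock is fatal, rather than mistaking it for an empty pool.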

Clean up code base

Steps:

  • Run Clippy and fix suggestions
  • Clean up warnings in the crate, test, and examples
  • Rustfmt the repo with v0.99

Congestion avoidance behavior

Situation

I think you know what congestion avoidance is, but to recap: it is a way to limit the rate at which we send packets when the connection is bad, and to send more when it is good.

Currently I am making an estimation of the round-trip time (RTT). Every time we receive an acknowledgement, we check when the corresponding packet was sent, which tells us how long it took between sending the packet and receiving its acknowledgement. This measured value is the RTT. If the RTT is too high we have bad network conditions; when it is low we have good network conditions.

So now we are at the point where, if the RTT is too high, we want to limit outgoing packets. However, I don't think we want to limit sending packets inside laminar's UdpSocket. If the UdpSocket gets a packet to send, it needs to send it, so that the user can know for sure that a packet pushed onto the socket will be sent. Discarding packets when the network is bad should therefore be optional, or under the control of the end user. Imagine queueing a packet and, without you knowing, it gets discarded and never sent.

Of course this is pretty technical, so we need to add some abstraction that helps the user decide how many packets to send based on our estimation.

The question being:

Should we control the sending speed ourselves, or should we let the user control which packets to send when the network is slow? We probably need to add some abstractions either way. Please add ideas on how we can handle this congestion behavior better.

Remove extern crates

Currently we have dependencies on serde and bincode that are only used in the examples. We should fix this by, for example, moving the examples into their own project. Please suggest alternatives.

Transition to use macro instead of macro_use

Now that Rust 1.30 is stable and out, we can import macros with ordinary use statements instead of the ambiguous #[macro_use] attribute.

This issue should be easy, so I am opening it up as a good first issue. If anyone needs more direction, please ask here or on Discord 😄
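For illustration, the change has this shape. The log crate lines are commented-out assumptions about a typical dependency; the runnable part shows the same path-based macro import working against a std macro:

```rust
// Before (pre-1.30 style), the attribute pulled in every macro at once:
//
//     #[macro_use]
//     extern crate log;
//
// After (Rust 1.30+), macros are imported by path like any other item:
//
//     use log::{info, warn};
//
// The same works for std macros, even though they are also in the prelude:
use std::format;

fn main() {
    // `format!` resolves through the explicit import above.
    assert_eq!(format!("{}!", "laminar"), "laminar!");
}
```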

Tests not failing on panic

thread 'check_for_timeouts' panicked at 'Unable to send disconnect event: "SendError(..)"', libcore/result.rs:1009:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.
thread 'check_for_timeouts' panicked at 'Unable to send disconnect event: "SendError(..)"', libcore/result.rs:1009:5

I am getting this panic when running the tests locally, but the tests still all pass 🤔

Enhance locking when getting or inserting connections

In connection_pool.rs, we have this function:

    pub fn get_connection_or_insert(&self, addr: &SocketAddr) -> NetworkResult<Connection> {
        let mut lock = self
            .connections
            .write()
            .map_err(|error| NetworkError::poisoned_connection_error(error.description()))?;

        let connection = lock
            .entry(*addr)
            .or_insert_with(|| Arc::new(RwLock::new(VirtualConnection::new(*addr, &self.config))));

        Ok(connection.clone())
    }

It acquires a write lock unconditionally. It could be restructured so that a read lock is acquired first to check whether an insertion is needed, and only then acquire a write lock. A minor optimization.
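The restructuring could look like the following sketch (types simplified; the real pool stores VirtualConnection values). Note the slow path must still go through entry() under the write lock, because another thread may have inserted between the two lock acquisitions:

```rust
use std::collections::HashMap;
use std::net::SocketAddr;
use std::sync::{Arc, RwLock};

// Stand-in for Arc<RwLock<VirtualConnection>>; only the address is stored here.
type Connection = Arc<RwLock<SocketAddr>>;

struct ConnectionPool {
    connections: RwLock<HashMap<SocketAddr, Connection>>,
}

impl ConnectionPool {
    fn get_connection_or_insert(&self, addr: &SocketAddr) -> Connection {
        // Fast path: most calls hit an existing connection, so a shared
        // read lock is enough.
        if let Some(conn) = self.connections.read().unwrap().get(addr) {
            return conn.clone();
        }
        // Slow path: re-check under the exclusive write lock, since another
        // thread may have inserted after we released the read lock.
        let mut lock = self.connections.write().unwrap();
        lock.entry(*addr)
            .or_insert_with(|| Arc::new(RwLock::new(*addr)))
            .clone()
    }
}
```

Error handling via map_err is elided here for brevity; the real method should keep propagating poisoned-lock errors as it does today.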

Let Packet own a slice of data instead of a vector.

We should try to make Packet work with a slice of bytes; currently it takes a vector of bytes.

This is going to be difficult, since if Packet holds a reference to the bytes, it would need a lifetime tied to the send() call. Note that we also use this packet for other purposes, which will probably be impossible until we do some redesigning.

Some research is needed on how we could fix this.
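A rough sketch of what a borrowed-payload Packet might look like; the names and fields are assumptions, not the crate's current API. The lifetime 'a is exactly the constraint the issue anticipates: the packet cannot outlive the caller's buffer.

```rust
use std::net::SocketAddr;

// Hypothetical borrowed-payload packet: 'a ties it to the caller's buffer.
struct Packet<'a> {
    addr: SocketAddr,
    payload: &'a [u8],
}

impl<'a> Packet<'a> {
    fn new(addr: SocketAddr, payload: &'a [u8]) -> Packet<'a> {
        Packet { addr, payload }
    }
}

/// Stand-in for a socket send: a real implementation would write the borrowed
/// bytes to the wire; here we just report how many bytes would be sent.
fn send(packet: &Packet) -> usize {
    packet.payload.len()
}
```

Anywhere the packet must be stored past the send() call (retransmission queues, fragment reassembly), the bytes would need to be copied into an owned buffer, which is the redesign the issue hints at.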

Enable deny attributes

This issue is to track the progress of denying warnings / missing docs.

  • deny warnings on CI
  • warn for missing docs on public interfaces

Module inception in project architecture

There are currently some modules that have the same name as their parent module (https://github.com/amethyst/laminar/blob/master/src/packet/packet.rs and https://github.com/amethyst/laminar/blob/master/src/sequence_buffer/sequence_buffer.rs).
This is linted by this clippy warning: https://rust-lang-nursery.github.io/rust-clippy/v0.0.212/index.html#module_inception

Maybe we should move the content of packet.rs into packet/mod.rs and the content of sequence_buffer.rs into sequence_buffer/mod.rs.
This wouldn't even change imports, as pub use re-exports are already in place:
https://github.com/amethyst/laminar/blob/master/src/packet/mod.rs#L12

This isn't a critical issue and can be ignored if you prefer keeping the current sub-module + pub use layout.

Adding Congestion avoidance

Problem
If we just send packets without caring about the internet speed of the client, we can flood the network. Since the router tries to deliver all packets, it buffers them up in its cache. We do not want the router to buffer packets; it should drop them instead. We need to avoid sending too much bandwidth in the first place, and then, if we detect congestion, back off and send even less.

Solution
First we need to measure the round-trip time (RTT).

How to implement it see:

  • For each packet we send, we add an entry to a queue containing the sequence number of the packet and the time it was sent.

  • Each time we receive an ack, we look up this entry and note the difference in local time between the time we receive the ack, and the time we sent the packet. This is the RTT time for that packet.

  • Because the arrival of packets varies with network jitter, we need to smooth this value to provide something meaningful, so each time we obtain a new RTT we move a percentage of the distance between our current RTT and the packet RTT. 10% seems to work well for me in practice. This is called an exponentially smoothed moving average, and it has the effect of smoothing out noise in the RTT with a low pass filter.

  • To ensure that the sent queue doesn't grow forever, we discard packets once they have exceeded some maximum expected RTT. As discussed in the previous section on reliability, it is exceptionally likely that any packet not acked within a second was lost, so one second is a good value for this maximum RTT.
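The smoothing step above fits in a few lines; the 10% factor is the value suggested in the text:

```rust
// Move 10% of the way from the current estimate toward each new sample:
// an exponentially smoothed moving average acting as a low-pass filter.
const RTT_SMOOTHING_FACTOR: f32 = 0.10;

fn update_rtt(current_rtt_ms: f32, sample_ms: f32) -> f32 {
    current_rtt_ms + (sample_ms - current_rtt_ms) * RTT_SMOOTHING_FACTOR
}
```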

Two network conditions:
I think the easiest approach is to have two types of network conditions: Good and Bad. If the connection is good, we send packets at a rate of, for example, 30 p/s; if the connection is bad, we drop down to 10 p/s (depending on packet size).

How do you switch between good and bad? The algorithm I like to use operates as follows:

  • If you are currently in good mode, and conditions become bad, immediately drop to bad mode

  • If you are in bad mode, and conditions have been good for a specific length of time 't', then return to good mode

  • To avoid rapid toggling between good and bad mode, if you drop from good mode to bad in under 10 seconds, double the amount of time 't' before bad mode goes back to good. Clamp this at some maximum, say 60 seconds.

  • To avoid punishing good connections when they have short periods of bad behavior, for each 10 seconds the connection is in good mode, halve the time 't' before bad mode goes back to good. Clamp this at some minimum, like 1 second.

And if that works, we can even detect whether we can send more packets when the connection is good, but we should be careful with this. So first let's get the base implementation in.
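The switching rules above can be sketched as a small state machine. The initial recovery time and the 30/10 p/s rates are illustrative assumptions taken from the text, not fixed design values:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Mode { Good, Bad }

struct Congestion {
    mode: Mode,
    recovery_time: f32, // the 't' above, clamped to [1, 60] seconds
    time_in_mode: f32,
}

impl Congestion {
    fn new() -> Congestion {
        Congestion { mode: Mode::Good, recovery_time: 4.0, time_in_mode: 0.0 }
    }

    fn update(&mut self, dt: f32, conditions_good: bool) {
        self.time_in_mode += dt;
        match self.mode {
            Mode::Good if !conditions_good => {
                // Dropped back to Bad within 10 s: double 't' to damp toggling.
                if self.time_in_mode < 10.0 {
                    self.recovery_time = (self.recovery_time * 2.0).min(60.0);
                }
                self.mode = Mode::Bad;
                self.time_in_mode = 0.0;
            }
            Mode::Good => {
                // Reward sustained good behavior: halve 't' every 10 s.
                if self.time_in_mode >= 10.0 {
                    self.recovery_time = (self.recovery_time / 2.0).max(1.0);
                    self.time_in_mode = 0.0;
                }
            }
            Mode::Bad if conditions_good && self.time_in_mode >= self.recovery_time => {
                self.mode = Mode::Good;
                self.time_in_mode = 0.0;
            }
            Mode::Bad => {
                // Conditions must stay good for a full 't' before recovering.
                if !conditions_good {
                    self.time_in_mode = 0.0;
                }
            }
        }
    }

    fn send_rate(&self) -> u32 {
        match self.mode { Mode::Good => 30, Mode::Bad => 10 } // packets per second
    }
}
```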

Adding unique protocol id

Problem

Imagine we have multiple versions of this protocol: someone's client is using the latest version, while the server runs an old one. This could go wrong, since we might have changed the packet content in newer versions.

Solution
We could include a crc32 code in the packet. This crc32 code would be computed from the version number or protocol version, so it acts as a unique protocol id. This way we can identify whether two endpoints are running the same server/client version. If a packet's protocol id does not match the server's version, it will be dropped. Every packet should carry this unique protocol id.
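A sketch of the idea. The version string is an assumption, and a simple FNV-1a hash stands in for the crc32 mentioned above; any stable hash of the version works the same way:

```rust
// Assumed version string; in practice this could come from CARGO_PKG_VERSION.
const PROTOCOL_VERSION: &str = "laminar-0.1.0";

/// Derive a 4-byte protocol id from the version string (FNV-1a here,
/// standing in for crc32).
fn protocol_id() -> u32 {
    PROTOCOL_VERSION
        .bytes()
        .fold(0x811c_9dc5_u32, |hash, byte| {
            (hash ^ u32::from(byte)).wrapping_mul(0x0100_0193)
        })
}

/// Prepend the protocol id to an outgoing payload.
fn tag_packet(payload: &[u8]) -> Vec<u8> {
    let mut packet = protocol_id().to_be_bytes().to_vec();
    packet.extend_from_slice(payload);
    packet
}

/// Accept an incoming packet only if its protocol id matches ours.
fn accept_packet(raw: &[u8]) -> Option<&[u8]> {
    if raw.len() < 4 {
        return None;
    }
    let (id, payload) = raw.split_at(4);
    if id == &protocol_id().to_be_bytes()[..] {
        Some(payload)
    } else {
        None
    }
}
```

Two builds with different version strings then produce different ids, so mismatched packets are silently dropped.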

Rethinking error handling

We need to rethink how our errors are handled. I have been researching and came to the conclusion that we should use this approach to error handling: https://boats.gitlab.io/failure/error-errorkind.html.

I have made a list of all error messages and divided them into groups.

/// Most high-level error type that contains network errors 
enum NetworkError
{
    #[fail(display = "Something went wrong when sending")]
    SendingError { inner: SendingError },
    #[fail(display = "Something went wrong when receiving")]
    ReceivingError { inner: ReceivingError },
    #[fail(display = "Something went wrong when performing action on socket")]
    SocketError { inner: SocketError },
    #[fail(display = "Lock poisoned")]
    FailedToAddConnection { error: ::std::sync::PoisonError }
}

/// Socket errors that could occur with the socket.
enum SocketError
{
    #[fail(display = "Unable to set nonblocking option")]
    UnableToSetNonblocking,
    #[fail(display = "Unable to create UDP SocketState structure")]
    UDPSocketStateCreationFailed,
}

/// Errors that could occur when sending a packet.
enum SendingError
{
    #[fail(display = "Io operation failed")]
    IoError { io_error: io::Error },
    #[fail(display = "Something went wrong with fragmentation")]
    FragmentError { inner: FragmentError },
    #[fail(display = "Something went wrong with constructing/parsing packets")]
    PacketError { inner: PacketError },
}

/// Errors that could occur when receiving
enum ReceivingError
{
    #[fail(display = "Io operation failed")]
    IoError { io_error: io::Error },
    #[fail(display = "Something went wrong with fragmentation")]
    FragmentError { inner: FragmentError },
    #[fail(display = "Something went wrong with receiving/parsing packets")]
    PacketError { inner: PacketError },
}

/// Errors that could occur when constructing/parsing packet contents
enum PacketError {
    #[fail(display = "The packet size was bigger than the max allowed size.")]
    ExceededMaxPacketSize,
    #[fail(display = "Something went wrong when parsing the packet header")]
    ParsingError { io_error: io::Error }
}

/// Errors that could occur when constructing/parsing fragment contents
enum FragmentError
{
    #[fail(display = "No packet header attached to fragment.")]
    FragmentNotFound,
    #[fail(display = "The total of fragments the packet was divided into is bigger than the allowed fragments.")]
    ExceededMaxFragments,
    #[fail(display = "The fragment processed was already received.")]
    AlreadyProcessedFragment,
    #[fail(display = "Something went wrong when parsing the fragment header")]
    ParsingError { io_error: io::Error }
}

Note that I have left out TCP errors, since there is a discussion about whether TCP belongs in this crate.

Some errors also carry extra information for a specific case. You want this, for example, when you get an IO error.

Should error handling be done like this? Please comment if you have a better solution or some feedback.
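For comparison, a std-only sketch of the Error/ErrorKind split the linked article describes: an opaque error type carrying a machine-matchable kind plus human-readable context, instead of one flat enum. The names here are illustrative, not a proposal for the final API:

```rust
use std::fmt;

/// Machine-matchable classification of what went wrong.
#[derive(Debug, Clone, Copy, PartialEq)]
enum ErrorKind {
    Sending,
    Receiving,
    Socket,
    PoisonedLock,
}

/// Opaque error carrying a kind plus context for display.
#[derive(Debug)]
struct NetworkError {
    kind: ErrorKind,
    message: String,
}

impl NetworkError {
    fn new(kind: ErrorKind, message: impl Into<String>) -> NetworkError {
        NetworkError { kind, message: message.into() }
    }

    /// Callers match on the kind; the message is only for humans.
    fn kind(&self) -> ErrorKind {
        self.kind
    }
}

impl fmt::Display for NetworkError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{:?} error: {}", self.kind, self.message)
    }
}

impl std::error::Error for NetworkError {}
```

The failure crate adds backtraces and cause chains on top of this shape; the kind/context separation is the part that matters for the API design question.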

Finish Md Book chapters

I think it would be cool to have our own book for this crate, describing the decisions we made and how we generally handle all the network stuff. Feel free to improve the book and open a PR.

Categories:

  • Intro
    • Introduction
    • Motivation
    • Contributing
  • Protocols
    • Tcp - UDP
    • Why UDP
    • Other protocols (quick recap)
  • Reliability
    • Acknowledgements
    • Sequence Numbers
    • Packet Loss
    • Different reliability options (ordered, non ordered, etc.)
  • Congestion Avoidance
    • With RTT
    • With Packet loss
  • Fragmentation
  • Protocol Version Control
  • Connection Management
  • Protocol Details
    • Packet Headers
    • Channels
    • Ordering Streams

There are things we can only add once version 0.1 of laminar is released.

  • Adding code examples.

RFC packet processing.

This is a proposal about how to process packets based on different reliabilities and priorities.

There are two libraries I stole some ideas from:

  1. LiteNet (C# networking library)
  2. RakNet (C++ networking library)

There are three components essential to what our library should have:

  1. Channels like LiteNet.
  2. Streams like RakNet.
  3. Packet priorities like Raknet.

Channels

Both of these libraries work with the concept of 'Channels'; let me clarify what 'Channel' means.
A 'Channel' will process packets based on their reliability property.

Reliability properties

See RakNet's and LiteNet's reliability properties for examples.

I propose we support the following ones:

/// Unreliable. Packets can be dropped, duplicated or arrive without order
///
/// *Details*
///
///  1. Unreliable
///  2. No guarantee for delivery.
///  3. No guarantee for order.
///  4. No way of getting dropped packet
///  5. Duplication possible
///
/// Basically just bare UDP
Unreliable,
/// Reliable. All packets will be sent and received, but without order
///
/// *Details*
///
///  1. Reliable.
///  2. Guarantee of delivery.
///  3. No guarantee for order.
///  4. Packets will not be dropped.
///  5. Duplication not possible
///
/// Basically this is almost TCP like without ordering of packets.
ReliableUnordered,
/// Unreliable. Packets can be dropped, but never duplicated, and arrive in order
///
/// This will create a sequenced ordered packet.
///
/// *Details*
///
///  1. Unreliable.
///  2. No guarantee of delivery.
///  3. Guarantee for order.
///  4. Packets can be dropped but you will be able to retrieve dropped packets.
///  5. Duplication not possible
///
/// Basically this is UDP with the ability to retrieve dropped packets by acknowledgements.
SequencedOrdered,
/// Unreliable. Packets can be dropped, and arrive out of order but you will be able to retrieve dropped packet.
///
/// *Details*
///
///  1. Unreliable.
///  2. No guarantee of delivery.
///  3. No Guarantee for order.
///  4. Packets can be dropped but you will be able to retrieve dropped packets.
///  5. Duplication not possible
///
/// Basically this is UDP with the ability to retrieve dropped packets by acknowledgements.
SequencedUnordered,
/// *Details*
///
///  1. Reliable.
///  2. Guarantee of delivery.
///  3. Guarantee for order.
///  4. Packets will not be dropped.
///  5. Duplication not possible
///
/// Basically this is almost TCP like with ordering of packets.
ReliableOrdered,

We should somehow process packets with different reliability properties.
Therefore we use 'Channels' to separate the processing concerns.

Implementation

So we define a trait called Channel (LiteNet):

trait Channel {
    /// Add a packet to the queue, awaiting processing before being sent.
    fn add_to_queue(&mut self, packet: Packet);
    /// Process all packets in the queue and send them out.
    fn send_next_packets(&mut self);
    /// Process a received packet.
    fn process_packet(&mut self, packet: Packet);
}

Next, we define channels which implement the Channel trait.

  1. ReliableChannel (see)

    This channel will be reliable and manage the reliability of packets; it could also order packets as RakNet does, and queue them for the socket to send.

  2. SequencedChannel (see)

    This channel will be unreliable and can order packets as RakNet does. It will only take in the newest data, and queue it for the socket to send.

  3. UnreliableChannel (see)

    This is bare UDP as discussed before. Packets are processed directly and queued for the socket to send.

When a packet arrives, it is processed by a channel chosen by the packet's reliability property.
Next, we can notify the user by using, for example, mpsc channels.

Note for @LucioFranco no we don't use one socket for each channel. The channels will only process data and queue data for the client to send.
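A sketch of how such dispatch might look. All names here are assumptions for illustration, not laminar's API; the three channels would in practice be the ReliableChannel, SequencedChannel, and UnreliableChannel described above rather than one shared type:

```rust
#[derive(Clone, Copy)]
enum Reliability {
    Unreliable,
    ReliableUnordered,
    Sequenced,
}

trait Channel {
    /// Queue a received packet for processing.
    fn process_packet(&mut self, payload: Vec<u8>);
    fn queued(&self) -> usize;
}

// Stand-in implementation shared by all three slots for brevity.
#[derive(Default)]
struct SimpleChannel {
    queue: Vec<Vec<u8>>,
}

impl Channel for SimpleChannel {
    fn process_packet(&mut self, payload: Vec<u8>) {
        self.queue.push(payload);
    }
    fn queued(&self) -> usize {
        self.queue.len()
    }
}

#[derive(Default)]
struct Connection {
    unreliable: SimpleChannel,
    reliable: SimpleChannel,
    sequenced: SimpleChannel,
}

impl Connection {
    /// Route each packet to the channel matching its reliability property.
    fn dispatch(&mut self, reliability: Reliability, payload: Vec<u8>) {
        let channel: &mut dyn Channel = match reliability {
            Reliability::Unreliable => &mut self.unreliable,
            Reliability::ReliableUnordered => &mut self.reliable,
            Reliability::Sequenced => &mut self.sequenced,
        };
        channel.process_packet(payload);
    }
}
```

Note how this matches the note above: the channels only process and queue data per connection; the single socket still does all the actual sending.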

Streams

The next topic I want to discuss is the streams from RakNet. RakNet has a nice concept for ordering packets (check out ordering streams for more info).

So what are those ordering streams?

You can think of ordering streams as a way to separate the ordering of packets that have no relation to one another.

When a game developer sends data to all the clients, they might want to send some data ordered, some data unordered, and other data sequenced, etc.

Let's take a look at the following data the dev might be sending:

  1. Player movement, we want to order player movement because we don't care about old positions.
  2. Bullet movement, we want to order bullet movement because we don't care about old positions of bullets.
  3. Chat messages, we want to order chat messages because it is nice to see text in the correct order.

Player movement and chat messages are totally unrelated to one another. You don't want the movement packets to be disturbed if a chat message is dropped.
It would be nice if we could order player movement and chat messages separately from each other.

This is exactly what ordering streams are for.
We should let the user specify on which stream their packets should be ordered.
The user can, for example, say: "Let me put all chat messages on ordering stream 1 and all movement packets on ordering stream 2".
This way you can have different types of packets ordered separately from each other.

Why let the user control streams?

  1. Let's take for example a player who is shooting bullets at something.
    It makes absolute sense to order both player movement and bullet position in the same ordering stream.
    You wouldn't want the shot to originate from the wrong position.
    So you'd put player firing packets on the same stream as movement packets (stream 1),
    so that if a movement packet arrived later than a firing packet but was actually sent earlier, the firing packet would not be given to you until the movement packet arrived.
  2. We can't interpret the contents of packets, so we can't decide on the user's behalf how to order them.

Packet Priority

A packet should also have some priority. Based on the priority, we decide which goes out first.
I have not fully researched this yet, but it is also not that important yet.

We basically have the following priorities:

/// The highest possible priority.
///
/// 1. These messages trigger sends immediately, and are generally not buffered or aggregated into a single datagram.
/// Messages at HighPriority and lower are buffered to be sent in groups at 10-millisecond intervals.
ImmediatePriority,
/// For every 2 ImmediatePriority messages, 1 HighPriority will be sent.
HighPriority,
/// The second-lowest priority a datagram could have.
///
/// For every 2 HighPriority messages, 1 MediumPriority will be sent.
MediumPriority,
/// The lowest priority a datagram could have.
///
/// For every 2 MediumPriority messages, 1 LowPriority will be sent.
LowPriority

High priority packets go out before medium priority packets and medium priority packets go out before low priority packets do.

Check RakNet out for more information
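One way to realize those ratios is a fixed drain order per scheduling round. The 4:2:1 batch below is an assumption about how the 2:1 ratios compose (shown for three tiers to keep it short), for illustration only:

```rust
use std::collections::VecDeque;

// Hypothetical per-connection outgoing queues, one per priority tier.
struct PriorityQueues {
    high: VecDeque<&'static str>,
    medium: VecDeque<&'static str>,
    low: VecDeque<&'static str>,
}

impl PriorityQueues {
    /// Drain one scheduling round: up to 4 high, 2 medium, and 1 low packet,
    /// realizing "2 of the higher tier per 1 of the next-lower tier".
    fn next_batch(&mut self) -> Vec<&'static str> {
        let mut batch = Vec::new();
        for _ in 0..4 {
            if let Some(p) = self.high.pop_front() { batch.push(p); }
        }
        for _ in 0..2 {
            if let Some(p) = self.medium.pop_front() { batch.push(p); }
        }
        if let Some(p) = self.low.pop_front() { batch.push(p); }
        batch
    }
}
```

An empty tier simply yields its slots to whatever remains, so low-priority traffic is deprioritized but never starved outright.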

General ideas

I think the current architecture is quite hard to modify and closed to extension.
With this channel idea, we would already be more flexible.

Also, fragmentation is currently handled throughout the code.
I'd like to see fragmentation in its own type, so we could make it optional.
During development we will need to move a lot of the current code.

Idea to split this project up.

To prevent big PRs I want to split this project up. The changes above have an impact on PacketProcessor and SocketState.

  1. Create the Channel trait, and implement the different channels without logic.
  2. Move processing logic out of SocketState and the PacketProcessing client (ideas to split this up further?)
  3. Implement packet priority.

I'll start implementing some basic stuff if you agree with the above proposal. I think it is a nice way to handle our data. Note that I looked at how other libraries do this; I am not just making it all up. RakNet has been developed over 13 years, so I think their ideas are pretty solid.

Related issues to this RFC

Create a testing binary

We should add src/bin/tester.rs or similar so that it can be compiled as a binary to be used in more complex tests. https://docs.rs/clap/2.32.0/clap/ is good for this. One example of how I've used it is: https://gitlab.com/subnetzero/iridium/blob/master/src/bin/cli.yml.

The minimal flags for this issue would be:

--shutdown-in 600 (seconds until the process self terminates)
--bind-address (e.g., 0.0.0.0)
--bind-port (e.g., 5000)
--connect-address (e.g., 1.2.3.4)
--connect-port (e.g., 5000)

Milestone 0.1.0

This is the milestone we need to do before releasing laminar 0.1.0.

  • Congestion avoidance (issue #3).
    • We now have an estimation of the RTT (PR #30).
    • The real congestion avoidance will be done at a higher level, maybe in amethyst_network.
  • Protocol Id (issue: #4, PR #55)
  • Fragmentation (#1)
  • Acknowledgement system.
  • Examples (PR #6, PR #5)
  • Virtual connections.
    • Implemented virtual connections (PR #20).
    • Issue #32 still needs to be done.
  • Different reliabilities
    We need a way to process packets with different reliability options.
    • Packet header implementation to support different reliabilities (PR #42).
    • Process different reliabilities when a packet is received (issue: #39, #65)
  • Testing
    • Add scenario tests (issue #10)
    • Integration tests (PR #19)
  • Md Book (issue #33)
    • Proof of concept
    • Still needs a lot of updating, e.g. grammar, more information, etc.

If all of the above are implemented, we are ready for the 0.1.0 release. Feel free to add more if I forgot something.

Make more rigorous network tests

At a minimum, we need to simulate:

  1. Client disconnects then reconnects
  2. Variable rates of packet loss
  3. Corruption of data in the payload
  4. We should have more tests of the connection pool, like whether clients are actually removed from the Connection Pool when not heard from for X time. This is accounted for in our unit tests now.

Add in more here as you think of them @TimonPost @LucioFranco

Create Dockerfile for building releases

This is the first step in automated testing and benchmarking. We need containers built and pushed to a registry. Cross or Xargo should work for cross-compiling.

Connection Pool Send Disconnect failed.

I get this error when the connection pool tries to send the Disconnect event over the channel: panicked at Unable to send disconnect event: "SendError(..)

To test this, just add this test module to connection_pool.rs and run it.

mod test {
    use std::sync::mpsc::channel;
    use std::thread;
    use std::time::Duration;

    use super::{ConnectionPool, TIMEOUT_POLL_INTERVAL, Arc, Mutex};
    use net::connection::{VirtualConnection};
    use events::Event;

    #[test]
    fn disconnected_client()
    {
        let (tx, rx) = channel();

        let mut connections = ConnectionPool::new();
        let handle = connections.start_time_out_loop(tx.clone()).unwrap();

        let _ = connections.get_connection_or_insert(&("127.0.0.1:12345".parse().unwrap()));

        thread::sleep(Duration::from_secs(TIMEOUT_POLL_INTERVAL + 1));

        match rx.try_recv() {
            Ok(event) => {
                match event {
                    Event::Disconnected(client) => {
                        assert_eq!(client.read().unwrap().remote_address, "127.0.0.1:12345".parse().unwrap());
                    }
                    _ => assert!(false),
                };
            }
            // The error value isn't needed; the unused `e` binding only caused a warning.
            Err(_) => assert!(false),
        };

        // join() returns a Result; discard it explicitly rather than silently.
        let _ = handle.join();
    }
}

Thinking of packet processing logic.

I am thinking about the packet workflow, by which I mean: the easiest way to create packets and send them over to the other side, with all the options a packet could have, like encryption etc.

Take the following things into consideration:

  1. Options like encryption/decryption
  2. Options for different reliability strategies: Unreliable, ReliableUnordered, Sequenced ReliableOrdered.
  3. Must be configurable.
  4. Must be easily extendable.
  5. The user does not have to be bothered with all the different network tactics and implementations on how to send packets. The only thing a user should care about is, sending bytes!

Here I have a draft on how to separate packets of different types, and on how we can convert a simple user packet into more complex packet types that the user should not have to care about.

https://gist.github.com/TimonPost/303c53fa1454d724120a1173b04a9745

I am now working on processing the bytes of each packet; this issue will be updated soon.

Review library level exports and public API.

Right now, it seems that we export almost everything in the crate publicly at the library level. I would like to go through and review these exports to decide whether we want to expose each one or not. If we do, it needs to be documented.

Since we are coming close to version 0.1.0 I want to have some thought about how the public API should look and what the user should not be seeing. Please leave suggestions. Once we agree we update that into the main issue description.

Thinking of crate API

Since we are coming close to version 0.1.0 I want to have some thought about how the public API should look and what the user should not be seeing. Please leave suggestions. Once we agree we update that into the main issue description.

  • laminar

Sending takes too long

So when @torkleyy and I were adding examples, we noticed some huge delays.

I did some tests to monitor how long it took to send a message, and it took about 1 second. So there is a bug somewhere :)

test results:

= Message took 72.982µs to send =
Moving to lat: 10.555, long: 10.55454, alt: 1.3
= Message took 990.276878ms to send =
Moving to lat: 5.4545, long: 3.344, alt: 1.33
= Message took 990.114451ms to send =
Received text: "Some information"

So you can see that messages take about 1 second to be processed by the send method.

After debugging, I found that the create_connection_if_not_exists method takes 0.5 - 0.9 seconds to execute.

After that, I found out that acquiring the lock on connections is what takes so long in the create_connection_if_not_exists method.

    let mut lock = self
        .connections
        .write()
        .map_err(|_| NetworkError::AddConnectionToManagerFailed)?;

Why does acquiring the lock take so long?

Support WebRTC data channel, or general transport mechanism

I would like to use this for a WebAssembly game. What would be needed to support the WebRTC data channel?

Ideally I would implement some trait and laminar uses my transport mechanism, this would also be useful for simulated networks.

Use a thread pool in TcpListener.

(probably 0.2.0 milestone)

A thread pool or some other way of limiting thread creation should be used here:
https://github.com/amethyst/laminar/blob/master/src/net/tcp.rs#L200

We briefly discussed it with @fhaynes on Discord. You were thinking about using https://github.com/nathansizemore/epoll? I see several issues with using this.
First, as you said, the point is to use only one thread (epoll is basically about monitoring multiple file descriptors to see whether I/O is possible on any of them), so there is no true parallelism.
Second, it uses a Unix-specific API that is not portable. We want laminar to run on Windows, right?

What I suggest is to really use a thread pool. A basic one can be implemented quite simply (as discussed in the Rust book:
https://doc.rust-lang.org/stable/book/2018-edition/ch20-03-graceful-shutdown-and-cleanup.html )
Or we could use another crate.
Rayon provides a thread pool: https://docs.rs/rayon/1.0.2/rayon/struct.ThreadPool.html
Tokio can also provide a thread pool, and even an interface similar to epoll if you prefer a single thread (but not specific to Unix!): https://github.com/tokio-rs/tokio/tree/master/tokio-threadpool
Alternatively, there is this crate dedicated to thread pools: https://github.com/rust-threadpool/rust-threadpool

If we indeed use a thread pool, TcpListener should include a way to specify how many threads the pool should spawn.
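For reference, a minimal pool in the spirit of the linked Rust book chapter might look like this; a sketch only, not a drop-in for laminar's TcpListener:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

type Job = Box<dyn FnOnce() + Send + 'static>;

struct ThreadPool {
    sender: Option<mpsc::Sender<Job>>,
    workers: Vec<thread::JoinHandle<()>>,
}

impl ThreadPool {
    /// Spawn `size` workers that all pull jobs from one shared channel.
    fn new(size: usize) -> ThreadPool {
        let (sender, receiver) = mpsc::channel::<Job>();
        let receiver = Arc::new(Mutex::new(receiver));
        let workers = (0..size)
            .map(|_| {
                let receiver = Arc::clone(&receiver);
                thread::spawn(move || loop {
                    // Hold the lock only while receiving, not while running the job.
                    let message = receiver.lock().unwrap().recv();
                    match message {
                        Ok(job) => job(),
                        Err(_) => break, // channel closed: time to shut down
                    }
                })
            })
            .collect();
        ThreadPool { sender: Some(sender), workers }
    }

    fn execute<F: FnOnce() + Send + 'static>(&self, job: F) {
        self.sender.as_ref().unwrap().send(Box::new(job)).unwrap();
    }
}

impl Drop for ThreadPool {
    fn drop(&mut self) {
        // Dropping the sender closes the channel so every worker's recv fails.
        drop(self.sender.take());
        for worker in self.workers.drain(..) {
            worker.join().unwrap();
        }
    }
}
```

This caps thread creation at a fixed count and shuts down gracefully on drop, which is exactly the per-connection-thread problem the linked tcp.rs line has.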

Implement ordering streams.

Streams

RakNet has a nice concept of how to order packets (check out ordering streams for more info)

So what are those ordering streams?

You can think of ordering streams as a way to separate the ordering of packets that have no relation to one another.

When a game developer sends data to all the clients, they might want to send some data ordered, some data unordered, and other data sequenced, etc.

Let's take a look at the following data the dev might be sending:

  1. Player movement, we want to order player movement because we don't care about old positions.
  2. Bullet movement, we want to order bullet movement because we don't care about old positions of bullets.
  3. Chat messages, we want to order chat messages because it is nice to see text in the correct order.

Player movement and chat messages are totally unrelated to one another. You don't want the movement packets to be disturbed if a chat message is dropped.
It would be nice if we could order player movement and chat messages separately from each other.

This is exactly what ordering streams are for.
We should let the user specify on which stream their packets should be ordered.
The user can, for example, say: "Let me put all chat messages on ordering stream 1 and all movement packets on ordering stream 2".
This way you can have different types of packets ordered separately from each other.
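A sketch of what such a user-facing API could look like (the names `Packet::reliable_ordered` and the `stream_id` argument are hypothetical here, not a confirmed laminar signature):

```rust
// Hypothetical packet type carrying an ordering stream id chosen by the user.
struct Packet {
    payload: Vec<u8>,
    stream_id: u8,
}

impl Packet {
    /// Construct a reliably-delivered packet ordered on the given stream.
    fn reliable_ordered(payload: Vec<u8>, stream_id: u8) -> Packet {
        Packet { payload, stream_id }
    }
}

fn main() {
    // "Let me put all chat messages on ordering stream 1
    //  and all movement packets on ordering stream 2."
    let chat = Packet::reliable_ordered(b"hello".to_vec(), 1);
    let movement = Packet::reliable_ordered(b"pos:3,4".to_vec(), 2);
    assert_ne!(chat.stream_id, movement.stream_id);
}
```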

Why let the user control streams?

  1. Let's take for example a player who is shooting bullets at something.
    It makes absolute sense to order both player movement and bullet position in the same ordering stream.
    You wouldn't want the shot to originate from the wrong position.
    So you'd put player firing packets on the same stream as movement packets (stream 1),
    so that if a movement packet was sent earlier but arrived later than a firing packet, the firing packet would not be given to you until the movement packet arrived.
  2. We can't inspect the contents of packets to decide how to order them, so the user has to tell us.

Those ordering streams should be implemented so that both reliable and unreliable channels can make use of them. Once this is implemented, we support all delivery methods we have in our enum.
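On the receive side, per-stream ordering could be buffered roughly as follows. This is a sketch, not laminar's actual implementation: each stream id gets its own expected-sequence counter and buffer, so a dropped packet on stream 1 never stalls delivery on stream 2.

```rust
use std::collections::HashMap;

struct OrderingStream {
    expected: u16,                   // next in-order sequence number
    buffered: HashMap<u16, Vec<u8>>, // out-of-order packets waiting for a gap to fill
}

struct OrderingSystem {
    streams: HashMap<u8, OrderingStream>,
}

impl OrderingSystem {
    fn new() -> Self {
        OrderingSystem { streams: HashMap::new() }
    }

    /// Feed an arriving packet; returns every payload that is now deliverable in order.
    fn receive(&mut self, stream_id: u8, seq: u16, payload: Vec<u8>) -> Vec<Vec<u8>> {
        let stream = self.streams.entry(stream_id).or_insert(OrderingStream {
            expected: 0,
            buffered: HashMap::new(),
        });
        let mut ready = Vec::new();
        if seq == stream.expected {
            ready.push(payload);
            stream.expected = stream.expected.wrapping_add(1);
            // Drain any buffered packets that are now contiguous.
            while let Some(p) = stream.buffered.remove(&stream.expected) {
                ready.push(p);
                stream.expected = stream.expected.wrapping_add(1);
            }
        } else {
            stream.buffered.insert(seq, payload); // hold until the gap fills
        }
        ready
    }
}

fn main() {
    let mut system = OrderingSystem::new();
    // Movement on stream 2 arrives out of order: seq 1 before seq 0.
    assert!(system.receive(2, 1, b"move-b".to_vec()).is_empty());
    // Chat on stream 1 is unaffected and delivered immediately.
    assert_eq!(system.receive(1, 0, b"hi".to_vec()).len(), 1);
    // Once movement seq 0 arrives, both movement packets are released together.
    assert_eq!(system.receive(2, 0, b"move-a".to_vec()).len(), 2);
}
```

A real implementation would also handle sequence-number wraparound windows and drop duplicates; those details are omitted here.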

Guidelines, overall crate improvements.

Here are a couple of things that I think are important for a library like this one. If you see something lacking in these areas, feel free to open a PR. When I read code from other libraries like this one, but in C++/C, I saw huge shortcomings on most of the following points, which makes the code difficult to read and understand.

  • Every type, method, and some fields should be commented.
  • Calculations should be documented, whether in a PR or in code, but it must be clear what is being done.
  • Don't hardcode values. We should only use config, consts, etc.; hardcoded values are the root of all evil.
  • Everything should be tested, so feel free to write tests.
  • Documentation.
  • Keep files small: better to have many small files with small pieces of logic than one file with 1000 lines of logic and multiple types/structs. Note that I speak of logic; tests are not included in that count.
  • Absolutely no panics/unwraps in the logic; we need to have proper error handling. You may have panics/unwraps in test code.
  • Grammar / English improvements. I am not a native English speaker, so forgive me when I make some mistakes. I try to avoid making them in the first place, but some checks wouldn't hurt.
