
projectharmonia / bevy_replicon

ECS-focused high-level networking crate for the Bevy game engine.

Home Page: https://crates.io/crates/bevy_replicon

License: Apache License 2.0

Language: Rust (100%)
Topics: bevy, multiplayer, netcode, replication, server-authoritative

bevy_replicon's People

Contributors

aceeri, actuallyhappening, bendzae, chatalot1, dependabot[bot], dgsantana, jonastar, noahshomette, paul-hansen, pitibouchon, randomexplosion, rj, rsrasmu2, shatur, ukoehb, umatriz, vixenka


bevy_replicon's Issues

More modularity

Hello! Awesome project, looks very much like what I'd like to use (or implement myself), except I need to use a different networking layer for my game.

I figured I could simply use the replication_core, but that also depends on renet. It would be great if this project were built around a trait anyone could implement, as a bring-your-own networking layer.

In addition, the project is currently structured as a single crate. I suspect I will have issues building it for wasm due to dependencies on Send/Sync and other things that don't work in wasm, so even if I don't use those parts, I suspect it won't build.


Would you consider restructuring the code in such a way that:

  1. It allows pluggable networking backends, with renet implemented out of the box as one of them.
  2. The core replication logic is usable without the networking at all, as a self-contained package. The networking layer can work on top of the replication mechanisms, by driving the inner workings however needed.
  3. The core logic (and possibly the networking interface itself) can be usable in wasm - so it can run in the browser.

The rationale for (2) in addition to (1) is that in the browser environment, the data exchange might not even happen over the network in the usual sense; maybe it uses message posting or some other odd mechanism (e.g. serializing the exchanges via HTTP requests and SSE responses). In these situations, it would be hard to implement a trait that looks like a networking API, but it would be possible to drive the core logic with data obtained "somehow". A rough sketch of such a trait follows.
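A minimal sketch, assuming only that the core needs to push and pull raw bytes per client (all names here are hypothetical, not an existing replicon API):

use bevy::prelude::*;

/// Hypothetical transport abstraction: the replication core only needs
/// a way to push bytes to clients and pull received bytes per client.
pub trait ReplicationBackend: Resource {
    /// Queue a serialized replication message for a client on a channel.
    fn send(&mut self, client_id: u64, channel: u8, message: Vec<u8>);
    /// Drain messages received from a client on a channel.
    fn receive(&mut self, client_id: u64, channel: u8) -> Vec<Vec<u8>>;
    /// Ids of currently connected clients.
    fn connected_clients(&self) -> Vec<u64>;
}

A renet-backed implementation would wrap RenetServer/RenetClient, while a browser build could implement the same trait over HTTP requests and SSE, or message posting.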

Stops replicating after a bit in 0.2 but works fine in 0.1

Description

Replicating a transform seems to stop randomly in the new 0.2 version.

We decided to try this out for Bevy Jam 3; you can see the bug demonstrated in our early prototype: https://github.com/paul-hansen/bevy-jam-3

Steps to replicate:

git clone https://github.com/paul-hansen/bevy-jam-3
cd bevy-jam-3
git checkout 1ed281e
cargo run -- server
cargo run -- client

Select the server window and use the WASD keys to move the Bevy icon. It works for a bit, but then randomly stops replicating the position changes; no errors are logged, and it never logs that the client disconnected.

Simply changing the bevy_replicon version back to 0.1.0 like we did in paul-hansen/bevy-jam-3@8a679e2 makes it work fine again.

Off topic

Thanks for making this, the API looks really similar to what I've been thinking about making. Really excited to see where it goes!

Running at higher FPS than MaxTickRate drops events?

I'm running at 60 FPS, but the default MaxTickRate is only 30. This seems to cause server events to be dropped. I think the problem is that the events are only around for one Update cycle, but the server system that reads them only runs half of the time.
I tried removing the in_set condition on the sending_system, and that fixed it, presumably because the system now runs on every Update cycle.

Throttle inactive clients

If a client is not sending acks but they are still connected, then the server should gradually throttle replication updates. Otherwise a malicious client can abuse replication to waste server resources.

An easy method would be "double the tick gap between updates every X updates" (with X as config) and reset when a non-stale ack shows up. So for X = 3 and an ack at tick 0, you'd get updates on ticks 1, 2, 3, 5, 7, 9, 13, 17, 21 (see the sketch below).

EDIT: The throttling needs to take into account client reconnects, since their ack tick will reset to zero. It may also be worthwhile to throttle clients who reconnect too frequently.
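A sketch of the doubling rule (the X parameter and the growth cap are assumptions):

/// Gap between replication updates for a stale client, given how many
/// updates have been sent since its last fresh ack.
fn tick_gap(updates_since_ack: u32, x: u32) -> u32 {
    let doublings = updates_since_ack / x; // double the gap every `x` updates
    1u32 << doublings.min(8) // cap growth so the gap stays bounded
}

// For x = 3 the gaps are 1, 1, 1, 2, 2, 2, 4, 4, 4, ...
// which yields update ticks 1, 2, 3, 5, 7, 9, 13, 17, 21 as above.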

Manual diff fragmentation

Currently we send one message per diff and rely on renet's fragmentation. If the message is large, renet divides it into several parts.
But if packet loss is present, the loss of even one part results in the loss of the entire diff.
To avoid this, we should send a diff per entity. To achieve this, we need to track the last acknowledged tick per entity.

Serialize into a single continuous buffer

Currently we have two buffers for each client (one per channel). It would be much more efficient to write everything into a single buffer and store per-client data positions. This way, the data is serialized only once no matter how many clients we have.
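A sketch of the layout (names are placeholders, not existing replicon types):

use std::ops::Range;

/// Everything is serialized once into `bytes`; each client only records
/// which slices of the shared buffer it should be sent.
#[derive(Default)]
struct SharedReplicationBuffer {
    bytes: Vec<u8>,
    client_ranges: Vec<(u64, Vec<Range<usize>>)>, // (client id, slices to send)
}

impl SharedReplicationBuffer {
    /// Append data once and record the slice for every interested client.
    fn write(&mut self, data: &[u8], clients: &[u64]) {
        let start = self.bytes.len();
        self.bytes.extend_from_slice(data);
        let range = start..self.bytes.len();
        for id in clients {
            match self.client_ranges.iter_mut().find(|(cid, _)| cid == id) {
                Some((_, ranges)) => ranges.push(range.clone()),
                None => self.client_ranges.push((*id, vec![range.clone()])),
            }
        }
    }
}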

Value-diff compression

Some networked games try to compress diffs between updates by networking only the value difference at a component level. Replicon doesn't currently support that.

Here is a first-draft idea for how to support it:

If we pass the change-limit tick + current tick into serializers, then the serializer can have a local that tracks historical state and performs a diff internally. The deserializer would also need to track historical state and apply diffs to the correct older value. We might be able to provide a pre-baked value-diff compressor for types that implement certain traits.

Note that supporting this would require changes to the shared copy buffer, since different clients could get different component serializations.
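A minimal sketch of the serializer-local history for a single numeric component (tick handling is greatly simplified; all names are hypothetical):

use std::collections::BTreeMap;

/// Both ends keep a small history so only the difference from the value
/// at the client's change-limit tick is networked.
#[derive(Default)]
struct DiffState {
    history: BTreeMap<u32, f32>, // tick -> value known at that tick
}

impl DiffState {
    /// Serializer side: record `value` at `tick`, emit a delta against `base_tick`.
    fn serialize_diff(&mut self, base_tick: u32, tick: u32, value: f32) -> f32 {
        let base = self.history.get(&base_tick).copied().unwrap_or(0.0);
        self.history.insert(tick, value);
        value - base
    }

    /// Deserializer side: apply the delta to the correct older value.
    fn deserialize_diff(&mut self, base_tick: u32, tick: u32, diff: f32) -> f32 {
        let base = self.history.get(&base_tick).copied().unwrap_or(0.0);
        let value = base + diff;
        self.history.insert(tick, value);
        value
    }
}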

Heisenbug: Runtime server initialisation

Thanks so much for this awesome crate!

I have adapted the simple_box example to use runtime server / client initialisation and am encountering a really strange bug: consistently, one "part" of my program seems to run at "the wrong time", so the server side of bevy_replicon doesn't actually replicate any entities to clients, while the other "part" seems to run at "the correct time", so the bevy_replicon server works just fine. I will be more specific as I strip away more layers.

I have been frustrated with this for a few days now, and was wondering what pitfalls I might not know about when I initialise my NetcodeServerTransport and RenetServer resources in the Update or OnEnter(T) schedules versus the Startup schedule. The examples use the Startup schedule because they decide whether to run as server / client / solo by reading CLI args; I envisage my game freely switching between server, client, and solo modes at runtime.

How can I debug bevy_replicon? Is it designed for this use case of dynamically switching and connecting to different servers? (Also, I get a segfault immediately after startup when running on the nightly compiler, so I'm using the stable toolchain.)

I just upgraded to bevy_replicon = "0.16".

Client reconnect cleanup

The current replicon code does not clean up despawns/component removals on a client after the client reconnects, for despawns/removals that happened while the client was disconnected.

The client needs an additional reconnect_system which despawns all entities with Replication before the first call to diff_receiving_system after a client reconnects.
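A sketch of such a system (assuming recursively despawning replicated roots is acceptable):

use bevy::prelude::*;
use bevy_replicon::prelude::*;

/// Would run once after a reconnect, before the first diff is applied,
/// so the resync starts from a clean world.
fn reconnect_system(mut commands: Commands, replicated: Query<Entity, With<Replication>>) {
    for entity in &replicated {
        commands.entity(entity).despawn_recursive();
    }
}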

Removals in `PostUpdate`

We use Events::iter_current_update_events to collect all component removals. But according to the docs, this will miss removals that happen after the system runs. We need to either document this or add a system in the Last schedule to collect missed removals (or a system in Last that runs only in debug and asserts there are no removals).

Support for custom renet transports

I'm currently trying to get this to work with a couple of custom renet transports I wrote for running a client inside wasm.

After some debugging, I found out that the client systems only .run_if(client_connected()), where the client_connected() run condition comes from renet. That run condition only returns true if the built-in renet client transport is connected, not any custom one.
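One possible direction is a transport-agnostic run condition that checks the RenetClient resource itself (a sketch, assuming renet's RenetClient::is_connected):

use bevy::prelude::*;
use bevy_replicon::renet::RenetClient;

/// Returns true whenever a `RenetClient` exists and reports itself
/// connected, regardless of which transport drives it.
fn custom_client_connected(client: Option<Res<RenetClient>>) -> bool {
    client.map(|client| client.is_connected()).unwrap_or(false)
}

Client systems would then use .run_if(custom_client_connected) instead of the transport-specific condition.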

ServerState/ClientState conflict with disabling plugins

The methods add_server_event_with() and add_client_event_with() depend on both ServerState and ClientState. This means that if you disable the server or client plugin, the program will crash on update with a "resource not found" error for the missing state.

The doc test demonstrating plugin disabling doesn't actually try to update the app. Here is an updated test that fails:

# use bevy::prelude::*;
# use bevy_replicon::prelude::*;
# let mut app = App::new();
app.add_plugins(MinimalPlugins).add_plugins(
    ReplicationPlugins
        .build()
        .disable::<ClientPlugin>()
        .set(ServerPlugin { tick_policy: TickPolicy::MaxTickRate(60) }),
);

# #[derive(Debug)]
# struct X;
# app.add_server_event_with::<X, _, _>(||{}, ||{});
# app.update();

Replication priority

In some games (mainly open-world games), it is useful to prioritize updating of some entities over others in order to manage bandwidth constraints.

Here is a sketch of how to integrate priority with bevy_replicon.

Sketch

Instead of true/false for client visibility, set a priority between 0 and 1.0. A priority below 0.01 means 'don't replicate', and a priority over 0.99 means 'always replicate'.

Add a global throttling resource that records a range of replication frequencies per-client: [min frequency, max frequency]. This range can be manually adjusted by users (we need to expose as many stats as possible about client acks/latency and bandwidth usage). We or someone can write a bevy_replicon_throttle crate that provides automatic frequency adjustments (hypothetically - I don't plan to write it).

  • Example algorithm: on bandwidth throttle, lower the min frequency until a max range delta is reached, then lower both the min and max frequencies such that their delta = max range delta * (max frequency / starting max frequency). If the min frequency reaches a floor, stop lowering it and only lower the max frequency. If both reach the floor, then perhaps start throttling RepliconTick updates (the problem is that if frequencies are controlled per-client, we don't want to throttle clients that aren't slow).
  • Note that we currently set replicon's resend time to 0 for the init channel. This can also theoretically be controlled (e.g. set up multiple update channels with different resend times, or make a PR to renet to dynamically control resend times).

In replication, always spawn and insert new components for entities that are visible (priority >= 0.01). Only update entities at a frequency computed from their priority mapped onto the [min freq, max freq] range.
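A sketch of that mapping (my interpretation of the 0.01 / 0.99 cutoffs):

/// Map a visibility priority in [0.0, 1.0] onto a client's
/// [min_freq, max_freq] replication-frequency range.
fn update_frequency(priority: f32, min_freq: f32, max_freq: f32) -> Option<f32> {
    if priority < 0.01 {
        None // below the threshold: don't replicate at all
    } else if priority > 0.99 {
        Some(max_freq) // "always replicate": run at the max frequency
    } else {
        Some(min_freq + priority * (max_freq - min_freq))
    }
}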

Open question: how to map replication frequencies onto replicon TickPolicy?

  • Manual: ?
  • EveryFrame: ?
  • MaxTickRate: ?

Replication bidirectionality

I am using the editor to watch changes to the custom component live.
If I change the enabled boolean on the server, it replicates to the client.
But when I change it on the client, it does not replicate back to the server.

Is bidirectionality supposed to work? I assumed it should; otherwise I am not sure how this plugin is supposed to be used.

Here is a sample with its Cargo dependencies. The custom component is Compo; I ran the server and client with cargo run server and cargo run client respectively.

[dependencies]
bevy = {version = "0.10.1", features = ["dynamic_linking"]}
bevy_editor_pls = "0.4.0"
bevy_replicon = "0.3.0"
clap = { version = "4.2.3", features = ["derive"] }

use bevy::prelude::*;
use bevy_editor_pls::EditorPlugin;
use bevy_replicon::{
  prelude::*,
  renet::{ClientAuthentication, RenetConnectionConfig, ServerAuthentication, ServerConfig},
};
use clap::Parser;
use std::{
  net::{Ipv4Addr, SocketAddr, UdpSocket},
  time::SystemTime,
};

const PORT: u16 = 1234;
const PROTOCOL_ID: u64 = 1;
const MAX_CLIENTS: usize = 1;

fn main() {
  App::new()
    .init_resource::<Cli>()
    .add_plugins(DefaultPlugins)
    .add_plugin(EditorPlugin::default())
    .add_plugins(ReplicationPlugins)
    .replicate::<Compo>()
    .add_startup_system(setup)
    .run();
}

#[derive(Debug, Parser, PartialEq, Resource)]
#[command(author, version, about, long_about = None)]
pub enum Cli {
  Server,
  Client,
}

impl Default for Cli {
  fn default() -> Self {
    Self::parse()
  }
}

#[derive(Component, Default, Reflect)]
#[reflect(Component)]
pub struct Compo {
  pub enabled: bool,
}

pub fn setup(mut commands: Commands, cli: Res<Cli>, network_channels: Res<NetworkChannels>) {
  let send_channels_config = network_channels.server_channels();
  let receive_channels_config = network_channels.client_channels();
  let connection_config = RenetConnectionConfig {
    send_channels_config,
    receive_channels_config,
    ..Default::default()
  };
  let current_time = SystemTime::now()
    .duration_since(SystemTime::UNIX_EPOCH)
    .unwrap();
  let server_addr = SocketAddr::new(Ipv4Addr::LOCALHOST.into(), PORT);
  match *cli {
    Cli::Server => {
      let socket = UdpSocket::bind(server_addr).unwrap();
      let server_config = ServerConfig::new(
        MAX_CLIENTS,
        PROTOCOL_ID,
        server_addr,
        ServerAuthentication::Unsecure,
      );

      let server =
        RenetServer::new(current_time, server_config, connection_config, socket).unwrap();
      commands.insert_resource(server);

      commands.spawn((Compo { enabled: true }, Replication {}));
    }
    Cli::Client => {
      let client_id = current_time.as_millis() as u64;
      let socket = UdpSocket::bind((Ipv4Addr::LOCALHOST, 0)).expect("localhost should be bindable");
      let authentication = ClientAuthentication::Unsecure {
        client_id,
        protocol_id: PROTOCOL_ID,
        server_addr,
        user_data: None,
      };

      let client =
        RenetClient::new(current_time, socket, connection_config, authentication).unwrap();
      commands.insert_resource(client);
    }
  }
}

Use `bevy` subcrates, i.e. `bevy_ecs`

This crate looks awesome and is exactly what I need, but some of the targets for my project are tiny microcontrollers, so I'm being pretty conservative about dependencies.

Would you be open to using bevy_ecs, bevy_app, bevy_reflect etc instead of the kitchen sink bevy crate? This should also speed up compile times and make rust-analyzer quicker.

Priority Accumulator for partial and selective replication of entities

Background

This is to summarise a design discussion from a discord chat today, relating to migrating from my custom renet netcode to replicon.

In my game I send spawn messages from server to client on the frame they happen, but only send regular state updates (position, velocity, etc.) at a slower rate (10 Hz). My clients predict everything and reconcile accordingly when updates arrive.

Currently this isn't possible with replicon. If I set the server tick rate to 10 Hz, my spawns will also be broadcast at 10 Hz. That's no good for bullets, which need to appear on clients ASAP.

This proposal also moves towards replicon deciding how many packets to send per tick by deciding what goes into each individual packet (~1200 bytes or so). Sending one large buffer and letting renet fragment it is fine for some games, but the loss of one fragment renders the entire update useless. The problem is worse for games with a lot of state data requiring multiple packets. They might be better served by replicon crafting multiple individual packets, each with replication data for a subset of entities, that can be applied independently.

Priority Accumulator

Described in this gaffer article under the "Priority Accumulator" heading, it could work like this at a high level:

  • Every replicated entity gets a PriorityAccumulator(f32) component.
  • Higher values mean that entity is more deserving of being sent to the clients in an update packet.
  • As time passes, the accumulator can increase gradually.
  • Game logic can increase the accumulator of an entity, for example if a player collides or interacts with it.
  • Newly spawned entities can start with a Very Large accumulator

How the server builds a packet to send

  • When creating a packet of replication data to send to clients, sort all entities by largest priority accumulator first.
  • Work down the sorted list, serializing and writing to the packet buffer until the packet buffer is full (i.e. within the MTU).
  • Send packet
  • Set priority accumulator to 0.0 once an entity's data has been included in a packet.
  • No reason to stop at just 1 packet - keep going down the sorted list and write a second packet. You can dynamically adjust the bandwidth by deciding how many packets to send each tick.
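A sketch of that packet-building pass, assuming entity data arrives pre-serialized (the MTU value and names are placeholders):

const MTU: usize = 1200; // rough payload budget per packet

struct Replicated {
    priority: f32, // the entity's PriorityAccumulator value
    data: Vec<u8>, // serialized component data for this entity
}

fn build_packets(mut entities: Vec<Replicated>) -> Vec<Vec<u8>> {
    // Highest accumulator first.
    entities.sort_by(|a, b| b.priority.total_cmp(&a.priority));

    let mut packets = vec![Vec::new()];
    for entity in &mut entities {
        let current = packets.last_mut().unwrap();
        if !current.is_empty() && current.len() + entity.data.len() > MTU {
            // Packet full: start another one. The bandwidth budget decides
            // how far down the list to keep going each tick.
            packets.push(Vec::new());
        }
        packets.last_mut().unwrap().extend_from_slice(&entity.data);
        entity.priority = 0.0; // reset once included in a packet
    }
    packets
}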

Each packet contains all required component data for a subset of entities, so this would need per-entity acks.
See also: #16

On top of this, it should be possible to have a higher-level API like "always send spawns every tick, but otherwise I'm happy with 10 Hz state updates". But with a reasonable bandwidth budget, and allowances for bursting higher due to lots of spawns, there probably wouldn't be any need to fix the rate of updates, as long as freshly spawned entities started with a very high accumulator.

Small worlds, or when partial state updates are unwanted

If the number of entities is small enough that it fits in a packet, or the game isn't compatible with partial state updates, you could skip sorting by accumulator and just include everything in one large buffer and let renet split it (if needed) like it does at the moment.

Optimize world diff collection

Problem

World diff collection involves:

  • iterating over all replicated entities and their replicated components to identify which components need to be sent to clients
  • allocating intermediary 'world diffs' for each client, which contain all the diffs they will be sent this tick

Solution

The first problem can be mitigated with per-entity change detection, so you don't need to iterate over components in entities that haven't changed in a while.

Here are three ideas to mitigate the second problem (as discussed on discord).

  1. Serialize diffs directly into per-client buffers instead of allocating intermediary representations. Easy to implement, fast. Can work with diff fragmentation by allocating a new buffer representing a new packet whenever the current buffer is packed (with some byte shuffling to manage entity boundaries).
  2. Build a world diff history cache that has per-tick diffs for replicated entities (only most recent changes are cached). Client packets are built by directly serializing into buffers from the diff cache. Moderate difficulty to implement, faster than current code but slower than (1) because the cache must be allocated. Also supports diff fragmentation.
  3. Bundle-based replication using auto-generated systems to collect diffs. Hard to implement, equivalent to (1). Also supports diff fragmentation.

In conclusion, (1) seems like a safe bet.

Minor documentation error

I believe there is an error in the documentation for the variants of VisibilityPolicy. I have checked over the source code to confirm this (to the best of my ability).

The documentation describing the behaviour of VisibilityPolicy::Blacklist and VisibilityPolicy::Whitelist appears to be the wrong way around.

Optimize world diff consumption

Problem

Clients deserialize world diffs into a pile of allocations (vectors and map nodes).

Solution

Use a span (a borrowed byte view) or similar zero-copy technique to access serialized world diffs directly.

Support `bevy 0.12`

This is an awesome crate and, believe it or not, it is actually blocking me from upgrading to bevy 0.12 :(
I'll try to do a simple upgrade; anything I should know about, @Shatur?

Fixed-list clients

Many games have a predefined client list. In that case you can assign client ids as indices into the client list and access clients on the server in O(1) instead of O(n) or O(log n).

This is relevant for client visibility, because to set the visibility of an entity on a client you need to access that client in the client list. Setting visibility can be considered the hot path, since you may need to update visibility frequently for many entities.

Options:

  • Server plugin config to enable a fixed-list client optimization.
  • Feature-level compile flag to avoid branching + server plugin config to set the number of clients.

Note that this would require changes to how connection status is tracked.
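A sketch of the fixed-list layout (names are placeholders):

/// Client ids double as indices, so visibility updates touch a slot in O(1).
struct Clients {
    slots: Vec<Option<ClientState>>, // index = client id; None = disconnected
}

struct ClientState {
    // acks, visibility set, ...
}

impl Clients {
    fn get_mut(&mut self, client_id: usize) -> Option<&mut ClientState> {
        self.slots.get_mut(client_id)?.as_mut()
    }
}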

Rooms

Sometimes you replicate only part of the world. This is commonly used for things like fog of war or card games.
To solve this, @koe from the Bevy Discord server suggested implementing rooms.
Assign each entity a room ID (inspired by naia's rooms). Each client is a member of an arbitrary number of rooms (e.g. using a hashset of ids). By default, entities are in room '0', which means global (or maybe they have no room ID); all other room ids are user-defined. Replicon just needs a resource to track client room membership, plus some updates to the replication logic to only replicate an entity to a given client if they are in the same room (and to 'despawn' an entity if it stops being visible to a client).

Update message acks cleanup

The manual fragmentation PR #116 changes how component updates are acked. The server tracks pending acks, and removes them when an ack appears. This is currently a memory leak since the client isn't guaranteed (or expected) to ack all component updates, which are sent over an unreliable channel.

We should add a cleanup protocol that automatically discards stale pending acks after a period of time (e.g. 2x the server's client timeout).
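A sketch of the cleanup (the tick type and timeout handling are placeholders):

use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Pending component-update acks, discarded once they go stale.
struct PendingAcks {
    sent_at: HashMap<u32, Instant>, // update tick -> time the update was sent
}

impl PendingAcks {
    /// Run periodically with e.g. `timeout = 2 * client_timeout`.
    fn cleanup(&mut self, timeout: Duration) {
        let now = Instant::now();
        self.sent_at
            .retain(|_, sent| now.duration_since(*sent) < timeout);
    }
}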

Panic despawning entities when simulating poor network conditions

Description

Using clumsy to simulate throttled network conditions, I get a panic with this error when entities are despawned:

thread 'main' panicked at 'server should send valid entities to despawn: EntityNotFound(24v0)', 
bevy_replicon-0.2.1\src\client.rs:148:22

What I expected

I think this should likely be an error message instead of a panic, as this seems to be a recoverable situation in most cases. If the entity doesn't exist on the client anymore, then the mission is already accomplished. Logging an error still makes sense, though, as it could be a sign that you are running your despawning system on the client when it should only run on the server, and it shouldn't occur under normal network conditions.

Steps to replicate

Download and run clumsy with these settings:
[screenshot of clumsy settings]

git clone https://github.com/paul-hansen/bevy-jam-3.git
cd bevy-jam-3
git checkout ef33179
cargo run --features "bevy_editor_pls" -- --listen 127.0.0.1
cargo run --features "bevy_editor_pls" -- --connect 127.0.0.1

Press and hold spacebar to fire bullets; eventually the client will panic.

Other thoughts

It handled other network conditions really well!

It's possible this is a sign of something else going on that could be fixed; it does seem a bit odd that throttling would result in a despawn request being received twice.

Feel free to close this as won't fix if you think this is correct behavior, it's not a big deal, just wanted to report it in case it could help make this lib more robust.

Manually set global last-changed tick

There is a category of games that have limited conditions where the server world will be modified. For those games, it is feasible to manually track the last tick where something in the world was modified (or an event sent to clients). Once the last world-change-tick has been acked, it is a waste of CPU time to scan the ECS world for individual changes.

It would be nice if we could manually define/inject the last world-change-tick to the server replication loop, either on a global or per-client basis. We can then short-circuit replication if a client has acked that tick. Moreover, even if a client has not acked a tick, if we have cached replication buffers for the tick range (last acked, last world-change], then we don't need to scan the world (just send the buffers again).

2 questions

  1. Could you enable discussions for this repo?
  2. Is this networking lib good for an FPS shooter type game?

Init message header

Since we moved despawns and removals to the middle of init messages, array header trimming is less effective. We should add a 1-byte header to the start of init messages that contains bit flags indicating which sections are present in the message. This will shave several bytes off most messages.
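A sketch of the flag byte (the exact section list is an assumption):

/// One bit per optional init-message section.
const HAS_MAPPINGS: u8 = 1 << 0;
const HAS_DESPAWNS: u8 = 1 << 1;
const HAS_REMOVALS: u8 = 1 << 2;
const HAS_CHANGES: u8 = 1 << 3;

fn init_header(mappings: bool, despawns: bool, removals: bool, changes: bool) -> u8 {
    let mut flags = 0;
    if mappings {
        flags |= HAS_MAPPINGS;
    }
    if despawns {
        flags |= HAS_DESPAWNS;
    }
    if removals {
        flags |= HAS_REMOVALS;
    }
    if changes {
        flags |= HAS_CHANGES;
    }
    flags
}

The receiver reads the byte first and skips deserializing any section whose bit is clear, so empty sections cost nothing beyond the header.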

Document protocol

Add documentation summarizing the replication protocol, with performance and synchronization characteristics.

Event cleaning guarantees?

Hi,
I'm using your wonderful plugin in my game and so far almost everything was perfect.
However, today I noticed that my on_client_event condition fires my system twice (on two consecutive frames), and on the second run the system gets an empty event reader.
The only two possible reasons I can think of for this strange behavior are:

  1. I am not reading properly, and the event queue isn't cleared until the next frame, when I try to read from it again, too late. I use client_event.iter().next().unwrap() to read an event.
  2. After being read in the first invocation, the event queue is not cleared in time, so on the next frame my condition fires again, and by the time my system runs, the queue is already clean.

So the question is: can I expect ClientEvent to be empty at the beginning of the next frame after being read? And am I right that network events are cleared in exactly the same way as bevy events?

Panic when using ParentSync

The following test will sometimes cause a panic:

#[test]
fn despawn_replication_hierarchy() {
    let mut server_app = App::new();
    let mut client_app = App::new();
    for app in [&mut server_app, &mut client_app] {
        app.add_plugins((
            MinimalPlugins,
            ReplicationPlugins.set(ServerPlugin::new(TickPolicy::Manual)),
        ))
        .replicate::<TableComponent>();
    }

    common::connect(&mut server_app, &mut client_app);

    let server_entity = server_app.world.spawn((Replication, TableComponent)).id();

    server_app.add_systems(
        Update,
        move |mut commands: Commands, mut has_run: Local<bool>| {
            if *has_run {
                return;
            }
            *has_run = true;

            // Should be inserted in `Update` to avoid sync in `PreUpdate`.
            commands.entity(server_entity).with_children(|parent| {
                parent.spawn((Replication, ParentSync::default()));
            });
        },
    );

    server_app.update();
    client_app.update();

    let client_entities = client_app
        .world
        .query_filtered::<Entity, With<Replication>>()
        .iter(&client_app.world)
        .count();

    let client_entities_with_parent = client_app
        .world
        .query_filtered::<Entity, (With<Replication>, With<Parent>)>()
        .iter(&client_app.world)
        .count();

    assert_eq!(client_entities, 2);
    assert_eq!(client_entities_with_parent, 1);

    despawn_with_children_recursive(&mut server_app.world, server_entity);

    server_app.update();
    client_app.update();

    let entity_map = client_app.world.resource::<NetworkEntityMap>();
    assert!(entity_map.to_client().is_empty());
    assert!(entity_map.to_server().is_empty());
}

This is the panic backtrace:

thread 'despawn_replication_hierarchy' panicked at 'Entity 1v0 does not exist', src/replicon_core.rs:394:31
stack backtrace:
   0: rust_begin_unwind
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:593:5
   1: core::panicking::panic_fmt
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/panicking.rs:67:14
   2: bevy_ecs::world::World::entity_mut::panic_no_entity
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:285:13
   3: bevy_ecs::world::World::entity_mut
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:290:21
   4: bevy_replicon::replicon_core::WorldDiff::deserialize_to_world::{{closure}}::{{closure}}
             at ./src/replicon_core.rs:394:25
   5: bevy_ecs::world::World::resource_scope
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1344:22
   6: bevy_replicon::replicon_core::WorldDiff::deserialize_to_world::{{closure}}
             at ./src/replicon_core.rs:363:13
   7: bevy_ecs::world::World::resource_scope
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1344:22
   8: bevy_replicon::replicon_core::WorldDiff::deserialize_to_world
             at ./src/replicon_core.rs:362:9
   9: bevy_replicon::client::ClientPlugin::diff_receiving_system::{{closure}}
             at ./src/client.rs:51:17
  10: bevy_ecs::world::World::resource_scope
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1344:22
  11: bevy_replicon::client::ClientPlugin::diff_receiving_system
             at ./src/client.rs:49:9
  12: core::ops::function::FnMut::call_mut
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:166:5
  13: core::ops::function::impls::<impl core::ops::function::FnMut<A> for &mut F>::call_mut
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:294:13
  14: <Func as bevy_ecs::system::exclusive_function_system::ExclusiveSystemParamFunction<fn() -> Out>>::run::call_inner
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/system/exclusive_function_system.rs:203:21
  15: <Func as bevy_ecs::system::exclusive_function_system::ExclusiveSystemParamFunction<fn() -> Out>>::run
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/system/exclusive_function_system.rs:206:17
  16: <bevy_ecs::system::exclusive_function_system::ExclusiveFunctionSystem<Marker,F> as bevy_ecs::system::system::System>::run
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/system/exclusive_function_system.rs:103:19
  17: bevy_ecs::schedule::executor::multi_threaded::MultiThreadedExecutor::spawn_exclusive_system_task::{{closure}}::{{closure}}
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/schedule/executor/multi_threaded.rs:592:21
  18: core::ops::function::FnOnce::call_once
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:250:5
  19: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/panic/unwind_safe.rs:271:9
  20: std::panicking::try::do_call
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:500:40
  21: __rust_try
  22: std::panicking::try
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:464:19
  23: std::panic::catch_unwind
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panic.rs:142:14
  24: bevy_ecs::schedule::executor::multi_threaded::MultiThreadedExecutor::spawn_exclusive_system_task::{{closure}}
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/schedule/executor/multi_threaded.rs:591:27
  25: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::future::future::Future>::poll
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/panic/unwind_safe.rs:296:9
  26: <futures_lite::future::CatchUnwind<F> as core::future::future::Future>::poll::{{closure}}
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-lite-1.13.0/src/future.rs:626:42
  27: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/panic/unwind_safe.rs:271:9
  28: std::panicking::try::do_call
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:500:40
  29: __rust_try
  30: std::panicking::try
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:464:19
  31: std::panic::catch_unwind
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panic.rs:142:14
  32: <futures_lite::future::CatchUnwind<F> as core::future::future::Future>::poll
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-lite-1.13.0/src/future.rs:626:9
  33: async_executor::Executor::spawn::{{closure}}
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/async-executor-1.5.1/src/lib.rs:145:20
  34: async_task::raw::RawTask<F,T,S,M>::run
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/async-task-4.4.0/src/raw.rs:563:17
  35: async_task::runnable::Runnable<M>::run
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/async-task-4.4.0/src/runnable.rs:782:18
  36: async_executor::Executor::tick::{{closure}}
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/async-executor-1.5.1/src/lib.rs:210:9
  37: bevy_tasks::thread_executor::ThreadExecutorTicker::tick::{{closure}}
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.11.2/src/thread_executor.rs:105:39
  38: bevy_tasks::task_pool::TaskPool::execute_scope::{{closure}}::{{closure}}::{{closure}}
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.11.2/src/task_pool.rs:503:45
  39: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::future::future::Future>::poll
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/panic/unwind_safe.rs:296:9
  40: <futures_lite::future::CatchUnwind<F> as core::future::future::Future>::poll::{{closure}}
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-lite-1.13.0/src/future.rs:626:42
  41: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/panic/unwind_safe.rs:271:9
  42: std::panicking::try::do_call
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:500:40
  43: __rust_try
  44: std::panicking::try
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:464:19
  45: std::panic::catch_unwind
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panic.rs:142:14
  46: <futures_lite::future::CatchUnwind<F> as core::future::future::Future>::poll
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-lite-1.13.0/src/future.rs:626:9
  47: bevy_tasks::task_pool::TaskPool::execute_scope::{{closure}}::{{closure}}
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.11.2/src/task_pool.rs:506:77
  48: <futures_lite::future::Or<F1,F2> as core::future::future::Future>::poll
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-lite-1.13.0/src/future.rs:526:33
  49: bevy_tasks::task_pool::TaskPool::execute_scope::{{closure}}
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.11.2/src/task_pool.rs:509:41
  50: bevy_tasks::task_pool::TaskPool::scope_with_executor_inner::{{closure}}
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.11.2/src/task_pool.rs:420:85
  51: futures_lite::future::block_on::{{closure}}
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-lite-1.13.0/src/future.rs:89:27
  52: std::thread::local::LocalKey<T>::try_with
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/thread/local.rs:270:16
  53: std::thread::local::LocalKey<T>::with
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/thread/local.rs:246:9
  54: futures_lite::future::block_on
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/futures-lite-1.13.0/src/future.rs:79:5
  55: bevy_tasks::task_pool::TaskPool::scope_with_executor_inner
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.11.2/src/task_pool.rs:374:13
  56: bevy_tasks::task_pool::TaskPool::scope_with_executor::{{closure}}
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.11.2/src/task_pool.rs:318:17
  57: std::thread::local::LocalKey<T>::try_with
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/thread/local.rs:270:16
  58: std::thread::local::LocalKey<T>::with
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/thread/local.rs:246:9
  59: bevy_tasks::task_pool::TaskPool::scope_with_executor
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_tasks-0.11.2/src/task_pool.rs:307:9
  60: <bevy_ecs::schedule::executor::multi_threaded::MultiThreadedExecutor as bevy_ecs::schedule::executor::SystemExecutor>::run
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/schedule/executor/multi_threaded.rs:190:9
  61: bevy_ecs::schedule::schedule::Schedule::run
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/schedule/schedule.rs:235:9
  62: bevy_ecs::world::World::try_run_schedule::{{closure}}
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1851:55
  63: bevy_ecs::world::World::try_schedule_scope
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1782:21
  64: bevy_ecs::world::World::try_run_schedule
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1851:9
  65: bevy_app::main_schedule::Main::run_main::{{closure}}
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_app-0.11.2/src/main_schedule.rs:146:25
  66: bevy_ecs::world::World::resource_scope
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1344:22
  67: bevy_app::main_schedule::Main::run_main
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_app-0.11.2/src/main_schedule.rs:144:9
  68: core::ops::function::FnMut::call_mut
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:166:5
  69: core::ops::function::impls::<impl core::ops::function::FnMut<A> for &mut F>::call_mut
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:294:13
  70: <Func as bevy_ecs::system::exclusive_function_system::ExclusiveSystemParamFunction<fn(F0) -> Out>>::run::call_inner
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/system/exclusive_function_system.rs:203:21
  71: <Func as bevy_ecs::system::exclusive_function_system::ExclusiveSystemParamFunction<fn(F0) -> Out>>::run
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/system/exclusive_function_system.rs:206:17
  72: <bevy_ecs::system::exclusive_function_system::ExclusiveFunctionSystem<Marker,F> as bevy_ecs::system::system::System>::run
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/system/exclusive_function_system.rs:103:19
  73: <bevy_ecs::schedule::executor::single_threaded::SingleThreadedExecutor as bevy_ecs::schedule::executor::SystemExecutor>::run::{{closure}}
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/schedule/executor/single_threaded.rs:98:21
  74: core::ops::function::FnOnce::call_once
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:250:5
  75: <core::panic::unwind_safe::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/panic/unwind_safe.rs:271:9
  76: std::panicking::try::do_call
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:500:40
  77: __rust_try
  78: std::panicking::try
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panicking.rs:464:19
  79: std::panic::catch_unwind
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/std/src/panic.rs:142:14
  80: <bevy_ecs::schedule::executor::single_threaded::SingleThreadedExecutor as bevy_ecs::schedule::executor::SystemExecutor>::run
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/schedule/executor/single_threaded.rs:97:27
  81: bevy_ecs::schedule::schedule::Schedule::run
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/schedule/schedule.rs:235:9
  82: bevy_ecs::world::World::run_schedule::{{closure}}
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1865:51
  83: bevy_ecs::world::World::try_schedule_scope
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1782:21
  84: bevy_ecs::world::World::schedule_scope
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1836:9
  85: bevy_ecs::world::World::run_schedule
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.11.2/src/world/mod.rs:1865:9
  86: bevy_app::app::App::update
             at /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_app-0.11.2/src/app.rs:244:13
  87: replication::despawn_replication_hierarchy
             at ./tests/replication.rs:288:5
  88: replication::despawn_replication_hierarchy::{{closure}}
             at ./tests/replication.rs:237:36
  89: core::ops::function::FnOnce::call_once
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:250:5
  90: core::ops::function::FnOnce::call_once
             at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library/core/src/ops/function.rs:250:5

Switch to bundle-based replication rules

Instead of specifying each component to replicate, requiring the insertion of Replication, and using not_replicate_if for exclusion, we could specify bundles and their markers for replication. For example:

app.replicate::<CityBundle>();

#[derive(Bundle)]
struct CityBundle {
    transform: Transform,
    city: City,
    ...
}

// Could be a macro?
impl ReplicationMarker for CityBundle {
    fn replication_marker(world: &World) -> TypeId {
        TypeId::of::<City>()
    }
}

So instead of iterating over all entities with Replication we could iterate over all entities with special markers and replicate only components from the list.

This will require more checks on the archetype:
https://github.com/lifescapegame/bevy_replicon/blob/ecb264477e353bda6550603b5fdd5e50c5d53b7b/src/replication_core.rs#L130
But will completely remove this check:
https://github.com/lifescapegame/bevy_replicon/blob/ecb264477e353bda6550603b5fdd5e50c5d53b7b/src/server.rs#L180

I would expect this approach to work faster and be more user-friendly.

Support client predicted entities

As part of migrating my custom netcode to replicon, I'm trying to sketch out how to handle client-predicted entities.

Here's how I imagine it working with replicon:

  • client presses shoot and immediately spawns a bullet with a client entity of "CE".
  • client sends the shoot command to the server, with a "hey, btw I spawned this entity already, with Entity = CE".
  • server processes the client's shoot command, spawns the bullet with Entity = SE, and inserts into Res<ClientPredictedEntitiesMap>, mapping {(SE, client_id) --> CE}.
  • when the server is building diffs in collect_changes, if the entity being written to the buffer is in the ClientPredictedEntitiesMap, we include the predicted entity value ("CE") in the buffer too.
  • this is probably a case of writing an Option<Entity> to the ReplicationBuffer after the entity_data, with the Option costing one byte when the predicted entity is None.
  • in the client's deserialize_component_diffs, we read the Option<predicted_client_entity>.
  • it's passed into let mut entity = entity_map.get_by_server_or_spawn(world, entity, predicted_client_entity); and injected into the server_to_client map, to avoid spawning if a predicted entity already exists on the client.

That would allow clients to predict spawns and reconcile predicted entities with the server version.

Looking for a sanity check before I write more code. Does this sound sensible?

Compilation error when having serde_json as a dependency

error[E0283]: type annotations needed
  --> /home/jonas/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_replicon-0.11.0/src/client.rs:57:29
   |
57 |                         let end_pos = message.len().try_into().unwrap();
   |                             ^^^^^^^
...
85 |                         if cursor.position() == end_pos {
   |                                              -- type must be known at this point
   |
   = note: multiple `impl`s satisfying `u64: PartialEq<_>` found in the following crates: `core`, `serde_json`:
           - impl PartialEq for u64;
           - impl PartialEq<serde_json::value::Value> for u64;
help: consider giving `end_pos` an explicit type
   |
57 |                         let end_pos: /* Type */ = message.len().try_into().unwrap();
   |                                    ++++++++++++

Note that serde_json is pulled in by default bevy features (via bevy_gltf).
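The compiler's suggestion resolves it; a standalone reproduction of the fix:

use std::io::Cursor;

fn main() {
    let message = vec![0u8; 16];
    let mut cursor = Cursor::new(message.as_slice());
    // `Cursor::position()` returns `u64`, so pinning `end_pos` to `u64`
    // removes the `PartialEq` ambiguity that serde_json introduces.
    let end_pos: u64 = message.len().try_into().unwrap();
    while cursor.position() < end_pos {
        cursor.set_position(cursor.position() + 1); // stand-in for real reads
    }
}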

Panic if you try to remove RenetClient or RenetServer resources after registering events

Description

If you remove the RenetClient or RenetServer resource after registering client and/or server events, the app panics.

Example of the panic when removing RenetServer:

thread 'Compute Task Pool (9)' panicked at 'Resource requested by bevy_replicon::network_event::client_event::receiving_system<leafwing_input_manager::action_state::ActionDiff<stellar_squeezebox:
:player::PlayerAction, stellar_squeezebox::network::NetworkOwner>> does not exist: renet::server::RenetServer', C:\Users\Paul\.cargo\registry\src\github.com-1ecc6299db9ec823\bevy_ecs-0.10.1\src\system\system_param.rs:555:17

What I Expected

I expected this not to panic and instead to drop any resources needed to close the connection. This would allow setting up a new connection, for example when the user wants to stop playing one game they joined and join another or host their own.

Steps to replicate

git clone https://github.com/paul-hansen/bevy-jam-3.git
git checkout cb451e27
cargo run -- --listen 127.0.0.1
cargo run -- --connect 127.0.0.1

On either the client or the server, press escape and "y" to accept. It will panic while trying to run the event systems, saying the client or server resource does not exist.

I have a working patched version of bevy_replicon. If you want to test with that version, do the same but check out 4217a6d5 instead (our game's main branch uses this patch too). Instead of a panic, you should see a dialog that allows you to join or create a new game.

Event ordering

The server event sending_system and the client event receiving_system are both configured in the OnUpdate(ServerState::Hosting) set. This means a server response to a client event received in tick A cannot be sent out until tick B (the same is true for a client responding to a server event). The default sending/receiving systems also have private visibility, meaning they can't be configured to run at other times.

Possible alternative:

  • add ServerSet::ReceiveEvent and ServerSet::SendEvent
  • use .run_if(in_state(ServerState::Hosting)) for server endpoints, and .run_if(in_state(ClientState::Connected)) for client endpoints

What do you think? This way the server sets can be configured.
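A rough sketch of the proposal against a recent bevy API (set names from the bullets above; run conditions omitted for brevity):

use bevy::prelude::*;

#[derive(SystemSet, Debug, Clone, PartialEq, Eq, Hash)]
enum ServerSet {
    ReceiveEvent,
    SendEvent,
}

fn configure(app: &mut App) {
    // Receive early and send late, so a response to an event received
    // in tick A can still go out in tick A.
    app.configure_sets(PreUpdate, ServerSet::ReceiveEvent)
        .configure_sets(PostUpdate, ServerSet::SendEvent);
}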

Why drain mapped client events?

On this line, events are drained before being mapped and sent. I believe this is done for efficiency's sake (to avoid having to clone the events before mapping) but it is inconsistent with the behaviour of the other default systems for network events. Non-mapped client events are not drained (only read), and neither are server events. This may lead to confusing behaviour.
In my case, it meant that my system intended to run on the client and read events of a type added with App::add_mapped_client_event never found any, as they had been drained to send to the server.

I would like to make the behaviour consistent across the default network event sending systems, with my preference being that events are not drained in any of them. The current functionality of the sending system for mapped events may be more efficient, but maybe could be provided under a different function (e.g. App::add_mapped_client_event_drained).

I am willing to create a pull request implementing these changes myself, just thought I would make this issue first in case maintainers disagree with this direction.

Optimize despawn tracker

Currently the despawn tracker calls .retain() on a hashmap of cached replicated entities every tick to identify despawns. This is far from ideal, especially since iterating a hashmap is slow.

Instead, we can add a component to replicated entities that contains a Sender<Entity>, with a custom Drop impl that sends the entity's id to a receiver polled once per tick. This way we don't need to track a despawn map, so the overall memory footprint should be similar and the amortized cost should be much lower. A sketch is below.
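A sketch using crossbeam_channel (with the caveat that Drop also fires if only the component is removed, not the whole entity):

use bevy::prelude::*;
use crossbeam_channel::{Receiver, Sender};

/// Attached to every replicated entity; dropping it (e.g. on despawn)
/// reports the entity to the tracker without any per-tick iteration.
#[derive(Component)]
struct DespawnNotifier {
    entity: Entity,
    sender: Sender<Entity>,
}

impl Drop for DespawnNotifier {
    fn drop(&mut self) {
        let _ = self.sender.send(self.entity); // receiver may be gone on shutdown
    }
}

#[derive(Resource)]
struct DespawnReceiver(Receiver<Entity>);

/// Polled once per tick instead of calling `.retain()` on a cached hashmap.
fn collect_despawns(receiver: Res<DespawnReceiver>) {
    for entity in receiver.0.try_iter() {
        // Queue `entity` for despawn replication to clients.
        let _ = entity;
    }
}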

ServerEvents can be missed on clients using FixedUpdate

Because of how bevy clears event queues, it can currently happen that server events fail to be received by clients using FixedUpdate.

You might write to the EventWriter, and then a few frames tick past before FixedUpdate runs again, by which time the event is long gone; double buffering keeps it for two update cycles only.

On your client, if you want to receive the server events in a FixedUpdate system, you might not see them all (even though replicon receives them and writes them to the EventWriter).

Here (server_event.rs) is where the standard add_event is used, which is what the clients consume via a system using normal EventReaders.

To make this work for FixedUpdate clients, that would need to be a manually created Events resource too, like the ones used for the server-side events (those with ToClients<> wrappers).

But then the question is: where do we clear the event queues? We don't really know whether the client is using FixedUpdate.

I could change it to a manually created/cleared Events and put a system to clear it in Last. That would work the same for people who aren't using FixedUpdate.

Then we'd need a setting in the replicon client plugin, e.g. using_fixed_update = true, which skips adding the event-clearing system and makes the consumer add it themselves to FixedUpdate.

How does that sound?
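A sketch of the manually-managed variant against a recent bevy API (the event name is a stand-in):

use bevy::prelude::*;

#[derive(Event)]
struct FromServer; // stand-in for a received server event type

fn main() {
    let mut app = App::new();
    // Manually created Events resource: no automatic double-buffered
    // clearing, so events survive until someone clears them.
    app.init_resource::<Events<FromServer>>()
        // Default behaviour: clear at the end of the frame. A
        // `using_fixed_update`-style setting would skip this system and
        // let the consumer add the same clear to FixedUpdate instead.
        .add_systems(Last, |mut events: ResMut<Events<FromServer>>| {
            events.clear();
        });
    app.update();
}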
