lunatic-rs's Issues

Add method `block_until_shutdown` to `Supervisor`

It will just block until the Supervisor is shut down.

When it's called, the process will send a self-reference to the supervisor and wait on an answer forever. On shutdown or failure, the supervisor will send a message back to the process to unblock it.
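A rough sketch of that mechanism, using only the generic Process/Mailbox primitives (the BlockUntilShutdown message type is hypothetical, not part of the current Supervisor API):

use lunatic::{Mailbox, Process};
use serde::{Deserialize, Serialize};

// Hypothetical message carrying a handle to the blocked caller.
#[derive(Serialize, Deserialize)]
struct BlockUntilShutdown(Process<()>);

// Called from the blocking process: hand the supervisor a handle to
// ourselves (`this`) and wait forever; the supervisor only replies
// on shutdown or failure.
fn block_until_shutdown(
    this: Process<()>,
    supervisor: &Process<BlockUntilShutdown>,
    mailbox: &Mailbox<()>,
) {
    supervisor.send(BlockUntilShutdown(this));
    let () = mailbox.receive();
}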

RUSTSEC-2021-0145: Potential unaligned read

Potential unaligned read

Details
Status unsound
Package atty
Version 0.2.14
URL softprops/atty#50
Date 2021-07-04

On windows, atty dereferences a potentially unaligned pointer.

In practice however, the pointer won't be unaligned unless a custom global allocator is used.

In particular, the System allocator on windows uses HeapAlloc, which guarantees a large enough alignment.

atty is Unmaintained

A Pull Request with a fix was provided over a year ago, but the maintainer seems to be unreachable.

Last release of atty was almost 3 years ago.

Possible Alternative(s)

The below list has not been vetted in any way and may or may not contain alternatives;

  • is-terminal
  • std::io::IsTerminal nightly-only experimental
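A minimal usage sketch of the is-terminal alternative listed above (based on that crate's documented API, not vetted against this project):

use is_terminal::IsTerminal as _;

fn main() {
    // Replaces atty::is(atty::Stream::Stdout).
    if std::io::stdout().is_terminal() {
        println!("stdout is a terminal");
    }
}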

See advisory page for additional details.

lunatic::TcpListener accept() blocks?

$ lunatic --version
lunatic 0.9.0
$ grep lunatic < pywactor-backend/Cargo.toml 
lunatic = "0.9.1"

Is lunatic::net::TcpListener::accept() supposed to block the runtime?

My use-case is to bind() and listen() on multiple local addresses and tear served ip:ports down/up dynamically.

I was able to bind() and listen() on multiple addresses, each done in init() under impl AbstractProcess for HttpServer.

The problem: as soon as I put accept() into a loop anywhere, the whole runtime (not just the child) seems to hang on the first accept() call.

I initially put the above directly into start() via fn init(), but then I tried it separately in request() via handle():

impl ProcessRequest<HttpServerCommand> for HttpServer {
    type Response = u32;

    fn handle(state: &mut Self::State, _: HttpServerCommand) -> u32 {
        loop {
            if let Ok((tcp_stream, _peer)) = state.tcp_listener.accept() {
                let http = Http::start(tcp_stream, Some("hmmm"));
            }
        }
    }
}

repro:

git clone https://github.com/pinkforest/pywactor.git
cd pywactor/pywactor-backend
cargo run

The above should report Listen @ for both 9191 and 9192, right?
Connecting to both 9191 and 9192 works, but 9192 is not accepting connections, so I can't handle 9192.

     Running `lunatic target/wasm32-wasi/debug/pywactor-backend.wasm`
Listening on addr: 127.0.0.1:9191
Listening on addr: 127.0.0.1:9192
handle() Listen @ 127.0.0.1:9191 
handle() Entered loop @ 127.0.0.1:9191
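For context, a sketch of one way to keep a blocking accept() out of the AbstractProcess message loop: spawn one dedicated acceptor process per listener, mirroring the Process::spawn(tcp_stream, handle) pattern from the tcp-echo example (this assumes a TcpListener can be captured by a spawned process the same way a TcpStream can):

use lunatic::{net, Mailbox, Process};

// One acceptor process per listener; blocking in accept() then parks only
// this spawned process, not the whole runtime.
fn acceptor(listener: net::TcpListener, _: Mailbox<()>) {
    while let Ok((tcp_stream, peer)) = listener.accept() {
        println!("client connected {:?}", peer);
        // Hand the stream off to a handler process here,
        // e.g. Http::start(tcp_stream, Some("hmmm")).
        drop(tcp_stream);
    }
}

// In init(), after bind():
//     Process::spawn(tcp_listener, acceptor);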

Lunatic test harness

The #[lunatic::test] macro turns a test into a process, but it still uses the default test harness underneath and is not perfect.

The biggest issue comes from the fact that Rust compiled to WebAssembly uses panic=abort, meaning that every time a panic occurs the process is terminated and there is no unwinding or panic-catching capability. This can lead to silent errors. Let us have a look at the following test output:

running 6 tests
test message_custom_type ... ok
test message_integer ... ok
test message_resource ... ok
test message_vector ... ok
test request_reply ... ok
test timeout ... thread '<unnamed>' panicked at 'assertion failed: `(left == right)`
  left: `1`,
 right: `2`', tests/messaging_test.rs:102:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
     Running tests/process_test.rs (target/wasm32-wasi/debug/deps/process_test-20848690c2dbf9e1.wasm)

running 4 tests
test compute_limit ... ok
test link_with_tags ... ok
test memory_limit ... memory allocation of 1200000 bytes failed
ok
test spawn_link ... ok

test result: ok. 4 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

The timeout test failed on an assert_eq!(1, 2), but an assert failure causes a panic that terminates the execution of all the tests that follow and prints no statistics about how many passed or failed. It's still possible to notice that the test failed, but with a lot of tests and output it can easily be missed.

Even though we can wrap tests into processes and use links to check for failures, I'm not aware of a way to report a failure without a panic. Tests can also return a Result to indicate success or failure, but under the hood Rust will still use panics in the default test harness.

I think that the best way forward would be the way many testing frameworks for embedded Rust work, where panic=abort also applies (like defmt-test). This would require us to develop a specific testing framework around lunatic's capabilities.

I'm opening this issue just to kick off the discussion about finding "the perfect" testing story around Rust code on lunatic.

`lunatic_sqlite_api` breaks rust-analyzer (vscode) in a cargo workspace

OS: Windows 10
rustc: 1.69.0
rust-analyzer: v0.3.1506
vscode: 1.77.3

I am developing a project using Lunatic.

I have a cargo workspace containing:

  • a submillisecond + lunatic-rs server
  • a client that connects to the server and does not compile to wasm
  • a project that contains shared code (only relevant since all projects in the workspace use this project)

When I have the entire workspace open, rust-analyzer emits errors and stops highlighting them in the editor.

This is due to this line in lunatic-rs and the lib.rs in lunatic not matching up.

This can be solved by adding a .vscode/settings.json file like so:

{
    "rust-analyzer.cargo.target": "wasm32-wasi"
}

but then that breaks my client project, since that project is currently only configured to run on desktop targets.

Attempting to add compilation gates like this to lunatic-rs/src/sqlite/query.rs:

#[cfg(not(target_arch = "wasm32"))]
use lunatic_sqlite_api as bindings;

and exposing the relevant functions in lunatic/crates/lunatic-sqlite-api/src/sqlite_bindings.rs does not work, as the function arities are different.

Support `Mailbox` serializer type param in `spawn!` macro

The spawn! and spawn_link! macros accept a Mailbox<T> in the pattern match, but don't support specifying a custom Serializer as the second param with Mailbox<T, Json>.

spawn!(|mailbox: Mailbox<String>| { }); // Works
spawn!(|mailbox: Mailbox<String, Json>| { }); // Fails, due to invalid syntax in macro

Primitive for reader processes

Hi! I've been working on some lunatic-based projects and noticed that creating "writer" processes, which listen to the mailbox and either respond to requests or accept messages, is really easy with the new AbstractProcess primitive. Yet I find that "reader" processes, which listen not to the mailbox but to some other source (e.g. a TcpStream), are only available via function-based processes instead of trait-based ones. So what I'm proposing is to include another trait like LoopingProcess which simply adds another function like run(mailbox, state: Self::State). It could even be another method on the AbstractProcess trait which defaults to listening to the mailbox.

Code for a reader process could then look like this:

impl AbstractProcess for ClientProcess {
    type Arg = TcpStream;
    type State = Self;

    fn init(this: ProcessRef<Self>, stream: Self::Arg) -> Self::State {
        // ... set up stuff for the state (e.g. writer, coordinator, metrics_recorder, packet_reader)
        // ...
        ClientProcess {
            this: this.clone(),
            coordinator,
            writer,
            metrics_recorder,
            packet_reader
        }
    }

    fn run(state: Self::State) {
        loop {
            match state.packet_reader.read() {
                Ok(message) => {
                    state.metrics_recorder.track_new_packet();
                    println!("Received packet {:?}", message);
                    state.writer.respond("Some response");
                }
                Err(err) => panic!("A decoding error occurred: {:?}", err),
            };
        }
    }
}

In my opinion this has the following benefits:

  1. It allows us to keep logic to manipulate/read state within the ClientProcess struct
  2. Allows for easier reasoning about the logic
  3. We don't have to manually spawn a process and send the context because that's taken care of by the trait/struct
  4. Makes testing easier because it encourages one to keep highly cohesive code together in a more natural way

Actually, I already forked the lib and tried to change this (albeit in a dirty way) myself. Here's the commit where I added a run method:
main...SquattingSocrates:extend-abstract-process

I can create a PR later if we agree on implementing it this way.

Make `Process` Public

Process should be public, right? There is currently no way to have an AbstractProcess receive a reference to a Mailbox process to send messages to.
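For context, a sketch of the kind of thing this would enable if Process<M> were exported, using the AbstractProcess shape from the other issues here (Forwarder is just an illustrative name):

use lunatic::process::{AbstractProcess, ProcessRef};
use lunatic::Process;

struct Forwarder;

impl AbstractProcess for Forwarder {
    // A handle to a plain mailbox-based process we want to message later.
    type Arg = Process<String>;
    type State = Process<String>;

    fn init(_: ProcessRef<Self>, target: Self::Arg) -> Self::State {
        target.send("forwarder started".to_owned());
        target
    }
}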

If dependencies use different versions of the `lunatic` crate, compilation fails

The error looks something like this:

= note: rust-lld: error: duplicate symbol: _lunatic_spawn_by_index
          >>> defined in <project>/target/wasm32-wasi/debug/deps/liblunatic-5be2e6a3677e3780.rlib(lunatic-5be2e6a3677e3780.lunatic.39a91549-cgu.2.rcgu.o)
          >>> defined in <project>/target/wasm32-wasi/debug/deps/liblunatic-9414a7eb552b5238.rlib(lunatic-9414a7eb552b5238.4nhn8gp7amf2rmyg.rcgu.o)

It's related to exporting the _lunatic_spawn_by_index function as an entry point for processes: multiple definitions of it can exist if different versions of lunatic are part of the same compilation target.

A solution could be to include the version number as part of the function name, that way two different versions would not conflict.

Spawning processes should allow for borrowed captured data

Currently, trying to spawn a process requires you to clone captured data if you'd like to use it after the spawn. But because encoding data only requires a & reference, cloning should not be required.

E.g.:

#[derive(Serialize, Deserialize)]
struct Foo {} // Doesn't implement clone

fn main() {
    let foo = Foo {};
    
    Process::spawn(&foo, |foo, _: Mailbox<()>| { ... }); // Fails, Deserialize is not implemented for `&Foo`, only `Foo`.
    Process::spawn(foo, |foo, _: Mailbox<()>| { ... });
    Process::spawn(foo, |foo, _: Mailbox<()>| { ... }); // fails, because foo was moved
}

It would be nice if spawn worked with both &foo and foo, so the change would not be breaking in any way.

Socket error on write (os error 10053)

Hi, I'm seeing an error while building upon the tcp-echo example. When writing to the stream, I cannot also read from it. It causes the underlying socket to close on error.

Custom { kind: Other, error: An established connection was aborted by the software in your host machine. (os error 10053) }

At first I thought this was because I copied the stream for separate reading and writing in two different tasks (likely a bad idea); however, that wasn't the only situation where I found the issue. Below is a basic example where the server simply sends before waiting to receive. This seems to cause the error on my OS. Is this unintended behavior, or am I using the net module wrong?

use lunatic::{net, spawn_link, Mailbox, Process};
use std::io::{BufRead, BufReader, Write};


#[lunatic::main]
fn main(_: Mailbox<()>) {
    let port = "6000";
    let addr = "127.0.0.1";
    let endpoint = addr.to_string() + ":" + port;
    let listener = net::TcpListener::bind(endpoint.to_owned()).unwrap();
    println!("Listening on addr: {}", listener.local_addr().unwrap());

    let child = spawn_link!(@task | endpoint | {
        let client = net::TcpStream::connect(endpoint);
        match client {
            Ok(mut stream) => { },
            Err(err) => { println!("client socket error: {:?}", err); return }
        }
    });

    while let Ok((tcp_stream, peer)) = listener.accept() {
        println!("client connected {:?}", peer);
        Process::spawn(tcp_stream, handle);
    }
}

fn handle(mut tcp_stream: net::TcpStream, _: Mailbox<()>) {
    println!("tx {:?}", tcp_stream.clone().write(b"hi")); // this causes the read Err below

    let mut buf_reader = BufReader::new(tcp_stream.clone());
    loop {
        println!("waiting");
        let mut buffer = String::new();
        match buf_reader.read_line(&mut buffer) {
            Ok(size) => {
                println!("rx: {:?} {:?}", buffer, buffer.contains("exit"));

                if buffer.contains("exit") || size == 0 {
                    return;
                }
                
                tcp_stream.write(buffer.as_bytes());
            },
            Err(err) => { println!("handler error: {:?}", err); return } // point of error where socket is aborted
        }
    }
}

Actual output:

Listening on addr: 127.0.0.1:6000
client connected 127.0.0.1:64207
tx Ok(2)
waiting
handler error: Custom { kind: Other, error: An established connection was aborted by the software in your host machine. (os error 10053) }

Build details (lunatic compiled today 05/02/22):

rustc 1.60.0 (7737e0b5c 2022-04-04)
binary: rustc
commit-hash: 7737e0b5c4103216d6fd8cf941b7ab9bdbaace7c
commit-date: 2022-04-04
host: x86_64-pc-windows-msvc
release: 1.60.0
LLVM version: 14.0.0

abstract_process Macro send_after Design

Currently, the abstract_process macro only creates wrapper methods for send but not send_after.

// E.g. instead of doing
counter.send(Inc(2));
// we can just call
counter.increment(2);
// but if we want to use send_after, we have to write something like this
counter.send_after(__MsgWrapperIncrement(2), Duration::from_secs(1));

There are two ways to design the send_after wrapper methods.

// 1. Add new wrapper methods
counter.increment_after(2, Duration::from_secs(1));
// 2. Or use the builder-like pattern
counter
    .after(Duration::from_secs(1))
    .increment(2);

I am heavily leaning towards the second design but I would love to know what you think.

Make AbstractProcess start and shutdown failable

Erlang's GenServer allows process start and shutdown to fail, so we should allow such behavior too.

The current implementation will cause start to block forever with the following code:

struct A;

impl AbstractProcess for A {
    type Arg = ();
    type State = A;

    fn init(_: ProcessRef<Self>, _: ()) -> A {
        panic!();
    }
}

A::start((), None);
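A sketch of the kind of fallible API this could become (try_start and the error type are hypothetical, not the current lunatic API):

// Hypothetical fallible variant of start(); init() panicking or returning
// an error would surface here instead of blocking forever.
match A::try_start((), None) {
    Ok(process_ref) => { /* use process_ref */ }
    Err(err) => eprintln!("A failed to start: {err:?}"),
}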

Distributed Example Runtime Error

Running cargo run --example distributed gives the following error

thread 'main' panicked at 'index out of bounds: the len is 0 but the index is 0', examples/distributed.rs:28:50
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
[2022-07-23T18:19:31Z WARN  lunatic_process] Process 1 failed, notifying: 0 links
    			    (Set ENV variable `RUST_LOG=lunatic=debug` to show stacktrace)

Easier way to implement AbstractProcess

Currently, implementing ProcessMessage and ProcessRequest for an AbstractProcess is quite verbose and requires a lot of boilerplate code. What if we could create new AbstractProcesses in the style of Elixir? Instead of writing impls and specifying return types manually, we would simply do something like this:

struct Counter {
    count: u32,
}

#[derive(serde::Serialize, serde::Deserialize)]
struct Inc;
#[derive(serde::Serialize, serde::Deserialize)]
struct Count;

#[abstract_process]
impl Counter {
    #[init]
    fn init(_process: ProcessRef<Self>, count: u32) -> Self {
        Self { count }
    }

    #[terminate]
    fn shutdown(self) {
        println!("Shutting down with state {}", self.count);
    }

    #[process_message]
    fn increment(&mut self, _: Inc) {
        self.count += 1;
        self.check_count();
    }

    #[process_request]
    fn count(&self, _: Count) -> u32 {
        self.count
    }

    fn check_count(&self) {
        if self.count > 5 {
            println!("count exceeded 5!");
        }
    }
}

Pros:

  • Having the method implementations closer to each other makes the code much more compact and readable.
  • Allow users to specify methods to take &self when appropriate instead of forcing all handlers to take &mut self.
  • Can easily transform the current implementation to the new style.
  • Can easily apply AbstractProcess behaviors to existing Rust structs
  • Maybe it is possible to allow autocompletion to present valid Message structs for a given process type.

Cons:

  • Using procedural macros might lead to a longer compile time.
  • It is currently unclear where to write the docstring (above the message structs or the methods).

Any feedback is welcome!

RUSTSEC-2023-0052: webpki: CPU denial of service in certificate path building

webpki: CPU denial of service in certificate path building

Details
Package webpki
Version 0.22.0
Date 2023-08-22

When this crate is given a pathological certificate chain to validate, it will
spend CPU time exponential with the number of candidate certificates at each
step of path building.

Both TLS clients and TLS servers that accept client certificates are affected.

This was previously reported in
<briansmith/webpki#69> and re-reported recently
by Luke Malinowski.

rustls-webpki is a fork of this crate which contains a fix for this issue
and is actively maintained.

See advisory page for additional details.

Encode / Decode local buffer before sending to VM

The current JSON serializer implementation uses serde_json::to_writer and serde_json::from_reader functions for encoding and decoding.

This means each "token" in a JSON message will be sent as an individual message to the VM with the lunatic::host::api::message::write_data function.
For example, the JSON {"name":"John Doe","age":22} would be sent as 15 separate calls to write_data:

{
"
name
"
:
"
John Doe
"
,
"
age
"
:
22
}

It would probably be much more efficient to use serde_json::to_vec, and then MessageRw {}.write(...).
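A minimal sketch of that buffered approach, assuming MessageRw implements std::io::Write (as the write(...) call above suggests):

use std::io::Write;

fn encode<T: serde::Serialize>(value: &T) -> std::io::Result<()> {
    // Serialize the whole value into a local buffer first...
    let buffer = serde_json::to_vec(value)?;
    // ...then hand it to the VM in a single call instead of one
    // write_data call per JSON token.
    MessageRw {}.write_all(&buffer)
}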

Mailbox should be the first argument in `spawn` and `spawn_link`

The Mailbox parameter in the spawn and spawn_link functions is expected to be the last parameter.

I think it might make more sense for it to be first, since it's always required.
This would be similar to how optional/default arguments are typically placed last in many languages, and to the type definition of Mailbox<T, S = Bincode>, with T being required and S being optional.

I think a side effect of this, would be that the spawn! and spawn_link! macros could accept multiple captured variables. Currently they only support one, but it could be many:

spawn!(|mailbox: Mailbox<String>, cap1, cap2, cap3| { ... })

With mailbox being last, I don't think this is really possible.

It's a pretty minor change, and would be a breaking change sadly, but I thought I'd open an issue anyway.

Support generics and `where` clauses in `#[abstract_process]` macro

If the #[abstract_process] macro is used on an impl with any generics or where clauses, it fails.

It might be nice to support this, though it may be a little complicated.

struct GenericProcess<T>(T);

#[abstract_process]
impl<T> GenericProcess<T>
where
    T: Clone
{ ... }

Should expand to:

impl<T> lunatic::process::AbstractProcess for GenericProcess<T>
where
    T: Clone
{ ... }

impl<T> GenericProcessHandler for lunatic::process::ProcessRef<GenericProcess<T>>
where
    T: Clone
{ ... }

// ...

Abstract process panics when using different Serializer in `RequestHandler`

Given the following code:

use lunatic::{
    process::{AbstractProcess, ProcessRef, Request, RequestHandler, StartProcess},
    serializer::Json,
};

struct MyProcess;

impl AbstractProcess for MyProcess {
    type State = Self;
    type Arg = ();

    fn init(_: ProcessRef<Self>, _arg: ()) -> Self::State {
        MyProcess
    }
}

impl RequestHandler<i32, Json> for MyProcess {
    type Response = i32;
    fn handle(_state: &mut Self::State, _request: i32) -> Self::Response {
        1
    }
}

fn main() {
    let my_process = MyProcess::start((), None); // Ok
    my_process.request(1); // Panic
}

Lunatic panics with the following:

thread '' panicked at 'called Result::unwrap() on an Err value: DeserializationFailed(Bincode(Custom("invalid value: integer 1699881595, expected variant index 0 <= i < 3")))', ~/.cargo/registry/src/github.com-1ecc6299db9ec823/lunatic-0.10.0-alpha.1/src/mailbox.rs:162:47

It seems like Bincode is still being used somewhere, even though my request handler is only implemented for Json.
Request::<_, Json>::request(&my_process, 1); also doesn't seem to work.

Is this a bug in Lunatic? Or is there something I'm doing wrong here?

Getting started guide

Running projects on top of Lunatic introduces a few additional steps to your usual cargo setup.

It would be great to have a step-by-step explanation of how to configure everything so that you can just run cargo run and cargo test while everything is compiled to Wasm and executed on Lunatic.

In most cases this will be just adding a specific .cargo/config file.
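For example, something along these lines (a sketch of a typical setup; the exact target and runner invocation may differ per project):

# .cargo/config.toml
[build]
target = "wasm32-wasi"

[target.wasm32-wasi]
runner = "lunatic"

With this in place, cargo run and cargo test compile to wasm32-wasi and hand the resulting .wasm file to the lunatic runtime.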

Allow specifying configuration & tag when spawning from WasmModule

It'd be useful to allow specifying the usual options for processes spawned from WasmModule.
Config especially, to limit resources (for me, heap size in particular), but the other options are likely handy in other scenarios.
I haven't looked deeper into failure recovery & using Tags, but it seems like the usual way to handle failures is attaching a Tag to subprocesses and handling the MailboxResult::LinkDied messages. This isn't possible with processes spawned from WasmModule.

Having a quick look at the implementation, this seems like a minor change. It looks like the only difference between WasmModule::spawn* and Process::spawn* is that WasmModule doesn't use host::spawn but host::api::process::spawn, effectively making WasmModule spawn only on the local node, but it's still possible to pass in all the parameters, just like host::spawn would do in a non-distributed environment.

RUSTSEC-2020-0168: mach is unmaintained

mach is unmaintained

Details
Status unmaintained
Package mach
Version 0.3.2
URL fitzgen/mach#63
Date 2020-07-14

Last release was almost 4 years ago.

Maintainer(s) seem to be completely unreachable.

Possible Alternative(s)

These may or may not be suitable alternatives and have not been vetted in any way;

See advisory page for additional details.

Will it work well with async/await ecosystem?

It seems that it's unnecessary for a stackful coroutine implementation to mark a function as async, but async/await was standardized in Rust and many libraries will build on top of it. Is it compatible with lunatic?

Higher level (`struct` based) process abstraction

Defining and spawning processes are fundamental tasks you do when working with lunatic. Naturally, we want to make the developer experience around them as pleasant as possible. I would like to introduce a new higher-level Rust API for defining processes that is easier to use, provides powerful abstractions (like hot-reloading) and works nicely with Rust's type system. Before going into details of this proposal I would like to take a step back just to explain how the current system works and how we got there.

Lunatic allows you to point to a function in your Rust application and spawn a process from it. This is how Erlang/Elixir works too; and it's a really simple yet powerful tool.

However, lunatic is at the same time a "system" to run any WebAssembly module as a process. This means that it can dynamically load .wasm files written in different languages, spawn processes and send messages to them. Contrary to spawning a process from the currently running module, we can't have any type system guarantees about what messages the spawned process can receive. We may not even have access to the source code of the loaded WebAssembly module.

In the rest of this text I'm solely going to focus on the act of spawning processes from the currently running module where we actually can utilise Rust's type system.

History

From the first days of using Rust with lunatic, I have envisioned that you can just simply spawn processes from functions, the same way you can do it in Erlang/Elixir. A big obstacle here is the fact that Rust is a strongly typed language and Erlang/Elixir are dynamic. How do you fit a concept of processes and messages into the type system? How could you send different types of messages to a process and have the type system catch errors during compilation?

My first approach just mimicked the channels approach that is used in other Rust libraries and in Go. This means that the message type was bound to the channel and a process could capture many channels on startup. This is not ideal, as it moves away from the single mailbox principle and suddenly you have as many "mailboxes" as you have channels. Also, you can't simultaneously wait on multiple channels, as their return values may be of different type. You would always end up capturing one channel and wrapping all the different message types in a super-type enum. This resulted in me getting rid of the channels approach and just having a single mailbox that is received as an argument of the process entry function. This is basically what we have today.

I'm generally satisfied with the current approach and believe that it gives you a simple way to spawn one-off processes and compile time errors if you try to send messages of wrong types to a process (that can't handle them).

Proposal

This "function as a process" approach starts to fall apart once you have more complex processes with complicated behaviours. For example, you can't force a process to return you a message. There is no type level support with the current system to enforce that if a process receives one type of message it should respond with a specific type. Ideally we would like to be able to express this kinds of contracts with the Rust type system.

Erlang/Elixir has a higher level abstraction, the GenServer (generic server). You can implement two kinds of behaviours:

  • handle_call - You received a request and are supposed to respond with a reply.
  • handle_cast - Another process sent you a message, but is not waiting for a response.

I would like to bring the same concept to Rust's lunatic library.

Example

Another library in the Rust ecosystem already figured out a good approach to modelling such behaviours inside Rust's type system: Actix. If we represent a process state as a struct we can define different message handlers on it. The new API would look something like this:

// A message
#[derive(Serialize, Deserialize)]
struct Sum(usize, usize);

// The process state
struct Calculator;

impl lunatic::Process for Calculator {
    type Context = Context<Self>;
}

// A handler for `Sum` messages
impl HandleCall<Sum> for Calculator {
    type Result = usize; // <- response type

    fn handle(&mut self, msg: Sum, ctx: &mut Context<Self>) -> Self::Result {
        msg.0 + msg.1
    }
}

fn main() {
    let addr = Calculator.start();
    let res = addr.send(Sum(10, 5));

    match res {
        Ok(result) => println!("SUM: {}", result),
        _ => println!("Communication to the actor has failed"),
    }
}

In this example we get compile time guarantees that all the messages sent & received are of the correct type. We also can force the process to respond with an appropriate type (Self::Result) when a message of a specific type (Sum) is sent. If we left the response out, the code would not even compile.

Hot reloading

The same way Erlang's GenServer makes it easier to do hot-reloading of Erlang processes, once we have more structure (pun intended) around the processes we can also introduce process lifecycles that make it possible to accomplish hot reloading. We can enforce that the process state implements Serialize + Deserialize and on code changes we just serialize the process state and deserialize it as part of the new implementation.

The Process trait could provide default implementations if the state structure stayed the same, but should also allow developers to define "state transitions" to new versions:

enum Action {
    Reset,    // The process is re-spawned with a new state.
    HotUpdate // Hot-reload the process and try to reuse the previous state. 
}

impl lunatic::Process for State {
    type Context = Context<Self>;

    fn update_behaviour() -> Action { Action::HotUpdate }
    fn update(old_state: Data) -> Result<Self, UpdateError> {
        // old_state contains a serialized version of the previous `Self`.
        // The implementation of this method needs to deserialize it and create a new
        // version of `Self`.
    }
}

This would also make it possible to move processes between machines. If the state can be serialized it can be moved to another node and a process could be bootstrapped there from it.

All of these features don't require any changes to the lunatic runtime and can be completely implemented as a library on the currently existing primitives. I think that this is an important characteristic of lunatic: we can keep the underlying runtime lean, simple and performant, but build really powerful abstractions on top of it.

Summary

I believe that by adding this kind of API we can lean much harder on Rust's type system to enforce correctness. At the same time it nicely mimics Erlang's proven approach.

This is not meant to replace the function-based API; it's more of an augmentation. It represents a philosophy on how to structure your state, requests, responses and lifecycle handling (code updates). The function-based API still gives you full flexibility to programmatically handle messages and is really convenient when creating processes from small closures (timeouts, etc.).

UDP networking support

The lunatic VM exposes host functions for working with UDP. They just need to be wrapped into higher-level Rust types, similar to TCP.
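A sketch of the API shape this could take, mirroring std::net and the existing TCP wrappers (a UdpSocket under lunatic::net is an assumption here, not the released API):

use lunatic::net;

fn echo_once() -> std::io::Result<()> {
    // Hypothetical lunatic::net::UdpSocket mirroring std::net::UdpSocket.
    let socket = net::UdpSocket::bind("127.0.0.1:7000")?;
    let mut buf = [0u8; 1500];
    let (len, peer) = socket.recv_from(&mut buf)?;
    // Echo the datagram back to the sender.
    socket.send_to(&buf[..len], peer)?;
    Ok(())
}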

ProcessConfig::new can fail, but will not report this

This is quite a dangerous bug for users who assume they are spawning processes with reduced privileges when that will not be the case.
ProcessConfig::new is expected to return a config with no privileges but, if the calling process does not itself have the privilege of creating configs, it will simply return -1, indicating that privileges are inherited from the parent instead.

In my opinion ProcessConfig::new should return a Result<ProcessConfig, _>. Most processes would simply expect() this, in which case the missing privilege can simply be added, but no security hole is created.
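A sketch of what that would look like at the call site (the error type is a placeholder):

// Fails loudly instead of silently inheriting the parent's privileges.
let config = ProcessConfig::new()
    .expect("this process is not allowed to create process configs");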

`send` functions should take a reference of message `&M`

Sending messages should not require ownership of the message being sent.

For example, the send method on Process has the following signature:

pub fn send(&self, message: M);

This should be taking a reference of M:

pub fn send(&self, message: &M);

Though this would be a breaking change.

One solution which may keep it non-breaking would be to take impl AsRef<M>.
Though I'm not sure if every implementation of AsRef is compatible with ser/de in bincode. There is also the possibility of the Borrow trait, though I'm also not sure if everything that implements it is compatible with bincode.

`@task`s should use a closure to allow for return keyword and `?` operator

The following code fails due to the ? operator, and the return keyword:

spawn_link!(@task || {
    let data = some_fn()?;
    
    if foo {
        return Err(...);
    }

    Ok(data)
});

This is because the body is not inside a function when expanded.

This can be easily resolved by wrapping the expanded body in a self-executing closure:

(move || { $body })()

Async Request Handlers

Feature Request

It would be nice if abstract process request handlers could return some simple deferred/promise/future type while retaining the synchronous API for clients.

The use case I have in mind is a DB connection pool similar to Erlang's Poolboy. A pool manager launches a set of workers which each own a DB connection. Clients check out a worker ref from the manager, then send the worker requests until they return the worker to the supervisor with a check-in message.

When there are no available workers the manager must return something so it doesn't block its mailbox servicing loop. Instead of returning Option or Result and making the client retry it would be preferable for the manager to return a deferred type which it could stash in a queue. Later, when a worker becomes available, the deferred is resolved with the newly available worker reference. This triggers a callback which was set on the deferred by the outer abstract process handler when the initial request handler returned. The callback just extracts the return value from the deferred type and sends a normal response with the real return type.

While the server API is asynchronous the client API remains synchronous. The macro would parse and extract the real return type so the clients wouldn't see the deferred/promise/future return type.

Another possibility is having the deferred response created by the outer handler and passed in as an argument to the user supplied handler instead of being created and returned there. Then it can be passed to sub-processes with the callback already set, allowing abstract processes to easily use sub-processes to service requests.

example:

struct Pool {
    workers: Vec<ProcessRef<Worker>>,
    waiters: Vec<DeferredResponse<ProcessRef<Worker>>>,
}

#[abstract_process]
impl Pool {
    #[init]
    fn init(_: ProcessRef<Self>, num_workers: u32) -> Self {
        let workers = start_workers(num_workers);
        let waiters = Vec::new();
        Self { workers, waiters }
    }

    #[handle_request_async]
    fn check_out(&mut self) -> DeferredResponse<ProcessRef<Worker>> {
        // `DeferredResponse` is assumed to be a cloneable handle.
        let deferred = DeferredResponse::new();
        match self.workers.pop() {
            None => self.waiters.push(deferred.clone()),
            Some(worker) => deferred.set_value(worker),
        }
        deferred
    }

    #[handle_message]
    fn check_in(&mut self, worker: ProcessRef<Worker>) {
        match self.waiters.pop() {
            None => self.workers.push(worker),
            Some(deferred) => deferred.set_value(worker),
        }
    }
}

#[lunatic::main]
fn main(_: Mailbox<()>) {
    let pool = lookup("DBPool");
    let worker:ProcessRef<Worker> = pool.check_out();

    worker.begin();
    worker.query("drop table STUDENTS;");
    worker.commit();

    pool.check_in(worker);
}

Add a dynamic WorkerPool primitive

The current implementation of Supervisor relies on tuples to manage children. Even though this makes sense in order to support different types of children nodes, it's limiting in that there's no way to dynamically scale the number of children, which is a must in a modern web-based system. Therefore I'm suggesting the addition of a WorkerPool primitive which would take elements of one type and scale them based on a parameter/config. I imagine it would look like this:

impl WorkerPool<T>: Supervisor + Supervisable<?>
where
    Self: Sized,
    T: AbstractProcess
{
    type Arg: Serialize + DeserializeOwned;
    type Children: Supervisable<Self>;
    
    // not 100% sure on the api surface yet
    fn new(worker_count: usize, initializer: impl Fn() -> Children) -> Self;
    fn capacity(&self) -> usize; // get current capacity
    fn scale_by(&mut self, amount: i32); // scale children up or down
    fn get_worker(&self) -> WorkerType;
    fn release_worker(&self, worker: WorkerType);
}

This approach should allow to create a managed pool of workers while still having the ability to combine multiple different types of processes under one supervisor like this:

struct CoordinatorSup;
impl Supervisor for CoordinatorSup {
    type Arg = ();
    // Start a pool of `Counters` as well as one `Logger` and monitor them for failures.
    type Children = (WorkerPool<Counter>, Logger);

    fn init(config: &mut SupervisorConfig<Self>, _: ()) {
        // If a child fails, just restart it. Uses the same strategies as the regular `Supervisor`
        config.set_strategy(SupervisorStrategy::OneForOne);
        // Start each `Counter` with some state, don't know how this should look like
        config.children_args(
            ??? // pass arguments to list children, maybe with a sublist of a static function,
            (0, None) // regularly passing process config to `Logger`
        );
    }
}

Another thing that would be useful is for the pool to be able to receive the same messages that the children can and then forward them to the children. That shouldn't be too hard, although it creates some issues with the current approach to generating code in the abstract_process macro: since abstract_process generates a new trait and implements the functions on ProcessRef<P>, we would need to do the same for WorkerPool for better ergonomics. I'm not sure how to do that though. Maybe something like this:

struct WorkerPool<T> {
    list: lunatic::process::ProcessRef<T>,
}

impl<T, M> lunatic::process::RequestHandler<M> for WorkerPool<T>
where
    T: lunatic::process::RequestHandler<M>,
    M: Serialize + DeserializeOwned,
{
    type Response = T::Response;
    fn handle(state: &mut Self::State, request: M) -> Self::Response {
        state.list.next().request(request); // dummy code
    }
}

Improve `#[lunatic::main]` & `#[lunatic::test]` macro error reporting

If there is an error (syntax, signature mismatch, ...) inside of the main function wrapped by the #[lunatic::main] macro, the compiler & rust-analyzer will mark the whole file as invalid and report:

`main` function not found in crate `x`
consider adding a `main` function to `src/main.rs`

This happens because the macro returns before generating the main function, and it becomes impossible to figure out what the actual error is. I would love to have the actual error reported here, something like:

Function `main` is missing an argument
consider adding it `fn main(mailbox: Mailbox<T>)`

I would suggest just checking out what other crates do in this case (e.g. #[tokio::main]) and copying their approach to error handling.
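For reference, those crates typically emit the compile error alongside the user's original item, so the crate still has a main function and the compiler points at the real problem. A rough sketch, where parse_and_expand is a hypothetical helper returning Result<TokenStream, syn::Error>:

use proc_macro::TokenStream;

#[proc_macro_attribute]
pub fn main(args: TokenStream, item: TokenStream) -> TokenStream {
    match parse_and_expand(args, item.clone()) {
        Ok(expanded) => expanded,
        Err(err) => {
            // Emit the error, but also keep the original `fn main` so the
            // compiler doesn't report "`main` function not found".
            let mut out = TokenStream::from(err.to_compile_error());
            out.extend(item);
            out
        }
    }
}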
