zesterer / flume

A safe and fast multi-producer, multi-consumer channel.

Home Page: https://crates.io/crates/flume

License: Apache License 2.0

Languages: Rust 100.00%
Topics: rust, concurrency, channel

flume's Introduction

Flume

A blazingly fast multi-producer, multi-consumer channel.


use std::thread;

fn main() {
    println!("Hello, world!");

    let (tx, rx) = flume::unbounded();

    thread::spawn(move || {
        (0..10).for_each(|i| {
            tx.send(i).unwrap();
        })
    });

    let received: u32 = rx.iter().sum();

    assert_eq!((0..10).sum::<u32>(), received);
}

Why Flume?

  • Featureful: Unbounded, bounded and rendezvous queues
  • Fast: Always faster than std::sync::mpsc and sometimes crossbeam-channel
  • Safe: No unsafe code anywhere in the codebase!
  • Flexible: Sender and Receiver both implement Send + Sync + Clone
  • Familiar: Drop-in replacement for std::sync::mpsc
  • Capable: Additional features like MPMC support and send timeouts/deadlines
  • Simple: Few dependencies, minimal codebase, fast to compile
  • Asynchronous: async support, including mix 'n match with sync code
  • Ergonomic: Powerful select-like interface

Usage

To use Flume, place the following line under the [dependencies] section in your Cargo.toml:

flume = "x.y"

Cargo Features

Flume comes with several optional features:

  • spin: use spinlocks instead of OS-level synchronisation primitives internally for some kinds of data access (may be more performant on a small number of platforms for specific workloads)

  • select: Adds support for the Selector API, allowing a thread to wait on several channels/operations at once

  • async: Adds support for the async API, including on otherwise synchronous channels

  • eventual-fairness: Use randomness in the implementation of Selector to avoid biasing/saturating certain events over others

You can enable these features by changing the dependency in your Cargo.toml like so:

flume = { version = "x.y", default-features = false, features = ["async", "select"] }

Although Flume has its own extensive benchmarks, don't take our word for it that Flume is quick. The following graph is from the crossbeam-channel benchmark suite.

Tests were performed on an AMD Ryzen 7 3700x with 8/16 cores running Linux kernel 5.11.2 with the bfq scheduler.

Flume benchmarks (crossbeam benchmark suite)

License

Flume is licensed under either of:

  • Apache License 2.0
  • MIT License

flume's People

Contributors

aeledfyr, bwoods, coolreader18, cwfitzgerald, dylan-dpc, ericmcbride, gauthamastro, hippobaro, hobofan, imberflur, inetic, kyrias, lrazovic, lu-zero, lunacookies, maboesanman, matklad, mbrubeck, mullr, natsukagami, nivkner, phil-opp, restioson, rofrol, rzheka, sherlock-holo, stonks3141, tavianator, tesuji, zesterer


flume's Issues

Memory leak caused by polling `RecvStream`

RecvFut pushes a hook thing onto a queue when polled and not ready:

wait_lock(&self.receiver.shared.chan).waiting.push_back(hook);

Normally, this is all tidied up when the RecvFut is dropped, but RecvStream will hold onto, and keep polling, a RecvFut for as long as you have the stream open, which can lead to unbounded memory growth.

Below is a simple test case to reproduce. All we do is poll a stream continuously in a loop, waiting for messages to arrive (while assuming here that messages are coming in from other sources, too, so that we keep polling).

If you run this, you'll see the memory usage in top increase quickly into hundreds of megs.

main.rs

use futures::StreamExt;

#[tokio::main]
async fn main() {
    let (tx, rx) = flume::unbounded::<()>();
    let mut rx_stream = rx.into_stream();

    loop {
        tokio::select! {
            _ = rx_stream.next() => {

            },
            _ = std::future::ready(()) => {

            }
        }
    }
}

Cargo.toml

[package]
name = "flume-memleak-test"
version = "0.1.0"
edition = "2018"

[dependencies]
flume = "0.10.9"
futures = "0.3.17"
tokio = { version = "1.7.0", features = ["full"] }

Why does sending require T: Unpin?

It is perfectly valid for SendFut to be Unpin even if T isn't. As long as you provide no way to pin project from SendFut to T, having a Pin<&mut SendFut> does not imply that T has been pinned.
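A minimal sketch of the claim, using a hypothetical future type (not flume's actual SendFut): it stores T by value, never exposes Pin<&mut T>, and can therefore soundly implement Unpin for any T.

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Hypothetical stand-in for a send future: T is stored by value and
// never structurally pinned.
struct MySendFut<T> {
    item: Option<T>,
}

// Sound even for T: !Unpin, because no API ever hands out Pin<&mut T>.
impl<T> Unpin for MySendFut<T> {}

impl<T> Future for MySendFut<T> {
    type Output = Option<T>;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        // Self: Unpin, so the pin can be dereferenced to &mut Self and
        // T moved out by value.
        Poll::Ready(self.item.take())
    }
}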

Sender into_sink() panics when used with futures forward.

A sender sink created with into_sink() panics when used with futures' stream::forward().
Using it in an async closure with fold still works fine (see the commented-out code below).

if let Some(SendState::QueuedItem(hook)) = self.hook.as_ref() {

Trace:

thread 'tokio-runtime-worker' panicked at 'called Option::unwrap() on a None value', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/flume-0.9.1/src/async.rs:146:40
stack backtrace:
0: rust_begin_unwind
at /rustc/dd7fc54ebdca419ad9d3ab1e9f5ed14e770768ea/library/std/src/panicking.rs:483:5
1: core::panicking::panic_fmt
at /rustc/dd7fc54ebdca419ad9d3ab1e9f5ed14e770768ea/library/core/src/panicking.rs:85:14
2: core::panicking::panic
at /rustc/dd7fc54ebdca419ad9d3ab1e9f5ed14e770768ea/library/core/src/panicking.rs:50:5
3: core::option::Option::unwrap
at /root/.rustup/toolchains/nightly-aarch64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/option.rs:386:21
4: <flume::async::SendFuture as core::future::future::Future>::poll
at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/flume-0.9.1/src/async.rs:146:23
5: <flume::async::SendSink as futures_sink::Sink>::poll_ready
at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/flume-0.9.1/src/async.rs:187:9
6: <futures_util::sink::map_err::SinkMapErr<Si,F> as futures_sink::Sink>::poll_ready
at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.6/src/sink/map_err.rs:39:9
7: <futures_util::stream::stream::forward::Forward<St,Si,Item> as core::future::future::Future>::poll
at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.6/src/stream/stream/forward.rs:59:24
8: <futures_util::stream::stream::Forward<St,Si> as core::future::future::Future>::poll
at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.6/src/lib.rs:111:13
9: tokio::runtime::task::core::Core<T,S>::poll::{{closure}}
at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.3.0/src/runtime/task/core.rs:173:17
10: tokio::loom::std::unsafe_cell::UnsafeCell::with_mut
at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.3.0/src/loom/std/unsafe_cell.rs:14:9
11: tokio::runtime::task::core::Core<T,S>::poll
at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.3.0/src/runtime/task/core.rs:158:13
12: tokio::runtime::task::harness::Harness<T,S>::poll::{{closure}}
at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.3.0/src/runtime/task/harness.rs:107:27

Test code (excerpt from my test suite; built with the recently-released tokio v0.3):

use tokio::runtime::Runtime;
use futures::future::{self, FutureExt};
use futures::stream::{self, Stream, StreamExt};

use test::Bencher;

const BLOCK_COUNT: usize = 1_000;
const THREADS: usize = 8;
const CHANNEL_BUFFER: usize = 100;

type Message = usize;

fn new_message(val: usize) -> Message {
    val
}

fn dummy_stream() -> impl Stream<Item = Message> {
    stream::unfold(0, |count| async move {
        if count < BLOCK_COUNT {
            let buffer = new_message(count);
            Some((buffer, count + 1))
        } else {
            None
        }
    })
}

#[bench]
fn channel_buf100_flume(b: &mut Bencher) {
    let runtime = Runtime::new().expect("can not start runtime");
    b.iter(|| {
        let (send_task, recv_task) = {
            use futures::SinkExt;
            let (tx, rx) = flume::bounded::<Message>(100);

            /*
            // Folding over send_async works fine...
            let send_task = dummy_stream()
                .fold(tx, |mut tx, item| async move {
                    let r = tx.send_async(item).await;
                    r.expect("Receiver closed");
                    tx
                });
            */
            // ...but forwarding into the sink panics.
            let send_task = dummy_stream()
                .map(|i| Ok(i))
                .forward(tx.into_sink().sink_map_err(|e| {
                    panic!("send error:{:#?}", e)
                }));

            let recv_task = rx
                .into_stream()
                .for_each(|item| async move {
                    test_msg(item); // test_msg is defined elsewhere in the suite
                });
            (send_task, recv_task)
        };
        runtime.spawn(send_task);
        runtime.block_on(recv_task);
    });
}

Update nanorand to 0.5.1

There's an issue with versions < 0.5.1 not properly generating random numbers: rustsec/advisory-db#525.

Also, please reconsider the switch from rand to nanorand. There was no benchmarking done to see if flume is actually faster with nanorand.

Add oneshot channel type

Would you be open to adding a separate oneshot channel implementation? I haven’t done any benchmarking, but I assume that this could save some heap allocations as only one value has to be stored. Maybe this idea could be extended to include any constant channel capacity using the stable subset of const generics?
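For context, a minimal sketch of the status quo this would improve on, assuming a recent flume release: a bounded(1) channel already gives oneshot-style behaviour, though it doesn't enforce single use and (per the issue's reasoning) presumably still allocates a general-purpose queue, which is what a dedicated type could avoid.

fn oneshot<T>() -> (flume::Sender<T>, flume::Receiver<T>) {
    // A capacity-1 channel: the first send never blocks.
    flume::bounded(1)
}

fn main() {
    let (tx, rx) = oneshot();
    std::thread::spawn(move || tx.send(42).unwrap());
    assert_eq!(rx.recv().unwrap(), 42);
}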

Draining Channel

Hi, thanks for the great work.

Looking into using this; just wondering whether it's possible to drain a bounded channel? In my app some of the consumers could fall behind, and I'd like to clear all currently queued items and start fresh (see the sketch below).

Many thanks Jack
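A hedged sketch of one way to do this today, using the Receiver::drain() iterator mentioned elsewhere in this document (assuming a recent flume release): drain() collects everything currently queued without blocking, leaving the channel open for fresh items.

fn main() {
    let (tx, rx) = flume::bounded::<u32>(8);
    for i in 0..8 {
        tx.send(i).unwrap();
    }
    // Take every message currently in the queue without blocking.
    let backlog: Vec<u32> = rx.drain().collect();
    assert_eq!(backlog.len(), 8);
    // The channel itself stays usable afterwards.
    tx.send(99).unwrap();
    assert_eq!(rx.recv().unwrap(), 99);
}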

Spinlock exponential backoff has no cap — can hang all threads that touch the lock

flume/src/lib.rs

Lines 301 to 310 in 489c015

loop {
    for _ in 0..10 {
        if let Some(guard) = lock.try_lock() {
            return guard;
        }
        thread::yield_now();
    }
    thread::sleep(Duration::from_nanos(1 << i));
    i += 1;
}

This should have a max backoff. If the thread holding the lock is suspended, especially for a while (e.g. if it's a low-priority thread, or gets paused in a debugger, or the system is laggy...), every other thread will continue to exponentially increase its backoff. Then, when the thread holding the lock finally releases it, it doesn't matter, since everybody else has hung.

Fixing this should just be a matter of setting a maximum sleep duration. I'd probably put it at 2ms max but honestly it's arbitrary and a better solution is not to use a spinlock for something like this.
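A minimal sketch of that fix, mirroring the snippet quoted above (the 2ms cap is arbitrary, per the suggestion; the shift is also clamped so it cannot overflow):

const MAX_BACKOFF: Duration = Duration::from_millis(2);

loop {
    for _ in 0..10 {
        if let Some(guard) = lock.try_lock() {
            return guard;
        }
        thread::yield_now();
    }
    // Exponential backoff, but never sleep longer than MAX_BACKOFF.
    thread::sleep(Duration::from_nanos(1u64 << i.min(21)).min(MAX_BACKOFF));
    i += 1;
}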

The part where I tell you not to use a spinlock

You can skip this and just fix the issue if you aren't interested in switching. I don't have the energy or time to debate it, and I don't use flume so I don't really care. The real issue is libstd's mutex being bad enough to push you to this, which I truly hope we eventually fix (it's gotten better though).

In general: you shouldn't use a spinlock here (yes, it's still a spinlock if you call thread::yield_now). It's probably always a bad idea to use them in generic code (it can be okay for very concrete cases, but it's mostly a mistake in userspace)... but especially given the size of the critical sections in flume (which include... calls into the memory allocator? 🙀), yeah, it seems likely to cause problems.

Calling yield_now like this does not make things better, and mostly just helps out benchmarks, since in those you're very likely to switch to either the thread that has the lock, or to another thread that will immediately switch away, and you don't have very many thread priorities.

You can easily find people telling you not to do this and why if you search for stuff like "sched_yield spinlock" (Linus, unsurprisingly, has some heated words about it that I won't link due to toxicity), but at the end of the day it's just not a good thing to do: it basically wastes time hammering the kernel's runqueue before going to sleep for probably far too long, degrading UX. (This is true on basically all OSes.)

I get why you don't use a Mutex on unix, although I suspect it might have gotten better since you tried, but it's still not great. Ideally, an MCS-style adaptive mutex would be good, as you don't need features like timed wait, but IDK of any off the shelf¹.

Anyway, follow this advice or don't — I don't use flume, but I've seen this kind of thing cause problems before. You should fix the backoff cap either way though.


¹ Closest I know of: parking_lot_core has one that it uses internally. It's not public (nor would it be reasonable to depend on parking_lot for just it), but you could use it as a model — I think you'd probably want to switch to using std::thread::park/std::thread::unpark over the thread_parker stuff though.

That said, obviously you can't write a mutex without unsafe code (especially not one that performs well). I personally think that's less important than behaving well on modern OSes, but I guess people can disagree about that, and it's probably not a discussion that has a single correct answer.

Using std::sync::Mutex universally would let you continue to refrain from unsafe, though.

Remove src/bin/perf.rs or move it somewhere else

Unfortunately this causes cargo install flume to install a perf binary, which would override the system perf that ships with the Linux kernel.

Would be nice if you could move it to tests, examples or somewhere else.

Example showing request + response

I know it's pretty custom and not a typical use case, but I think it'd be pretty cool to show the power of the library with an exact request + response example: i.e. send on tx and wait for a reply on rx, without risking receiving, out of order, a reply that belongs to a different tx.
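A hedged sketch of that pattern: each request bundles its own reply channel, so a response can never be paired with the wrong request, even with many requests in flight.

use std::thread;

struct Request {
    payload: u32,
    reply: flume::Sender<u32>,
}

fn main() {
    let (req_tx, req_rx) = flume::unbounded::<Request>();

    thread::spawn(move || {
        for req in req_rx.iter() {
            // Respond on the channel carried by this specific request.
            let _ = req.reply.send(req.payload * 2);
        }
    });

    let (reply_tx, reply_rx) = flume::bounded(1);
    req_tx.send(Request { payload: 21, reply: reply_tx }).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), 42);
}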

Add close()

Hi, thanks for your nice work. Sometimes we need to close a channel to prevent new items being added, and in some cases we cannot simply drop the sender, so we need to explicitly close the channel without dropping it. Is there any plan to add a close() method?

Consider switching off `spin`

So, apparently spin is unmaintained. cargo deny mentions a few alternatives, though.

error[RUSTSEC-2019-0031]: spin is no longer actively maintained
    ┌─ /×××××××/Cargo.lock:103:1
    │
103 │ spin 0.5.2 registry+https://github.com/rust-lang/crates.io-index
    │ ---------------------------------------------------------------- unmaintained advisory detected
    │
    = The author of the `spin` crate does not have time or interest to maintain it.
      
      Consider the following alternatives (both of which support `no_std`):
      
      - [`conquer-once`](https://github.com/oliver-giersch/conquer-once)
      - [`lock_api`](https://crates.io/crates/lock_api) (a subproject of `parking_lot`)
        - [`spinning_top`](https://github.com/rust-osdev/spinning_top) spinlock crate built on `lock_api`
    = URL: https://github.com/mvdnes/spin-rs/commit/7516c80
    = spin v0.5.2
      └── flume v0.7.1
          └── ×××××××

2020-05-29 16:19:01 [ERROR] encountered 1 errors

`recv_async` returning Disconnected in some highly-concurrent situations even though `send` succeeds

I've been experiencing a situation where I feel like flume is misbehaving. On a channel, I can verify I've received an Ok(()) from a sender, and receive a disconnect error from flume on the corresponding receiver. This is the smallest example I could create.

Dependencies:

[dependencies]
tokio = { version = "1", features = ["full"] }
flume = { version = "0.10" }
futures = "0.3"

use flume::Sender;

#[tokio::main]
async fn main() {
    let tasks = (0..10).map(|_| sending_loop());

    futures::future::join_all(tasks).await;
}

async fn sending_loop() {
    for _ in 0..1000usize {
        sender_test().await
    }
}

async fn sender_test() {
    let (sender, receiver) = flume::bounded(1);

    tokio::spawn(sending(sender));

    receiver.recv_async().await.unwrap()
}

async fn sending(sender: Sender<()>) {
    sender.send(()).unwrap();
}

Running this on my machine, an 8-core/16-thread Ryzen, I regularly get the output "thread 'main' panicked at 'called Result::unwrap() on an Err value: Disconnected', src/main.rs:21:33", which corresponds to the recv_async() line.

If this test is written to spawn a lot of tasks that call sender_test in parallel all at once, it doesn't seem to trigger the behavior. However, introducing multiple loops calling sender_test repeatedly causes the behavior change.

I couldn't figure out how to simplify this example further.

To work around the issue, you can clone the sender being passed into sending(), but this prevents actual disconnections from happening.

Update 1: It appears this was introduced between 0.9.2 and 0.10.0. I'm not able to reproduce this behavior with 0.9.2.

Values received out-of-order in a single-threaded async single-producer single-consumer scenario (involving tokio::select).

Single producer, single consumer, using send_async and recv_async. Everything runs on tokio's current-thread runtime. One small twist here is the use of tokio::select!.

Here's a short repro:

fn main() {
    let rt = tokio::runtime::Builder::new_current_thread().build().unwrap();
    rt.block_on(test());
}

async fn test() {
    let (tx, rx) = flume::bounded(4);
    tokio::select! {
        _ = producer(tx) => {},
        _ = consumer(rx) => {},
    }
}

async fn producer(tx: flume::Sender<usize>) {
    for i in 0..100 {
        tx.send_async(i).await.unwrap();
    }
}

async fn consumer(rx: flume::Receiver<usize>) {
    let mut expected = 0;
    while let Ok(value) = rx.recv_async().await {
        //  !!! This assert usually fires after a few values
        assert_eq!(value, expected);
        expected += 1;
    }
}

Same experiment with tokio::sync::mpsc::channel() works fine:

async fn test_tokio_channel() {
    let (tx, rx) = tokio::sync::mpsc::channel(4);
    tokio::select! {
        _ = producer_tokio(tx) => {},
        _ = consumer_tokio(rx) => {},
    }
}

async fn producer_tokio(tx: tokio::sync::mpsc::Sender<usize>) {
    for i in 0..100 {
        tx.send(i).await.unwrap();
    }
}

async fn consumer_tokio(mut rx: tokio::sync::mpsc::Receiver<usize>) {
    let mut expected = 0;
    while let Some(value) = rx.recv().await {
        assert_eq!(value, expected);
        expected += 1;
    }
}

flume 0.8.4 panics on parallel recv_async()

Hi, Thanks for the nice project!
During creating an application with flume I found a panic in debug build.

full codes are https://github.com/hatoo/flume-panic

My Environment

  • WSL2, Ubuntu 20.04

use futures::stream::FuturesUnordered;
use futures::StreamExt;

#[tokio::main]
async fn main() {
    let (tx, rx) = flume::unbounded();
    tokio::spawn(async move {
        let n_sends = 100000;
        for _ in 0..n_sends {
            tx.send_async(()).await.unwrap();
        }
        println!("send end");
    });

    let mut futures_unordered = (0..250)
        .map(|_| async {
            while let Ok(()) = rx.recv_async().await
            /* rx.recv() is OK */
            {}
        })
        .collect::<FuturesUnordered<_>>();

    while futures_unordered.next().await.is_some() {}
    println!("recv end");
}

The program sends many values through the channel, receives them via parallel workers using recv_async(), and panics in my environment. The log for RUST_BACKTRACE=full is pasted in https://github.com/hatoo/flume-panic.
Also, I've found that it works correctly if I change recv_async().await to recv().

So I'm guessing there is an issue in recv_async().

Thanks.

Feature: broadcast

Your crate is almost ideal among those I tried (crossbeam-channel and tokio::sync::mpsc). The tastiest features are recv_async and send_async. It would be very nice to have a broadcast feature (i.e. cloning the receiver by some other method like clone_for_broadcast() or something), because there are situations (like mine) where broadcasting is necessary, and having it all in one library would be very nice.

Possibly poor scheduling timeout on macOS and Windows

We had to disable three timeout tests in #17.
I tried increasing the error bound up to 100 milliseconds, but the observed error exceeded even 150ms.

Some other attempts also failed:

  • running the timeout tests in release mode only
  • running in release mode with --test-threads 4

Expected behavior: the timeout tests complete within their bounds on Windows and macOS.

Consider adding into_iter()

The current API has into_stream(); it would be nice to have the same functionality on the sync side as well.

MPMC support

#14 (and the mpmc branch) includes experimental support for cloning Receiver. However, performance is not great. This issue exists to discuss ways to improve this.

Selector::poll polls only the first selection

Just looking at the code, I noticed this line:

self.next_poll = (((self.next_poll as u64 + 1) * self.selections.len() as u64) >> 32) as usize;

The poll method supposedly polls all selections in a pseudorandom order, but self.next_poll is always zero unless self.selections.len() >= 2^32: for instance, with next_poll == 0 and two selections, ((0 + 1) * 2) >> 32 == 0, so next_poll never advances past the first selection. I'm not sure what the code was meant to do.

Make Selector reusable

A common pattern is to use Selector in an infinite loop.
Currently, a user needs to construct a new Selector on each iteration because wait consumes it.
This can have a performance impact in high-load cases.

Would it be possible to make wait borrow self instead?

Support broadcast channel

Right now, cloning a receiver does not deliver each value to every receiver; each value goes to only a single one.

Can we add a broadcast method which delivers each value (with a Clone trait bound?) to every receiver that is connected at the time of sending?

Disconnecting a channel should, perhaps, drop unreceived messages

This may or may not be a problem in general; in my case it caused a deadlock. Scenario:

struct Command {
    sender: flume::Sender<()>
}

impl Command {
    fn new() -> (Self, flume::Receiver<()>) {
        let (tx, rx) = flume::bounded(1);
        (Command { sender: tx }, rx)
    }

    fn run(&mut self) {
        do_something();
        self.sender.send(()).unwrap();
    }
}

fn run_command_thread(receiver: flume::Receiver<Command>) {
    for mut command in receiver {
        command.run();
    }
}

fn main() {
    let (tx, rx) = flume::unbounded();
    std::thread::spawn(move || {
        run_command_thread(rx);
    });
    for _ in 0..5000 {
        let (command, command_rx) = Command::new();
        tx.send(command).unwrap();
        // Wait for the command to finish; in some scenarios this blocks forever.
        command_rx.recv().unwrap();
    }
}

This is a pretty standard pattern for running commands in another thread and getting back a result.

The problem is that if the command thread panics before reading all commands, the main thread might block on command_rx.recv() forever, because the corresponding sender is stuck inside a Command in the channel and will never be read, since the command thread is dead.

The solution is to put a timeout on command_rx.recv() and check whether the command thread has died (via tx.is_disconnected()), but... perhaps more broadly it makes sense to say that if there are no receivers left, anything in the channel gets dropped, because no one will ever touch it again anyway. Or maybe not. Worth considering, at least.

Futures support

I'm currently looking at what an API for async support might look like. At the most basic, I'd want an async fn on_recv and an async fn on_send, but it would also be nice to have a method producing an impl async_std::stream::Stream.

Is this in line with the project's goals?

Check if all senders are disconnected

The iterators returned by Receiver::try_iter() and Receiver::drain() do not differentiate between the channel being empty and all senders having been dropped. So when either of those iterators ends, there is no way to tell whether the channel is permanently closed or only temporarily empty. Would it be possible to add a public function, perhaps Receiver::finished() -> bool, to read the private Receiver.finished field? That way, a receiver can react properly if the sender thread unexpectedly panics.

One more thing, shouldn't the documentation here actually say "Whether all senders have disconnected and there are no messages in any buffer"?

/// Whether all receivers have disconnected and there are no messages in any buffer

Future structs don't have `#[must_use]` attributes

While it isn't strictly necessary, adding #[must_use] to the futures returned by async methods helps to prevent simple errors of forgetting to await the returned future.

For example, this code causes no warnings, but never actually sends the message on the channel:

let (tx, rx) = flume::bounded(1);
tx.send_async("Hello, world!"); // the returned future is dropped here, so nothing is sent
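A minimal illustration of the requested attribute on a hypothetical future type (not flume's actual code): with #[must_use], the mistake above becomes a compiler warning.

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

#[must_use = "futures do nothing unless you `.await` or poll them"]
struct SendFut;

impl Future for SendFut {
    type Output = ();
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        Poll::Ready(())
    }
}

fn send_async() -> SendFut {
    SendFut
}

fn main() {
    send_async(); // warning: unused `SendFut` that must be used
}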

Idea: Show an example in README for select

First of all, I wanted to say thank you for making this library; flume made the transition from a Go mindset to Rust multithreading super easy. I would really love a section on select that shows how to listen on multiple channels and handle whichever arrives first. When I first started looking at this library, my brain was asking "How do I do Go's select statement over channels in Rust?". Making this clearer could help other people, and it would make flume stand out by showing how easy such a powerful technique is.

Maybe the example could spawn two threads that send different kinds of data, and show how select is meant to handle the various scenarios that could happen.
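A hedged sketch of what such a section might show, using the Selector API described earlier (requires the select feature): two threads send different kinds of data, and the selector handles whichever arrives first, much like Go's select over channels. Note that each closure receives a Result, since the channel may have disconnected.

fn main() {
    let (tx0, rx0) = flume::unbounded::<u32>();
    let (tx1, rx1) = flume::unbounded::<String>();

    std::thread::spawn(move || tx0.send(42).unwrap());
    std::thread::spawn(move || tx1.send("hello".to_owned()).unwrap());

    for _ in 0..2 {
        // Wait on both channels at once; whichever operation completes
        // first has its closure map the outcome into a common type.
        let line = flume::Selector::new()
            .recv(&rx0, |n| format!("number: {:?}", n))
            .recv(&rx1, |s| format!("string: {:?}", s))
            .wait();
        println!("{}", line);
    }
}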

Single-item latest-only notification channel where writer overwrites atomically rather than blocking

As discussed with @zesterer on Zulip:

I'd like to have a channel very similar to a one-item bounded channel, except that when the writer goes to send, if the channel already has an item in it, the writer atomically replaces the item rather than blocking. Readers would block as usual if they receive on an empty channel.

If item 1 is in the channel, and a reader receives in parallel with a writer sending item 2, one of two things would happen: either the reader gets item 1 and the channel contains item 2, or the reader gets item 2 and the channel is empty.

I would use this for notification-style channels where only the latest message is useful, such as "the file is now N bytes long".
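In the meantime, a rough approximation with today's API (assuming a recent flume release): on Full, discard the stale item and retry. Unlike the proposal, the replacement is not atomic, and the writer needs its own Receiver handle, so this mostly illustrates why a dedicated channel type would be nicer.

use flume::TrySendError;

fn send_latest<T>(tx: &flume::Sender<T>, rx: &flume::Receiver<T>, mut value: T) {
    loop {
        match tx.try_send(value) {
            Ok(()) => return,
            Err(TrySendError::Full(v)) => {
                // Drop the stale message, then retry with the new one.
                let _ = rx.try_recv();
                value = v;
            }
            Err(TrySendError::Disconnected(_)) => return,
        }
    }
}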

LICENSE files are missing

Hello,

Please add files with full licenses text (MIT and Apache-2.0), it is required by those licenses.

Thanks!


I would appreciate a new release.

Add a changelog

Hi there and thanks for the project!

I noticed that this project unfortunately does not have a changelog nor release notes nor git tags. That makes it fairly inconvenient to update this dependency. Just now, cargo outdated told me that I was using flume 0.8.x and that flume 0.9 was released already. And as far as I see it, the best way for me to find out what changed between those versions (in particular, what breaking changes), is to manually look through the commits. That's not optimal.

I know from experience that maintaining a CHANGELOG.md or writing release notes is kind of annoying, but it's really worth a ton for users of the library.

catch_unwind and Flume channels

Hi,

I'm trying to catch panics in some code with Flume:

    pub fn add_allocation(&self, address: usize, size: usize) {
        // simplified from real code.
        catch_unwind(|| {
            self.sender.send(
                AddAllocationCommand {
                    address,
                    size,
                }.into(),
            ).unwrap_or(());
        });
    }

This gives the following error:

error[E0277]: the type `UnsafeCell<flume::Chan<TrackingCommandEnum>>` may contain interior mutability and a reference may not be safely transferrable across a catch_unwind boundary
     --> src/api.rs:359:9
      |
  359 |         catch_unwind(|| {
      |         ^^^^^^^^^^^^ `UnsafeCell<flume::Chan<TrackingCommandEnum>>` may contain interior mutability and a reference may not be safely transferrable across a catch_unwind boundary
      |
     ::: /home/itamarst/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std/src/panic.rs:430:40
      |
  430 | pub fn catch_unwind<F: FnOnce() -> R + UnwindSafe, R>(f: F) -> Result<R> {
      |                                        ---------- required by this bound in `catch_unwind`
      |
      = help: within `&SendToStateThread`, the trait `RefUnwindSafe` is not implemented for `UnsafeCell<flume::Chan<TrackingCommandEnum>>`
      = note: required because it appears within the type `spin::mutex::spin::SpinMutex<flume::Chan<TrackingCommandEnum>>`
      = note: required because it appears within the type `spin::mutex::Mutex<flume::Chan<TrackingCommandEnum>>`
      = note: required because it appears within the type `flume::Shared<TrackingCommandEnum>`
      = note: required because it appears within the type `alloc::sync::ArcInner<flume::Shared<TrackingCommandEnum>>`
      = note: required because it appears within the type `PhantomData<alloc::sync::ArcInner<flume::Shared<TrackingCommandEnum>>>`
      = note: required because it appears within the type `Arc<flume::Shared<TrackingCommandEnum>>`
      = note: required because it appears within the type `flume::Sender<TrackingCommandEnum>`
      = note: required because it appears within the type `SendToStateThread`
      = note: required because it appears within the type `&SendToStateThread`
      = note: required because of the requirements on the impl of `UnwindSafe` for `&&SendToStateThread`
      = note: required because it appears within the type `[closure@src/api.rs:359:22: 368:10]`
  
  error: aborting due to previous error; 1 warning emitted
1. Is this actually a problem? I can work around it with AssertUnwindSafe... but that's a bad idea if it really isn't unwind safe.
2. If it is unwind safe, the relevant types should probably implement the UnwindSafe trait.
3. If it is not unwind safe... any suggestions? I need to really, really make sure no panics escape, since this is called across an FFI boundary (I don't want to abort the process).
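For reference, a hedged sketch of the AssertUnwindSafe workaround from question 1, in a simplified, hypothetical form of the method above; it is only sound if a panic really cannot leave the channel in a broken state.

use std::panic::{catch_unwind, AssertUnwindSafe};

fn add_allocation(sender: &flume::Sender<(usize, usize)>, address: usize, size: usize) {
    // AssertUnwindSafe waives the RefUnwindSafe requirement the compiler
    // complains about; whether that is justified is exactly question 1.
    let _ = catch_unwind(AssertUnwindSafe(|| {
        sender.send((address, size)).unwrap_or(());
    }));
}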

Consider implementing a way to try_send without consuming the message

I find myself in a situation where I want to try to send a message on a channel and, if the buffer is full, send it to another "overflow" channel instead.
Right now that's impossible without wasteful cloning, because try_send consumes the message.

This could be done either by returning the original message in the Result's Err when there was an error, or by implementing a Permit API similar to tokio::sync::mpsc's; see https://docs.rs/tokio/1.0.1/tokio/sync/mpsc/struct.Permit.html

I'm personally in favor of the Permit API: done right, it would allow other things too, like acquiring a bunch of Permits (or one bigger Permit, depending on how it's implemented) and sending a bunch of messages in one go without waiting. It could probably make sending multiple messages in one go more efficient.

On the other hand, I could wrap the message in an Arc; this would solve the wastefulness of the cloning I do. But a Permit API similar to tokio's could be useful for the other things I described (see the sketch below).
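A sketch of the overflow pattern in question. Note that the flume 0.10.6 source quoted later in this document shows TrySendError::Full(msg) and TrySendError::Disconnected(msg) carrying the message, so on recent versions the value can already be recovered from the error without cloning:

use flume::TrySendError;

fn send_or_overflow<T>(primary: &flume::Sender<T>, overflow: &flume::Sender<T>, msg: T) {
    match primary.try_send(msg) {
        Ok(()) => {}
        // The message comes back inside the error, so it can be
        // re-routed without a clone.
        Err(TrySendError::Full(msg)) | Err(TrySendError::Disconnected(msg)) => {
            let _ = overflow.send(msg);
        }
    }
}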

Checkpoint Charlie?

The docs to flume::bounded() (https://docs.rs/flume/0.9.2/flume/fn.bounded.html) say:

flume/src/lib.rs

Lines 897 to 900 in 89ce729

/// Like `std::sync::mpsc`, `flume` supports 'rendezvous' channels. A bounded queue with a maximum capacity of zero
/// will block senders until a receiver is available to take the value. You can imagine a rendezvous channel as a
/// 'Checkpoint Charlie'-style location at which senders and receivers perform a handshake and transfer ownership of a
/// value.

What are you referring to with 'Checkpoint Charlie'-style location?

To me, this sounds more like a 'Glienicker Brücke'-style location; see https://en.wikipedia.org/wiki/Glienicke_Bridge.

I don't know whether there were handshakes involved (I wasn't there), but it sounds more like Glienicker Brücke than Checkpoint Charlie to me ....

Is there a reason not to provide a `Sender.subscribe() -> Receiver` method?

As far as I can tell, it should be pretty easy to provide such a method, maybe something like

impl<T> Sender<T> {
    pub fn subscribe(&self) -> Receiver<T> {
        self.shared.receiver_count.fetch_add(1, Ordering::Relaxed);
        Receiver { shared: self.shared.clone() }
    }
}

Is there a reason not to provide this? Does flume guarantee that if all receivers are dropped then send() will always fail after that, or something similar to that?

tokio::sync::broadcast has a similar API, and I've found it to be useful occasionally.

Edit: ah, nevermind, I do see that they call disconnect_all() when all receivers/senders have dropped.

Idea: channel_match! macro

channel_match!(
    rx0 => |x: X| println!("{}", x),
    rx1 => |y: Y| println!("{}", y),
);

...which would expand to something like:

enum SelectMatch2<Q, S> {
    A(Q),
    B(S),
}

{
    let a = |x: X| println!("{}", x);
    let b = |y: Y| println!("{}", y);
    let result = flume::Selector::new()
        .recv(&rx0, |x| SelectMatch2::<X, Y>::A(x))
        .recv(&rx1, |y| SelectMatch2::<X, Y>::B(y))
        .wait();
    match result {
        SelectMatch2::A(x) => a(x),
        SelectMatch2::B(y) => b(y),
    }
}

There might be some generic magic I don't know about that could make this even simpler.

Documentation Request: Does dropping all senders prevent draining a channel?

From a quick skim of the code, it looks to me like disconnected errors are only triggered once the channel is drained, but the documentation suggests otherwise:

pub fn try_recv(&self) -> Result<T, TryRecvError>
Attempt to fetch an incoming value from the channel associated with this receiver, returning an error if the channel is empty or all channel senders have been dropped.
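A small sketch of the behaviour that skim suggests (worth confirming against the implementation): queued messages are still delivered after every sender is dropped, and only then does the channel report Disconnected.

fn main() {
    let (tx, rx) = flume::unbounded();
    tx.send(1).unwrap();
    drop(tx); // all senders gone, but one message is still queued

    // The queued value is still delivered...
    assert_eq!(rx.try_recv().ok(), Some(1));
    // ...and only now does try_recv return an error.
    assert!(rx.try_recv().is_err());
}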

Question: Is message order through a channel preserved?

If I have two async tasks (Producer1, Consumer1) operating on a bounded channel, is it normal for the consumer to receive messages out of order?

  • I understand it may be necessary for MPMC (or MPMC that performs well), but wanted to double-check that message order is not guaranteed/preserved by design for any flume channel configurations (e.g. SPSC/MPMC, async/sync, bounded/unbounded).

Thanks-

error[E0658]: or-patterns syntax is experimental

It looks like v0.10.6 is using the new or-patterns syntax, which is set to be stabilized in Rust 1.53.

However, it no longer builds on stable Rust (i.e. 1.52):

   Compiling flume v0.10.6
error[E0658]: or-patterns syntax is experimental
Error:   --> /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/flume-0.10.6/src/lib.rs:90:14
   |
90 |         let (Self::Full(msg) | Self::Disconnected(msg)) = self;
   |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   |
   = note: see issue #54883 <https://github.com/rust-lang/rust/issues/54883> for more information

error[E0658]: or-patterns syntax is experimental
Error:    --> /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/flume-0.10.6/src/lib.rs:128:14
    |
128 |         let (Self::Timeout(msg) | Self::Disconnected(msg)) = self;
    |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
    = note: see issue #54883 <https://github.com/rust-lang/rust/issues/54883> for more information

Was this intentional?
