
futures-codec's People

Contributors

anakos, cbs228, dignifiedquire, fogti, goto-bus-stop, katyo, kerollmops, kestrer, matthunz, mmstick, najamelan, nemo157, povilasb, ryankurte, thomaseizinger, tomaka


futures-codec's Issues

Closing incoming stream doesn't yield None from Framed

It looks like EOF isn't being handled in the Stream implementation: if the underlying AsyncRead returns 0 bytes, it will just be polled again and again in an infinite loop. Sticking a dbg! in gives a stream of:

[src/framed_read.rs:88] Pin::new(&mut this.inner).poll_read(cx, &mut buf) = Ready(
    Ok(
        0,
    ),
)
[src/framed_read.rs:88] Pin::new(&mut this.inner).poll_read(cx, &mut buf) = Ready(
    Ok(
        0,
    ),
)
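
A minimal sketch of the missing handling, written as a free function over the reader (the helper is illustrative, not the crate's actual internals): a zero-byte read signals EOF, so the stream should end instead of being polled again.

use std::pin::Pin;
use std::task::{Context, Poll};
use futures::io::AsyncRead;
use futures::ready;

// Treat a read of 0 bytes as EOF and end the stream, rather than
// polling the reader again in an infinite loop.
fn poll_read_eof_aware<R: AsyncRead + Unpin>(
    io: &mut R,
    buf: &mut [u8],
    cx: &mut Context<'_>,
) -> Poll<Option<std::io::Result<usize>>> {
    let n = ready!(Pin::new(io).poll_read(cx, buf))?;
    if n == 0 {
        // EOF: yield None so the Framed stream terminates.
        return Poll::Ready(None);
    }
    Poll::Ready(Some(Ok(n)))
}

In FramedRead itself, the None would also need to be preceded by decoding whatever is still buffered.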

Add ability to put an upper limit on message sizes

To be able to use this codec on top of network protocols to communicate with untrusted clients, the codec layer should optionally be able to discard a message (returning an I/O error) when the buffered, undecoded content grows larger than a library-user-specified size.

This limit should be imposed not only on next (i.e. recv) but also on send, although on the send side it should be non-fatal, e.g. the underlying stream could still be used to try to send another message. If next fails because of an oversized message, no further messages can be retrieved, although that depends on the encoding used: if the codec allows skipping ahead to the beginning of the next message, the error would be non-fatal. The codec should therefore have a method that returns a bool indicating whether it supports reading messages after one message was too big. But even if next fails and can no longer retrieve messages, send should still be able to send further messages, e.g. to signal to the client that the connection is being aborted because of an oversized message.
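
A minimal sketch of the receive-side guard being proposed, assuming a hypothetical max_frame_length field on the codec (the name and shape are illustrative, not an existing API):

use bytes::BytesMut;
use std::io;

struct BoundedCodec {
    // User-specified upper limit on buffered, undecoded bytes (hypothetical).
    max_frame_length: usize,
}

impl BoundedCodec {
    // Called at the top of decode: discard the message with an I/O
    // error if the undecoded content has grown past the limit.
    fn check(&self, src: &BytesMut) -> io::Result<()> {
        if src.len() > self.max_frame_length {
            return Err(io::Error::new(
                io::ErrorKind::InvalidData,
                "frame exceeds maximum length",
            ));
        }
        Ok(())
    }
}

Whether such an error is fatal would then depend on the skip-ahead capability described above.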

Backpressure on FramedWrite senders

@najamelan, this issue follows up on discussion from #11.

This issue explores the need to apply "backpressure" in the Framer. Backpressure reduces the speed of a process which is producing data to that of the process which is consuming the data. In synchronous code, this is accomplished by blocking. In async code, this is accomplished by preventing input when the I/O is not yet ready for more. If backpressure is not applied, data will buffer forever—possibly until the sending system exhausts its available memory.

At present, FramedWrite::start_send() does not impose backpressure on clients. Backpressure is only applied during FramedWrite::poll_write() and FramedWrite::poll_flush(). The latter two poll calls may return Poll::Pending if the I/O has not yet completed. This will impose backpressure on the sender.

There is an edge-case which is not handled by the present FramedWrite: bottomless streams. A bottomless stream always returns the next item and will never return None. Applications which may use bottomless streams include "live-streamed" media, which has no definite end.

Sink::send_all() will read an entire Stream and submit it all at once to the Sink. The main loop indicates that the Sink will only be flushed when either:

  1. The source Stream is exhausted; OR
  2. The source Stream blocks with Pending.

If the Stream never does either of these things, then the loop will run forever. I have a demonstration of this on my feature/write_backpressure branch. In this test, my stream is limited so that it won't use all of your memory, but it does prove that all the data is provided to the I/O at once, at the end.

This kind of scenario might unfold when you are trying to send a file that you can read very quickly, or to send data that is generated in memory. As long as the source never returns Pending, the data will buffer forever, and the FramedWrite won't even try to start sending it to the I/O.

Tokio solves this issue by limiting the amount of data that start_send() will accept before a flush is attempted.

I would like to propose adopting a similar behavior for FramedWrite, but with a twist: There is no need to flush the entire buffer. Instead, we should just try to AsyncWrite::poll_write() what we can. This should help guarantee that progress is made. We may need a caller-adjustable "high water mark" which sets the critical buffer length at which we start to do this.
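
A rough sketch of that behavior as a free function over the write half (high_water_mark is the hypothetical caller-adjustable knob; none of this is existing futures-codec API):

use std::pin::Pin;
use std::task::{Context, Poll};
use bytes::{Buf, BytesMut};
use futures::io::AsyncWrite;
use futures::ready;

// Once the buffer crosses the high-water mark, drain what the I/O will
// accept before letting the sender enqueue more. There is no need to
// flush everything; writing part of it is enough to guarantee progress.
fn poll_ready_with_backpressure<W: AsyncWrite + Unpin>(
    io: &mut W,
    buffer: &mut BytesMut,
    high_water_mark: usize,
    cx: &mut Context<'_>,
) -> Poll<std::io::Result<()>> {
    while buffer.len() >= high_water_mark {
        let n = ready!(Pin::new(&mut *io).poll_write(cx, &buffer[..]))?;
        if n == 0 {
            return Poll::Ready(Err(std::io::ErrorKind::WriteZero.into()));
        }
        buffer.advance(n); // drop the bytes the I/O accepted
    }
    Poll::Ready(Ok(()))
}

The Pending that ready! propagates from inside the loop is what actually transmits the backpressure to the caller.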

Cross-platform tests and examples for async network code, now without having to use TCP!

I just released a new crate that allows tests and examples without TCP; it should be more convenient than Cursor. I mention it here as I imagine it could interest you all:

https://github.com/najamelan/futures_ringbuf

I also have a serde/CBOR codec ready. I will try to publish that soon, but I'm kind of hoping to see a new release of futures_codec with the bug fix first. I will let you know here when I publish the CBOR codec. It will allow sending arbitrary Rust structs as long as they can be serialized with serde.

@cbs228

Unreasonable io::Error constraint

The constraint on Decoder::Error (type Error: From<io::Error> - https://github.com/matthunz/futures-codec/blob/master/src/decoder.rs#L11) is, at least to me, unreasonable. This should be able to be any kind of decoding error, given that not all decoding errors are in fact related to std::io::Error.
One way of going about this would be to keep returning Result<Self::Item, Self::Error> but carry the outcome in the item type, e.g. Self::Item = Option<Packet> or Self::Item = Result<Packet, DecodeError>. However, an Option that would always be Some is simply not something that should be accepted in such a design, merely because the error type might not be whatever type one would want.
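
For reference, a sketch of the trait as it stands, next to one possible relaxation (FramedError is hypothetical, not a proposal from the crate):

use bytes::BytesMut;
use std::io;

// Roughly the current shape: Error must absorb io::Error.
pub trait Decoder {
    type Item;
    type Error: From<io::Error>;
    fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error>;
}

// One way to drop the bound: let the framing layer keep I/O failures
// and codec failures apart, so Decoder::Error can be any type.
pub enum FramedError<E> {
    Io(io::Error),
    Codec(E),
}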

Unify framed transports with Transport

This crate has some redundancy in order to provide a good abstraction (such as implementing AsyncRead for Fuse). However, this doesn't make for a nice API (e.g. FramedWrite can't be a Stream).

I propose a Transport type as a solution to this. Most of the adapters would live inside one struct, which would let us reuse methods and trait implementations (like Sink and Stream).

You could then make a framed stream with Transport::framed(io, codec) or Transport::framed_write(io, encoder)

The branch is here:
https://github.com/matthunz/futures-codec/tree/transports
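
A minimal sketch of what the unification might look like (the constructor names follow the proposal above; the struct layout is a guess, not the branch's actual code):

// One struct owns the I/O object and the codec, so the Stream and Sink
// impls (and any adapters) are written once and shared.
pub struct Transport<T, U> {
    io: T,
    codec: U,
}

impl<T, U> Transport<T, U> {
    // Construct a combined Stream + Sink from an I/O object and a codec.
    pub fn framed(io: T, codec: U) -> Self {
        Transport { io, codec }
    }
}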

You guys have an opinion on this?
@najamelan @cbs228 @Nemo157

Compatibility with tokio_util::codec

I found this crate pretty helpful for running codecs when I want to work without tokio. But it is currently missing some methods, which stops me from using it.
My PR #42 solves this issue.

Handle "bytes remaining in stream" in FramedRead

I'm working on a streaming decoder for Debian control files (among other OS-related files and IPC formats), which have their entries separated by empty lines, except for the final package in a list, which uses EOF to indicate the end of the last entry.

It seems there isn't a way to define this sort of behavior here -- that any remaining buffer can be treated as the final output for decoding.

The issue can be reproduced from

https://github.com/pop-os/deb-control

By running one of the example tests:

cargo run --example testing
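
For comparison, tokio's codec layer handles this case with a decode_eof hook. A sketch of the same idea here (the method name is borrowed from tokio_util::codec::Decoder and is not currently part of this crate):

use bytes::BytesMut;

trait EofDecoder {
    type Item;
    type Error;

    fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error>;

    // Called when the underlying reader reaches EOF with bytes still
    // buffered, letting a codec treat the remainder as the final frame.
    // The default simply defers to the normal decoder.
    fn decode_eof(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {
        self.decode(src)
    }
}

A control-file codec could then override decode_eof to emit whatever remains in the buffer as the last entry.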

Returning Ok(None) ends the stream?

I'm working on a Server-Sent Events library that uses futures_codec to decode an event stream line by line. Some servers send heartbeat messages as comments (lines starting with :), which should be discarded by the decoder. So it's possible to get a stream of input like:

:heartbeat signal

[10 seconds later]
:heartbeat signal

[10 seconds later]
:heartbeat signal

After parsing the :heartbeat signal lines and getting to the end of the current buffer, my parser returns Ok(None), because there were no actual event messages in the input. At this point, FramedRead will return Poll::Ready(None) instead of Poll::Pending, thus ending the stream.

This test simulates the behaviour by emitting single bytes from an AsyncRead impl: goto-bus-stop@3bebe0c
The a bytes represent event messages, and the b bytes represent comments that should be ignored. When a b is reached, the decoder returns Ok(None) and the stream is ended.

Is this the intended behaviour? I think ending the stream is a legitimate use case, but I'm not sure how to correctly implement a parser that doesn't emit a message for every input. I thought I could work around it by not consuming the comment lines from the BytesMut input until there is an actual event message, but that would require re-parsing those comment lines on every call to decode(), and the buffer.is_empty() check will return an error because the buffer isn't empty after the call to decode().

For my use case, returning Poll::Pending when the decoder returns Ok(None) would be a solution. The decoder itself could then no longer decide to close the stream, unless the return value is changed from Option<Self::Item> to something like

enum DecodeResult<T> { // T = Self::Item
    Item(T),
    Pending, // no item here, need more data
    End, // done, close the input stream
}
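
Given such an enum, FramedRead::poll_next could map the variants so that only an explicit End closes the stream. A sketch (hypothetical, using the DecodeResult proposed above):

use std::task::Poll;

enum DecodeResult<T> {
    Item(T),
    Pending,
    End,
}

// None here means "fall through and poll the reader for more bytes",
// so comment-only input no longer terminates the stream.
fn map_decode<T, E>(res: DecodeResult<T>) -> Option<Poll<Option<Result<T, E>>>> {
    match res {
        DecodeResult::Item(item) => Some(Poll::Ready(Some(Ok(item)))),
        DecodeResult::Pending => None,
        DecodeResult::End => Some(Poll::Ready(None)),
    }
}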

LinesCodec can panic if the buffer is full

LinesCodec uses BytesMut::put where BytesCodec uses extend_from_slice, and BytesMut::put can panic if the buffer is full. Ideally a codec library has some way to provide backpressure (maybe extend the buffer the first time, and later return Poll::Pending from Sink::poll_ready until there is sufficient space, or at least until a read has happened?).

In brief, I don't know what the best solution is; it will take some thought. But currently neither FramedWrite2 nor Encoder provides a mechanism to prevent unbounded buffering, and the current code can panic.

Going further, it might be good to let the client choose the maximum size of the buffer.
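
For what it's worth, a narrow sketch of the panic fix (a hypothetical line encoder, not the crate's actual LinesCodec): extend_from_slice reserves and grows the buffer instead of panicking when spare capacity runs out, which is what BufMut::put does on bytes 0.4.

use bytes::BytesMut;

// Append a line the way BytesCodec does: extend_from_slice grows the
// buffer as needed rather than panicking when it is full.
fn encode_line(item: &str, dst: &mut BytesMut) {
    dst.extend_from_slice(item.as_bytes());
    dst.extend_from_slice(b"\n");
}

This only removes the panic; the backpressure and maximum-buffer-size questions above still need a design.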

Publish new version

Can you publish a new version of this crate to crates.io? I'd like a new release which contains #18.

It might also be a good idea to tag released versions with annotated tags (git tag -a …). GH will auto-detect these as "releases" and add them to your releases page.

Publish new version to crates.io

Hi, the bytes upgrade means that you've lost backwards compatibility with old bytes versions. Please publish a new version as soon as possible.
