
Comments (5)

carllerche commented on July 22, 2024

I believe that this describes two separate issues.

  • Only reconnecting when there is a request to send.
  • The inability to purge canceled requests from the queue.

Reconnect

Thinking more about it, I am leaning towards refining Service::poll_ready with the following:

  • An error is terminal. The service will never work again.
  • The fn should only be called when there is a request to send. This means that if the request "goes away", poll_ready should not be called anymore.

This means that, if there are no requests in the buffer, poll_ready would not be called and the connection would not be established.

The reason I like this is that it models what would happen if you did a select over the request being canceled and the service's ready future. It also encapsulates the poll_ready concept and avoids it leaking out of the abstraction.

The problem is that the connection logic might end up halfway there (socket connected and TLS handshake in progress) when poll_ready stops being called, and the handshake never finishes. Either this is OK (the remote will time out), or a task could be spawned to drive the connection to completion regardless of whether poll_ready is called. I would also say that this problem exists today w/ the current Service API.
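To make this concrete, here is a rough sketch of the proposed semantics. It is hypothetical (the names are made up, and it sidesteps the real Service trait and Context/waker plumbing), but it shows both points: connection work only happens when poll_ready is actually driven, i.e. when a caller has a request to send, and the first error latches as a terminal state.

```rust
use std::task::Poll;

// Hypothetical Reconnect-like state machine; not tower's real Reconnect type.
#[allow(dead_code)]
#[derive(Debug)]
enum State {
    Idle,                         // nothing happens until readiness is polled
    Connecting { attempts: u32 }, // dial + TLS handshake in flight
    Connected,
    Failed(String),               // terminal: every later poll_ready reports this
}

struct Reconnectish {
    state: State,
}

impl Reconnectish {
    fn new() -> Self {
        Reconnectish { state: State::Idle }
    }

    // Stand-in for Service::poll_ready (no Context here, to keep the sketch small).
    fn poll_ready(&mut self) -> Poll<Result<(), String>> {
        loop {
            match &mut self.state {
                // Connection work starts only because someone polled readiness,
                // which only happens when a request is waiting to be sent.
                State::Idle => self.state = State::Connecting { attempts: 0 },
                State::Connecting { attempts } => {
                    *attempts += 1;
                    let n = *attempts;
                    // Pretend the dial/handshake fails; a real impl would poll a
                    // connect future here and return Poll::Pending while it runs.
                    self.state = State::Failed(format!("connect failed after {n} attempt(s)"));
                }
                State::Connected => return Poll::Ready(Ok(())),
                // Terminal: the service never recovers on its own; the caller is
                // expected to log the error and build a fresh service to retry.
                State::Failed(e) => return Poll::Ready(Err(e.clone())),
            }
        }
    }
}

fn main() {
    let mut svc = Reconnectish::new();
    // First poll (driven because a request is waiting) hits the error...
    assert!(matches!(svc.poll_ready(), Poll::Ready(Err(_))));
    // ...and every later poll reports the same terminal error instead of retrying.
    assert!(matches!(svc.poll_ready(), Poll::Ready(Err(_))));
    println!("terminal state: {:?}", svc.state);
}
```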

Buffer

The second problem is that the queue used by Buffer does not eagerly release resources for canceled requests. I'm not exactly sure what you are proposing.

There are ways to deal with it w/o going w/ a Mutex<Vec<_>>, but of course, we shouldn't over-optimize for perf w/o numbers. Is the queue holding on to memory a real issue today? I would think it isn't a ton of memory. It is also worth noting that the mpsc queue algorithm can support iteration on the consumer side too w/o a ton of trouble.
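To illustrate what eagerly releasing resources could look like, here is a small sketch. It is not Buffer's actual internals; it just assumes each queued item carries the oneshot Sender for its response (using the futures 0.3 oneshot channel here), so a canceled caller shows up as a dropped Receiver that the consumer side can prune instead of keeping the request alive until it reaches the front.

```rust
// Assumed dependency: futures = "0.3"
use futures::channel::oneshot;
use std::collections::VecDeque;

// Hypothetical shape of a buffered item: the request plus the oneshot Sender
// used to hand the response back to the caller.
struct Queued {
    request: String,
    respond: oneshot::Sender<String>,
}

fn main() {
    let mut queue: VecDeque<Queued> = VecDeque::new();

    // Two callers enqueue requests and hold on to the receiving halves.
    let (tx1, rx1) = oneshot::channel();
    let (tx2, rx2) = oneshot::channel();
    queue.push_back(Queued { request: "GET /a".into(), respond: tx1 });
    queue.push_back(Queued { request: "GET /b".into(), respond: tx2 });

    // The first caller gives up; dropping the Receiver is the cancellation signal.
    drop(rx1);
    let _second_caller_still_waiting = rx2;

    // The consumer can prune canceled requests instead of holding on to them
    // until they happen to reach the front of the queue.
    queue.retain(|q| {
        let canceled = q.respond.is_canceled();
        if canceled {
            println!("releasing canceled request: {}", q.request);
        }
        !canceled
    });

    assert_eq!(queue.len(), 1);
    println!("still queued: {}", queue.front().unwrap().request);
}
```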


seanmonstar commented on July 22, 2024

This morning after thinking about it some more, I started to lean more in this direction:

  • Reconnect: In the proxy, we can treat the error from Reconnect::poll_ready as terminal, just as you mentioned. We would log that it happened, and then create a fresh Reconnect service.
  • Buffer: The memory waste itself isn't the larger issue, just something I noticed while reading through the source. However, as long as we can't iterate the waiters, we don't know if we should keep polling the inner Service::poll_ready.

Trying to combine the two needs a bit of a dance. So that the inner service in the proxy doesn't keep creating new Reconnect services, polling them, and thrashing on error over and over, we would only want to poll the new Reconnect if there is still a request waiting. However, we don't really know if there is, so what do we return from poll_ready?

If Buffer were to iterate its waiters in poll_ready and pop any canceled ones, then we could potentially just task::current().notify() in the inner service, and let the Buffer only poll it again if there are still requests waiting.
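Roughly, the poll_ready flow I have in mind would look something like the sketch below. It is hypothetical code, not tower's Buffer (the boolean flag stands in for however the waiters actually track cancellation), but it shows the shape: prune canceled waiters first, then only drive the inner service if a live request remains, otherwise stay not-ready and do no connection work.

```rust
use std::task::Poll;

// Hypothetical poll_ready flow; not Buffer's real implementation.
fn buffer_poll_ready<R>(
    waiters: &mut Vec<(R, /* canceled */ bool)>,
    inner_poll_ready: impl FnOnce() -> Poll<Result<(), String>>,
) -> Poll<Result<(), String>> {
    // Iterate the waiters and pop any whose caller has gone away.
    waiters.retain(|(_, canceled)| !*canceled);

    if waiters.is_empty() {
        // No live requests: don't connect and don't poll the inner service.
        // (The inner service can wake this task later, e.g. via
        // task::current().notify() in futures 0.1 terms.)
        return Poll::Pending;
    }

    // A live request is still waiting, so it is worth driving the inner service.
    inner_poll_ready()
}

fn main() {
    // One canceled waiter and one live one.
    let mut waiters = vec![("GET /a", true), ("GET /b", false)];
    let readiness = buffer_poll_ready(&mut waiters, || Poll::Ready(Ok(())));

    assert_eq!(waiters.len(), 1); // the canceled waiter was popped eagerly
    assert!(matches!(readiness, Poll::Ready(Ok(()))));
    println!("inner service polled because a live request remained");
}
```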


carllerche commented on July 22, 2024

Ah, sorry, there was another bit that I forgot to say in my previous comment.

The Buffer task implementation would always pop one request from the queue, then start calling poll_ready on the inner service. It would also call poll_cancel (or whatever it is) on the response oneshot. This way, it knows if the request is canceled. If the request is canceled, it pops another request from the queue. If all requests are canceled, this will effectively drain the queue.
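A rough sketch of that loop is below. It is hypothetical, not Buffer's real code: the inner service's readiness is faked as never-ready, and cancellation is observed through the futures 0.3 oneshot Sender::poll_canceled (the poll_cancel mentioned above, in futures 0.1 terms). Because every caller has gone away, the loop drains the queue without the inner service ever becoming ready.

```rust
// Assumed dependency: futures = "0.3"
use futures::channel::oneshot;
use futures::executor::block_on;
use futures::future::poll_fn;
use std::collections::VecDeque;
use std::task::Poll;

fn main() {
    let mut queue: VecDeque<(&str, oneshot::Sender<String>)> = VecDeque::new();

    let (tx1, rx1) = oneshot::channel();
    let (tx2, rx2) = oneshot::channel();
    queue.push_back(("GET /a", tx1));
    queue.push_back(("GET /b", tx2));

    // Both callers go away before the inner service ever becomes ready.
    drop(rx1);
    drop(rx2);

    block_on(poll_fn(move |cx| {
        loop {
            // Pop the next request and make it the "current" one.
            let Some((req, mut respond)) = queue.pop_front() else {
                // Every request was canceled, so the queue drained without the
                // inner service ever becoming ready.
                return Poll::Ready(());
            };

            // Stand-in for inner.poll_ready(cx): pretend readiness never arrives,
            // so cancellation is the only way this loop makes progress.
            let inner_ready: Poll<()> = Poll::Pending;

            // Check whether the caller still wants the response.
            match respond.poll_canceled(cx) {
                Poll::Ready(()) => {
                    println!("request {req} was canceled; dropping it");
                    continue; // pop the next request
                }
                Poll::Pending => {
                    // Still wanted: keep it as the current request and wait for
                    // either inner readiness or cancellation to wake this task.
                    assert!(inner_ready.is_pending());
                    queue.push_front((req, respond));
                    return Poll::Pending;
                }
            }
        }
    }));

    println!("queue fully drained by cancellation");
}
```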


seanmonstar commented on July 22, 2024

Yes, that would help if there is a general timeout applied to all requests, since it's unlikely that a request at the front of the queue has not timed out while another further back has.

However, it doesn't account for a request being canceled for some other reason: in the proxy, the server connection could be closed (since we coalesce requests to the same target from different connections), or we could have received a RST_STREAM frame for it.


jonhoo commented on July 22, 2024

@seanmonstar has this since been fixed? If not, could you reiterate the remaining issues now that #72 has landed?

