tokio-rs / tokio-service
The core `Service` trait in Tokio and support
License: Apache License 2.0
For example, consider an RPC service with persistent connections; a typical workflow applies.
At step two we create an auth token and bind it to the socket (each socket has an associated HashMap where its state is stored). How can that be implemented via Service?
Decide what to do about #11.
Is there a reason why the Service trait does not use a mutable reference to self?
(Line 157 in c56afde)
Let's say I'd like to modify a client struct:
struct Client {
    transaction_id: u16,
}

impl Service for Client {
    // type Request = ...
    // ...
    fn call(&self, req: Self::Request) -> Self::Future {
        self.transaction_id += 1; // <---- this does not work, of course :(
        // ...
    }
}
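Since call takes &self, the usual workaround (short of changing the trait to take &mut self) is interior mutability. Here is a minimal sketch using std's Cell; Client and next_transaction_id are illustrative names standing in for the body of call, not tokio API:

```rust
use std::cell::Cell;

// Hypothetical client: the counter lives in a Cell so it can be
// advanced through a shared reference, mirroring `call(&self, ...)`.
struct Client {
    transaction_id: Cell<u16>,
}

impl Client {
    // Stand-in for Service::call: takes &self but still mutates the counter.
    fn next_transaction_id(&self) -> u16 {
        let id = self.transaction_id.get().wrapping_add(1);
        self.transaction_id.set(id);
        id
    }
}

fn main() {
    let client = Client { transaction_id: Cell::new(0) };
    assert_eq!(client.next_transaction_id(), 1);
    assert_eq!(client.next_transaction_id(), 2);
}
```

For types that must be Sync, an atomic (e.g. AtomicU16) plays the same role as Cell.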
This elaborates on rust-lang/futures-rs#213 (comment)
Would you be able to elaborate on "I don't think it can be useful in the current form"?
Sure. I think we discussed that in chat, but let me summarize my opinion.
If you're implementing a client:
There is no way for the client implementation to notify the caller that the service is ready again.
There is no guarantee that poll_ready() is called before call(), so you end up with two options, both of which render poll_ready() useless (i.e. you can do the same with SinkError and without poll_ready()):
a) arrange your call method to accept requests anyway (i.e. queue them internally)
b) always return an error when your service is not ready, so the caller retries the request on its own
My own use case at the time of the last discussion was: I have a pool of connections to the database and I want to send my next request to the least loaded connection. It might be implemented using different traits like:
trait InFlight { fn get_requests(&self) -> u64; }
trait BufferSizes { fn get_bytes_buffered(&self) -> u64; }
And I might implement a connection pool with different load-balancing strategies:
struct DistributeByNumberOfInflightRequests<S: Service + InFlight>(S);
struct DistributeByBytesBuffered<S: Service + BufferSizes>(S);
struct DistributeRoundRobin<S: Service>(S); // for any service
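The least-loaded strategy above can be sketched without any tokio types. InFlight is the hypothetical trait from the comment; Conn and least_loaded are made-up stand-ins:

```rust
// Hypothetical trait from the discussion: expose the number of
// requests currently in flight on a connection.
trait InFlight {
    fn get_requests(&self) -> u64;
}

struct Conn {
    in_flight: u64,
}

impl InFlight for Conn {
    fn get_requests(&self) -> u64 {
        self.in_flight
    }
}

// Pick the index of the connection with the fewest in-flight requests.
fn least_loaded<S: InFlight>(pool: &[S]) -> Option<usize> {
    pool.iter()
        .enumerate()
        .min_by_key(|&(_, c)| c.get_requests())
        .map(|(i, _)| i)
}

fn main() {
    let pool = vec![
        Conn { in_flight: 3 },
        Conn { in_flight: 1 },
        Conn { in_flight: 2 },
    ];
    assert_eq!(least_loaded(&pool), Some(1));
}
```

A DistributeByBytesBuffered wrapper would look identical with BufferSizes swapped in as the bound.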
If you're implementing a server, it's unclear when a transport might call poll_ready(); I can see two options.
Still, once we have already read a request there is no guarantee that poll_ready() returns true, so the transport layer has to be prepared to queue at least that one request.
At a glance, it still seems attractive to stop accepting connections under heavy load, but the fact that there is no notification of when the service is ready again makes it very error-prone. The most obvious way to handle NotReady is to time out and try again, but consider a system that processes 100k requests per second at a steady rate: its queues may fill up 100 times a second, so adding a retry timeout of anything larger than 10 ms may starve service usage for an arbitrarily long period of time. Such a system is very unpredictable; it may stop accepting connections while using only 10% CPU or so.
Just adding a notification mechanism to poll_ready() is not useful either, because it's just an equivalent of queueing the requests somewhere.
On the other hand, it's possible to build a pushback mechanism based on statistics: e.g. a middleware that counts the fraction of requests which got SinkErrors and applies a rate limit on the number of accepted connections per second if more than 10% of requests errored in the last second.
And a per-connection pushback is usually handled by tracking the number of in-flight requests.
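Tracking in-flight requests for per-connection pushback can be sketched with an atomic counter. InFlightLimit and its methods are hypothetical names for the sketch, not tokio API:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

// Admit a request only while the in-flight count is below a limit;
// callers that see `false` shed or queue the request themselves.
struct InFlightLimit {
    current: Arc<AtomicUsize>,
    max: usize,
}

impl InFlightLimit {
    fn try_start(&self) -> bool {
        let prev = self.current.fetch_add(1, Ordering::SeqCst);
        if prev >= self.max {
            // Over the limit: undo the increment and push back.
            self.current.fetch_sub(1, Ordering::SeqCst);
            false
        } else {
            true
        }
    }

    // Called when a response completes, freeing a slot.
    fn finish(&self) {
        self.current.fetch_sub(1, Ordering::SeqCst);
    }
}

fn main() {
    let limit = InFlightLimit {
        current: Arc::new(AtomicUsize::new(0)),
        max: 2,
    };
    assert!(limit.try_start());
    assert!(limit.try_start());
    assert!(!limit.try_start()); // third concurrent request is pushed back
    limit.finish();
    assert!(limit.try_start()); // a slot freed up
}
```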
Sorry for the long write-up, but at the end of the day: poll_ready() can be added later with a default implementation that is always ready, like it currently is in every example out there.

The NewService trait and the single method it provides are responsible for creating new service instances. I think it would be more intuitive if it were named Factory or ServiceFactory, etc.
So while creating a new trait based on the tokio_service::Service one, I ran into some problems. My current code (in short form) looks like:
// Some crate stuff here
// ZapError = std::io::Error
// ZapResult = future::Ok<Response, io::Error>
// Full source: https://github.com/oldaniel/zap
trait Controller {
    type Request = Request;
    type Response = Response;
    type Error = ZapError;
    type Future = ZapResult;

    fn call(&self, req: Request) -> ZapResult;
}

struct HelloWorld;

impl Controller for HelloWorld {
    fn call(&self, req: Request) -> ZapResult {
        let mut resp = Response::new();
        resp.body("Hello World!");
        resp.ok()
    }
}
fn main() {
    let addr = "0.0.0.0:8080".parse().unwrap();
    let mut server = Server::new(Http, addr);
    server.threads(8);
    server.serve(move || Ok(HelloWorld));
}
The error I ran into was:
error[E0277]: the trait bound `HelloWorld: tokio_service::Service` is not satisfied
--> examples/hello-world.rs:21:12
|
21 | server.serve(move || Ok(HelloWorld));
| ^^^^^ the trait `tokio_service::Service` is not implemented for `HelloWorld`
|
= note: required because of the requirements on the impl of `tokio_service::NewService` for `[closure@examples/hello-world.rs:21:18: 21:40]`
error: aborting due to previous error
So how can I create a custom trait based on tokio_service::Service?
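One answer, as a sketch rather than the crate's documented pattern: Rust's orphan rule rejects a blanket impl<T: Controller> Service for T when Service is a foreign trait, so the usual workaround is to bridge through a local newtype. The sketch below uses a minimal stand-in Service trait so it is self-contained; the real tokio_service::Service has associated Request/Response/Error/Future types:

```rust
// Minimal stand-in; in real code this trait comes from tokio_service
// and `call` returns a Future instead of a plain value.
trait Service {
    fn call(&self, req: String) -> String;
}

trait Controller {
    fn handle(&self, req: String) -> String;
}

// The orphan rule forbids `impl<T: Controller> Service for T` for a
// foreign `Service`, so bridge through a local newtype wrapper.
struct ControllerService<C>(C);

impl<C: Controller> Service for ControllerService<C> {
    fn call(&self, req: String) -> String {
        self.0.handle(req)
    }
}

struct HelloWorld;

impl Controller for HelloWorld {
    fn handle(&self, _req: String) -> String {
        "Hello World!".to_string()
    }
}

fn main() {
    let svc = ControllerService(HelloWorld);
    assert_eq!(svc.call("GET /".to_string()), "Hello World!");
}
```

The server would then be given ControllerService(HelloWorld) instead of HelloWorld directly.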
As FramedIo is now Stream + Sink, the server-side Service is also basically Stream + Sink.
Here is how I can write a server example (also note that I don't need any boxing to reply with a future):
let (stream, sink) = socket.framed(Codec).split();
sink.send_all(
stream
.and_then(|original_line| {
Timeout::new(Duration::new(5, 0), &handle).unwrap()
.map(move |()| format!("after timeout: {}", original_line))
})
)
And pipelining with a limit of 5 in-flight requests is done in one line using the buffered combinator:
let (stream, sink) = socket.framed(Codec).split();
sink.send_all(
stream
.map(|original_line| {
Timeout::new(Duration::new(5, 0), &handle).unwrap()
.map(move |()| format!("after timeout: {}", original_line))
})
.buffered(5)
)
For client protocols the code might be more complex, but presumably a client protocol is a Sink which embeds futures::oneshot::Complete in it, or somesuch; I need more experimentation with that.
And I think this solves 90% of the backpressure story.
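The "Sink that embeds a completion handle per request" idea can be illustrated with std channels standing in for futures' oneshot; the Request struct and the echo transport below are made up for the sketch:

```rust
use std::sync::mpsc;
use std::thread;

// Each request carries the channel on which its reply will be delivered,
// mimicking a per-request oneshot::Complete embedded in the Sink item.
struct Request {
    body: String,
    reply: mpsc::Sender<String>,
}

fn main() {
    let (tx, rx) = mpsc::channel::<Request>();

    // "Transport" thread: reads requests, replies on the embedded channel.
    let server = thread::spawn(move || {
        for req in rx {
            req.reply.send(format!("echo: {}", req.body)).unwrap();
        }
    });

    // "Client": sends a request paired with a fresh reply channel,
    // then waits for that specific response.
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Request { body: "hello".into(), reply: reply_tx }).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), "echo: hello");

    drop(tx); // close the request stream so the transport loop ends
    server.join().unwrap();
}
```

With real futures, reply_tx/reply_rx would be a oneshot pair and the client would get back a future resolving to the response.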
I think adding a Filter abstraction may be useful.
Given that Tower duplicates/replicates a lot of the functionality expressed here, should we simply deprecate this crate? Leave it up, but add the word "deprecated" somewhere in the repo description and/or README.
Nobody seems to ask about it on Gitter, and code-wise it has clearly stagnated.
/cc @carllerche
Not sure of the long-term plans, but it would be nice if this project were published as a crate.
At the moment I can't see a clear direction for tokio-service and tokio-proto. Some people talk about tower being the replacement for tokio-service (I'd disagree), and there are statements like this one:
"Tower is going to be the replacement for tokio-service."
I'm a bit confused now.
I really don't care about naming, but if all these crates do the same thing we should combine the effort into one crate. If they are different, we should precisely define the differences.
So would you mind helping to clear up the situation?
Hey there, I'm curious what pattern I should employ to write unit tests for services and cannot find any examples here or elsewhere. Can someone point me at an example?
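One common pattern is to treat the service as a plain function: construct it, invoke call() with a request, and assert on the result; with futures 0.1 services you would typically call .wait() on the returned future. A minimal self-contained sketch with a stand-in Service trait and a hypothetical Upcase service:

```rust
// Stand-in for tokio_service::Service; the real trait's `call` returns
// a Future, which a test would resolve with `.wait()`.
trait Service {
    type Request;
    type Response;
    fn call(&self, req: Self::Request) -> Self::Response;
}

// Hypothetical service under test: upper-cases its input.
struct Upcase;

impl Service for Upcase {
    type Request = String;
    type Response = String;
    fn call(&self, req: String) -> String {
        req.to_uppercase()
    }
}

fn main() {
    // The "unit test": drive the service directly, no server needed.
    let svc = Upcase;
    assert_eq!(svc.call("ping".to_string()), "PING");
}
```

Because Service is just a trait, middleware can also be tested by wrapping a scripted fake service and asserting on what reaches it.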
Currently, there is no way for a service implementation to signal that it is "closed". For a client, this means that the remote has terminated the socket. For a server, this means that the service doesn't wish to process any further connections on the socket (under load or server shutdown, etc...)
Finagle provides this functionality by having a status fn that returns either Ready / Busy / Closed.
One option could be to add a poll_status fn, but I am not sure where that would leave poll_ready.
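The Finagle-style idea could look roughly like this; Status, ServiceStatus, and AlwaysReady are hypothetical names for the sketch, and poll_ready becomes a default method derived from poll_status:

```rust
// Three-state status as in Finagle: Ready / Busy / Closed.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Status {
    Ready,
    Busy,
    Closed,
}

trait ServiceStatus {
    fn poll_status(&self) -> Status;

    // `poll_ready` need not be a separate concept: it falls out of
    // `poll_status` as a default method.
    fn poll_ready(&self) -> bool {
        self.poll_status() == Status::Ready
    }
}

struct AlwaysReady;

impl ServiceStatus for AlwaysReady {
    fn poll_status(&self) -> Status {
        Status::Ready
    }
}

fn main() {
    assert!(AlwaysReady.poll_ready());
    assert_eq!(AlwaysReady.poll_status(), Status::Ready);
}
```

Closed then gives both clients (remote hung up) and servers (shutting down) a way to signal "stop sending me work" that Busy alone cannot express.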
I was messing around with abstractions on top of Service, and I'm not sure I understand it but it seems to me that there may be a rather deep problem with the API.
(This is basically an elaboration of the fears I've held for a while that without impl Trait in traits, or at least ATCs, futures will not really work.)
The high-level problem is this: Service::Future is essentially required to have no lifetime relationship to the borrow of self in call. A definition that linked those two lifetimes would require associated type constructors.
This means that you cannot borrow self at any asynchronous point during the service, only while constructing the future. This seems bad!
Consider this simple service combinator, which chains two services together:
struct ServiceChain<S1, S2> {
    first: S1,
    second: S2,
}

impl<S1, S2> Service for ServiceChain<S1, S2>
where
    S1: Service + Sync + 'static,
    S2: Service<Request = S1::Response, Error = S1::Error> + Sync + 'static,
    S1::Future: Send,
    S2::Future: Send,
{
    type Request = S1::Request;
    type Response = S2::Response;
    type Error = S1::Error;
    type Future = futures::future::BoxFuture<Self::Response, S1::Error>;

    fn call(&self, request: Self::Request) -> Self::Future {
        self.first.call(request)
            // The closure borrows `self.second`, but BoxFuture requires
            // 'static, so this is rejected by the compiler.
            .and_then(move |intermediate| self.second.call(intermediate))
            .boxed()
    }
}
Is it intentional that Service's future type is defined so that it could outlive the service being borrowed? Is a service supposed to have a method that constructs the type needed to process the request (separately for each request) and passes it into the future?
Chaining futures and streams can suffer from a somewhat related problem, which I was forced to solve with reference counting.
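The reference-counting workaround amounts to cloning an Arc into each stage so the continuation owns its handle to the shared state instead of borrowing it. A minimal sketch with plain closures standing in for futures (Config and make_stage are made up):

```rust
use std::sync::Arc;

// Shared state that every stage of the chain needs to read.
struct Config {
    prefix: String,
}

// Each stage captures its own Arc clone, so no stage borrows the
// caller's state across an "asynchronous" boundary.
fn make_stage(cfg: Arc<Config>) -> impl Fn(&str) -> String {
    move |input| format!("{}{}", cfg.prefix, input)
}

fn main() {
    let cfg = Arc::new(Config { prefix: "> ".to_string() });
    let stage1 = make_stage(cfg.clone());
    let stage2 = make_stage(cfg);
    // Both stages outlive any borrow of the original binding.
    assert_eq!(stage2(&stage1("hi")), "> > hi");
}
```

The cost is an atomic refcount per clone; the benefit is that every future in the chain is 'static, which is exactly what BoxFuture demands.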
I need to begin a PostgreSQL transaction when a new connection comes in, and commit or roll back when the connection closes.
A good chunk of this file is to just "map" a service: https://github.com/tokio-rs/tokio-line/blob/c9295fe73847fc31243ee7f7ed86e0bf6a843594/src/service.rs#L62
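A generic map combinator removes most of that boilerplate. The sketch below uses a minimal local stand-in for the Service trait (Map, Echo, and the field names are illustrative, not from tokio-line):

```rust
// Stand-in for tokio_service::Service; the real trait returns a Future
// from `call`, and a real Map would map over that future's item.
trait Service {
    type Request;
    type Response;
    fn call(&self, req: Self::Request) -> Self::Response;
}

// Generic combinator: wraps any service and post-processes its response.
struct Map<S, F> {
    inner: S,
    f: F,
}

impl<S, F, R> Service for Map<S, F>
where
    S: Service,
    F: Fn(S::Response) -> R,
{
    type Request = S::Request;
    type Response = R;
    fn call(&self, req: S::Request) -> R {
        (self.f)(self.inner.call(req))
    }
}

// Trivial service to map over.
struct Echo;

impl Service for Echo {
    type Request = String;
    type Response = String;
    fn call(&self, req: String) -> String {
        req
    }
}

fn main() {
    // Map the echoed string to its length, with no per-service boilerplate.
    let svc = Map { inner: Echo, f: |resp: String| resp.len() };
    assert_eq!(svc.call("hello".to_string()), 5);
}
```

Written once, Map replaces each hand-rolled wrapper service with a single constructor call.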