Working group dedicated to improving the foundations of async I/O in Rust
Please visit our rendered page for more information!
Home Page: https://rust-lang.github.io/wg-async/
License: Apache License 2.0
One option is to go `async` everywhere; another is to add `block_on` calls. But, at least in tokio, `block_on` calls cannot execute from async threads, so this can lead to panics if code that uses `block_on` ever winds up in an async context.

More details from my conversation with @Mark-Simulacrum.
A possible idea for a new project: ChatteRS, an IRC Client (application development)
This isn't a PR because I think it needs a bit of discussion first. Is this type of project unique enough? But if we agree, I'm happy to convert this into a PR.
"ChatteRS" is an IRC client, designed to be run as a desktop application. It might be purely text-based, or it could have a graphical user interface.
This IRC client doesn't aim to have the world's fanciest features or the slickest interface. But it strives to have a solid, small, easy-to-read codebase that can be easily developed, maintained, and extended over time.
When thinking about a new project, asking "how is it different from the others?" is probably the most important point. I see the following differences: there isn't a need for a custom runtime, custom futures, or anything really performance-sensitive; and this isn't a library, but an application that does "async-like" things (like reading from the network and getting user input). So it's likely to lean heavily on the ecosystem (crates.io) to provide a lot of functionality.
There's nothing fundamentally important about this project being an IRC client (I think any type of "desktop application" could fit here), but I thought an IRC client was a familiar example to a lot of people, and it represents a certain type of real-world application that often just "glues" together existing pieces of functionality, with some fairly simple "business logic" or "application logic" sitting in the middle. So with this in mind, one possible way to focus the project is on this "glue" aspect: can async Rust easily glue together a bunch of different async libraries into one cohesive application?
This new project is probably most similar to the existing "YouBuy (Traditional Server Application)" project in that both don't need custom/tailored runtimes, both don't need tight control of performance, both want to rely heavily on the crates.io ecosystem. I do sometimes hear complaints like "rust async only cares about network servers", so I wonder if some type of desktop application might help assuage those concerns (though maybe "SLOW" and "DistriData" already fill that role well enough)
Lastly, I wonder if each project should have an FAQ entry titled "Why is this project written in async Rust?" where we can explain why we think Rust is a good fit for that particular project. Each of the current projects probably already has a successful real-world version that's not written in Rust (indicating that Rust isn't the only suitable language for each of these projects). For this "ChatteRS" project, I think the answer to "why is this project written in async Rust?" is something like: Rust's strong type system and focus on correct code can help the project reduce the number of bugs. Even though the project is not performance-focused, Rust's general reputation of being "not wasteful" with resources is useful. The strength of the crates.io ecosystem is also a big draw here.
Alan is building a simple library for interacting with his favorite movie tracking app, "Numbersquard". Numbersquard exposes its data over a RESTful API. Alan, new to Rust, searches for an HTTP library and quickly gets sucked into long debates about async vs. sync I/O, Surf vs. reqwest vs. curl, tokio vs. async-std, etc. Alan is so filled with doubt that he doesn't even write a line of code.
This story is similar in many ways to #54, #49, and #45, but HTTP is so common and this particular situation so relevant to so many projects that I think it needs to be called out explicitly.
Hi! Because memory resources are limited on some embedded devices: is there any `no_std` asynchronous runtime planned for Rust?
It's very easy to write inherent `async fn`s on a type, but using them inside a trait method is much harder. For example, this pseudocode doesn't work, and making it work isn't intuitive:
```rust
impl Service for Handler {
    type Future = WhatTypeGoesHere;

    fn call(&self, req: Request) -> Self::Future {
        async move {
            self.count(req).await;
            Ok(self.construct_response())
        }
    }
}
```
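One common workaround today is to name the future as a boxed trait object. The sketch below uses hypothetical `Service`/`Handler`/`Request`/`Response` types mirroring the pseudocode above, and a minimal hand-rolled `block_on` (sufficient only for futures that never return `Poll::Pending`) so the example runs with the standard library alone:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

struct Request;
struct Response(u32);

trait Service {
    type Future: Future<Output = Result<Response, ()>>;
    fn call(&self, req: Request) -> Self::Future;
}

struct Handler {
    base: u32,
}

impl Service for Handler {
    // A boxed trait object gives the otherwise-unnameable `async` block a name.
    type Future = Pin<Box<dyn Future<Output = Result<Response, ()>>>>;

    fn call(&self, _req: Request) -> Self::Future {
        // Copy what the future needs so it doesn't borrow `self`:
        // the associated type above has no lifetime parameter.
        let base = self.base;
        Box::pin(async move { Ok::<_, ()>(Response(base + 1)) })
    }
}

// Minimal poll-to-completion helper with a no-op waker; enough for
// futures that are ready on the first poll.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(clone(std::ptr::null())) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = std::pin::pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let handler = Handler { base: 41 };
    let resp = block_on(handler.call(Request)).unwrap();
    println!("response: {}", resp.0);
}
```

The boxing costs a heap allocation per call and dynamic dispatch on every poll, which is exactly why people want first-class `async fn` in traits instead.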
Today there is no direct way to write an `async fn` when implementing a trait: you have to name the type of the future being returned, but the type of an `async fn` or `async` block cannot be named.

Old pages:
Decide where we are going to host the new page, update it, and point to it from this repo.
Alan was working on an endpoint which would be making a large number of async requests, and decided to do a join operation to await them in parallel. After a bunch of time spent googling, he realized that these operations weren't part of the language or std, but implemented in his runtime. He used the `join!` macro to poll them in parallel, but was disappointed to see that the tooling he was accustomed to, such as rustfmt or rust-analyzer, didn't work nicely with this macro. He fell back to using the `join3` function in the futures crate instead, since he only needed to join a few futures. However, as the endpoint grew, he is currently at the limit and wishes this limit didn't exist.
In order to write reliable and performant async code today, the user needs to be aware of which functions are blocking, and deal with them accordingly (for example, by moving them onto a dedicated thread pool or by finding an async alternative).
Determining if a function may block is non-trivial: there are no compiler errors, warnings, or lints, and blocking functions are not isolated to special crates but rather unpredictably interspersed with non-blocking, async-safe synchronous code. As an example, most of `std` can be used in async code, but much of `std::fs` and `std::net` cannot. To make matters worse, this failure mode is notoriously hard to detect: it often compiles and runs fine when the executor is under a small load (such as in unit tests), but can cause severe application-wide bottlenecks when load is increased (such as in production).
For the time being, we tell our users that if a sync function uses IO, IPC, timers or synchronization it may block, but such advice adds mental overhead and is prone to human error. I believe it's feasible to find an automated solution to this problem, and that such a solution delivers tangible value.
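As a minimal illustration of why this failure mode is hard to catch, here is an `async fn` that blocks its thread with `std::thread::sleep`. It compiles without any diagnostic, and a single-threaded poll loop (a hand-rolled stand-in for an executor, assumed here for self-containment) simply stalls for the full duration:

```rust
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};
use std::time::{Duration, Instant};

// Looks async, but blocks the calling thread for 50 ms during poll.
// The compiler emits no error, warning, or lint about this.
async fn sneaky_blocking() -> u32 {
    std::thread::sleep(Duration::from_millis(50)); // blocking call in async code
    7
}

// Minimal poll-to-completion helper (stand-in for an executor thread).
fn block_on<F: Future>(fut: F) -> F::Output {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(clone(std::ptr::null())) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = std::pin::pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let start = Instant::now();
    let value = block_on(sneaky_blocking());
    let elapsed = start.elapsed();
    // The "executor" thread was unavailable for the whole sleep; under load,
    // every other task scheduled on that thread would have been delayed too.
    assert_eq!(value, 7);
    assert!(elapsed >= Duration::from_millis(50));
    println!("blocked the executor thread for {:?}", elapsed);
}
```

Under light load (one task, as in a unit test) nothing observable goes wrong, which matches the "works in tests, bottlenecks in production" experience described above.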
I am not qualified to say, and I would like your thoughts! A couple of possibilities:
One idea is an attribute, such as `#[may_block]`, that can be applied on a per-function level.

Typically, blocking is either the result of a blocking syscall or an expensive compute operation. This leaves some ambiguous cases:
- Some functions in `std` involve expensive computation by any reasonable definition.
- `std::sync::Mutex::lock` is blocking only under certain circumstances. The `must_not_await` lint is aimed at preventing those circumstances (instead of discouraging its use altogether).
- `TcpStream::write` blocks by default, but can be overridden to not block using `TcpStream::set_nonblocking`.
- The `println!` and `eprintln!` macros may technically block. Since they lack async equivalents and rarely block in practice, they are widely used and relatively harmless.

Ambiguity aside, there are many clear-cut cases (e.g. `std::thread::sleep`, `std::fs::File::write`, and so on) which can benefit from a path forward without the need for bike-shedding.
If `fn foo() { bar() }` and `bar()` is blocking, then `foo()` is also blocking. In other words, blocking is transitive. If we can apply the annotation transitively, more cases can be detected. OTOH, this can be punted for later.
When dynamic dispatch is used, the concrete method is erased. Should annotations be applied to the trait method declaration or to the implementation?
The detection should only apply in "async contexts". Does the compiler already have a strict definition for that? Examples off the top of my head: the `poll` method of a custom future impl, or the `poll_fn` macro.

We need an async form of the `Read` and `Write` traits.
There are multiple versions of this in the ecosystem:

- futures: `AsyncRead`, `AsyncWrite`
- tokio: `AsyncRead`, `AsyncWrite`
- async_std: `Read`, `Write`

Relevant design decisions:

- Given `ReadBuf` (rust-lang/rust#78485), should we include a method without it, like `Read` has?

Grant access to this repo and associated projects to the right set of people.
Alan wants to intermix data processing with I/O, and he finds it difficult. He misses Kotlin's support for coroutines. Barbara laments the lack of structured concurrency or Rayon-like APIs.
@farnz opened a really interesting issue rust-lang/futures-rs#2387:
This would make a great status quo story!
> We've found a nasty footgun when we use `FuturesUnordered` (or `buffered` etc.) to get concurrency from a set of futures.
>
> Because `FuturesUnordered` only polls its contents when it is polled, it is possible for futures lurking in the queue to be surprised by a long poll, even though no individual future spends a long time in `poll()`. This causes issues in two cases:
>
> 1. When interfacing with an external system via the network: if you take a result from the stream with `while let Some(res) = stream.next().await` and then spend significant wall-clock time inside the loop (even if very little CPU time is involved because you're awaiting another network service), you can hit the external system's timeouts and fail unexpectedly.
> 2. When using an async-friendly semaphore (like Tokio provides), you can deadlock yourself by having the tasks that are waiting in the `FuturesUnordered` owning all the semaphores, while having an item in a `.for_each()` block after `buffer_unordered()` requiring a semaphore.
>
> https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=f58e77ba077b40eba40636a4e32b5710 shows the effect. Naïvely, you'd expect all the 10 to 20ms sleep futures to complete in under 40ms, and the 100ms sleep futures to take 100ms to 200ms. However, you can see that the sleep threads complete in the timescale expected and send the wakeup to the future that spawned them, but some of the short async `sleep_for` futures take over 100ms to complete, because while the thread signals them to wake up, the loop is `.await`ing a long sleep future and does not get round to polling the stream again for some time.
>
> We've found this in practice with things where the loop body is "nice" in the sense that it doesn't run for very long inside its `poll` function, but the total time spent in the loop body is large. The futures being polled by `FuturesUnordered` do:
>
> ```rust
> async fn do_select<T>(database: &Database, query: Query) -> Result<Vec<T>> {
>     let conn = database.get_conn().await?;
>     conn.select_query(query).await
> }
> ```
>
> and the main work looks like:
>
> ```rust
> async fn do_work(database: &Database) {
>     let work = do_select(database, FIND_WORK_QUERY).await?;
>     stream::iter(work.into_iter())
>         .map(|item| do_select(database, work_from_item(item)))
>         .buffered(5)
>         .for_each(|work_item| do_giant_work(work_item))
>         .await;
> }
> ```
>
> `do_giant_work` can take 20 seconds wall-clock time for big work items. It's possible for `get_conn` to open the connection (which has a 10 second idle timeout) for each future in the `buffered` set, send the first handshake packet, and then return `Poll::Pending` as it waits for the reply. When the first of the 5 in the `buffered` set returns `Poll::Ready(item)`, the code then runs `do_giant_work`, which takes 20 seconds. While `do_giant_work` is in control, nothing re-polls the `buffered` set of futures, and so the idle timeout kicks in server-side, and all 4 of the open connections get dropped because we've opened a connection and then not completed the handshake.
>
> We can mitigate the problem by using `spawn_with_handle` to ensure that the `do_select` work happens whenever the `do_giant_work` future awaits something, but this behaviour has surprised my team more than once (despite enough experience to diagnose this after the fact).
>
> I'm not sure that a perfect technical solution is possible; the issue is that `FuturesUnordered` is a sub-executor driven by the main executor, and if not polled, it can't poll its set of pending futures. Meanwhile, the external code is under no obligation to poll the `FuturesUnordered` in a timely fashion. Spawning the futures before putting them in the sub-executor works because the main executor then drives them, and the sub-executor is merely picking up final results, but futures have to be `'static` lifetime to be spawned.
I want to include (a) all people who commented on issues, (b) all people who opened PRs, and (c) all people who joined an AVD writing session!
Not having async closures can mean needing to resort to macros to create certain (fairly common) types of combinators (ones where a closure that returns a future runs into lifetime issues).
We experienced this in pantsbuild/pants#11548, and I ended up writing a macro to put what would have been a parameter to a closure-returning-a-future or async-closure directly on the stack instead: pantsbuild/pants#11759
As drafted in #10
This issue exists to collect links to blog posts that seem like status quo story ideas. Please provide the link and whatever other details you can that might help people to know what it's about.
This issue is not an end point! It's more of a "work queue". The idea is for folks to read the blog posts and identify potential status quo user stories and then open up fresh issues with those stories.
Please check the box next to a blog post link (and leave a comment) if you have done that!
Suggested format for each comment:
* [ ] BLOG_POST_URL
> Some quote from blog post that gives the general idea
We want to make sure that we have stories that cover the full range of experiences. This list is not yet exhaustive, but it is a good start from @estebank and @nikomatsakis:
Topic | Issues | PRs | Stories |
---|---|---|---|
Developing libraries for use in many different environments | #95 #54 #49 | | |
Targeting embedded systems or those with very narrow requirements | #85 #92 | | |
Maintaining systems into production | #76 #75 #69 | #99 | |
Configuring and tuning systems for high performance | #87 | #129 | |
Combining parallel processing of data with I/O | #107 | | |
Missing cool features from other languages that maybe Rust should have (JS, C#, Clojure, Kotlin, whatever) | #107 | | |
Ergonomics of using async Rust | #105 #82 | #104, #99 #129 | |
Learning async Rust when you already know Rust | #94, #104 | | |
Learning async Rust when you don't already know Rust | #95 | | |
Onboarding new employees into a system built using async Rust | #104 | | |
Integrating with other async runtimes, esp. in other languages | #67 | | |
Supporting systems that integrate multiple reactors | #87 | | |
Migrating an existing sync crate to async | | | |
Supporting both sync and async APIs in the same crate | | | |
Writing a custom async executor | #128 | #115 | |
(We will expand this issue body with xrefs to other issues, PRs, and stories)
Some stuff can't be written in a blocking model, even a non-blocking blocking model like async/await. By blocking I mean that when you do an async call, your coroutine blocks, and the called object is also blocked on that one piece of work until it returns asynchronously. An obvious example that doesn't fit this is a network stack layer where events come in from both below and above and also from timers. All these events have to be responded to immediately. Blocking (or logically blocking) just won't work.
Doing some kind of a "select" on the calling side solves the "only one outgoing call" problem, and the "being called blocks an object" (i.e. "only one incoming call") problem can probably be solved by having multiple proxy objects for your main object, so you don't block the main object. But this is all a very round-about way of getting the required behaviour.
So this is where the actor model comes in. I don't know whether you want to discuss the actor model in this review, but the subject keeps coming back. As the author of the Stakker crate, I am very happy to contribute to the discussion if it is of interest. Here are some subjects you might wish to cover in your review:
Different models of actor system in relation to async/await:
Impedance mismatch between async/await and actor model:
So I guess these are the questions this raises:
For example, could we make async/await suitable for actor-like tasks? The fundamental problem is that the state `self` is locked during the `.await`. If more than one coroutine could access `self` at the same time (i.e. interleaved at yield points), then the problem of blocking the actor queue would be solved. (If this could be done with only static checks, i.e. no runtime RefCells or whatever, so much the better.) However, maybe this is just completely incompatible with the async/await model, so it is simply not possible, and an external actor system is the only way to handle these kinds of problems.
For example, stuff of interest related to async/await for my own low-level actor system (Stakker):

- An `'until_next_yield` lifetime in async/await, in order to safely switch in and out references to `self` and the context; or, alternatively, completion of the existing plans for Rust generators. This would allow several actor coroutines to efficiently interleave access to the same shared actor state.

Tell me if you want me to write this up, i.e. whether this (or any parts of it) are subject areas of interest, and where in your framework for this review it should fit.
`Future` objects generated by `async fn` can be surprisingly large. Data held across await points makes them bloated, and this adds up across large call graphs.

I converted a complex data processing pipeline to async, and it started crashing with stack overflows. It was surprising that stackless, non-recursive async could overflow the stack. It turned out that my call graph was pretty large, essentially holding my entire application as a single state machine. In normal sync code, cold paths cost nothing, but in async every `.await`, even one never taken, may increase the `Future`'s size.

This problem was not easy to spot in the source code. There's no easy way to see which data is held across await points. There's no eager drop. There are no lints, warnings, or profilers that help detect large `Future` objects holding too much state inline.

The workaround (`Box::pin(...).await` or spawning) is relatively easy, but it's non-obvious. Because there are no lints or warnings, it's not easy to know where it should be added, or to get compile-time assurance that it helped.
Thanks for working on this project!
https://rust-lang.github.io/wg-async-foundations/vision/characters/alan.html
I think a "Variant D: Go" would be useful here. Proposed text:
Alan develops networking programs in Go. He enjoys the simplicity and first-class treatment of concurrency, and is excited about Rust's promise of "fearless concurrency." He'd like to try Rust for more efficient use of memory and CPU, as well as its ability to do FFI well. When working with async Rust, Alan has the question: why not use coroutines / green threads / M:N threads?
I feel a lot like Alan. An annoying situation I keep running into in async Rust is handling dependencies which need different versions of the same runtime (tokio's transition to 1.0). I feel like I'm "doing cargo's work" when I have to find the right combination of features and versions of my dependencies just to be able to run my code without panics.
Originally posted by @eduardocanellas in #70 (comment)
"Whatever they're using it for, we want all developers to love using Async Rust. " - from the manifesto of this project.
That's a problem. This project is by async enthusiasts, who seem to think that all developers should want to use async. It's a short step from there to require all developers to use async.
Async is really needed only for a specific class of programs - those that are both I/O bound and need to maintain a large number of network connections. Outside of that niche, you don't really need it. We already have threads, after all. Not everyone is writing a web service.
In my case, I'm writing a viewer for a virtual world. It's talking to a GPU, talking to multiple servers, decompressing files, talking to a window, and is compute-bound enough to keep 2 to 4 CPUs busy. It will have at most a dozen threads. For this class of problem, threads are essential and async has negative value.
Already, I've dropped the "hyper"/"reqwest" crate and switched to "ureq" because "reqwest" pulls in "tokio", and that, apparently, can no longer be turned off. I'm concerned about async contamination spreading to other crates.
I'm concerned that this project may break Rust as a systems language by over-optimizing it for the software-as-a-service case.
Thanks.
Alan is accustomed to implementing services in Java. He has design patterns in his mind that don't work in Rust. He also gets confused by specific things around async Rust / Rust futures. What are they?
Brief summary: Various problems arise due to the types of closures and the limitations of closure types.
- Writing signatures like `fn notifyLater<F>(listener: F) where F: Fn + Clone ...` is unwieldy.
- When accepting a closure you must choose among `Fn`, `FnMut`, `Clone`, `Copy`, `Sync`, and `Send`. But at the time you make these decisions, you haven't written any code yet.
- Later you may discover you need, say, a `Send` closure and have to add that to the requirements. It is harder to refactor some existing code so it can be used in `F: Whatever`.
- Closure types are anonymous; all you can say about them is whether they implement `Fn`, `Clone`, `Copy`, `Sync`, or `Send`.
- `notifyLater` has to store this closure somewhere (like in a `struct`), which must now also be generic.
- Analogous problems arise for other derived anonymous types, e.g. futures.

Pain points like these create pressure to use third-party crates. But this fragments the ecosystem and complicates debugging projects with different dependency stacks.
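To make the storage point concrete, here is a sketch of the two usual options. The names (`Notifier`, `notify`) follow the hypothetical `notifyLater` example above; either the struct becomes generic in the closure type, or the closure type is erased behind a `Box<dyn Fn>`:

```rust
// Option 1: generic storage. Every type that holds a `Notifier<F>` (and
// every type holding *that*) becomes generic in `F` too.
struct Notifier<F>
where
    F: Fn(&str) + Send + 'static,
{
    listener: F,
}

impl<F> Notifier<F>
where
    F: Fn(&str) + Send + 'static,
{
    fn notify(&self, msg: &str) {
        (self.listener)(msg);
    }
}

// Option 2: boxed storage. The concrete closure type is erased, at the cost
// of a heap allocation and dynamic dispatch per call.
struct BoxedNotifier {
    listener: Box<dyn Fn(&str) + Send>,
}

impl BoxedNotifier {
    fn notify(&self, msg: &str) {
        (self.listener)(msg);
    }
}

fn main() {
    use std::sync::{Arc, Mutex};
    let log = Arc::new(Mutex::new(Vec::new()));

    let l = Arc::clone(&log);
    let generic = Notifier {
        listener: move |m: &str| l.lock().unwrap().push(m.to_string()),
    };
    generic.notify("hello");

    let l = Arc::clone(&log);
    let boxed = BoxedNotifier {
        listener: Box::new(move |m: &str| l.lock().unwrap().push(m.to_string())),
    };
    boxed.notify("world");

    assert_eq!(
        *log.lock().unwrap(),
        vec!["hello".to_string(), "world".to_string()]
    );
}
```

The same trade-off applies to futures: generics everywhere or `Pin<Box<dyn Future>>`, which is exactly the fragmentation pressure the bullet points describe.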
Character (if known; see [cast]): Possibly Grace, although a lot of these issues are informed by Swift background rather than C++. I'm uncertain if a different systems background justifies a new character.
Key points or morals (if known):
Let's pick the top priorities for new feature work on async/await this year.
Use this issue for submitting ideas that have their own tracking issues/RFCs. (If one doesn't exist, create one!)
However, I'd like to avoid using this issue for discussion of individual items. Let's use the Zulip topic for that instead:
Hello, I'm Barbara. I just want to know how many tokio tasks are idling at any given moment and also how much memory they use; also I want to know which tasks haven't been polled in the last 10 minutes... ok, I guess I want a bunch of things.
- Because the type implements `Drop`, the cleanup code to close the database handle runs in a spawned task, and there is a race condition between that task executing and other connections opening.
- Uses the `tokio-named-pipe` crate to get a handle, but winds up casting the handle to `usize` to actually pass it around (not 100% sure of the details here).

Background: Barbara considers writing a new futures-compatible executor to fill a niche in the executor landscape.
Aspect relevant to wg: Barbara hesitates / decides not to, for reasons that primarily are ecosystem and interop related
I'm writing this as an issue before writing the story because it's not clear to me whether this is a distinct story or just a mirror of a lot of the other stories that have already been well expressed. It may very well be a duplicate.
This is a real life story: I was reading #87 and realized that a fair-ish1 futures-compatible multi-threaded executor for ?Send futures would be perfect for web servers. However, there are ecosystem hurdles that make this substantially more challenging than writing the code:
Shiny opinionated future: Any character can start with a simple/standard executor and swap it out for another one without having to rewrite their entire application. Replacing libraries with ones specific to their new executor will often provide a performance benefit, but can be done over time as needed, not as a mandatory wholesale change when switching executors. Example: Alan starts with async-std's executor and a bunch of executor-independent libraries, writes a substantial application against those types and then decides he really wants the perf characteristics of tokio, so he switches just the executor and sees a benefit for his application. Later on, he finds some time to replace the networking stack with a tokio-tuned one, and sees further benefits. Everything else still is using the neutral executor-independent libraries that provide lowest-common-denominator performance. Similarly, if in a different codebase Alan gets a compiler warning about Send types but isn't sure that spawn_local has the right perf characteristics for his application, he can swap out the executor and everything else still works. Obviously either of these switches to applications-specific executors wouldn't be as seamless to reverse.
Is it worth writing this as one or more stories?
1 fair-ish: creates new !Send futures on whichever runtime thread has the least of them, or some other similar strategy, acknowledging that this is more likely to become lopsided than a work-stealing multithreaded executor of Send futures would be, especially if the spawned tasks take an unpredictable wall-time duration. On that note, however, an optional duration_hint function on the Future trait might be useful to executors
The FAQ entries today use nested lists, but those are annoying when you want to embed code blocks. A better choice would be to use `###` sub-headings (or `####`, as the case may be). These would also be linkable.

To make this happen you would basically need to ripgrep through the repo to convert the existing "Frequently Asked Questions" lists.
I don't honestly know! But people keep telling me async stack traces are hard to debug! Help me understand! Send me examples!
As a user with a Java background, I'm used to the Netty/Vert.x approach to async where blocking the event loop is very bad, and almost certainly results in a performance hit, or the application being stuck.
This is really well detailed in the Vert.x doc, in The Golden Rule - Don’t Block the Event Loop paragraph:
We already know that the Vert.x APIs are non blocking and won’t block the event loop, but that’s not much help if you block the event loop yourself in a handler.
If you do that, then that event loop will not be able to do anything else while it’s blocked. If you block all of the event loops in Vertx instance then your application will grind to a complete halt!
So don’t do it! You have been warned.
Examples of blocking include:
Thread.sleep()
Waiting on a lock
Waiting on a mutex or monitor (e.g. synchronized section)
Doing a long lived database operation and waiting for a result
Doing a complex calculation that takes some significant time.
Spinning in a loop
If any of the above stop the event loop from doing anything else for a significant amount of time then you should go immediately to the naughty step, and await further instructions.
So… what is a significant amount of time?
How long is a piece of string? It really depends on your application and the amount of concurrency you require.
If you have a single event loop, and you want to handle 10000 http requests per second, then it’s clear that each request can’t take more than 0.1 ms to process, so you can’t block for any more time than that.
The maths is not hard and shall be left as an exercise for the reader.
If your application is not responsive it might be a sign that you are blocking an event loop somewhere. To help you diagnose such issues, Vert.x will automatically log warnings if it detects an event loop hasn’t returned for some time. If you see warnings like these in your logs, then you should investigate.
Thread vertx-eventloop-thread-3 has been blocked for 20458 ms
Vert.x will also provide stack traces to pinpoint exactly where the blocking is occurring.
I've seen this message pop up in several Vert.x projects, always for good reasons. In other projects, not necessarily using Vert.x, I've found it to be a good habit to add checks along those lines, e.g. asserting which thread is running (via `Thread.currentThread()`) in order to panic early.

I've never seen this kind of error message in Rust, and I'm not confident in my ability to use the right APIs in the right context. How can I make sure this doesn't come back and bite me?
Set priorities and decide how to rank them for 2020, then publish this somewhere.
Relevant links:

- Every `await` point is effectively a cancellation point, and people don't always write "cancellation-safe" code. I'd love to hear more concrete examples of where this happens.

Someone has code written in terms of `std::io::Read` or `std::io::Write`, and wants it to work in an async context with minimal fuss:

- Like the sync `Read`/`Write` traits, the code works on streams. (Where "stream" is used colloquially here.)
- Duplicating the code for the sync and async `Read`/`Write` traits seems inelegant and doesn't seem to scale with respect to maintenance effort.
- One could block in place for the `Read`/`Write` impls, but that may come with additional costs that one might not want to pay.
impls, but it may come with additional costs that one might not want to pay.One example here is the flate2
crate. To solve this problem, they have an optional dependency on a specific async runtime, tokio, and have impls for AsyncRead
and AsyncWrite
that are specific to async runtime.
Another example is the `csv` crate, where the problem has come up a few times. Its author (me) has not wanted to wade into these waters because of the aforementioned problems. As a result, folks are maintaining a `csv-async` fork to make it work in an async context. This doesn't seem ideal.
This is somewhat related to #45. For example, `AsyncRead`/`AsyncWrite` traits that are shared across all async runtimes might do a lot to fix this problem. But I'm not sure. Fundamentally, this, to me, is about writing I/O adapters that don't care whether they're used in a sync or async context, with minimal fuss, rather than just about trying to abstract over all async runtimes.
Apologies in advance if I've filled out this issue incorrectly. I tried to follow the others, but maybe I got a bit too specific! Happy to update it as appropriate. Overall, I think this is a really wonderful approach to gather feedback. I'm completely blown away by this!
Also, above, I said this was "almost" Barbara, because this story doesn't actually require the programmer to write or even care about async code at all.
Alan is accustomed to JavaScript promises. He has design patterns in mind that don't work in Rust; he also gets confused by specific things around Rust futures. What are they?
We saw rust-lang/rust#65875 closed and requiring an RFC. The working group should probably pick this up.
Alan is building a library but he is having a hard time figuring out how to do certain basic pieces of functionality. For example:
These bits of functionality require a third-party crate, but which one?! `futures`, `futures-util`, `futures-io`? Some functionality he needs seems to be in `tokio`, but he's not using `tokio`! Grrrr...

- The `futures` crate happens to have `futures::select!`, but it can often be a bit hard to use. `tokio::select!` can be nicer, but what if you don't want to use the tokio runtime?

We are going to start accepting shiny future stories! We need to do some prep work:
First of all, I'm not quite sure if these kinds of stories are relevant: most of the status quo stories highlight the issues of the current state, but there are also good things that came out of it, which might be important to highlight in order not to lose them.
If these kinds of stories are also useful, I can myself try to write it out, but I first wanted to see if it is useful.
Alan (or Niklaus?) has been writing asynchronous C# code for a while now. Plop down an `await` here, change the return type to `Task`, and so on. His code works and seems to perform fine.
On his journey through async Rust, he learned about some of the internals about Futures/Executors/... and gained a deeper level of understanding about what asynchronous code really is.
Now when writing async C#, he reasons more about what is really happening, and has become the "goto async C# expert" of his team, not necessarily because he learned more about async C#, but because he learned about async Rust.
Proposal: add a lint similar to `must_use` that can help identify values that ought not to be held across an `await` (or other yield point), such as mutex guards, and warn about them.
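To illustrate the pattern such a lint would flag, here is a hedged sketch. The function names `bad`/`good` and the tiny single-future executor are my own, for illustration only; the point is that in `bad` the `std::sync::MutexGuard` stays live across the `.await`, which makes the future `!Send` and can deadlock under a real executor, while `good` drops the guard first:

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Mutex;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// The guard is live across the await point: this is what the lint would flag.
async fn bad(m: &Mutex<i32>) -> i32 {
    let guard = m.lock().unwrap();
    yield_once().await; // guard held across this await
    *guard
}

// Copy the value out and drop the guard before awaiting.
async fn good(m: &Mutex<i32>) -> i32 {
    let value = {
        let guard = m.lock().unwrap();
        *guard
    }; // guard dropped here
    yield_once().await;
    value
}

async fn yield_once() {}

// Minimal executor, just enough to drive the examples above to completion.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn nop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, nop, nop, nop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    assert_eq!(block_on(good(&Mutex::new(5))), 5);
    assert_eq!(block_on(bad(&Mutex::new(7))), 7); // works here, but risky in general
}
```

Clippy has since grown a related lint (`await_holding_lock`), which suggests the `good` shape above.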
I've heard a number of people cite per-thread executors as a good model for high efficiency. There are issues like `Context` implementing Send + Sync. Are people doing this? Does it work? :)
As a beginner in Rust, I would like to add to this thread with our real-life experience. We are currently facing issues which make me relate to this story (and which are preventing us from switching to Rust):
We are trying to rewrite some of our services from Python to Rust and are looking to achieve the following:
What we have not succeeded in doing so far:
`FuturesUnordered` container would handle

For reference, the Stack Overflow question where I was looking for help.
Originally posted by @rgreinho in #95 (comment)
One challenge would be how to debug live lock-ish issues between services using futures. On fuchsia pretty much everything is asynchronous. So when you have some client interacting with some other service, they both have executors driving the futures. So:
While Fuchsia has some tools to help debug issues like this, the main technique we have for triaging live-ish locks like this is to set timeouts on our calls and annotate most call sites with logs. Inevitably we forget some locations, and it becomes difficult to understand what happened.
My dream for debugging issues like this would be some mechanism where we could ask the executor to wake up every pending future, then dump a backtrace of the future stack. Something like that would make it easy to detect when a future is stuck waiting for something to happen.
Originally posted by @erickt in #69 (comment)
@nikomatsakis I'm not too sure that story captures my experience with Rust. I've not published a library yet, but I have consumed all the most popular ones. It seems that Alan and Barbara are "producers" (sorry, probably not a good word to describe people) of libraries.
My urge to write in this thread comes from the point of view of the "consumer". It's very likely that the library Alan and Barbara are writing has a limited scope. That in itself is a luxury, especially if they aim for one thing and one thing only. "Consumers" of libraries, on the other hand, will have a very long list of dependencies to work with: http client, http server, pub/sub routines, websockets, ORMs and database connections, caching in remote stores, etc.
In that regard, the conversation about whether to use async or not surprised me. In the story I have in mind it's an absolute given. No one in their right mind would write a synchronous web server like it's the nineties.
@eminence I'd be more than happy to write a sub-story. Maybe sub-stories could be linked to more than one (trust and http client)? I think my story would begin with something like:
Originally posted by @cortopy in #95 (comment)
Set up a basic framework for tracking ongoing efforts. Make an issue for documenting that framework.
Relevant links