rust-lang / wg-async

Working group dedicated to improving the foundations of Async I/O in Rust

Home Page: https://rust-lang.github.io/wg-async/

License: Apache License 2.0

JavaScript 0.44% Shell 28.17% Rust 71.39%
rust async

wg-async's Introduction

wg-async

Working group dedicated to improving the foundations of async I/O in Rust

Please visit our rendered page for more information!


wg-async's Issues

bridging sync/async in "hobby code" where you don't care that much

  • Brief summary:
    • Hacking on a simple client that has to make API calls, where performance isn't much of a priority; many of the libraries are written in an async style, but sometimes you want to make those calls somewhere you can't conveniently make the code async (e.g., inside an iterator).
    • Managing this situation currently means either propagating async everywhere or adding block_on calls. But, at least in tokio, block_on cannot execute from async threads, so code using block_on can panic if it ever winds up in an async context (see the sketch below).
  • Character: Barbara, this was sourced from a conversation with @Mark-Simulacrum
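
A minimal sketch of that failure mode, assuming tokio and reqwest (the function names are illustrative):

use tokio::runtime::Runtime;

// Sync wrapper around an async HTTP call -- the pattern described above.
fn fetch_sync(url: &str) -> reqwest::Result<String> {
    let rt = Runtime::new().expect("failed to build runtime");
    // Fine from plain sync code, but panics ("Cannot start a runtime
    // from within a runtime") if this function is ever reached from a
    // thread that is already driving async tasks.
    rt.block_on(async { reqwest::get(url).await?.text().await })
}

#[tokio::main]
async fn main() {
    // Looks innocent, but panics: we are already inside tokio's runtime.
    let body = fetch_sync("https://example.com").unwrap();
    println!("fetched {} bytes", body.len());
}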

More details from my conversation with @Mark-Simulacrum

  • Context: building things like perf.rust-lang.org, triagebot
  • Performance is not a real issue here, convenience and expressiveness is
    • only have so much time to work on this, trying to stand something up quickly
  • Want a way to make calls to various web services, connect to databases, or do other parts of I/O
  • Would be happy with sync, but the libraries and things are in async
  • Didn't really want to have to pick a runtime, doesn't care much, mostly wants things to Just Work. In the end picked tokio as the only widely known tool at the time, as well as the one that is compatible with things like defaults in hyper.
  • Sometimes find themselves in a synchronous context but need to do an async operation
    • e.g., implementing an iterator
  • Don't care too much about performance, so add a block_on
    • But block_on doesn't take an async block
    • So often nest a spawn inside
    • But then that code gets invoked in an async context, and the code panics
    • Frustrating -- how bad of a problem is it really?
    • Example code
  • Gets into the scenario (example) where
    • something internally has to make an HTTP request
    • do we make it async?
      • that forces all callers to be async
    • or sync with block_on
      • then cannot be used from an async context
    • or just sync?
      • but then need to find a different http lib (can't use reqwest/hyper)

Possible new project: IRC Client (application development)

A possible idea for a new project: ChatteRS, an IRC Client (application development)

This isn't a PR, because I think it needs a bit of discussion first. Is this type of project unique enough? If we agree it is, I'm happy to convert this into a PR.


What is this?

"ChatteRS" is an IRC client (designed as an application to be run as a desktop application). This application might be purely text-based, or it could have a graphical user interface

Description

This IRC client doesn't aim to have the world's fanciest features or the slickest interface, but it strives to have a solid, small, easy-to-read codebase that can be easily developed, maintained, and extended over time.

🤔 Frequently Asked Questions

  • What makes this project different from others?
    • [see discussion below]
  • Does this project require a custom tailored runtime?
    • No, pretty much any off-the-shelf runtime can work here
  • How much of this project is likely to be built with open source components from crates.io?
    • Probably as much as possible. Things like a low-level IRC library, and a GUI/TUI library are very likely to be found on crates.io and would be used in this project.
  • What is of most concern to this project?
    • Ease of development and easy-to-understand code
  • What is of least concern to this project?
    • Performance

When thinking about a new project, asking "how is it different from the others?" is probably the most important point. I see the following differences: there isn't a need for a custom runtime, custom futures, or anything really performance-sensitive, and this isn't a library but an application that does "async-like" things (reading from the network, getting user input). So it's likely to lean heavily on the ecosystem (crates.io) to provide a lot of functionality.

There's nothing fundamentally important about this project being an IRC client (any type of "desktop application" could fit here), but an IRC client is a familiar example to a lot of people, and it represents a certain type of real-world application that often just "glues" together existing pieces of functionality, with some fairly simple "business logic" or "application logic" sitting in the middle. With this in mind, one possible way to target the project is to focus on this "glue" aspect -- can async Rust easily glue together a bunch of different async libraries into one cohesive application?

This new project is probably most similar to the existing "YouBuy (Traditional Server Application)" project: neither needs a custom/tailored runtime, neither needs tight control of performance, and both want to rely heavily on the crates.io ecosystem. I do sometimes hear complaints like "Rust async only cares about network servers", so I wonder if some type of desktop application might help assuage those concerns (though maybe "SLOW" and "DistriData" already fill that role well enough).

Lastly, I wonder if each project should have an FAQ entry titled "Why is this project written in async Rust?" where we can explain why we think Rust is a good fit for that particular project. Each of the current projects probably already has a successful real-world version that's not written in Rust (indicating that Rust isn't the only suitable language for these projects). For this "ChatteRS" project, I think the answer is something like: Rust's strong type system and focus on correctness can help the project reduce the number of bugs. Even though the project is not performance-focused, Rust's general reputation for being "not wasteful" with resources is useful. The strength of the crates.io ecosystem is also a big draw here.

Alan picks an HTTP library

Brief summary

Alan is building a simple library for interacting with his favorite movie-tracking app, "numbersquard". Numbersquard exposes its data over a RESTful API. Alan, new to Rust, searches for an HTTP library and quickly gets sucked into long debates about async/sync I/O, Surf vs reqwest vs curl, tokio vs async-std, etc. Alan is so filled with doubt that he doesn't even write a line of code.

Optional details

  • Which character(s) would be the best fit and why?
    • Alan: the experienced "GC'd language" developer, new to Rust
      • Alan is probably best, since picking an HTTP library in many languages is a non-choice: you just use the built-in standard library implementation.
    • Grace: the systems programming expert, new to Rust
    • Niklaus: new programmer from an unconventional background
    • Barbara: the experienced Rust developer
  • Which project(s) would be the best fit and why?
    • While SLOW might work, I think it's important that the project not feel like it actually has much to do with networking. This library helps people track their movie watching on the numbersquard platform. Concerns like async/sync, which HTTP library to use, etc. are very much implementation details and actually shouldn't really matter. Alan just wants to build the library and not care.
  • What are the key points or morals to emphasize?
    • HTTP is so ubiquitous that it's often a small and insignificant implementation detail of many libraries that on the surface have little to nothing to do with networking, I/O, etc. However, choosing which HTTP implementation to use requires a HUGE amount of understanding of the state of the Rust async ecosystem.

This story is similar in many ways to #54, #49, and #45, but HTTP is so common and this particular situation so relevant to so many projects that I think it needs to be called out explicitly.

no-std asynchronous runtime

Hi, since memory resources are limited on some embedded devices, is there any no-std asynchronous runtime planned for Rust?

Using an async block or function in a trait

Brief summary

It's very easy to write inherent async fns on a type, but using them inside a trait method is much harder. For example, this pseudocode doesn't work, and making it work isn't intuitive:

impl Service for Handler {
    type Future = WhatTypeGoesHere;
    fn call(&self, req: Request) -> Self::Future {
        async move {
            self.count(req).await;
            Ok(self.construct_response())
        }
    }
}
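
For reference, one common workaround is to box the future so the associated type becomes nameable. A hedged sketch, with minimal stand-ins for the hypothetical Service/Handler/Request types above:

use std::future::Future;
use std::pin::Pin;

// Minimal stand-ins for the hypothetical types from the pseudocode.
trait Service {
    type Future: Future<Output = Result<String, ()>>;
    fn call(&self, req: String) -> Self::Future;
}

#[derive(Clone)]
struct Handler;

impl Handler {
    async fn count(&self, _req: &str) {}
    fn construct_response(&self) -> String {
        "ok".into()
    }
}

impl Service for Handler {
    // Boxing the future makes the associated type nameable...
    type Future = Pin<Box<dyn Future<Output = Result<String, ()>> + Send>>;

    fn call(&self, req: String) -> Self::Future {
        // ...but an `async move` block borrowing `self` can't become a
        // `'static` boxed future, so clone the state it needs up front.
        let this = self.clone();
        Box::pin(async move {
            this.count(&req).await;
            Ok(this.construct_response())
        })
    }
}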

Optional details

  • (Optional) Which character(s) would be the best fit and why?
    • Any of the characters, since it's hard in general. Barbara may know how to work around it, but she likely still doesn't like it.
  • (Optional) What are the key points or morals to emphasize?
    • It's confusing that you can't write async fn when implementing a trait.
    • It's hard to return a future from a trait method that came from another async fn or async block.

Alan isn't satisfied with the `join` and `select` macros

Brief summary

Alan was working on an endpoint that would make a large number of async requests, and decided to do a join operation to await them in parallel. After a bunch of time spent googling, he realized that these operations weren't part of the language or std, but implemented in his runtime. He used the join! macro to poll them in parallel, but was disappointed to see that the tooling he was accustomed to, such as rustfmt or rust-analyzer, didn't work nicely with this macro. He fell back to the join3 function in the futures crate instead, since he didn't need to join that many futures. However, as the endpoint grew, he is now at the arity limit and wishes it didn't exist.
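
A sketch of the two approaches, assuming the futures crate (fetch is an illustrative placeholder):

use futures::future;
use futures::join;

async fn fetch(id: u32) -> u32 {
    id * 2
}

async fn handler() {
    // Macro form: variadic, but opaque to rustfmt and rust-analyzer.
    let (a, b, c) = join!(fetch(1), fetch(2), fetch(3));

    // Function form: tooling-friendly, but capped at a fixed arity
    // (join3, join4, join5, ... in futures::future).
    let (d, e, f) = future::join3(fetch(4), fetch(5), fetch(6)).await;

    assert_eq!((a, b, c, d, e, f), (2, 4, 6, 8, 10, 12));
}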

Optional details

  • (Optional) Which character(s) would be the best fit and why?
    • Alan: the experienced "GC'd language" developer, new to Rust
    • Grace: the systems programming expert, new to Rust
    • Niklaus: new programmer from an unconventional background
    • Barbara: the experienced Rust developer
  • (Optional) Which project(s) would be the best fit and why?
    • Most projects could likely be used to tell this story, since join and select can come up in a lot of diverse scenarios.
  • (Optional) What are the key points or morals to emphasize?
    • Macros are hard to deal with by tooling.
    • Ideally, variadics could be used without resorting to macros, since they are a missing part of the Rust type system, which can be limiting.
    • Join and select are fundamental future primitives which should be easier to find.

Detect and prevent blocking functions in async code

In order to write reliable and performant async code today, the user needs to be aware of which functions are blocking, and either:

  • Find an async alternative
  • Schedule the blocking operation on a separate thread pool (supported by some executors)

Determining if a function may block is non-trivial: there are no compiler errors, warnings or lints, and blocking functions are not isolated to special crates but rather unpredictably interspersed with non-blocking, async-safe synchronous code. As an example, most of std can be used in async code, but much of std::fs and std::net cannot. To make matters worse, this failure mode is notoriously hard to detect: it often compiles and runs fine when the executor is under a small load (such as in unit tests), but can cause severe application-wide bottlenecks when load is increased (such as in production).
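
A minimal sketch of the hazard (names are illustrative): this compiles without any warning, yet stalls every task sharing the executor thread while the read runs.

async fn load_config(path: &str) -> std::io::Result<String> {
    // Blocking file I/O, invisible to the compiler and to lints.
    std::fs::read_to_string(path)
}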

For the time being, we tell our users that if a sync function uses IO, IPC, timers or synchronization it may block, but such advice adds mental overhead and is prone to human error. I believe it's feasible to find an automated solution to this problem, and that such a solution delivers tangible value.

Proposed goal

  • Offer a way to declare that a piece of code is blocking.
  • Detect accidental use of blocking functions in async contexts.
  • Display actionable advice for how to mitigate the problem.
  • Offer an override so that users and library authors who "know what they're doing" can suppress detection.

Possible solutions

I am not qualified to say, and I would like your thoughts! A couple of possibilities:

  • A new annotation, perhaps #[may_block] that can be applied on a per-function level.
  • An auto-trait or other type system integration (although this would be much more invasive).

Challenge 1: Blocking is an ambiguous term

Typically, blocking is either the result of a blocking syscall or an expensive compute operation. This leaves some ambiguous cases:

  • Expensive compute is a gray area. Factoring prime numbers is probably blocking, but where's the line exactly? Fortunately, very few (if any) functions in std involve expensive computation by any reasonable definition.
  • A method like std::sync::Mutex::lock is blocking only under certain circumstances. The must_not_await lint is aimed at preventing those circumstances (instead of discouraging its use altogether).
  • TcpStream::write blocks by default, but can be overridden to not block using TcpStream::set_nonblocking.
  • The println! and eprintln! macros may technically block. Since they lack async-equivalents and rarely block in practice, they are widely used and relatively harmless.
  • Work-stealing executors may have a higher (but not an infinite) tolerance for sporadic blocking work.

Ambiguity aside, there are many clear-cut cases (e.g. std::thread::sleep, std::fs::File::write and so on) which can benefit from a path forward without the need for bike-shedding.

Challenge 2: Transitivity

If fn foo() { bar() } and bar() is blocking, foo() is also blocking. In other words, blocking is transitive. If we can apply the annotation transitively, more cases can be detected. OTOH, this can be punted for later.

Challenge 3: Traits

When dynamic dispatch is used, the concrete method is erased. Should annotations be applied to the trait method declaration or to the implementation?

Challenge 4: What is an "async context"?

The detection should only apply in "async contexts". Does the compiler already have a strict definition for that? Some examples off the top of my head:

  • Directly in an async block or function.
  • Indirectly in an async block or function, e.g. in a closure.
  • Inside the poll method of a custom future impl, or the poll_fn macro.

Background reading

Alan wants structured concurrency and parallel data processing

Brief summary

Alan wants to intermix async I/O with parallel data processing, and he finds it difficult. He misses Kotlin's support for coroutines. Barbara laments the lack of structured concurrency or Rayon-like APIs.

Optional details

  • (Optional) Which character(s) would be the best fit and why?
    • Alan: the experienced "GC'd language" developer, new to Rust
    • Grace: the systems programming expert, new to Rust
    • Niklaus: new programmer from an unconventional background
    • Barbara: the experienced Rust developer
  • (Optional) Which project(s) would be the best fit and why?
    • Not sure.
  • (Optional) What are the key points or morals to emphasize?
    • Async libraries are very focused on I/O, but many folks mention wanting improved support for parallel tasks.

Footgun with futures unordered

Brief summary

@farnz opened a really interesting issue rust-lang/futures-rs#2387:

This would make a great status quo story!

We've found a nasty footgun when we use FuturesUnordered (or buffered etc) to get concurrency from a set of futures.

Because FuturesUnordered only polls its contents when it is polled, it is possible for futures lurking in the queue to be surprised by a long poll, even though no individual future spends a long time in poll(). This causes issues in two cases:

  1. When interfacing with an external system via the network; if you take a result from the stream with while let Some(res) = stream.next().await and then do significant wall-clock time inside the loop (even if very little CPU time is involved because you're awaiting another network service), you can hit the external system's timeouts and fail unexpectedly.

  2. When using an async friendly semaphore (like Tokio provides), you can deadlock yourself by having the tasks that are waiting in the FuturesUnordered owning all the semaphores, while having an item in a .for_each() block after buffer_unordered() requiring a semaphore.

https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=f58e77ba077b40eba40636a4e32b5710 shows the effect. Naïvely, you'd expect all the 10 to 20ms sleep futures to complete in under 40ms, and the 100ms sleep futures to take 100ms to 200ms. However, you can see that the sleep threads complete in the timescale expected and send the wakeup to the future that spawned them, but some of the short async sleep_for futures take over 100ms to complete, because while the thread signals them to wake up, the loop is .awaiting a long sleep future and does not get round to polling the stream again for some time.

We've found this in practice with things where the loop body is "nice" in the sense that it doesn't run for very long inside its poll function, but the total time spent in the loop body is large. The futures being polled by FuturesUnordered do:

async fn do_select<T>(database: &Database, query: Query) -> Result<Vec<T>> {
    let conn = database.get_conn().await?;
    conn.select_query(query).await
}

and the main work looks like:

// (Sketch from the original issue, fixed up to compile, assuming
// `use futures::stream::{self, StreamExt};`.)
async fn do_work(database: &Database) -> Result<()> {
    let work = do_select(database, FIND_WORK_QUERY).await?;
    stream::iter(work)
        // Each item becomes a future; `buffered` polls at most 5 at once.
        .map(|item| do_select(database, work_from_item(item)))
        .buffered(5)
        // While this long-running body executes, nothing re-polls the
        // buffered set -- the footgun described in this issue.
        .for_each(|work_item| do_giant_work(work_item))
        .await;
    Ok(())
}

do_giant_work can take 20 seconds wall clock time for big work items. It's possible for get_conn to open the connection (which has a 10 second idle timeout) for each Future in the buffered set, send the first handshake packet, and then return Poll::Pending as it waits for the reply. When the first of the 5 in the buffered set returns Poll::Ready(item), the code then runs do_giant_work which takes 20 seconds. While do_giant_work is in control, nothing re-polls the buffered set of Futures, and so the idle timeout kicks in server-side, and all of the 4 open connections get dropped because we've opened a connection and then not completed the handshake.

We can mitigate the problem by using spawn_with_handle to ensure that the do_select work happens whenever the do_giant_work Future awaits something, but this behaviour has surprised my team more than once (despite enough experience to diagnose this after the fact).

I'm not sure that a perfect technical solution is possible; the issue is that FuturesUnordered is a sub-executor driven by the main executor, and if not polled, it can't poll its set of pending futures. Meanwhile, the external code is under no obligation to poll the FuturesUnordered in a timely fashion. Spawning the futures before putting them in the sub-executor works because the main executor then drives them, and the sub-executor is merely picking up final results, but futures have to be 'static lifetime to be spawned.

Optional details

  • (Optional) Which character(s) would be the best fit and why?
    • Alan: the experienced "GC'd language" developer, new to Rust
    • Grace: the systems programming expert, new to Rust
    • Niklaus: new programmer from an unconventional background
    • Barbara: the experienced Rust developer
  • (Optional) Which project(s) would be the best fit and why?
    • List some projects here.
  • (Optional) What are the key points or morals to emphasize?
    • Write some morals here.

create a vision doc thanks page

I want to include (a) all people who commented on issues, (b) all people who opened PRs, and (c) all people who joined an AVD writing session!

Async-closures or reference-parameterized-closures-returning-futures

Brief summary

Not having async closures can mean needing to resort to macros to create certain (fairly common) types of combinators (ones where a closure that returns a future runs into lifetime issues).

We experienced this in pantsbuild/pants#11548, and I ended up writing a macro to put what would have been a parameter to a closure-returning-a-future or async-closure directly on the stack instead: pantsbuild/pants#11759
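
A hedged sketch of the pattern that runs into lifetime issues (names are hypothetical): the combinator itself compiles, but no closure that borrows its argument can actually be passed to it.

use std::future::Future;

// A combinator that lends a &str to a caller-supplied closure that
// returns a future. This function itself compiles fine...
async fn with_input<F, Fut>(mut f: F) -> usize
where
    F: FnMut(&str) -> Fut,
    Fut: Future<Output = usize>,
{
    let input = String::from("data");
    f(&input).await
}

// ...but no borrowing closure can be passed to it. This call fails with
// a lifetime error, because the single type parameter `Fut` cannot
// depend on the lifetime of the `&str` argument:
//
//     with_input(|s| async move { s.len() }).await;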

Optional details

  • (Optional) Which character(s) would be the best fit and why?
    • Alan: the experienced "GC'd language" developer, new to Rust
    • Grace: the systems programming expert, new to Rust
    • Niklaus: new programmer from an unconventional background
    • Barbara: the experienced Rust developer
  • (Optional) What are the key points or morals to emphasize?
    • Working reference-parameter-taking closures returning futures would be great to have, but would require fairly advanced lifetimes (GATs?) to use even once they are possible (likely Barbara's choice). On the other hand, async closures are less general but likely easier to use (likely Alan's choice).

[meta] blog post quest issue

This issue exists to collect links to blog posts that seem like status quo story ideas. Please provide the link and whatever other details you can that might help people to know what it's about.

This issue is not an end point! It's more of a "work queue". The idea is for folks to read the blog posts and identify potential status quo user stories and then open up fresh issues with those stories.

Please check the box next to a blog post link (and leave a comment) if you have done that!

Suggested format for each comment:

* [ ] BLOG_POST_URL

> Some quote from blog post that gives the general idea

[meta] Covering the full range of status-quo stories

We want to make sure that we have stories that cover the full range of experiences. This list is not yet exhaustive, but is a good start from @estebank and @nikomatsakis :

Topic | Issues / PRs / Stories
----- | -----------------------
Developing libraries for use in many different environments | #95 #54 #49
Targeting embedded systems or those with very narrow requirements | #85 #92
Maintaining systems into production | #76 #75 #69 #99
Configuring and tuning systems for high performance | #87 #129
Combining parallel processing of data with I/O | #107
Missing cool features from other languages that maybe Rust should have (JS, C#, Clojure, Kotlin, whatever) | #107
Ergonomics of using async Rust | #105 #82 #104, #99 #129
Learning async Rust when you already know Rust | #94, #104
Learning async Rust when you don't already know Rust | #95
Onboarding new employees into a system built using async Rust | #104
Integrating with other async runtimes, esp. in other languages | #67
Supporting systems that integrate multiple reactors | #87
Migrating an existing sync crate to async | (none yet)
Supporting both sync and async APIs in the same crate | (none yet)
Writing a custom async executor | #128 #115

(We will expand this issue body with xrefs to other issues, PRs, and stories)

Wrapping C++ async APIs in Rust futures

  • Brief summary: Grace wants to wrap C++ async APIs using the C++23 executor API in Rust futures and use them from a Tokio app.
  • Character: Grace
  • Key points or morals (if known):
    • Cancellation is the hard part here: dropping a C++ task is undefined behavior, while in Rust dropping a future is the idiomatic way to cancel it. C++ tasks can close over non-owning references that are expected not to go away while the operation is in flight, so this matters. We want to make C++ async APIs easy to use with as little runtime overhead as possible.

Actor-system related questions

Brief summary

Some stuff can't be written in a blocking model, even a logically blocking (though physically non-blocking) model like async/await. By "blocking" I mean that when you do an async call, your coroutine blocks, and the called object is also blocked on that one piece of work until it returns asynchronously. An obvious example that doesn't fit this is a network stack layer, where events come in from below, from above, and from timers. All these events have to be responded to immediately; blocking (or logically blocking) just won't work.

Doing some kind of a "select" on the calling side solves the "only one outgoing call" problem, and the "being called blocks an object" (i.e. "only one incoming call") problem can probably be solved by having multiple proxy objects for your main object, so you don't block the main object. But this is all a very round-about way of getting the required behaviour.

So this is where the actor model comes in. I don't know whether you want to discuss the actor model in this review, but the subject keeps coming back. As the author of the Stakker crate, I am very happy to contribute to the discussion if it is of interest. Here are some subjects you might wish to cover in your review:

Different models of actor system in relation to async/await:

  • Very high-level actor system, i.e. used for cross-machine communication. Sits way above async/await.
  • Medium-level actor system, i.e. actors implemented immediately above async/await runtime
  • Low-level actor system, i.e. close-to-the-metal actor system, sits below async/await (i.e. the low-level actor system acts as an executor)

Impedance mismatch between async/await and actor model:

  • An actor can have many calls outstanding on it, and also have many calls outstanding on other actors
  • Async/await only supports one call each way without bringing in extra features
  • Means that actor systems interfacing to async/await have to deal with this impedance mismatch, i.e. either compromising the actor model (e.g. blocking the whole incoming actor queue whilst a single outgoing async/await call blocks) or adding intermediate actors that wrap an async/await object and queue the calls to that object so that other actors don't have to block

So I guess these are the questions this raises:

  • How best to handle people who come to async/await trying to solve a problem which really needs a non-blocking actor system?
  • How best to support people implementing new actor runtimes either above or below async/await?
  • How best to support interfacing between actor systems and async/await, i.e. dealing with the impedance mismatch?

For example, could we make async/await suitable for actor-like tasks? The fundamental problem is that the state self is locked during the .await. If more than one coroutine could access self at the same time (i.e. interleaved at yield points), then the problem of blocking the actor queue would be solved. (If this could be done with only static checks, i.e. no runtime RefCells or whatever, so much the better.) However, maybe this is just completely incompatible with the async/await model, so it is just not possible, and an external actor system is the only way to handle these kinds of problems.
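
A minimal sketch of that locking, with hypothetical names:

struct Actor {
    count: u32,
}

impl Actor {
    // While this awaits, `&mut self` stays borrowed: no other message
    // can touch this actor's state until the call completes, which is
    // exactly the "blocked actor queue" described above.
    async fn handle_message(&mut self, msg: u32) {
        self.count += msg;
        fetch_remote(msg).await; // `self` is held across this await
        self.count -= msg;
    }
}

async fn fetch_remote(_payload: u32) {}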

For example, stuff of interest related to async/await for my own low-level actor system (Stakker):

  • Since this plans to act as an executor to interface to the async/await ecosystem, the executor-independent interface is of great interest, e.g. common traits and other means for executor-independent async/await code to talk to executors
  • To implement actor coroutines with low overhead, it needs an 'until_next_yield lifetime in async/await in order to safely switch in and out references to self and the context, or alternatively completion of the existing plans for Rust generators. This would allow several actor coroutines to efficiently interleave access to the same shared actor state.

Optional details

  • Which character(s) would be the best fit and why?
    • Niklaus: new programmer from an unconventional background
  • What are the key points or morals to emphasize?
    • Need to guide people who have a problem to solve that isn't easily solvable with async/await.
    • Need to focus on executor-independent interop between executors and async/await to grow the executor ecosystem, e.g. to allow actor system-based executors
    • Need to consider whether it's possible to smooth the interop between actor model and async/await model

Tell me if you want me to write this up, i.e. whether this (or any parts of it) are subject areas of interest, and where in your framework for this review it should fit.

Unexpectedly large Future objects

Brief summary

Future objects generated by async fn can be surprisingly large. Data held across await points makes them bloated, and this adds up from large call graphs.

I converted a complex data processing pipeline to async, and it started crashing with stack overflows. It was surprising that stackless, non-recursive async code could overflow the stack. It turned out that my call graph was pretty large, essentially holding my entire application as a single state machine. In normal sync code, cold paths cost nothing, but in async, every .await, even one never taken at runtime, can increase the Future's size.

This problem was not easy to spot in the source code. There's no easy way to see which data is held across await points. There's no eager drop. There are no lints, warnings, or profilers that help detect large Future objects holding too much state inline.

The workaround (Box::pin(...).await or spawning) is relatively easy, but it's non-obvious. Because there are no lints or warnings, it's not easy to know where it should be added, nor to get compile-time assurance that it helped (see the sketch below).
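
A minimal sketch of the workaround, with illustrative names and sizes:

async fn cold_path() {
    let big = [0u8; 16 * 1024]; // held across the await below
    some_io().await;
    println!("{}", big.len());
}

async fn hot_path(rarely_true: bool) {
    // Without boxing, hot_path's future embeds cold_path's ~16 KiB of
    // state inline, even when this branch is never taken at runtime.
    if rarely_true {
        Box::pin(cold_path()).await; // hold a pointer instead
    }
}

async fn some_io() {}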

Reported here previously

Optional details

  • (Optional) Which character(s) would be the best fit and why?
    • Alan: the experienced "GC'd language" developer, new to Rust
    • Grace: the systems programming expert, new to Rust
    • Niklaus: new programmer from an unconventional background
    • Barbara: the experienced Rust developer
  • (Optional) Which project(s) would be the best fit and why?
    • YouBuy (Traditional Server Application)
  • (Optional) What are the key points or morals to emphasize?
    • ???

GC'd language variant: Go

Thanks for working on this project!

https://rust-lang.github.io/wg-async-foundations/vision/characters/alan.html

I think a "Variant D: Go" would be useful here. Proposed text:

Alan develops networking programs in Go. He enjoys the simplicity and first-class treatment of concurrency, and is excited about Rust's promise of "fearless concurrency." He'd like to try Rust for more efficient use of memory and CPU, as well as its ability to do FFI well. When working with async Rust, Alan has the question: why not use coroutines / green threads / M:N threads?

version incompatibility with async libraries

Brief summary:

I feel a lot like Alan. An annoying situation that keeps happening to me in async Rust is having to handle dependencies that need different versions of the same runtime (tokio's transition to 1.0). I feel like I'm "doing cargo's work" when I have to find the right combination of features and versions of my dependencies just to be able to run my code without panics.

Originally posted by @eduardocanellas in #70 (comment)

Optional details

  • Key takeaways
    • Managing nitty-gritty versions is annoying
    • Have to understand more impl details than you want to, perhaps?

Avoiding async entirely

"Whatever they're using it for, we want all developers to love using Async Rust. " - from the manifesto of this project.

That's a problem. This project is run by async enthusiasts, who seem to think that all developers should want to use async. It's a short step from there to requiring all developers to use async.

Async is really needed only for a specific class of programs - those that are both I/O bound and need to maintain a large number of network connections. Outside of that niche, you don't really need it. We already have threads, after all. Not everyone is writing a web service.

In my case, I'm writing a viewer for a virtual world. It's talking to a GPU, talking to multiple servers, decompressing files, talking to a window, and is compute-bound enough to keep 2 to 4 CPUs busy. It will have at most a dozen threads. For this class of problem, threads are essential and async has negative value.

Already, I've dropped the "hyper"/"reqwest" crate and switched to "ureq" because "reqwest" pulls in "tokio", and that, apparently, can no longer be turned off. I'm concerned about async contamination spreading to other crates.

I'm concerned that this project may break Rust as a systems language by over-optimizing it for the software-as-a-service case.

Thanks.

confusion specific to Java

Brief summary

Alan is accustomed to implementing services in Java. He has design patterns in his mind that don't work in Rust. He also gets confused by specific things around async Rust / Rust futures. What are they?

Optional details

  • (Optional) Which character(s) would be the best fit and why?
    • Alan: the experienced "GC'd language" developer, new to Rust
    • Grace: the systems programming expert, new to Rust
    • Niklaus: new programmer from an unconventional background
    • Barbara: the experienced Rust developer

Closure types

  • Brief summary: Various problems arise due to the types of closures and the limitations of closure types.

    • Suppose we want a callback of sorts to be notified of some event (network packet received, keystroke, internal event between subsystems)
    • Writing a function like fn notifyLater<F>(listener: F) where F: Fn + Clone... is unwieldy.
      • It becomes more unwieldy for more closure arguments
      • Lots of decisions have to be made about Fn, FnMut, Clone, Copy, Sync, and Send. But at the time you make these decisions, you haven't written any code yet.
        • It's easy to imagine callers will of course supply a Send closure and add that to the requirements. It's harder to refactor existing code so it can be used in F: Whatever.
      • I'm not sure if async closures have been stabilized, but I believe we generally continue to expand the surface area of closure types and all their configurations, which continues to drive up the cost of callback designs
    • caller has to figure out why their closure was (not) inferred to be Fn, Clone, Copy, Sync, or Send
    • notifyLater has to store this closure somewhere (like in a struct), which must now also be generic (see the sketch after this list)
    • Generic requirements leak from functions into implemented traits, and from contained types to containing types. This exposes implementation details that would ideally be erased outside a local scope
  • Analogous problems for derived anonymous types, e.g. futures

  • Pain points like these create pressure to use third-party crates. But this fragments the ecosystem and complicates debugging projects with different dependencies

  • Character (if known; see [cast]): Possibly Grace, although a lot of these issues are informed by a Swift background rather than C++. I'm uncertain whether a different systems background justifies a new character.

  • Key points or morals (if known):

    • Anonymous types are hard to use
    • The barrier to incrementally adopting async patterns or just "one" API is very high
    • Requiring coordination across an entire crate or many crates is a heavy burden
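
A hedged sketch of the generic leakage described above (notify_later and EventSource are hypothetical names):

// Every bound has to be decided up front, before any code is written.
struct EventSource<F>
where
    F: Fn(u32) + Clone + Send + 'static,
{
    listener: F,
}

// The generic parameter then leaks into every function and type that
// stores or forwards the callback.
fn notify_later<F>(listener: F) -> EventSource<F>
where
    F: Fn(u32) + Clone + Send + 'static,
{
    EventSource { listener }
}

fn main() {
    let src = notify_later(|event| println!("got event {event}"));
    (src.listener)(42);
}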

Plan 2021 Roadmap

Let's pick the top priorities for new feature work on async/await this year.

Use this issue for submitting ideas that have their own tracking issues/RFCs. (If one doesn't exist, create one!)

However, I'd like to avoid using this issue for discussion of individual items. Let's use the Zulip topic for that instead:

#wg-async-foundations > 2021 roadmap

Barbara wants insight into her running service

Brief summary

Hello, I'm Barbara. I just want to know how many tokio tasks are idling at any given moment and also how much memory they use; also I want to know which tasks haven't been polled in the last 10 minutes... ok, I guess I want a bunch of things.

Optional details

  • (Optional) Which character(s) would be the best fit and why?
    • Alan: the experienced "GC'd language" developer, new to Rust
    • Grace: the systems programming expert, new to Rust
    • Niklaus: new programmer from an unconventional background
    • [Barbara]: the experienced Rust developer
  • (Optional) Which project(s) would be the best fit and why?
    • List some projects here.
  • (Optional) What are the key points or morals to emphasize?
    • Write some morals here.
  • Sources:

dropping database handles is hard

  • Brief summary: Barbara tried to use sqlx in YouBuy, but she has problems around cancellation. Ultimately it comes down to the fact that, in Drop, the cleanup code to close the database handle is running in a spawned task, and there is a race condition between that task executing and other connections opening.
  • Character: Barbara, could kind of be anyone
  • Key points or morals (if known):

using named pipes on windows (and windows IPC more generally)

  • Character: Alan
  • Brief summary:
    • Alan is working on a Windows app that uses IPC internally to communicate between processes, named pipes in particular. The existing libraries mostly don't support his needs; he has to roll his own support.
    • At some point, there were multiple versions of winapi in use (an older one from tokio, one from elsewhere), and it caused long compilation times; since resolved.
    • Uses the tokio-named-pipe crate to get a handle, but winds up casting the handle to usize to actually pass it around (not 100% sure of the details here)
  • Key points or morals (if known):
    • Rust's async runtimes are often quite focused on Linux. They won't support a lot of Windows primitives.
  • Sources:

accidentally mixing two runtimes

  • Character: could be any, let's run with Barbara
  • Brief summary: Barbara is using library X, which is based on async-std, then adds library Y, which is based on tokio. She has problems until she learns about how to start the tokio runtime correctly.
  • Key points or morals (if known):
    • It is possible to have two runtimes working at once, but things don't always work by default, and it's not always easy to understand what's going on.
  • Sources:
    • I've personally heard this story from several experienced async Rust devs.

Barbara considers the ecosystem challenges of writing a ?Send executor without providing an entire std-like interface

Brief summary

Background: Barbara considers writing a new futures-compatible executor to fill a niche in the executor landscape
Aspect relevant to wg: Barbara hesitates / decides not to, for reasons that primarily are ecosystem and interop related

I'm writing this as an issue before writing the story because it's not clear to me if this is a distinct story as it is a mirror of a lot of the other stories that already have been well expressed. It may very well be a duplicate.

This is a real-life story: I was reading #87 and realized that a fair-ish[1] futures-compatible multi-threaded executor for ?Send futures would be perfect for web servers. However, there are ecosystem hurdles that make this substantially more challenging than writing the code:

  • Culturally/socially, introducing more executors currently seems like it further fragments a confusing landscape
  • Because async-std is the primary futures-compatible executor, there isn't a good way to communicate about being futures-compatible but not async-std. Other crates that would work on a futures-compatible executor tend to have a tokio feature and an async-std feature, making the education story confusing for new executors. Sometimes the async-std feature pulls in async-std, and sometimes it just pulls in futures/futures-util/futures-lite. Code that pulls in async-std but doesn't spawn is probably fine to mix and match with a new executor, but that's a lot of hidden sharp edges to offer support for. Smol users who are not async-global-executor/async-std users currently have this challenge (to the extent that they exist).
  • Introducing a new standalone executor that doesn't have an entire standard-library associated with it is challenging. It's hard for people to know which crates they could use in conjunction with a standalone executor, since both of the primary executors have taught people that an executor/runtime comes along with kitchen sink std-like libraries. Smol/async-executor/async-global-executor are the exception to this, but that's not currently the dominant model. Code that's written with the assumption of a sprawling library that also has a global executor often spawns tasks without treating that as a "special" boundary that is meaningfully different from anything one might do within an async task / spawned future.
  • This one I'm the least technically sure about: Because async-trait boxes the futures, the user needs to know if their futures are Send (the default) or ?Send. This means it's difficult to use async trait futures in a library context that sometimes will be used in a Send executor but not always, as the actual Send-ness gets erased when it's object-ified.

Shiny opinionated future: Any character can start with a simple/standard executor and swap it out for another one without having to rewrite their entire application. Replacing libraries with ones specific to their new executor will often provide a performance benefit, but can be done over time as needed, not as a mandatory wholesale change when switching executors. Example: Alan starts with async-std's executor and a bunch of executor-independent libraries, writes a substantial application against those types, and then decides he really wants the perf characteristics of tokio, so he switches just the executor and sees a benefit for his application. Later on, he finds some time to replace the networking stack with a tokio-tuned one, and sees further benefits. Everything else is still using the neutral executor-independent libraries that provide lowest-common-denominator performance. Similarly, if in a different codebase Alan gets a compiler warning about Send types but isn't sure that spawn_local has the right perf characteristics for his application, he can swap out the executor and everything else still works. Obviously, either of these switches to application-specific executors wouldn't be as seamless to reverse.

Is it worth writing this as one or more stories?

Optional details

  • (Optional) Which character(s) would be the best fit and why?
    • Alan: the experienced "GC'd language" developer, new to Rust
    • Grace: the systems programming expert, new to Rust
    • Niklaus: new programmer from an unconventional background
    • Barbara: the experienced Rust developer

[1] fair-ish: creates new !Send futures on whichever runtime thread has the least of them, or some other similar strategy, acknowledging that this is more likely to become lopsided than a work-stealing multithreaded executor of Send futures would be, especially if the spawned tasks take an unpredictable wall-time duration. On that note, however, an optional duration_hint function on the Future trait might be useful to executors.

convert FAQ entries from bulleted lists to sections

The FAQ entries today use nested lists, but those are annoying when you want to embed code blocks. A better choice would be to use ### sub-headings (or ####, as the case may be). These would also be linkable.

To make this happen you would basically need to ripgrep through the repo and convert the existing "Frequently Asked Questions" lists.

writing a library that can be reused across many runtimes

  • Character: Barbara
  • Brief summary: Barbara tries to write SLOW in a way that it can be used across runtimes; she tries various approaches, none of which are fully satisfactory
  • Key points or morals (if known):
    • Writing a library that is generic across runtimes is often possible but difficult (see the sketch below)
    • Feature flags are one option; traits are another
    • Wants to find solutions that are zero-cost
    • The most commonly needed features are the async read/write traits, timers, spawning, and opening UDP/TCP sockets
  • Conversations:
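
A hedged sketch of the trait-based option: the library is written against the runtime-neutral futures::io traits instead of naming a runtime (parse_header is an illustrative name):

use futures::io::{AsyncRead, AsyncReadExt};

pub async fn parse_header<R: AsyncRead + Unpin>(reader: &mut R) -> std::io::Result<[u8; 4]> {
    let mut magic = [0u8; 4];
    // Works on any runtime's streams, given an adapter to these traits.
    reader.read_exact(&mut magic).await?;
    Ok(magic)
}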

debugging stack traces

Brief summary:

I don't honestly know! But people keep telling me async stack traces are hard to debug! Help me understand! Send me examples!

Optional details

  • Which character(s) would be the best fit and why?
    • Alan: the experienced "GC'd language" developer, new to Rust
    • Grace: the systems programming expert, new to Rust
    • Niklaus: new programmer from an unconventional background
    • Barbara: the experienced Rust developer
  • (Optional) Which project(s) would be the best fit and why?
    • Maybe the server project? Often it is necessary to debug problems with minimal information.
  • (Optional) What are the key points or morals to emphasize?

The Golden Rule - Don’t Block the Event Loop

Brief summary

As a user with a Java background, I'm used to the Netty/Vert.x approach to async where blocking the event loop is very bad, and almost certainly results in a performance hit, or the application being stuck.
This is really well detailed in the Vert.x doc, in The Golden Rule - Don’t Block the Event Loop paragraph:

We already know that the Vert.x APIs are non blocking and won’t block the event loop, but that’s not much help if you block the event loop yourself in a handler.

If you do that, then that event loop will not be able to do anything else while it’s blocked. If you block all of the event loops in Vertx instance then your application will grind to a complete halt!

So don’t do it! You have been warned.

Examples of blocking include:

  • Thread.sleep()

  • Waiting on a lock

  • Waiting on a mutex or monitor (e.g. synchronized section)

  • Doing a long lived database operation and waiting for a result

  • Doing a complex calculation that takes some significant time.

  • Spinning in a loop

If any of the above stop the event loop from doing anything else for a significant amount of time then you should go immediately to the naughty step, and await further instructions.

So…​ what is a significant amount of time?

How long is a piece of string? It really depends on your application and the amount of concurrency you require.

If you have a single event loop, and you want to handle 10000 http requests per second, then it’s clear that each request can’t take more than 0.1 ms to process, so you can’t block for any more time than that.

The maths is not hard and shall be left as an exercise for the reader.

If your application is not responsive it might be a sign that you are blocking an event loop somewhere. To help you diagnose such issues, Vert.x will automatically log warnings if it detects an event loop hasn’t returned for some time. If you see warnings like these in your logs, then you should investigate.

Thread vertx-eventloop-thread-3 has been blocked for 20458 ms

Vert.x will also provide stack traces to pinpoint exactly where the blocking is occurring.

I've seen this message pop up in several Vert.x projects, always for good reasons. In other projects, not necessarily using Vert.x, I've found it to be a good habit to add checks along those lines:

  • When defining blocking functions, add assertions on the current thread (Thread.currentThread()) to panic early.
  • Add timers, and log warnings when tasks are running for more than a few ms.

I've never seen this kind of error message in Rust, and I'm not confident in my ability to use the right APIs in the right context. How can I make sure this doesn't come and bite me?
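
One hand-rolled possibility, sketched here under the assumption of a tokio runtime (thresholds and names are illustrative; on a multi-threaded runtime this is only a rough signal, since other worker threads may keep making progress):

use std::time::{Duration, Instant};

async fn event_loop_watchdog() {
    let period = Duration::from_millis(100);
    loop {
        let before = Instant::now();
        tokio::time::sleep(period).await;
        // If the wakeup arrives much later than requested, something
        // held up the executor -- possibly a blocking call.
        let overshoot = before.elapsed().saturating_sub(period);
        if overshoot > Duration::from_millis(50) {
            eprintln!("warning: executor stalled ~{overshoot:?}; blocking call somewhere?");
        }
    }
}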

Optional details

  • (Optional) Which character(s) would be the best fit and why?
    • Alan: the experienced "GC'd language" developer, new to Rust
    • Grace: the systems programming expert, new to Rust
    • Niklaus: new programmer from an unconventional background
    • Barbara: the experienced Rust developer
  • (Optional) Which project(s) would be the best fit and why?
    • All, potentially?
    • As a library, this may serve as a safety guard when invoking user-provided methods.
  • (Optional) What are the key points or morals to emphasize?
    • Nothing prevents async code from calling sync code — which is fine, but this will hurt performance in some cases.
    • How do we make it easier for users to detect when this happens, and why their whole app is slow?
    • How do we help libraries integrate and surface these warnings?

when cancellation goes wrong

  • Brief summary: I've heard a lot of people discuss hazards related to task cancellation -- I think this is because any await point is effectively a cancellation point, and people don't always write "cancellation-safe" code. I'd love to hear more concrete examples of where this happens.
  • Character: not sure!
  • Key points or morals (if known): not sure!

writing an I/O-related library that doesn't care about whether it's used in a sync or async context

  • Character: Barbara (almost)
  • Brief summary: Barbara tries to write a library that parses a particular kind of format in a streaming fashion that works with any implementation of std::io::Read or std::io::Write, and wants it to work in an async context with minimal fuss.
  • Key points or morals (if known):
    • Library author may not know much (or anything) about Async Rust.
    • There is nothing inherently "sync" or "async" about the actual details of the format. By virtue of working on things like the Read/Write traits, it works on streams. (Where "stream" is used colloquially here.)
    • Depending on specific async runtimes with specific async versions of the Read/Write traits seems inelegant and doesn't seem to scale with respect to maintenance effort.
    • Async Runtimes may have adapters for working with Read/Write impls, but it may come with additional costs that one might not want to pay.
    • Making such code work regardless of if it's used in a sync or async context should not require major effort or significant code duplication.

One example here is the flate2 crate. To solve this problem, it has an optional dependency on a specific async runtime, tokio, with impls for AsyncRead and AsyncWrite that are specific to that runtime.

Another example is the csv crate. The problem has come up a few times:

Its author (me) has not wanted to wade into these waters because of the aforementioned problems. As a result, folks are maintaining a csv-async fork to make it work in an async context. This doesn't seem ideal.

This is somewhat related to #45. For example, AsyncRead/AsyncWrite traits shared across all async runtimes might do a lot to fix this problem. But I'm not sure. Fundamentally, this, to me, is about writing I/O adapters that don't care whether they're used in a sync or async context, with minimal fuss, rather than just about trying to abstract over all async runtimes.
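
One shape that sidesteps the problem entirely is the "sans I/O" style: the parser consumes byte slices and never performs I/O itself, so both sync (std::io::Read) and async (AsyncRead) callers can drive it. A hedged sketch with illustrative names, where counting newlines stands in for a real format:

pub struct StreamingParser {
    records: u64,
}

impl StreamingParser {
    pub fn new() -> Self {
        StreamingParser { records: 0 }
    }

    /// Feed the next chunk of input; returns records completed so far.
    /// The caller -- sync or async -- owns all the reading.
    pub fn feed(&mut self, chunk: &[u8]) -> u64 {
        self.records += chunk.iter().filter(|&&b| b == b'\n').count() as u64;
        self.records
    }
}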

Apologies in advance if I've filled out this issue incorrectly. I tried to follow the others, but maybe I got a bit too specific! Happy to update it as appropriate. Overall, I think this is a really wonderful approach to gather feedback. I'm completely blown away by this!

Also, above, I said this was "almost" Barbara, because this story doesn't actually require the programmer to write or even care about async code at all.

confusion specific to JavaScript

Brief summary

Alan is accustomed to JavaScript promises. He has design patterns in mind that don't work in Rust; he also gets confused by specific things around Rust futures. What are they?

Optional details

  • (Optional) Which character(s) would be the best fit and why?
    • Alan: the experienced "GC'd language" developer, new to Rust
    • Grace: the systems programming expert, new to Rust
    • Niklaus: new programmer from an unconventional background
    • Barbara: the experienced Rust developer

Lack of polished common implementations of basic async helpers

Brief summary

Alan is building a library but he is having a hard time figuring out how to do certain basic pieces of functionality. For example:

  • How does he drive a collection of futures to completion in parallel?
  • How does he race futures and do something depending on which finishes first?

These bits of functionality require a 3rd party crate? But which one?! futures, futures-util, futures-io? Some functionality he needs seems to be in tokio, but he's not using tokio! Grrrr...

Optional details

  • (Optional) Which character(s) would be the best fit and why?
    • [x] Alan: the experienced "GC'd language" developer, new to Rust
      • Alan is probably working on a high level bit of business logic and just wants nice helpers for gluing his logic together.
    • Grace: the systems programming expert, new to Rust
    • Niklaus: new programmer from an unconventional background
    • Barbara: the experienced Rust developer
  • (Optional) What are the key points or morals to emphasize?
    • It's a common occurrence that when creating async libraries or applications, basic functionality is spread across many different crates. Even assuming you know these crates exist (which might not be the case for newcomers to async Rust), you often still need to search through all of them to determine which one has the functionality you need.
      • For example, racing multiple futures against one another. The futures crate happens to have futures::select!, but it can often be a bit hard to use. tokio::select! can be nicer but what if you don't want to use the tokio runtime?

learning async rust for the first time

prep for shiny future

We are going to start accepting shiny future stories! We need to do some prep work:

Alan becomes a better C# programmer

First of all, I'm not quite sure if these kinds of stories are relevant: most of the status quo stories highlight the issues of the current state, but there are also good things that came out of it, which might be important to highlight in order not to lose them.
If these kinds of stories are useful, I can try to write this one out myself, but I first wanted to check.

Brief summary

Alan(/Niklaus?) has been writing asynchronous C# code for a while now. Plop down an await here, change the return type to Task, and so on. His code works and seems to perform fine.

On his journey through async Rust, he learned about some of the internals about Futures/Executors/... and gained a deeper level of understanding about what asynchronous code really is.

Now when writing async C#, he reasons more about what is really happening, and has become the "go-to async C# expert" of his team, not necessarily because he learned more about async C#, but because he learned about async Rust.

Optional details

  • What are the key points or morals to emphasize?
    • Rust is known for its steep learning curve, but (I think) many people can agree that once you get past that initial curve, you grow as a developer. The same might hold true for async Rust. Just as the borrow checker is something you need to learn about if you want to write decent Rust code, there might be an "async borrow checker", i.e. something you just need to learn in order to effectively utilize async Rust (I do not mean a literal separate borrow checker for asynchronous Rust code).

Yield-Safe Lint RFC

Proposal: to add a lint similar to must-use that can help to identify values that ought not to be held across an await (or other yield), such as mutex guards, and warn about them.

per-thread executors

Brief summary

I've heard a number of people cite per-thread executors as a good model for high efficiency. There are issues, like Context implementing Send + Sync. Are people doing this? Does it work? :)

Optional details

  • (Optional) Which character(s) would be the best fit and why?
    • Alan: the experienced "GC'd language" developer, new to Rust
    • Grace: the systems programming expert, new to Rust
    • Niklaus: new programmer from an unconventional background
    • Barbara: the experienced Rust developer
  • (Optional) Which project(s) would be the best fit and why?
    • List some projects here.
  • (Optional) What are the key points or morals to emphasize?
    • Write some morals here.

processing urls in batches

Basic summary

As a beginner in Rust, I would like to add to this thread with our real-life experience. We are currently facing issues which make me relate to this story (and which are preventing us from switching to Rust):

We are trying to rewrite some of our services from Python to Rust and are looking to achieve the following:

  1. Read a bunch of URLs (size varies, but about 1000 per batch)
  2. Do an HTTP GET request for each URL asynchronously
  3. Log the failures and process the results

What we have not succeeded in doing so far:

  1. Send the requests in batches. If we send all 1000 requests at the same time, our server closes the connection and the process panics. Ideally we could buffer them to send at most 50 at a time. We could split the batches manually, but we hoped the HTTP client or the FuturesUnordered container would handle that for us (see the sketch after this list).
  2. Handle errors. Failures should be logged and should not crash the process. We plan on using tracing-rs for the logging as it is part of the tokio stack.
  3. Implement a Fibonacci or exponential retry mechanism on failure.
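
A hedged sketch of items 1 and 2, assuming reqwest and the futures crate (retry logic from item 3 would wrap the closure body):

use futures::stream::{self, StreamExt};

async fn fetch_batch(urls: Vec<String>) {
    stream::iter(urls)
        .map(|url| async move {
            match reqwest::get(&url).await {
                Ok(resp) => println!("{url}: {}", resp.status()),
                Err(err) => eprintln!("{url} failed: {err}"), // log, don't panic
            }
        })
        .buffer_unordered(50) // at most 50 requests in flight at once
        .for_each(|_| async {})
        .await;
}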

For reference, the stackoverflow question where I was looking for help.

Originally posted by @rgreinho in #95 (comment)

debugging live lock-ish issues between services using futures

Brief Summary

One challenge is how to debug livelock-ish issues between services using futures. On Fuchsia, pretty much everything is asynchronous. So when you have some client interacting with some other service, they both have executors driving the futures. So:

  • client starts an executor
  • client adds a future to the executor
  • The future makes a call to some service.
  • the executor suspends the future and waits on a kernel object for any channel to receive a message.
  • For whatever reason the server does not reply, nor does it close the channel.

While Fuchsia has some tools to help debug issues like this, the main technique we have for triaging livelocks like this is to set timeouts on our calls and annotate most call sites with logs. Inevitably we forget some locations, and it becomes difficult to understand what happened.

My dream for debugging issues like this would be some mechanism where we could ask the executor to wake up every pending future and dump a backtrace of the future stack. Something like that would make it easy to detect when a future is stuck waiting for something to happen.

Originally posted by @erickt in #69 (comment)

consumer async libraries and too many choices

Basic summary

@nikomatsakis I'm not too sure if that story captures my experience with Rust. I've not published a library yet, but I have consumed all the most popular ones. It seems that Alan and Barbara are "producers" (sorry, probably not a good word to describe people) of libraries.

My urge to write in this thread comes from the point of view of the "consumer". It's very likely that the library Alan and Barbara are writing has a limited scope. That in itself is a luxury, especially if they aim for one thing and one thing only. "Consumers" of libraries, on the other hand, will have a very long list of dependencies to work with: http client, http server, pub/sub routines, websockets, ORMs and database connections, caching in remote stores, etc.

In that regard, it surprised me to see the conversation about whether to use async or not. In the story I have in mind, it's an absolute given. No one in their right mind would write a synchronous web server like it's the nineties.

@eminence I'd be more than happy to write a sub-story. Maybe sub-stories could be linked to more than one (trust and http client)? I think my story would begin with something like:

  1. Someone visits https://www.arewewebyet.org/
  2. As a result, s/he thinks Rust is great to write a REST API because async support seems to be ready in all those libraries
  3. And then similarity with story x
  4. But also something different

Originally posted by @cortopy in #95 (comment)
