packtpublishing / asynchronous-programming-in-rust

Asynchronous Programming in Rust, published by Packt

License: MIT License

Rust 99.82% Dockerfile 0.15% Shell 0.03%

asynchronous-programming-in-rust's Introduction

Asynchronous Programming in Rust


This is the code repository for Asynchronous Programming in Rust, published by Packt.

Learn asynchronous programming by building working examples of futures, green threads, and runtimes

What is this book about?

Explore the nuances of transitioning from high-level languages to Rust with this book. Navigate potential frustrations arising from differences in modeling asynchronous program flow and recognize the need for a fundamental understanding of the topic.

This book covers the following exciting features:

  • Explore the essence of asynchronous program flow and its significance
  • Understand the difference between concurrency and parallelism
  • Gain insights into how computers and operating systems handle concurrent tasks
  • Uncover the mechanics of async/await
  • Understand Rust’s futures by implementing them yourself
  • Implement green threads from scratch to thoroughly understand them

If you feel this book is for you, get your copy today!

Instructions and Navigations

All of the code is organized into folders. For example, Chapter02.

The code will look like the following:

pub trait Future {
    type Output;
    fn poll(&mut self) -> PollState<Self::Output>;
}
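
To make the shape of this trait concrete, here is a minimal, self-contained sketch (not code from the book; the PollState variants and the Countdown type are assumptions made purely for illustration) of a hand-rolled future that becomes ready after being polled a few times:

// A hypothetical PollState and a toy future, only to illustrate the trait above.
enum PollState<T> {
    Ready(T),
    NotReady,
}

pub trait Future {
    type Output;
    fn poll(&mut self) -> PollState<Self::Output>;
}

// A future that needs to be polled three times before it resolves.
struct Countdown(u32);

impl Future for Countdown {
    type Output = &'static str;
    fn poll(&mut self) -> PollState<Self::Output> {
        if self.0 == 0 {
            PollState::Ready("done")
        } else {
            self.0 -= 1;
            PollState::NotReady
        }
    }
}

fn main() {
    let mut fut = Countdown(3);
    // A trivial "runtime": poll in a loop until the future is ready.
    loop {
        match fut.poll() {
            PollState::Ready(msg) => {
                println!("{msg}");
                break;
            }
            PollState::NotReady => println!("not ready yet"),
        }
    }
}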

Following is what you need for this book:

This book is for programmers who want to enhance their understanding of asynchronous programming, especially those experienced in VM’ed or interpreted languages like C#, Java, Python, JavaScript, and Go. If you work with C or C++ but have had limited exposure to asynchronous programming, this book serves as a resource to broaden your knowledge in this area. Although the examples are predominantly in Rust, the intricacies of Rust’s futures are covered in detail. So, anyone with a keen interest in learning Rust or with working knowledge of Rust will be able to get the most out of this book.

With the following software and hardware list, you can run all of the code files present in the book (Chapters 1 to 10).

Software and Hardware List

Chapter    Software required               OS required
1-10       Rust (version 1.51 or later)    Windows, macOS, or Linux

Errata

  • Page 58 (Paragraph 3, line 1): create should be crate
  • Page 10 (Paragraph 8, line 3): 240 beers should be 180 beers
  • Page 10 (Paragraph 9, line 1): 240 beers should be 180 beers
  • Page 10 (Paragraph 9, line 4): 180 beers should be 170 beers
  • Page 10 (Paragraph 10, line 1): 360 beers should be 340 beers
  • Page 11 (Paragraph 2, line 2): 230 orders should be 175 orders
  • Page 11 (Paragraph 2, line 3): 460 beers should be 350 beers
  • Page 152 (Paragraph 3, line 1): we should be We
  • Page 163 (Paragraph 4, line 2): The next coroutine/wait function is read_requests should be The next coroutine/wait function is requests
  • Page 17 (Paragraph 3, line 2): dye should be die

Get to Know the Author

Carl Fredrik Samson is a popular technical writer, and his favorite topics to write about are asynchronous programming and Rust. Over a period of three years, Carl set out to cover topics about asynchronous programming that he felt were severely under-explained, and tried to explain them in an informal and easy-to-understand manner. The bits and pieces he wrote were popular and were translated into several languages. Some even ended up as parts of the official Asynchronous Programming in Rust book. Now, he has decided to put his combined works and knowledge into a book of its own. Carl has programmed since the early 1990s, holds a Master's in Strategy and Finance, and has written production software both for his own business and as a hobby for over a decade.

asynchronous-programming-in-rust's People

Contributors

cfsamson, chasing1020, joseluis, kpackt, kquinsland, npuichigo, psalm842, rajdeep-packt, vbauerster, yagehu


asynchronous-programming-in-rust's Issues

ch01 - compilation error on aarch64 (Mac)

I'm compiling the first code example in the book on a Mac, and getting the "expected compatible register or logical immediate" error for the assembly code.

I searched for the error string and found people discussing similar errors specifically on aarch64.
One of the hits was a Stack Overflow article with an answer suggesting making clang use the GNU assembler rather than its integrated one. I'll do some more research and update the description if I figure it out or make more progress.

cargo run --package ch1 --bin ch1 
error: expected compatible register or logical immediate
  --> src/main.rs:20:15
   |
20 |         asm!("mov {0}, [{1}]", out(reg) res, in(reg) ptr);
   |               ^
   |
note: instantiated into assembly here
  --> <inline asm>:1:10
   |
1  |     mov x8, [x0]
   |             ^

error: could not compile `ch1` (bin "ch1") due to 1 previous error
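
For what it's worth, the x86-64 template mov {0}, [{1}] is not valid AArch64 assembly; the equivalent load there is ldr. Below is a hedged sketch of an arch-gated variant (an assumption on my part, not the book's published fix, and it only covers these two architectures):

use std::arch::asm;

// Hypothetical arch-gated version of the dereference example from ch01.
fn dereference(ptr: *const usize) -> usize {
    let res: usize;
    unsafe {
        // x86-64 uses `mov dst, [src]` to load through a pointer...
        #[cfg(target_arch = "x86_64")]
        asm!("mov {0}, [{1}]", out(reg) res, in(reg) ptr);
        // ...while AArch64 spells the same load `ldr dst, [src]`.
        #[cfg(target_arch = "aarch64")]
        asm!("ldr {0}, [{1}]", out(reg) res, in(reg) ptr);
    }
    res
}

fn main() {
    let value: usize = 4711;
    println!("{}", dereference(&value));
}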

Clarification in "parallels to process economics"

In Chapter 1, Concurrency versus parallelism, under "Let's draw some parallels to process economics", "Alternative 3", it's written:

[...] you calculate that they now only just over 20 seconds on an order. You've basically eliminated all the waiting. Your theoretical throughput is now 240 beers per hour.

If it's 20 seconds per order, shouldn't the theoretical throughput be (60 * 60) / 20 = 180 beers? Am I missing something?
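
For the record, a quick check of the reporter's arithmetic (plain arithmetic, nothing from the book's code):

fn main() {
    // At roughly 20 seconds per order, one hour of work gives
    // 3600 / 20 = 180 beers, which matches the errata entries above.
    let seconds_per_order = 20;
    let beers_per_hour = 60 * 60 / seconds_per_order;
    assert_eq!(beers_per_hour, 180);
    println!("theoretical throughput: {beers_per_hour} beers/hour");
}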

Incorrect output in chapter 4 examples

When I run the a-epoll example from Chapter 4, it only outputs three responses before finishing:

RECEIVED: Event { events: 1, epoll_data: 4 }
HTTP/1.1 200 OK
content-length: 9
connection: close
content-type: text/plain; charset=utf-8
date: Mon, 19 Feb 2024 18:27:00 GMT

request-4
------

RECEIVED: Event { events: 1, epoll_data: 3 }
HTTP/1.1 200 OK
content-length: 9
connection: close
content-type: text/plain; charset=utf-8
date: Mon, 19 Feb 2024 18:27:01 GMT

request-3
------

RECEIVED: Event { events: 1, epoll_data: 2 }
HTTP/1.1 200 OK
content-length: 9
connection: close
content-type: text/plain; charset=utf-8
date: Mon, 19 Feb 2024 18:27:02 GMT

request-2
------

FINISHED

The behavior is the same in b-epoll-mio, with three responses handled before the program finishes.

I've investigated it and it appears the issue is that after the buffer for the stream has been drained, I'm receiving another event for that stream with an already empty buffer, which immediately falls through to the Ok(n) if n == 0 match arm and causes handled_events to be incremented an extra time.

I'm running the examples from Ubuntu 20.04.4 LTS in WSL. For good measure, here's my WSL version information:

> wsl --version
WSL version: 2.0.9.0
Kernel version: 5.15.133.1-1
WSLg version: 1.0.59
MSRDC version: 1.2.4677
Direct3D version: 1.611.1-81528511
DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
Windows version: 10.0.22621.3155
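
Here is a minimal sketch of the guard the reporter describes, i.e. only counting a stream as handled the first time it drains (not the book's code; mark_handled, the stream index, and the counter are hypothetical names used for illustration):

use std::collections::HashSet;

// Count a stream as handled only the first time its read side is drained.
fn mark_handled(handled_ids: &mut HashSet<usize>, index: usize, handled_events: &mut usize) {
    if handled_ids.insert(index) {
        *handled_events += 1;
    }
}

fn main() {
    let mut handled_ids = HashSet::new();
    let mut handled_events = 0;
    // Simulate receiving a second (spurious) event for stream 2 after it drained.
    mark_handled(&mut handled_ids, 2, &mut handled_events);
    mark_handled(&mut handled_ids, 2, &mut handled_events);
    assert_eq!(handled_events, 1); // the stream is only counted once
    println!("handled_events = {handled_events}");
}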

Correct some Clippy lints

When executing Clippy in the root directory with this command:

fd Cargo.toml --exec cargo clippy --manifest-path

it finds some lints. If you find it useful, maybe we could correct some of them. Of course, some of them are legitimate, such as explicitly setting an address to u64 instead of usize because we only want to support x86_64.

Chapter 7 typos on page 163

The errata on GitHub is incorrect here:

Page 163 (Paragraph 4, line 2): The next coroutine/wait function is read_requests should be The next coroutine/wait function is requests

It should be request, singular, since that's the function name.

The same read_requests -> request function-name error was made in paragraph 2: "async_main stores a set of coroutines created by read_requests in a ..." should be "async_main stores a set of coroutines created by request in a ..."

Chapter 4 code not working inside aarch64 VM

Hi!

First of all huge thanks for this awesome book!

I'm working on MacBook Pro with M1 Pro, so I'm using OrbStack to run Linux VM for code from chapter 4.

When I run this code in the aarch64 VM, I get this error:

[ty3uk@fedora a-epoll]$ cargo run                                                                                                                                               
   Compiling a-epoll v0.1.0 (/home/ty3uk/Asynchronous-Programming-in-Rust/ch04/a-epoll)                                                                                         
    Finished dev [unoptimized + debuginfo] target(s) in 0.49s                                                                                                                   
     Running `target/debug/a-epoll`                                                                                                                                             
thread 'main' panicked at src/main.rs:30:19:                                                                                                                                    
index out of bounds: the len is 5 but the index is 43680                                                                                                                        
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

But when I run it in an amd64 VM, it runs perfectly:

[ty3uk@fedora async-rs]$ cargo run
   Compiling async-rs v0.1.0 (/home/ty3uk/async-rs)
    Finished dev [unoptimized + debuginfo] target(s) in 2.47s
     Running `target/debug/async-rs`
RECEIVED: Event { events: 1, epoll_data: 4 }
HTTP/1.1 200 OK
content-type: text/plain;charset=utf-8
Date: Wed, 21 Feb 2024 12:58:22 GMT
Content-Length: 9

request-4
------

RECEIVED: Event { events: 1, epoll_data: 3 }
HTTP/1.1 200 OK
content-type: text/plain;charset=utf-8
Date: Wed, 21 Feb 2024 12:58:23 GMT
Content-Length: 9

request-3
------

RECEIVED: Event { events: 1, epoll_data: 2 }
HTTP/1.1 200 OK
content-type: text/plain;charset=utf-8
Date: Wed, 21 Feb 2024 12:58:24 GMT
Content-Length: 9

request-2
------

RECEIVED: Event { events: 1, epoll_data: 1 }
HTTP/1.1 200 OK
content-type: text/plain;charset=utf-8
Date: Wed, 21 Feb 2024 12:58:25 GMT
Content-Length: 9

request-1
------

RECEIVED: Event { events: 1, epoll_data: 0 }
HTTP/1.1 200 OK
content-type: text/plain;charset=utf-8
Date: Wed, 21 Feb 2024 12:58:26 GMT
Content-Length: 9

request-0
------

FINISHED

What could be the reason? A different implementation of epoll on aarch64 versus amd64? Or something else? Just curious.
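
One possible explanation, offered purely as an assumption and not confirmed in this thread: on Linux, the kernel packs epoll_event only on x86-64, so an FFI struct that is unconditionally #[repr(packed)] would read epoll_data at the wrong offset on aarch64, which could produce a garbage index like the one above. A sketch of an arch-aware definition (field names taken from the output above):

use std::mem::size_of;

// Hypothetical arch-aware mirror of the kernel's epoll_event:
// packed on x86-64 (as the kernel defines it there), naturally
// aligned everywhere else.
#[repr(C)]
#[cfg_attr(target_arch = "x86_64", repr(packed))]
pub struct Event {
    pub events: u32,
    pub epoll_data: usize,
}

fn main() {
    // Expected: 12 bytes on x86-64 (packed), 16 bytes on aarch64 (padded).
    println!("size_of::<Event>() = {}", size_of::<Event>());
}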

Can you help explain why lateout is needed in the raw syscall in ch03?

I checked the later chapters and the asm documentation and still cannot understand why lateout is used for rsi and rdx here.

Besides, the description "We do that by telling the compiler that there will be some unspecified data (indicated by the underscore) written to these registers." is not very clear, since out means an undefined value is allocated at the start of the asm code, while the underscore means the value is discarded at the end of the asm code, so the description confuses me.

use std::arch::asm;

#[cfg(target_os = "linux")]
#[inline(never)]
fn syscall(message: String) {
    let msg_ptr = message.as_ptr();
    let len = message.len();

    unsafe {
        asm!(
            "mov rax, 1",      // system call 1 is write on Linux
            "mov rdi, 1",      // file handle 1 is stdout
            "syscall",         // call kernel, software interrupt
            in("rsi") msg_ptr, // address of string to output
            in("rdx") len,     // number of bytes
            out("rax") _, out("rdi") _, lateout("rsi") _, lateout("rdx") _
        );
    }
}
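
Not an authoritative answer, just my reading of the asm! documentation (the write_stdout wrapper below is made up, and the extra rcx/r11 clobbers are my addition): out reserves a register for the whole asm block, while lateout only claims it for the output phase, after all inputs have been read, so it can share a register with an in operand. rax and rdi are written at the start of the template and carry no inputs, hence plain out; rsi and rdx also carry inputs, hence lateout. The underscore just says the clobbered value is discarded rather than stored anywhere.

// A commented restatement of the same pattern (x86-64 Linux only).
#[cfg(all(target_os = "linux", target_arch = "x86_64"))]
fn write_stdout(msg: &str) {
    use std::arch::asm;
    unsafe {
        asm!(
            "mov rax, 1",              // written early, no input lives in rax -> `out`
            "mov rdi, 1",              // written early, no input lives in rdi -> `out`
            "syscall",
            in("rsi") msg.as_ptr(),    // rsi carries an input...
            in("rdx") msg.len(),       // ...and so does rdx...
            out("rax") _, out("rdi") _,
            // ...so they are declared clobbered with `lateout`, which promises
            // the write happens only after all inputs have been read and can
            // therefore share the register with the `in` operand above.
            lateout("rsi") _, lateout("rdx") _,
            // `syscall` itself also clobbers rcx and r11 (return RIP/RFLAGS).
            out("rcx") _, out("r11") _,
        );
    }
}

fn main() {
    #[cfg(all(target_os = "linux", target_arch = "x86_64"))]
    write_stdout("hello from a raw write syscall\n");
}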

Chapter 7 is impossible to troubleshoot.

Because we introduce invalid syntax before we run the corofy tool, there is no way to know that our code is correct. Once we run the corofy tool, it becomes even more difficult to troubleshoot. There has to be a better way. Maybe corofy could do some kind of cargo check before it does all of its magic?

I'm doing the book from front to back, typing the code myself...which has led me to finding, troubleshooting, correcting, and reporting many errata...and because of that, my code isn't exactly the same as the code in your repos. This chapter has a few smallish problems...but by the time we run corofy, it was impossible to follow along. I was finally able to get my code to compile by retyping sections of it. Before retyping, my code had compiled successfully at each step, so there is something wrong somewhere. Once I finish this chapter, if I have time, I will try to retype all the code from start to finish again and see if I can track it down, but I think this chapter could definitely use a closer look.

Typo on page 17

"This part of the cpu is often etched on the same dye" I think you mean "die"
