
Comments (8)

agentzh commented on June 9, 2024

rate=20 & burst=30 means that only requests exceeding the combined 50 r/s rate will be rejected.

I suggest you take a closer look at the leaky bucket algorithm used by both this Lua library and nginx's standard ngx_limit_req module.

from lua-resty-limit-traffic.
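For readers following the leaky-bucket suggestion above, the accounting behind this style of limiter can be sketched in a few lines. This is a simplified model of what ngx_limit_req does (an "excess" counter drains continuously at `rate` and a request is rejected when the excess it would create goes above `burst`), not the module's actual shared-dict implementation; the class and parameter names are made up for illustration:

```python
class LeakyBucketLimiter:
    """Simplified leaky-bucket accounting in the style of ngx_limit_req.

    `excess` measures how many requests we are "ahead" of the allowed
    rate; it drains continuously at `rate` requests per second.
    """

    def __init__(self, rate, burst):
        self.rate = rate      # allowed requests per second
        self.burst = burst    # excess tolerated above the rate
        self.excess = 0.0
        self.last = None      # arrival time of the last accepted request

    def incoming(self, now):
        """Accept or reject a request arriving at time `now` (seconds)."""
        if self.last is None:
            self.last = now   # first request: the bucket starts empty
            return "accept"
        # Drain the bucket for the elapsed time, then add this request's drop.
        excess = max(self.excess - self.rate * (now - self.last), 0.0) + 1.0
        if excess > self.burst:
            return "reject"   # state left unchanged, as in ngx_limit_req
        self.excess, self.last = excess, now
        return "accept"


# 31 back-to-back requests, 1 ms apart, against rate=20, burst=10:
lim = LeakyBucketLimiter(rate=20, burst=10)
results = [lim.incoming(i * 0.001) for i in range(31)]
print(results.count("accept"), results.count("reject"))  # → 11 20
```

Note how the mix of accepts and rejects depends on millisecond-level spacing, not on which wall-clock second the requests fall into.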

agentzh commented on June 9, 2024

Seems like you misunderstand rate limiting. Your shell command simply issues requests serially, as fast as possible. The number of rejected requests has no direct relationship with your burst setting, because you also have to take into account the speed at which your server serves the requests and your client's own overhead.


kchsieh commented on June 9, 2024

@agentzh I don't believe the performance of my desktop is the issue. FYI, I am using my desktop as both the server and the client. Your documentation states:

                -- limit the requests under 200 req/sec with a burst of 100 req/sec,
                -- that is, we delay requests under 300 req/sec and above 200
                -- req/sec, and reject any requests exceeding 300 req/sec.

So my setting of:

local lim, err = limit_req.new("my_limit_req_store", 20, 10)

should just delay the 20th to 30th requests, but that doesn't seem to be the case.


agentzh commented on June 9, 2024

No, your understanding of the request rate is wrong. The rate must be fulfilled even within a single second. The time unit is irrelevant to the rate.


agentzh commented on June 9, 2024

The rate defines the time-interval constraint between every two successive requests; it is not calculated per second at all.

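The point about intervals can be made concrete with a one-line calculation: at rate = 20 r/s the conforming spacing between two successive requests is 1/20 s, i.e. 50 ms. A serial curl loop fires far faster than that, so most of its requests exceed the rate even though all of them land within the same wall-clock second:

```python
rate = 20                  # requests per second
interval = 1.0 / rate      # minimum conforming gap between two requests
print(interval)            # → 0.05 (seconds, i.e. 50 ms)
```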

kchsieh commented on June 9, 2024

Okay, I am really confused now. Just to be complete, I am pasting the access.log below to show that all 31 hits are registered within the same second:

127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 503 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"
127.0.0.1 - - [24/Oct/2016:14:20:02 -0700] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.30.0"

Maybe I should rephrase my original question: I suspect the documentation is stated incorrectly. If you set it to:

 local lim, err = limit_req.new("my_limit_req_store", 20, 30)

Then the behavior should match what was stated in the documentation.


agentzh commented on June 9, 2024

@kchsieh The access logs make little sense here, since we check at the level of milliseconds while the timestamps in the access logs have only second-level precision.


unext-wendong commented on June 9, 2024

@agentzh Just want to confirm whether my understanding of the usage of resty.limit.req is correct.

I just roughly went through the wiki page on the leaky bucket and also ngx_limit_req's documentation. It seems resty.limit.req uses a different method than ngx_limit_req to determine whether a request conforms to the defined limit.

In ngx_limit_req's case, the parameter delay + 1 defines the size of the bucket, and the parameter rate defines the rate at which water leaks from the bucket. New water drops (i.e. requests) are not taken when the bucket is full, i.e. when the bucket already holds delay + 1 water drops.

For the test output here, I'm trying to understand why the second request was rejected. It seems that in resty.limit.req's case there is no bucket size; instead, it decides whether to accept a water drop by examining the rate at which it arrives, e.g. by checking the interval since the previous water drop. If the arrival rate exceeds rate + burst, the drop is discarded.

Is this understanding correct? Or is there another way to explain this test output?

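A hypothetical walk-through, using the same excess bookkeeping ngx_limit_req is described with (the excess drains continuously at `rate` and each arrival adds one drop), shows why a second request arriving shortly after the first can be rejected even at a very low rate. The numbers below are illustrative, not the values from the linked test output:

```python
# Hypothetical settings: 2 r/s with no burst allowance.
rate, burst = 2, 0
excess = 0.0           # first request: accepted, the bucket starts empty
elapsed = 0.010        # second request arrives 10 ms later
# Drain the bucket for the elapsed time, then add the new request's drop.
excess = max(excess - rate * elapsed, 0.0) + 1.0
print(excess > burst)  # → True: the second request is rejected
```

With burst = 0, any two requests spaced closer than 1/rate seconds apart put the excess above zero, so the second one is rejected; the one-second averaging window never enters into it.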
