lua-resty-limit-traffic's Introduction

Name

lua-resty-limit-traffic - Lua library for limiting and controlling traffic in OpenResty/ngx_lua

Table of Contents

  • Status
  • Synopsis
  • Description
  • Installation
  • Community
  • English Mailing List
  • Chinese Mailing List
  • Bugs and Patches
  • Author
  • Copyright and License
  • See Also

Status

This library is already usable though still highly experimental.

The Lua API is still in flux and may change in the near future without notice.

Synopsis

# demonstrate the usage of the resty.limit.req module (alone!)
http {
    lua_shared_dict my_limit_req_store 100m;

    server {
        location / {
            access_by_lua_block {
                -- we could put the require() and new() calls in our own Lua
                -- modules to save overhead (see the sketch after this config
                -- block). here we put them inline just for convenience.

                local limit_req = require "resty.limit.req"

                -- limit requests to 200 req/sec with a burst of 100 extra
                -- req/sec; that is, delay requests arriving between 200 and
                -- 300 req/sec and reject any request exceeding 300 req/sec.
                local lim, err = limit_req.new("my_limit_req_store", 200, 100)
                if not lim then
                    ngx.log(ngx.ERR,
                            "failed to instantiate a resty.limit.req object: ", err)
                    return ngx.exit(500)
                end

                -- the following call must be per-request.
                -- here we use the remote (IP) address as the limiting key
                local key = ngx.var.binary_remote_addr
                local delay, err = lim:incoming(key, true)
                if not delay then
                    if err == "rejected" then
                        return ngx.exit(503)
                    end
                    ngx.log(ngx.ERR, "failed to limit req: ", err)
                    return ngx.exit(500)
                end

                if delay >= 0.001 then
                    -- the 2nd return value holds the number of excess requests
                    -- per second for the specified key. for example, number 31
                    -- means the current request rate is at 231 req/sec for the
                    -- specified key.
                    local excess = err

                    -- this request exceeds 200 req/sec but stays below
                    -- 300 req/sec, so we intentionally delay it a bit to
                    -- conform to the 200 req/sec rate.
                    ngx.sleep(delay)
                end
            }

            # content handler goes here. if it is content_by_lua, then you can
            # merge the Lua code above in access_by_lua into your content_by_lua's
            # Lua handler to save a little bit of CPU time.
        }
    }
}
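As the comment in the snippet above notes, the require() and new() calls can live in a small module of our own so the limiter object is created once per worker instead of once per request. A minimal sketch, assuming a module path and thresholds of our own choosing:

-- file: my/limiter.lua (hypothetical module)
local limit_req = require "resty.limit.req"

local lim, err = limit_req.new("my_limit_req_store", 200, 100)
if not lim then
    error("failed to instantiate resty.limit.req: " .. (err or "unknown"))
end

local _M = {}

function _M.get()
    -- the same object is reused by every request in this worker process
    return lim
end

return _M

Then the access_by_lua_block only needs local lim = require("my.limiter").get(), and the per-request incoming() call stays exactly as in the synopsis.
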
# demonstrate the usage of the resty.limit.conn module (alone!)
http {
    lua_shared_dict my_limit_conn_store 100m;

    server {
        location / {
            access_by_lua_block {
                -- well, we could put the require() and new() calls in our own Lua
                -- modules to save overhead. here we put them below just for
                -- convenience.

                local limit_conn = require "resty.limit.conn"

                -- limit the key to 200 concurrent requests (normally just
                -- incoming connections, unless protocols like SPDY are used)
                -- with a burst of 100 extra concurrent requests; that is, we
                -- delay requests when the concurrency is between 200 and 300,
                -- and reject any new request pushing it beyond 300.
                -- also, we assume a default request time of 0.5 sec, which can be
                -- dynamically adjusted by the leaving() call in log_by_lua below.
                local lim, err = limit_conn.new("my_limit_conn_store", 200, 100, 0.5)
                if not lim then
                    ngx.log(ngx.ERR,
                            "failed to instantiate a resty.limit.conn object: ", err)
                    return ngx.exit(500)
                end

                -- the following call must be per-request.
                -- here we use the remote (IP) address as the limiting key
                local key = ngx.var.binary_remote_addr
                local delay, err = lim:incoming(key, true)
                if not delay then
                    if err == "rejected" then
                        return ngx.exit(503)
                    end
                    ngx.log(ngx.ERR, "failed to limit req: ", err)
                    return ngx.exit(500)
                end

                if lim:is_committed() then
                    local ctx = ngx.ctx
                    ctx.limit_conn = lim
                    ctx.limit_conn_key = key
                    ctx.limit_conn_delay = delay
                end

                -- the 2nd return value holds the current concurrency level
                -- for the specified key.
                local conn = err

                if delay >= 0.001 then
                    -- this request exceeds the 200-connection limit but stays
                    -- below 300 connections, so we intentionally delay it a
                    -- bit to conform to the 200-connection limit.
                    -- ngx.log(ngx.WARN, "delaying")
                    ngx.sleep(delay)
                end
            }

            # content handler goes here. if it is content_by_lua, then you can
            # merge the Lua code above in access_by_lua into your
            # content_by_lua's Lua handler to save a little bit of CPU time.

            log_by_lua_block {
                local ctx = ngx.ctx
                local lim = ctx.limit_conn
                if lim then
                    -- if you are using an upstream module in the content phase,
                    -- then you probably want to use $upstream_response_time
                    -- instead of ($request_time - ctx.limit_conn_delay) below.
                    local latency = tonumber(ngx.var.request_time) - ctx.limit_conn_delay
                    local key = ctx.limit_conn_key
                    assert(key)
                    local conn, err = lim:leaving(key, latency)
                    if not conn then
                        ngx.log(ngx.ERR,
                                "failed to record the connection leaving ",
                                "request: ", err)
                        return
                    end
                end
            }
        }
    }
}
# demonstrate the usage of the resty.limit.traffic module
http {
    lua_shared_dict my_req_store 100m;
    lua_shared_dict my_conn_store 100m;

    server {
        location / {
            access_by_lua_block {
                local limit_conn = require "resty.limit.conn"
                local limit_req = require "resty.limit.req"
                local limit_traffic = require "resty.limit.traffic"

                local lim1, err = limit_req.new("my_req_store", 300, 200)
                assert(lim1, err)
                local lim2, err = limit_req.new("my_req_store", 200, 100)
                assert(lim2, err)
                local lim3, err = limit_conn.new("my_conn_store", 1000, 1000, 0.5)
                assert(lim3, err)

                local limiters = {lim1, lim2, lim3}

                local host = ngx.var.host
                local client = ngx.var.binary_remote_addr
                local keys = {host, client, client}

                local states = {}

                local delay, err = limit_traffic.combine(limiters, keys, states)
                if not delay then
                    if err == "rejected" then
                        return ngx.exit(503)
                    end
                    ngx.log(ngx.ERR, "failed to limit traffic: ", err)
                    return ngx.exit(500)
                end

                if lim3:is_committed() then
                    local ctx = ngx.ctx
                    ctx.limit_conn = lim3
                    ctx.limit_conn_key = keys[3]
                end

                print("sleeping ", delay, " sec, states: ",
                      table.concat(states, ", "))

                if delay >= 0.001 then
                    ngx.sleep(delay)
                end
            }

            # content handler goes here. if it is content_by_lua, then you can
            # merge the Lua code above in access_by_lua into your
            # content_by_lua's Lua handler to save a little bit of CPU time.

            log_by_lua_block {
                local ctx = ngx.ctx
                local lim = ctx.limit_conn
                if lim then
                    -- if you are using an upstream module in the content phase,
                    -- then you probably want to use $upstream_response_time
                    -- instead of $request_time below.
                    local latency = tonumber(ngx.var.request_time)
                    local key = ctx.limit_conn_key
                    assert(key)
                    local conn, err = lim:leaving(key, latency)
                    if not conn then
                        ngx.log(ngx.ERR,
                                "failed to record the connection leaving ",
                                "request: ", err)
                        return
                    end
                end
            }
        }
    }
}

Description

This library provides several Lua modules that help OpenResty/ngx_lua users control and limit traffic, whether by request rate, by request concurrency, or both.

Please check out these Lua modules' own documentation for more details.

This library provides more flexible alternatives to NGINX's standard modules ngx_limit_req and ngx_limit_conn. For example, the Lua-based limiters provided by this library can be used in almost any context, such as right before the downstream SSL handshake (as with ssl_certificate_by_lua) or right before issuing backend requests.
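To make the SSL-handshake use case concrete, here is a minimal sketch of our own (the rate, burst, and dict name are arbitrary choices, not part of this library's docs); note that ngx.var is unavailable in this phase, so the client address comes from the ngx.ssl API:

ssl_certificate_by_lua_block {
    local ssl = require "ngx.ssl"
    local limit_req = require "resty.limit.req"

    local lim, err = limit_req.new("my_limit_req_store", 50, 20)
    if not lim then
        ngx.log(ngx.ERR, "failed to instantiate limiter: ", err)
        return ngx.exit(ngx.ERROR)
    end

    -- use the raw socket address as the limiting key
    local addr_type, addr, err = ssl.raw_client_addr()
    if not addr_type then
        ngx.log(ngx.ERR, "failed to fetch raw client addr: ", err)
        return ngx.exit(ngx.ERROR)
    end
    local key = addr_type .. ":" .. addr

    local delay, err = lim:incoming(key, true)
    if not delay then
        -- abort the handshake, whether "rejected" or a real error
        return ngx.exit(ngx.ERROR)
    end
    if delay >= 0.001 then
        ngx.sleep(delay)
    end
}
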

Back to TOC

Installation

This library is enabled by default in OpenResty 1.11.2.2+.

If you have to install this library manually, ensure you are using at least OpenResty 1.11.2.1 or a custom NGINX build including ngx_lua 0.10.6+. You also need to configure the lua_package_path directive to add the path of your lua-resty-limit-traffic source tree to ngx_lua's Lua module search path, as in

# nginx.conf
http {
    lua_package_path "/path/to/lua-resty-limit-traffic/lib/?.lua;;";
    ...
}

and then load one of the modules provided by this library in Lua. For example,

local limit_req = require "resty.limit.req"

Back to TOC

Community

Back to TOC

English Mailing List

The openresty-en mailing list is for English speakers.

Back to TOC

Chinese Mailing List

The openresty mailing list is for Chinese speakers.

Back to TOC

Bugs and Patches

Please report bugs or submit patches by

  1. creating a ticket on the GitHub Issue Tracker,
  2. or posting to the OpenResty community.

Back to TOC

Author

Yichun "agentzh" Zhang (章亦春) [email protected], OpenResty Inc.

Back to TOC

Copyright and License

This module is licensed under the BSD license.

Copyright (C) 2015-2019, by Yichun "agentzh" Zhang, OpenResty Inc.

All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Back to TOC

See Also

Back to TOC

lua-resty-limit-traffic's People

Contributors

agentzh, bungle, chipitsine, doujiang24, downtown12, pearzl, shawnzhu, shreemaan-abhishek, spacewander, thibaultcha, tiwarivikash, windmgc, xiaocang, zhuizhuhaomeng


lua-resty-limit-traffic's Issues

Limiting Website

Hello, I'm new to OpenResty; I've only just installed it, and I don't yet know how to use Lua or whether it comes precompiled with OpenResty.
Anyway, I would like to know if there is a way to limit a website's traffic with Lua; I mean, if a website reaches x connections, then something happens to the whole website, not only to those connections (I'm not talking about per-client rate limiting).
I actually tried https://github.com/openresty/lua-resty-limit-traffic but it doesn't seem to limit anything. Maybe I'm missing something when compiling the source, or do I need to add something?

Thanks for your help!
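A whole-site cap can in principle be expressed with resty.limit.conn by using a constant key instead of a per-client one, so every request shares a single counter. A sketch of the only lines that would change relative to the conn example in the synopsis above:

-- a constant key makes the limit site-wide rather than per client IP
local key = "whole-site"
local delay, err = lim:incoming(key, true)
-- ... and later, in log_by_lua, release with the same constant key:
-- lim:leaving("whole-site", latency)
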

req limit

Hello @agentzh, I'm using your traffic functions here. If elapsed is 0 in the following statement, excess will come out greater than 0 whether or not the rate is exceeded, so will the burst stage be carried out?

code:

local elapsed = now - tonumber(rec.last)
print(elapsed, ":ms", now, "-", tonumber(rec.last))

excess = max(tonumber(rec.excess) - rate * abs(elapsed) / 1000 + 1000,

log:
req.lua:89: incoming(): 0:ms1536289344390-1536289344390,
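For what it's worth, here is the arithmetic of the quoted formula using the numbers from the log above (req.lua stores the rate scaled by 1000, so 200 req/sec becomes 200000, and each arriving request contributes 1000 "millirequests" of excess):

-- with elapsed = 0 ms and a fresh record (rec.excess = 0):
--   excess = max(0 - 200000 * 0 / 1000 + 1000, 0) = 1000
--
-- i.e. a second request within the same millisecond always carries one
-- whole request of excess and thus enters the burst/delay stage; the
-- excess drains only as elapsed grows: at 200 req/sec, every 5 ms of
-- elapsed time removes 200000 * 5 / 1000 = 1000, one request's worth.
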

new feature to implement GitHub style request rate limit

I've started some experiment with ngx_http_limit_req_module style request rate limit with resty.limit.req and here's the outcomes I've collected:

  • the leaky bucket algorithm shapes traffic like a queue, which works for general cases, but it doesn't work for clients that may burst at a high rate over a long given period. E.g., the GitHub API allows a given number of requests per minute/hour. see https://developer.github.com/v3/#rate-limiting
  • it doesn't work with existing user agents that are not designed to handle requests rejected above the burst rate within a 1-second interval, e.g. a server using the resty.limit.req module that serves status code 429 whenever a request is rejected.

I would propose a new module which implements the GitHub API rate limit style request limiting and my experiment shows pretty good outcome.

@agentzh, are you interested in this new feature for this repo? I'm glad to create a PR if this is a potential improvement.
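For reference, a minimal fixed-window counter of the kind described can be built directly on ngx.shared.DICT (this is our own sketch, not part of this library; the dict name and numbers are arbitrary, and incr's init_ttl argument needs ngx_lua 0.10.12+):

-- assumes: lua_shared_dict my_window_store 10m;
local dict = ngx.shared.my_window_store
local limit  = 5000   -- requests allowed per window
local window = 3600   -- window length in seconds
local key = "api:" .. ngx.var.binary_remote_addr

-- atomically create-or-increment the counter; the key expires with the window
local count, err = dict:incr(key, 1, 0, window)
if not count then
    ngx.log(ngx.ERR, "failed to count request: ", err)
    return ngx.exit(500)
end

if count > limit then
    ngx.header["Retry-After"] = dict:ttl(key) or window
    return ngx.exit(429)
end
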

limit.req with ngx.now

My project went online today; the limit is configured at 1000 req/sec. But in reality, about 300 req/sec already triggered the request limit (I understand the leaky bucket algorithm).
I wonder if this has something to do with ngx.now's cached time.

burst setting in resty.limit.req issue

When I use the documented burst setting in resty.limit.req, a higher number of requests than expected gets rejected (503). For example, when I set the burst to 10 like this:

worker_processes  1;
error_log logs/error.log;
events {
    worker_connections 1024;
}
http {
    lua_shared_dict my_limit_req_store 100m;

    server {
        listen 8080;
        location / {
            access_by_lua_block {
                local limit_req = require "resty.limit.req"
                local lim, err = limit_req.new("my_limit_req_store", 20, 10)
                if not lim then
                    ngx.log(ngx.ERR,
                            "failed to instantiate a resty.limit.req object: ", err)
                    return ngx.exit(500)
                end

                local key = 'testing'
                local delay, err = lim:incoming(key, true)
                if not delay then
                    if err == "rejected" then
                        return ngx.exit(503)
                    end
                    ngx.log(ngx.ERR, "failed to limit req: ", err)
                    return ngx.exit(500)
                end

                if delay >= 0.001 then
                    local excess = err
                    ngx.sleep(delay)
                end
            }
            default_type text/html;
            content_by_lua '
                ngx.say("<p>hello, world!</p>")
            ';
        }

Then I hit it with:

for i in {0..30}; do (curl -Is http://localhost:8080 | head -n1 &) 2>/dev/null; done

The result below contains 19 requests rejected with a 503 status code; is the burst number documented incorrectly?

HTTP/1.1 200 OK
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 200 OK
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
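Rough arithmetic for this configuration (our reading of the module's internal millirequest units, so treat the exact counts as approximate):

-- limit_req.new(..., 20, 10) with 31 near-simultaneous requests
-- (internally: rate 20 -> 20000, each request adds 1000 excess):
--
--   request #1:        excess 0            -> served immediately (200)
--   requests #2..#11:  excess 1000..10000  -> <= burst * 1000, so delayed
--                      by excess / rate (0.05 s .. 0.5 s) and then served
--   requests #12..#31: excess > 10000      -> rejected (503)
--
-- so roughly 11 requests succeed and ~20 are rejected, with the delayed
-- ones finishing last, close to the 12 x 200 / 19 x 503 pattern above:
-- burst requests are delayed, not rejected, which matches the docs.
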

concurrent connections counting

Here are the steps to reproduce the issue I have:

  1. I installed OpenResty version 1.11.2.5 on an EC2 instance with <EC2_IP>
  2. Based on one of your examples at https://github.com/openresty/lua-resty-limit-traffic#synopsis, I made very few changes; please see my actual config below.
  3. In order to test the concurrent connections count, I installed the npm package artillery on my local machine (https://www.npmjs.com/package/artillery) via npm install -g artillery
  4. then I tested the concurrency count by executing this command on my local machine: artillery quick --count 5 -n 2 http://<EC2_IP>/
  5. I do step #4, wait for 10 seconds (or even longer; it doesn't seem to matter), and repeat step #4

I expect the concurrent connections count to return to 1 after I stop making concurrent requests, but the number keeps increasing without ever resetting.

My question is: did I do something wrong, or is this an issue in lua-resty-limit-traffic?

This is my config:

# demonstrate the usage of the resty.limit.conn module (alone!)

lua_shared_dict my_limit_conn_store 100m;

server {
    location / {
        access_by_lua_block {
            -- well, we could put the require() and new() calls in our own Lua
            -- modules to save overhead. here we put them below just for
            -- convenience.

            local limit_conn = require "resty.limit.conn"

            -- limit the requests under 200 concurrent requests (normally just
            -- incoming connections unless protocols like SPDY is used) with
            -- a burst of 100 extra concurrent requests, that is, we delay
            -- requests under 300 concurrent connections and above 200
            -- connections, and reject any new requests exceeding 300
            -- connections.
            -- also, we assume a default request time of 0.5 sec, which can be
            -- dynamically adjusted by the leaving() call in log_by_lua below.
            local lim, err = limit_conn.new("my_limit_conn_store", 200, 100, 0.5)
            if not lim then
                ngx.log(ngx.ERR,
                        "failed to instantiate a resty.limit.conn object: ", err)
                return ngx.exit(500)
            end

            -- the following call must be per-request.
            -- here we use the remote (IP) address as the limiting key
            local key = ngx.var.binary_remote_addr
            local delay, err = lim:incoming(key, true)
            if not delay then
                if err == "rejected" then
                    return ngx.exit(503)
                end
                ngx.log(ngx.ERR, "failed to limit req: ", err)
                return ngx.exit(500)
            end

            if lim:is_committed() then
                local ctx = ngx.ctx
                ctx.limit_conn = lim
                ctx.limit_conn_key = key
                ctx.limit_conn_delay = delay
            end

            -- the 2nd return value holds the current concurrency level
            -- for the specified key.
            local conn = err

            if delay >= 0.001 then
                -- the request exceeding the 200 connections ratio but below
                -- 300 connections, so
                -- we intentionally delay it here a bit to conform to the
                -- 200 connection limit.
                -- ngx.log(ngx.WARN, "delaying")
                ngx.sleep(delay)
            end
        }

        # content handler goes here. if it is content_by_lua, then you can
        # merge the Lua code above in access_by_lua into your

        log_by_lua_block {
            local ctx = ngx.ctx
            local lim = ctx.limit_conn
            if lim then
                -- if you are using an upstream module in the content phase,
                -- then you probably want to use $upstream_response_time
                -- instead of ($request_time - ctx.limit_conn_delay) below.
                local latency = tonumber(ngx.var.request_time) - ctx.limit_conn_delay
                local key = ctx.limit_conn_key
                assert(key)
                local conn, err = lim:leaving(key, latency)
                if not conn then
                    ngx.log(ngx.ERR,
                            "failed to record the connection leaving ",
                            "request: ", err)
                    return
                end
            end
            -- My code to check concurrent connections counting
            if ngx.ctx then
                ngx.log(ngx.ERR, 'concurrent connections =', ngx.shared.my_limit_conn_store:get(ngx.ctx.limit_conn_key))
            end
        }
    }
}

Here is part of the result:
............
2017/10/31 20:28:52 [error] 589#0: *25 [lua] log_by_lua(default:86):20: concurrent connections =49 while logging request, client: 10.1.254.13, server: , request: "GET / HTTP/1.1", host: "10.1.17.130"
2017/10/31 20:28:52 [error] 589#0: *25 [lua] log_by_lua(default:86):20: concurrent connections =50 while logging request, client: 10.1.254.13, server: , request: "GET / HTTP/1.1", host: "10.1.17.130"
.............

Initialization problem

I am confused about where limit_req.new should be initialized.
My requirement is to limit traffic based on rules; for example, depending on the requested URI, the rate limit may differ per URI.
In this case, does the limiter need to be initialized for every request during the access phase? Will this consume a lot of resources?
For example, in my access_by_lua file I have the following function, in which rate, uri and burst are all read from a scheduled task and updated according to the configuration.


local function access_limit()
    local limit_req = require "resty.limit.req"

    local rate = ...   -- read the specific rate from config
    local uri = ...    -- read the specific uri from config
    local burst = ...  -- read the specific burst from config

    local lim, err = limit_req.new("client_white", rate, burst)
    if not lim then
        ngx.log(ngx.ERR,
                "failed to instantiate a resty.limit.req object: ", err)
        return ngx.exit(503)
    end

    local delay, err = lim:incoming(uri, true)
    if not delay then
        if err == "rejected" then
            return ngx.exit(503)
        end
        ngx.log(ngx.ERR, "failed to limit req: ", err)
        return ngx.exit(500)
    end

    if delay >= 0.001 then
        local excess = err
        ngx.log(ngx.ERR,
                "excess: ", excess)

        ngx.sleep(delay)
    end
end

Will this cause limit_req.new to be instantiated for each request, resulting in slower performance?
Also, assuming the rate or burst for a given URI changes, will re-initialization affect the state previously accumulated for that URI?
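A common way to avoid re-creating objects per request is to cache limiters per (rate, burst) pair at the module level, since the per-key state lives in the shared dict rather than in the Lua object. A sketch under that assumption (dict name taken from the snippet above):

local limit_req = require "resty.limit.req"

local cache = {}  -- per-worker cache of limiter objects, keyed by settings

local function get_limiter(rate, burst)
    local k = rate .. ":" .. burst
    local lim = cache[k]
    if not lim then
        local err
        lim, err = limit_req.new("client_white", rate, burst)
        if not lim then
            return nil, err
        end
        cache[k] = lim
    end
    return lim
end

Because the accumulated excess for a URI is stored in the shared dict under the key itself, instantiating a new limiter with a different rate does not reset that state; it only changes how subsequent incoming() calls interpret it. The module also exposes set_rate() and set_burst() for adjusting an existing limiter object.
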

Whatever I do, I always get nil

This is the error from error log:

2024/04/02 18:02:08 [error] 7#0: *1 [lua] whitelist.lua:96: failed to instantiate a rest1111111: nil, client: 172.22.0.5, server: _, request: "GET / HTTP/1.1", host: "domain.tld"
2024/04/02 18:02:08 [error] 7#0: *1 invalid URL prefix in "fine", client: 172.22.0.5, server: _, request: "GET / HTTP/1.1", host: "domain.tld"

This is the Lua code. The goal is to check whether the rate limit has been reached; if it has, run the first branch, and if not, the else block:

-- First check for rate limit
local limit_count = require "resty.limit.count"
local lim, err = limit_count.new("my_limit_count_store", 20, 60)

-- if rate limited, run the application
if not lim then
    ngx.log(ngx.ERR, "failed to instantiate a resty.limit.count object: ", err) -- for testing whether this gets called

    (...) -- some lua code which already works
else
    ngx.log(ngx.ERR, "failed to instantiate a rest1111111: ", err)
    ngx.var.check = "fine"
end

the http block contains:

        lua_shared_dict my_req_store 100m;
        lua_shared_dict my_conn_store 100m;
        lua_shared_dict my_limit_conn_store 100m;
        lua_shared_dict my_limit_req_store 100m;
        lua_shared_dict my_limit_count_store 100m;

    init_by_lua_block {
        require "resty.core"
    }

Whatever I do, I always get the same error. I already tried

if not lim then
to
if lim then

and

local limit_conn = require "resty.limit.conn"
local lim, err = limit_conn.new("my_limit_conn_store", 10, 10, 0.5)

but always with the same result.
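For reference, new() only constructs the limiter object; whether the limit has been reached is reported by a separate incoming() call, which the snippets above never make. A sketch of the usual pattern, following the other count examples on this page:

local limit_count = require "resty.limit.count"

-- 20 requests per 60-second window
local lim, err = limit_count.new("my_limit_count_store", 20, 60)
if not lim then
    ngx.log(ngx.ERR, "failed to instantiate a resty.limit.count object: ", err)
    return ngx.exit(500)
end

local key = ngx.var.binary_remote_addr
local delay, err = lim:incoming(key, true)
if not delay then
    if err == "rejected" then
        -- the rate limit has been reached
        return ngx.exit(503)
    end
    ngx.log(ngx.ERR, "failed to limit count: ", err)
    return ngx.exit(500)
end
-- not rate limited; here err holds the remaining quota
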

Confused with the mechanism: always excess 1000

Hi,

thanks for the wonderful project; it gives us flow-control capability in OpenResty!

I am trying to use limit_req the way the example shows; however, I get 99/100 requests rejected.

reproduce step:

  1. take the same code with example
  2. simulate the concurrency using ab -n 100 -c 10
  3. check the ab report Non-2xx responses: 99

I also added some debug logging after req.lua line 94:

2016/03/22 00:19:12 [warn] 4570#0: *9 [lua] req.lua:95: incoming(): incomming,excess: 1000,elapsed: 0,rate: 200000, client: 9.91.39.111, server: _, request: "GET /scm/test HTTP/1.0", host: "9.91.39.77"
2016/03/22 00:19:12 [warn] 4570#0: *10 [lua] req.lua:95: incoming(): incomming,excess: 1000,elapsed: 0,rate: 200000, client: 9.91.39.111, server: _, request: "GET /scm/test HTTP/1.0", host: "9.91.39.77"

I'm confused by the formula excess = tonumber(rec.excess) - rate * abs(elapsed) / 1000 + 1000.

Did I miss some important information about limit_req?

Thanks.

bug report

Hi, I found that the function set_conn() (in conn.lua) assigns conn to self.conn, which is never used.
The test results are consistent with my guess: set_conn() does not have any effect.
It looks like a naming mistake; conn should be changed to max, if I haven't missed anything important.
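If the report is accurate, the fix would presumably be a rename along these lines (our sketch, not the actual patch):

function _M.set_conn(self, conn)
    self.max = conn  -- was: self.conn = conn, which nothing ever reads
end
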

Advice

on line 114 of req.lua

   111     if commit then
   112         rec_cdata.excess = excess
   113         rec_cdata.last = now
   114         dict:set(key, ffi_str(rec_cdata, rec_size))
   115     end

I think it would save memory if you set an expiry time on the key, like below:
dict:set(key, ffi_str(rec_cdata, rec_size), 60)

The same advice applies to conn.lua, but there you would need to change add to set.

Also, I can't figure out the uncommit mechanism in req.lua; where and when should I call uncommit?

About limit reset-expire?

With this config

local lim, err = limit_conn.new("my_limit_conn_store", 10, 5, 0.5)

After some requests, I always get 503.
I want to limit each IP to no more than 15 concurrent connections.

Limit breaches significantly as we increase traffic

I have noticed that the limit is not always respected: we can see more HTTP 200 responses than the configured limit. This is more evident and occurs more frequently as we increase traffic. I have tried testing with the actual traffic pattern seen in our production environments. Below is my implementation.

Nginx conf:

     location = /xyz {
        limit_conn_status 429;
        limit_req_status 429;
        limit_req zone=user_details burst=3920 nodelay;
        proxy_set_header        Host            $host;
        proxy_set_header        X-Real-IP     $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Accounts-RequestId $connection:$connection_requests;
        proxy_connect_timeout   10;
        proxy_send_timeout      10;
        proxy_read_timeout      10;
        add_header 'Access-Control-Allow-Headers' 'Authorization,Content-Type,Accept,Origin,User-Agent,DNT,Cache-Control,X-Mx-ReqToken,Keep-Alive,X-Requested-With,If-Modified-Since,X-CSRF-Token';
        # Real value of this upstream will be set by access_by_lua_block block beneath
        set $upstream_proxy '';
        proxy_pass $upstream_proxy;
        access_by_lua_file 'lua/temp.lua'; 
   }

Lua code:

local headers = ngx.req.get_headers()
local header_authorization = headers["Authorization"]
local limit_req = require "resty.limit.req"
local lim, err = limit_req.new("my_limit_req_store", 200, 200)
if not lim then
    ngx.log(ngx.ERR, "failed to instantiate a resty.limit.req object: ", err)
    return ngx.exit(500)
end
if header_authorization then
    local base64_enc = string.sub(header_authorization, 7)
    local base64_dec = ngx.decode_base64(base64_enc)
    if base64_dec ~= "clientname" then
        local key = base64_dec
        local delay, err = lim:incoming(key, true)
        if not delay then
            if err == "rejected" then
                return ngx.exit(503)
            end
            ngx.log(ngx.ERR, "failed to limit req: ", err)
            return ngx.exit(500)
        end
        ngx.say(key, delay, err, header_authorization)
        if delay >= 0.001 then
            local excess = err
            ngx.sleep(delay)
        end
    end
end

local counter = (ngx.var.connection + ngx.var.connection_requests) % 100
if counter < 50 then
    ngx.var.upstream_proxy = "https://upstream1"
else
    ngx.var.upstream_proxy = "http://upstream2"
end

Question about race condition in 'count' limiter ('not found' error)

Hello,

In this issue: #23, it is mentioned that lines from 54 to 69 are for handling a race condition, which I suppose is the case when the 'not found' error is returned.

My question is: is it possible for line 56 or 61 to also return a 'not found' error? We are observing a lot of 'not found' errors in our logs. However, if these 'not found' errors are expected under high traffic volumes, then I have no further questions.

In general, I would like to know more about the conditions under which the 'not found' error can happen. Isn't it a normal case that keys are sometimes not found in the dictionary? For example, because they were not initialized before.

Thank you in advance.

ngx.exec ngx.ctx

log_by_lua_block {
    local ctx = ngx.ctx
    local lim = ctx.limit_conn
    if lim then
        local latency = tonumber(ngx.var.request_time)
        local key = ctx.limit_conn_key
        assert(key)
        local conn, err = lim:leaving(key, latency)
        if not conn then
           ngx.log(ngx.ERR, err)
           return
        end
    end
}

如果我使用了ngx.exec进行了转发, 我该如何在log_by_lua_block中获取lim, 因为ngx.ctx已经销毁了?

Thread safety

Inside req.lua, rec_cdata is declared at file scope. The comment above it says:
"we can share the cdata here since we only need it temporarily for serialization inside the shared dict"

As you know, file-scope variables can be shared between nginx processes if you 'require' the Lua files using 'init_by_lua_block'. Along those lines, I logged the rec_cdata pointer like so:

ngx.log(ngx.ERR, "pid=", tostring(ngx.var.pid), " rec_cdata=", tostring(rec_cdata))

and it showed that the same pointer was being used by all worker processes.

limit traffic based on rolling-counter/sliding-window using ngx.shared.DICT

Thought of something like limit_req.new(<dict_name>, rate, window, resolution)
e.g. limit_req.new(<dict_name>, 200,300,60)
That is, limit the requests under 200 req in a window of 300 seconds where the window resolution is 60 seconds.

Thought of using ffi similarly to lua-resty-limit-traffic (with the same (small) race-condition window :). The ffi struct would contain a queue, its length (max length == window/resolution), the sum of the list values, and a 64-bit last timestamp.
Each time limit_req.incoming() is called, the list is adjusted (drop the oldest element if now() - last > resolution and recalculate the sum) and so forth; eventually return reject if sum > rate, or simply increment the sum.

The other option could be utilizing the newly added feature: the C API for 3rd-party NGINX C modules to register their own shm-based data structures for Lua-land usage. Seems reasonable? Is it worth the hassle?

Would like to get some feedback before I start. Thanks
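A minimal shared-dict-only version of the proposed rolling counter could look roughly like this (our sketch of the idea above, with one dict key per resolution bucket; names and numbers are illustrative, and incr's init_ttl argument requires ngx_lua 0.10.12+):

-- assumes: lua_shared_dict my_window_store 10m;
local dict = ngx.shared.my_window_store
local rate, window, resolution = 200, 300, 60   -- 200 reqs per 300 s
local nbuckets = window / resolution

local bucket = math.floor(ngx.time() / resolution)

-- count this request in the current bucket; buckets expire with the window
local _, err = dict:incr("win:" .. bucket, 1, 0, window)
if err then
    ngx.log(ngx.ERR, "failed to count request: ", err)
end

-- sum the buckets that cover the sliding window
local total = 0
for i = 0, nbuckets - 1 do
    total = total + (dict:get("win:" .. (bucket - i)) or 0)
end

if total > rate then
    return ngx.exit(429)
end
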

Reset is not working with count.md code

For Nginx rate limiting, we are using the code written in count.md together with the count.lua library.
The code below yields counter values from 199 down to 1 from the line
local delay, err = lim:incoming(key, true)
After that, the line above returns a "rejected" error and hence a 503 response.
So the reset does not happen within the time_window and we keep getting the "rejected" value in err. Please check the count.lua library as well.

Refer to the code:

local lim, err = limit_count.new("my_limit_count_store", 200, 1)
ngx.log(ngx.ERR, "lim instance value: ", err)
if not lim then
    --ngx.log(ngx.ERR, "failed to instantiate a resty.limit.conn object: ", err)
    return ngx.exit(500)
end
--if header_authorization then
local base64_enc = string.sub(header_authorization, 7)
local base64_dec = ngx.decode_base64(base64_enc)
ngx.log(ngx.ERR, "base64_dec value: ", base64_dec)
local key = base64_dec
local delay, err = lim:incoming(key, true)
ngx.log(ngx.ERR, "new instance value: ", err)

if not delay then
    if err == "rejected" then
        ngx.log(ngx.ERR, "coming to this block: ", err)
        ngx.header["X-RateLimit-Limit"] = "200"
        ngx.header["X-RateLimit-Remaining"] = 0
        return ngx.exit(503)
    end
    --ngx.log(ngx.ERR, "failed to limit req: ", err)
    return ngx.exit(500)
end

local remaining = err
ngx.log(ngx.ERR, "remaining value: ", remaining)

ngx.header["X-RateLimit-Limit"] = "200"
ngx.header["X-RateLimit-Remaining"] = remaining
ngx.log(ngx.ERR, "X-RateLimit-Remaining: ", ngx.header["X-RateLimit-Remaining"])

setting resty.limit.conn issue

Hello, I found some problems when setting up resty.limit.conn.

  • Purpose: I want to limit the number of concurrent requests. It should return 503 when requests exceed the concurrency limit.

  • nginx.conf:

http {
    lua_shared_dict limit_conn_store 100m;
    server {
        listen       9090;
        server_name  localhost;

        location / {
            access_by_lua_file /usr/local/openresty/nginx/conf/access.lua;
            #log_by_lua_file /usr/local/openresty/nginx/conf/log.lua;
            proxy_pass http://10.128.3.68;
        }
    }
}
  • access.lua
local limit_conn = require "limit_conn"

if ngx.req.is_internal() then
    return
end

limit_conn.incoming()
  • log.lua
local limit_conn = require "limit_conn"

limit_conn.leaving()

  • limit_conn.lua

ngx.var.limit_rate = "100K"

local limit_conn = require "resty.limit.conn"
local limit, limit_err = limit_conn.new("limit_conn_store", 10, 2, 0.5)
if not limit then
    print(limit_err)
    ngx.log(ngx.ERR,"failed to instantiate a resty.limit.conn object: ", limit_err)
    return ngx.exit(500)
end

local _conn = {}

function _conn.incoming()
    local key = ngx.var.binary_remote_addr
    local delay, err = limit:incoming(key, true)
    if not delay then
        if err == "rejected" then
            return ngx.exit(503)
        end
        ngx.log(ngx.ERR, "failed to limit req: ", err)
        return ngx.exit(500)
    end

    ngx.log(ngx.INFO, "delay= ", delay)
    if limit:is_committed() then
        local ctx = ngx.ctx
        ctx.limit_conn_key = key
        ctx.limit_conn_delay = delay
    end

    local conn = err

    if delay >= 0.001 then
        ngx.log(ngx.WARN, "delaying conn, excess ", delay, "s per binary_remote_addr by limit_conn_store")
        ngx.sleep(delay)
    end
end

function _conn.leaving()
    local ctx = ngx.ctx
    local key = ctx.limit_conn_key
    ngx.log(ngx.INFO, "key= ", key)
    if key then
        local latency = tonumber(ngx.var.request_time) - ctx.limit_conn_delay
        local conn, err = limit:leaving(key, latency)
        if not conn then
            ngx.log(ngx.ERR,
                    "failed to record the connection leaving ",
                    "request: ", err)
        end
    end
end


return _conn
  • issue one:
    without using log.lua,
    first execute this command:
    for i in {0..50};do (curl -Is http://10.139.8.112:9090 | head -n1 &) 2>/dev/null; done
    It returns 12 OK; the rest return 503.
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable

Second, execute the command again:
for i in {0..50};do (curl -Is http://10.139.8.112:9090 | head -n1 &) 2>/dev/null; done
This time everything returns 503.

HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
HTTP/1.1 503 Service Temporarily Unavailable
  • issue two:
    using log.lua,
    no matter how many times I execute this command, everything returns 200 OK:
    for i in {0..50};do (curl -Is http://10.139.8.112:9090 | head -n1 &) 2>/dev/null; done

Rate limit status headers

hello @agentzh, I have a question about how to add information about the rate limits and their current status.
I know many services such as GitHub, Twitter, and Discord return response headers containing this information; for example, there are "X-RateLimit-Limit", "X-RateLimit-Remaining", "X-RateLimit-Reset", and "Retry-After". (reference: https://jameslao.com/post/gcra-rate-limit-status/)
I see the resty.limit.count module supports the first two headers, and I wonder how to do the same thing in resty.limit.req (the leaky bucket method).

Here's how I tried with rate=0.5req/s and burst=5.

http {
    lua_shared_dict my_limit_req_store 100m;

    server {
        location / {
            access_by_lua_block {
                local limit_req = require "resty.limit.req"
                local rate, burst = 0.5, 5
                local lim, err = limit_req.new("my_limit_req_store", rate, burst)
                if not lim then
                    ngx.log(ngx.ERR,
                            "failed to instantiate a resty.limit.req object: ", err)
                    return ngx.exit(500)
                end

                local key = ngx.var.binary_remote_addr
                local delay, err = lim:incoming(key, true)
                if not delay then
                    if err == "rejected" then
                        ngx.header["X-RateLimit-Limit"] = burst
                        ngx.header["X-RateLimit-Remaining"] = 0
                        ngx.header["Retry-After"] = 1/rate
                        return ngx.exit(503)
                    end
                    ngx.log(ngx.ERR, "failed to limit req: ", err)
                    return ngx.exit(500)
                end

                if delay >= 0.001 then
                    local excess = err
                    ngx.sleep(delay)
                end

                local remaining = burst - err

                ngx.header["X-RateLimit-Limit"] = burst
                ngx.header["X-RateLimit-Remaining"] = remaining
            }

        }
    }
}

"X-RateLimt-Limit" and "X-RateLimit-Remaining" seem to be working but I'm not sure how to implement "X-RateLimit-Reset", and "Retry-After".
Could you help with this?

Thanks
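One rough approach (our own sketch, with the caveats in the comments: incoming() does not expose the internal excess on rejection, so the reset values can only be estimated from rate and burst):

-- in the rejected branch (err == "rejected"):
ngx.header["X-RateLimit-Limit"] = burst
ngx.header["X-RateLimit-Remaining"] = 0
-- the bucket drains at `rate` req/sec, and at least one queued request
-- must drain before another can be accepted: a conservative floor
ngx.header["Retry-After"] = math.ceil(1 / rate)

-- on the accepted path, after `local excess = err`:
ngx.header["X-RateLimit-Limit"] = burst
ngx.header["X-RateLimit-Remaining"] = math.max(burst - excess, 0)
-- estimated time at which the accumulated excess has fully drained
ngx.header["X-RateLimit-Reset"] = math.ceil(ngx.time() + excess / rate)
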

Why does the traffic limit example require two limit_req instances

local lim1, err = limit_req.new("my_req_store", 300, 200)
assert(lim1, err)
local lim2, err = limit_req.new("my_req_store", 200, 100)
assert(lim2, err)
local lim3, err = limit_conn.new("my_conn_store", 1000, 1000, 0.5)
assert(lim3, err)

Is it to avoid sudden growth and reduction?

200 ~ 100
           300 ~ 200

How to share limit information among nodes?

I noticed that lua-resty-limit-traffic uses the shared dict structure to store its data, and I couldn't find any part of the documentation that deals with the multi-node case, where a set of nodes should enforce a global traffic limit.

A stateful system is always something to avoid, but sometimes we just can't; is this library ready to be used with a distributed in-memory storage (like Redis)? If it's not, can I try to do it and make a PR?

Graceful way to "leave" connection while handling error

We are using incoming and leaving the same way as in the example in the readme. However, when the server errors we redirect to an error page; this causes the context to be cleared, and we lose the saved connection.

Is there a more graceful way to leave the connection in this situation, other than opening a new connection in the error-handling code?

[p0][conn-limit]blocked all traffic

https://github.com/openresty/lua-resty-limit-traffic/blob/master/lib/resty/limit/conn.md

Trigger condition:
The backend API processing time exceeds the maximum timeout set in Nginx.

Problem triggered:
During the log_by_lua phase, ctx.limit_conn is nil, resulting in the inability to invoke the lim:leaving function.

log_by_lua_block {
  local ctx = ngx.ctx
  local lim = ctx.limit_conn
  if lim then  -- lim is nil here
      local latency = tonumber(ngx.var.request_time) - ctx.limit_conn_delay
      local key = ctx.limit_conn_key
      assert(key)
      local conn, err = lim:leaving(key, latency)
      if not conn then
          ngx.log(ngx.ERR,
                  "failed to record the connection leaving ",
                  "request: ", err)
          return
      end
  end
}

dict:incr(key, -1) never executes; the counter accumulates and eventually blocks all traffic (https://github.com/openresty/lua-resty-limit-traffic/blob/master/lib/resty/limit/conn.lua#LL58C4-L58C4): conn > max + self.burst becomes always true.

limit_req is not as expected in openresty/1.15.8.2

hello @agentzh, I have a question for you.

I found some problems using limit_req: I set a limit of 500, but the number of requests actually served is about 3000 per second.

OpenResty version: openresty/1.15.8.2

nginx.conf:
lua_shared_dict limit_req_store 1m;

limit_config:
{
"conn_limit": -1,
"req_total_limit": 500,
"req_host_limit": -1,
"org_host_limit": -1,
"limit_rate_after": 0,
"limit_rate": -1,
"exception": {}
}

lua script:

_M.req_limit_process = function(host)
    local conf = _M.conf

    if not conf.req_total_limit or conf.req_total_limit < 0 then
        return
    end

    if conf.req_total_limit and conf.req_total_limit >= 0 then
        ngx_logger(NGX_ERR, conf.req_total_limit)
        local limiter, err = limit_req.new(limit_req_store, conf.req_total_limit, 0)
        if not limiter then
            ngx_logger(NGX_ERR, "create req total limiter failed, err: ", err)
            return bdapp.exit(500, errno.UNKNOW_ERROR)
        end

        local key = "total_request"
        local delay, err = limiter:incoming(key, true)
        if not delay then
            ngx_logger(NGX_WARN, "req limit")
            return bdapp.exit(429, errno.REQ_TOTAL_LIMT_REJECT)
        end

        -- no delay in req_total_limit
    end

    -- req_host_limit is set to -1, so the logic below can be ignored
    local host_limit = conf.req_host_limit
    local elem = conf.exception[host]
    if elem then
        host_limit = elem.req_limit
    end

    if host_limit >= 0 then
        local burst = 50
        if host_limit < 100 then
            burst = 0
        end

        local limiter, err = limit_req.new(limit_req_store, host_limit, burst)
        if not limiter then
            return bdapp.exit(500, errno.UNKNOW_ERROR)
        end

        local delay, err = limiter:incoming(host, true)
        if not delay then
            return bdapp.exit(429, errno.REQ_HOST_LIMIT_REJECT)
        end

        if delay >= 0.001 then
            ngx.sleep(delay)
        end
    end
end

testing and result:

wrk -c 100 -d 100 -t 4 http://127.0.0.1/1.txt -H "Host: www.ortest.com"

Running 2m test @ http://127.0.0.1/1.txt
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    14.82ms   18.13ms 344.20ms   91.07%
    Req/Sec     2.15k   546.94     5.88k    73.02%
  853821 requests in 1.67m, 259.09MB read
  Non-2xx or 3xx responses: 502925
Requests/sec:   8530.88
Transfer/sec:   2.59MB

200-status qps is (853821 - 502925) / 100 ≈ 3500

limit connection 's question

Hi,
I have a question about conn.lua:
is ngx.shared.DICT.incr an atomic operation?

How can we be sure that "conn, err = dict:incr(key, -1)" behaves correctly?

Thank you.

malfunction for sample code in random case

  1. I use this sample code in a nginx reverse proxy settings. https://github.com/openresty/lua-resty-limit-traffic/blob/master/lib/resty/limit/traffic.md#synopsis
  2. Then I use a tool like siege to execute a command like: siege -c 1 -r 100 http://mytesturl. In some cases the code works fine; in other cases it keeps rejecting every request. I stopped for some time and tried the command again and again, but the behavior never changed. It can only be recovered to the normal state by restarting nginx.
  3. It looks like this component goes into the reject state and never comes back to accepting requests after I constantly send some amount of requests, but this happens randomly, not always.
  4. I plan to use this component in production soon; could you possibly investigate this issue?

A question about limit_conn

Hello, I have a question about limit_conn. The description says it can "limit request concurrency (or concurrent connections)", but after reading the source code, I think it can only be used to limit HTTP requests, not something like raw TCP connections. I wonder how to use it for TCP connections when I set configuration like "keepalive_requests". Thank you.

Rate-Limit Per Minute

I want to use this module for rate-limiting incoming requests per minute, but this module rate-limits per second.
How can I limit incoming requests to 3 requests per minute (with the possibility of sending all 3 requests at the same time)?
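For a fixed per-minute quota with full burst, the resty.limit.count module in this same library looks like a closer match than the leaky-bucket req limiter. A sketch assuming a dict named my_limit_count_store:

local limit_count = require "resty.limit.count"

-- 3 requests per 60-second window; all 3 may arrive at the same instant
local lim, err = limit_count.new("my_limit_count_store", 3, 60)
if not lim then
    ngx.log(ngx.ERR, "failed to instantiate a resty.limit.count object: ", err)
    return ngx.exit(500)
end

local delay, err = lim:incoming(ngx.var.binary_remote_addr, true)
if not delay then
    if err == "rejected" then
        return ngx.exit(429)
    end
    return ngx.exit(500)
end
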

ngx.ctx with ssl_certificate_by_lua

Hi,
thanks for the great work! One example of the use of ssl_certificate_by_lua_{file,block} is request limiting with this library.

But in ssl_certificate_by_lua_* code there is no ngx.ctx available, which the examples rely on.

What would be the recommended way to store the limiter, key, and delay in the SSL routine?
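One workaround (our sketch, with real caveats noted in the comments) is to avoid ngx.ctx entirely and derive the key from data that can be recomputed in the later phases:

log_by_lua_block {
    -- `require("my.limiters")` is a hypothetical module of ours holding
    -- the shared limiter object, since ngx.ctx from the SSL phase is gone
    local lim = require("my.limiters").conn()
    local key = ngx.var.binary_remote_addr  -- recomputed, not from ngx.ctx

    -- caveats: this assumes incoming() was committed for this key during
    -- the handshake, and log_by_lua runs per request while the handshake
    -- runs per connection, so keepalive needs extra bookkeeping
    local conn, err = lim:leaving(key, tonumber(ngx.var.request_time))
    if not conn then
        ngx.log(ngx.ERR, "failed to record the connection leaving: ", err)
    end
}
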

when using "resty.limit.req" to control rate, actual request processing rate might exceed configured rate.

http {
    limit_req_zone $uri zone=one:10m rate=1000r/s;
    server {
        location /limit-req {
            access_by_lua_block {
                local limit_req = require "resty.limit.req"
                local lim, err = limit_req.new("my_limit_req_store", 100, 0)
                ...
                if delay >= 0.001 then
                    --ngx.sleep(delay)
                end
            }
            ...
        }
    }
}

When using jmeter (threads=400, duration=300 s) to run the stress test, the jmeter results show that the actual request processing rate (nearly 1000 r/s) exceeded the configured rate (100 r/s).

Even when delay processing is enabled, the result is the same.

Some advice for the limit count module

  1. In the module resty/limit/count.lua, lines 54-69 in the method _M.incoming() seem never to be executed;
  2. The limit cannot be adjusted the way the req limit module allows via "set_rate"; caching the incoming amount instead of the remaining amount should make this feature straightforward.

attempt to call method 'expire' (a nil value)

2018/04/08 00:17:58 [error] 1176#1176: *9 lua entry thread aborted: runtime error: /usr/local/openresty/lualib/resty/limit/count.lua:53: attempt to call method 'expire' (a nil value)
stack traceback:
coroutine 0:
/usr/local/openresty/lualib/resty/limit/count.lua: in function 'incoming'
/usr/local/openresty/lualib/resty/limit/traffic.lua:26: in function 'combine'
