
Name

lua-resty-core - New FFI-based Lua API for ngx_http_lua_module and/or ngx_stream_lua_module

Table of Contents

Status

This library is production ready.

Synopsis

This library is automatically loaded by default since OpenResty 1.15.8.1. This behavior can be disabled via the lua_load_resty_core directive, but note that the use of this library is strongly recommended, as its FFI implementation is faster, safer, and more complete than the Lua C API of the ngx_lua module.
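For reference, the automatic loading can be turned off like so (a minimal sketch; disabling is not recommended for the reasons above):

```nginx
# nginx.conf -- OpenResty 1.15.8.1 or later
http {
    # fall back to the interpreted Lua C API implementations
    lua_load_resty_core off;
}
```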

If you are using an older version of OpenResty, you must load this library like so:

    # nginx.conf

    http {
        # you do NOT need to configure the following line when you
        # are using the OpenResty bundle 1.4.3.9+.
        lua_package_path "/path/to/lua-resty-core/lib/?.lua;;";

        init_by_lua_block {
            require "resty.core"
            collectgarbage("collect")  -- just to collect any garbage
        }

        ...
    }

Description

This pure Lua library reimplements part of the ngx_lua module's Nginx API for Lua with LuaJIT FFI and installs the new FFI-based Lua API into the ngx.* and ndk.* namespaces used by the ngx_lua module.

In addition, this Lua library implements significant new Lua APIs of the ngx_lua module as proper Lua modules, such as ngx.semaphore and ngx.balancer.

The FFI-based Lua API can work with LuaJIT's JIT compiler. ngx_lua's default API is based on the standard Lua C API, which will never be JIT compiled, so user Lua code is always interpreted (slowly).
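To confirm that the JIT compiler is actually enabled in your build (a prerequisite for these speedups), you can log its status, for example from an init_by_lua_block. A minimal sketch:

```lua
-- sketch for init_by_lua_block: report the LuaJIT version and whether
-- the JIT compiler is enabled (jit.status() returns false when it is off)
local jit = require "jit"
ngx.log(ngx.NOTICE, "running ", jit.version,
        ", JIT compiler ", jit.status() and "on" or "off")
```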

Support for the new ngx_stream_lua_module has also begun.

This library is shipped with the OpenResty bundle by default, so you do not really need to worry about dependencies and requirements.

Back to TOC

Prerequisites

WARNING This library is included with every OpenResty release. You should use the bundled version of this library in the particular OpenResty release you are using. Otherwise you may run into serious compatibility issues.

Back to TOC

Installation

By default, LuaJIT searches for Lua files in /usr/local/share/lua/5.1/, but make install installs this module to /usr/local/lib/lua. So you may see an error like this:

nginx: [alert] failed to load the 'resty.core' module

To resolve this problem, install the module with the following command:

cd lua-resty-core
sudo make install LUA_LIB_DIR=/usr/local/share/lua/5.1

You can also change the installation directory to any other directory you like with the LUA_LIB_DIR argument.

cd lua-resty-core
sudo make install LUA_LIB_DIR=/opt/nginx/lualib

After that, you need to add the above directory to LuaJIT's search path via the lua_package_path nginx directive in the http context and the stream context.

lua_package_path "/opt/nginx/lualib/?.lua;;";

Back to TOC

API Implemented

Back to TOC

resty.core.hash

Back to TOC

resty.core.base64

Back to TOC

resty.core.uri

Back to TOC

resty.core.regex

Back to TOC

resty.core.exit

Back to TOC

resty.core.shdict

Back to TOC

resty.core.var

Back to TOC

resty.core.ctx

Back to TOC

get_ctx_table

syntax: ctx = resty.core.ctx.get_ctx_table(ctx?)

Similar to ngx.ctx, but it accepts an optional ctx argument. When the ctx table does not exist yet, the caller-supplied table is used instead of a newly created one.

Notice: the ctx table is used throughout the current request's whole life cycle. Be very careful when reusing a ctx table: make sure no Lua code in the current request is using, or going to use, the table before you reuse it somewhere else.
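For illustration, a table-recycling sketch (assuming the caller guarantees exclusive ownership of the reused table):

```lua
local get_ctx_table = require("resty.core.ctx").get_ctx_table

-- a table owned by this module; caution: it must not be referenced by
-- any other code in the current request once it is handed to ngx.ctx
local cached_ctx = {}

-- inside a request handler: ngx.ctx will now be backed by cached_ctx,
-- avoiding the allocation of a fresh ctx table for this request
local ctx = get_ctx_table(cached_ctx)
ctx.start_time = ngx.now()
```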

Back to TOC

resty.core.request

Back to TOC

resty.core.response

Back to TOC

resty.core.misc

Back to TOC

resty.core.time

Back to TOC

resty.core.worker

Back to TOC

resty.core.phase

Back to TOC

resty.core.ndk

Back to TOC

resty.core.socket

Back to TOC

resty.core.param

Back to TOC

ngx.semaphore

This Lua module implements a semaphore API for efficient "light thread" synchronization, which can work across different requests (but not across nginx worker processes).

See the documentation for this Lua module for more details.
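A minimal sketch of the API (two "light threads" in the same request synchronizing via a semaphore):

```lua
local semaphore = require "ngx.semaphore"
local sema = semaphore.new()

ngx.thread.spawn(function()
    -- block this light thread for at most one second
    local ok, err = sema:wait(1)
    if not ok then
        ngx.log(ngx.ERR, "failed to wait on sema: ", err)
    end
end)

sema:post(1)  -- release one resource, waking up the waiting thread
```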

Back to TOC

ngx.balancer

This Lua module implements an API for defining dynamic upstream balancers in Lua.

See the documentation for this Lua module for more details.

Back to TOC

ngx.ssl

This Lua module provides a Lua API for controlling SSL certificates, private keys, SSL protocol versions, and more in NGINX downstream SSL handshakes.

See the documentation for this Lua module for more details.
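A sketch of a typical ssl_certificate_by_lua* handler built on this module (load_pem_from_store is a hypothetical helper, e.g. backed by a shared dict or Redis):

```lua
local ssl = require "ngx.ssl"

-- discard the certificate/key configured statically in nginx.conf
local ok, err = ssl.clear_certs()
if not ok then
    ngx.log(ngx.ERR, "failed to clear existing certs: ", err)
    return ngx.exit(ngx.ERROR)
end

local cert_pem, key_pem = load_pem_from_store()  -- hypothetical helper

-- convert the PEM data to DER and install it for this handshake
local der_cert = assert(ssl.cert_pem_to_der(cert_pem))
assert(ssl.set_der_cert(der_cert))

local der_key = assert(ssl.priv_key_pem_to_der(key_pem))
assert(ssl.set_der_priv_key(der_key))
```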

Back to TOC

ngx.ssl.clienthello

This Lua module provides a Lua API for post-processing the SSL client hello message for NGINX downstream SSL connections.

See the documentation for this Lua module for more details.

Back to TOC

ngx.ssl.session

This Lua module provides a Lua API for manipulating SSL session data and IDs for NGINX downstream SSL connections.

See the documentation for this Lua module for more details.

Back to TOC

ngx.re

This Lua module provides convenience utilities for the ngx.re API.

See the documentation for this Lua module for more details.
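For example, ngx.re.split splits a subject string by a PCRE pattern (a minimal sketch):

```lua
local ngx_re = require "ngx.re"

-- split on commas with optional surrounding whitespace
local res, err = ngx_re.split("a, b ,c", "\\s*,\\s*")
if not res then
    ngx.log(ngx.ERR, "split failed: ", err)
    return
end
-- res is an array-like table: { "a", "b", "c" }
```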

Back to TOC

ngx.resp

This Lua module provides a Lua API for handling HTTP responses.

See the documentation for this Lua module for more details.

Back to TOC

ngx.pipe

This module provides a Lua API to spawn processes and communicate with them in a non-blocking fashion.

See the documentation for this Lua module for more details.

This module was first introduced in lua-resty-core v0.1.16.
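A minimal sketch of spawning a process and reading its output without blocking the nginx worker:

```lua
local ngx_pipe = require "ngx.pipe"

local proc, err = ngx_pipe.spawn({ "echo", "hello world" })
if not proc then
    ngx.log(ngx.ERR, "failed to spawn process: ", err)
    return
end

-- reads cooperatively yield instead of blocking the worker process
local line, err = proc:stdout_read_line()
ngx.say(line or err)

local ok, reason, status = proc:wait()  -- reap the child process
```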

Back to TOC

ngx.process

This Lua module is used to manage the nginx process in Lua.

See the documentation for this Lua module for more details.

This module was first introduced in lua-resty-core v0.1.12.

Back to TOC

ngx.errlog

This Lua module provides a Lua API to capture and manage nginx error log messages.

See the documentation for this Lua module for more details.

This module was first introduced in lua-resty-core v0.1.12.

Back to TOC

ngx.base64

This Lua module provides a Lua API for URL-safe base64 encoding and decoding.

See the documentation for this Lua module for more details.

This module was first introduced in lua-resty-core v0.1.14.
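A minimal usage sketch:

```lua
local b64 = require "ngx.base64"

-- encode_base64url uses '-' and '_' instead of '+' and '/',
-- making the result safe to embed in URLs
local encoded = b64.encode_base64url("foo+bar/baz")

local decoded, err = b64.decode_base64url(encoded)
-- on invalid input, decoded is nil and err describes the failure
```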

Back to TOC

Caveat

If the user Lua code is not JIT compiled, use of this library may lead to a performance drop in interpreted mode. You will only observe a speedup once a good part of your user Lua code is JIT compiled.

Back to TOC

TODO

  • Re-implement ngx_lua's cosocket API with FFI.
  • Re-implement ngx_lua's ngx.eof and ngx.flush API functions with FFI.

Back to TOC

Author

Yichun "agentzh" Zhang (章亦春) [email protected], OpenResty Inc.

Back to TOC

Copyright and License

This module is licensed under the BSD license.

Copyright (C) 2013-2019, by Yichun "agentzh" Zhang, OpenResty Inc.

All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Back to TOC

See Also

Back to TOC


Contributors

agentzh, alubbe, catbro666, chipitsine, chronolaw, dndx, doujiang24, ghedo, halfcrazy, lynch1981, membphis, moonming, ms2008, noname007, oowl, p0pr0ck5, poorsea, pushrax, rainingmaster, spacewander, subnetmarco, swananan, tajpouria, theweakgod, thibaultcha, walkermi, willmafh, xiaocang, xuruidong, zhuizhuhaomeng


lua-resty-core's Issues

ssl_certificate_by_lua valid in http{}

I just noticed that the code below is valid but throws a lot of runtime errors, including segfaults.
According to the docs, ssl_certificate_by_lua* should not be valid in http{}, only in server{}, right?

http {
    ssl_certificate_by_lua_block { print("hello") }
    server {
        listen 443 ssl;
    }
}

Two sample errors:

SSL_do_handshake() failed (SSL: error:140A1175:SSL routines:ssl_bytes_to_cipher_list:inappropriate fallback) while SSL handshaking

worker process 45145 exited on signal 11 (core dumped)

something related to the last "ngx.semaphore" commit ?

Have a look; it fails with something related to semaphores:

https://travis-ci.org/chipitsine/lua-resty-core/builds/128364836

ok 162 - TEST 1: clear certs - pattern "[emerg]" does not match a line in error.log (req 1)

WARNING: TEST 1: clear certs - 2016/05/06 18:15:59 [crit] 25576#0: *4 SSL_shutdown() failed (SSL: error:140E0197:SSL routines:SSL_shutdown:shutdown while in init), client: 127.0.0.1, server: localhost, request: \"GET /t HTTP/1.1\", host: \"localhost\" at /usr/local/share/perl/5.18.2/Test/Nginx/Socket.pm line 1192.

ok 163 - TEST 14: ngx.semaphore in ssl_certificate_by_lua* - status code ok

not ok 164 - TEST 14: ngx.semaphore in ssl_certificate_by_lua* - response_body - response is expected (req 0)

#   Failed test 'TEST 14: ngx.semaphore in ssl_certificate_by_lua* - response_body - response is expected (req 0)'

#   at /usr/local/share/perl/5.18.2/Test/Nginx/Socket.pm line 1277.

# @@ -1,2 +1,2 @@

#  connected: 1

# -ssl handshake: boolean

# +failed to do SSL handshake: handshake failed

not ok 165 - TEST 14: ngx.semaphore in ssl_certificate_by_lua* - grep_error_log_out (req 0)

ngx.socket.tcp cannot send msg to logstash

env

host

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.2 LTS"

logstash version 5.5.2

openresty version 1.11.2.5

openresty configure params

sudo ./configure --prefix=/etc/openresty \
--user=nginx \
--group=nginx \
--with-cc-opt='-O2 -I/usr/local/openresty/zlib/include -I/usr/local/openresty/pcre/include -I/usr/local/openresty/openssl/include' \
--with-ld-opt='-Wl,-rpath,/usr/local/openresty/luajit/lib -L/usr/local/openresty/zlib/lib -L/usr/local/openresty/pcre/lib -L/usr/local/openresty/openssl/lib -Wl,-rpath,/usr/local/openresty/zlib/lib:/usr/local/openresty/pcre/lib:/usr/local/openresty/openssl/lib' \
--with-pcre-jit \
--with-stream \
--with-stream_ssl_module \
--with-http_v2_module \
--with-http_stub_status_module \
--with-http_realip_module \
--with-http_gzip_static_module \
--with-http_sub_module \
--with-http_gunzip_module \
--with-threads \
--with-file-aio \
--with-http_ssl_module \
--with-http_auth_request_module \
--without-mail_pop3_module \
--without-mail_imap_module \
--without-mail_smtp_module \
--without-http_fastcgi_module \
--without-http_uwsgi_module \
--without-http_scgi_module \
--without-http_autoindex_module \
--without-http_memcached_module \
--without-http_empty_gif_module \
--without-http_ssi_module \
--without-http_userid_module \
--without-http_browser_module \
--without-http_rds_json_module \
--without-http_rds_csv_module \
--without-http_memc_module \
--without-http_redis2_module \
--without-lua_resty_memcached \
--without-lua_resty_mysql \
-j4

sudo make -j4 

sudo make install

nginx.conf

error_log  logs/error.log error;
pid        /var/run/nginx.pid;

worker_rlimit_nofile 10240;

events {
    worker_connections  10240; 
}


http {
    include       mime.types;    
    server {
        listen       80;
        location / {
          
                content_by_lua '
                    local sock,err = ngx.socket.tcp()
                    if not sock then
                      ngx.say("init socket has error : ",err)
                    else
                      ngx.say("init socket is ok")
                    end

                    local ok, err = sock:connect("127.0.0.1", 5044)
                    if not ok then 
                      ngx.say("create connect has error : ",err)
                    else
                      ngx.say("create connect is ",ok)
                    end

                    local bytes, err = sock:send("this is test msg")
                    
                    if not bytes then
                        ngx.say("socket send msg has error : ",err)
                    else
                        ngx.say("sended bytes size: " ,bytes)
                    end


                    local ok, err = sock:setkeepalive(0, 100)
                    if not ok then 
                       ngx.say("set keepalive has error : ",err)
                    else
                       ngx.say("set keepalive is ",ok)
                    end

                ';
        }
    }    
}

logstash conf named demo.conf

input {
    tcp {
        port => "5044"
        codec => "plain"
    }
}
output {
  stdout { codec => rubydebug }
}

./bin/logstash -f demo.conf wait for output Successfully started Logstash API endpoint {:port=>9600}

curl localhost

# openresty output
init socket is ok
create connect is 1
sended bytes size: 16
set keepalive is 1

The logstash console doesn't output anything.

sudo tcpdump -i any -vvv -n -A port 5044
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
18:36:40.188829 IP (tos 0x0, ttl 64, id 23893, offset 0, flags [DF], proto TCP (6), length 68)
    127.0.0.1.49714 > 127.0.0.1.5044: Flags [P.], cksum 0xfe38 (incorrect -> 0x8862), seq 2638081705:2638081721, ack 3382832894, win 342, options [nop,nop,TS val 2222059870 ecr 2222058789], length 16
E..D]U@.@..\.........2...=.........V.8.....
.q.^.q.%this is test msg
18:36:40.188838 IP (tos 0x0, ttl 64, id 23861, offset 0, flags [DF], proto TCP (6), length 52)
    127.0.0.1.5044 > 127.0.0.1.49714: Flags [.], cksum 0xfe28 (incorrect -> 0x7145), seq 1, ack 16, win 342, options [nop,nop,TS val 2222059870 ecr 2222059870], length 0
E..4]5@[email protected].....=.....V.(.....
.q.^.q.^
^C
2 packets captured
4 packets received by filter
0 packets dropped by kernel

wireshark shows many TCP dup ACK and TCP out-of-order messages.

curl localhost:5044

logstash console output

{
    "@timestamp" => 2017-09-08T10:33:19.254Z,
          "port" => 49710,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "GET / HTTP/1.1\r"
}
{
    "@timestamp" => 2017-09-08T10:33:19.257Z,
          "port" => 49710,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "Host: localhost:5044\r"
}
{
    "@timestamp" => 2017-09-08T10:33:19.258Z,
          "port" => 49710,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "User-Agent: curl/7.47.0\r"
}
{
    "@timestamp" => 2017-09-08T10:33:19.259Z,
          "port" => 49710,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "Accept: */*\r"
}
{
    "@timestamp" => 2017-09-08T10:33:19.259Z,
          "port" => 49710,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "\r"
}
telnet 127.0.0.1 5044

Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
this is test msg
^]
telnet> Connection closed.

Ctrl+] and Ctrl+D exit telnet

logstash output msg

{
    "@timestamp" => 2017-09-08T10:34:28.588Z,
          "port" => 49712,
      "@version" => "1",
          "host" => "127.0.0.1",
       "message" => "this is test msg\r"
}

ngx.semaphore does not support stream-lua-nginx-module

Here is my nginx.conf

worker_processes  1; 
error_log logs/error.log;

events {
    worker_connections 1024;
}
http {
    server {
        listen       38000;
        server_name  localhost;

        location = /t {
           content_by_lua_file conf/tcp_demo.lua;
        }
    }
}
stream {
    server {
        listen 39999;
        lua_socket_read_timeout 300s;

        content_by_lua_file conf/tcp_demo.lua;
    }
}

And tcp_demo.lua code

local semaphore = require "ngx.semaphore"
local sema = semaphore.new()

local function handler()
    ngx.say("sub thread: waiting on sema...")

    local ok, err = sema:wait(1)  -- wait for a second at most
    if not ok then
        ngx.say("sub thread: failed to wait on sema: ", err)
    else
        ngx.say("sub thread: waited successfully.")
    end
end

local co = ngx.thread.spawn(handler)

ngx.say("main thread: sleeping for a little while...")

ngx.sleep(0.1)  -- wait a bit

ngx.say("main thread: posting to sema...")

sema:post(1)

ngx.say("main thread: end.")

The HTTP server works well, but the TCP server got these errors:

2017/03/29 11:39:02 [error] 77432#0: *24 stream lua entry thread aborted: runtime error: /data0/openresty/lualib/resty/core/base.lua:20: ngx_lua 0.10.7+ required
stack traceback:
coroutine 0:
	[C]: in function 'require'
	/data0/tcp_demo/conf/tcp_demo.lua:1: in function </data0/tcp_demo/conf/tcp_demo.lua:1> while handling client connection, client: 127.0.0.1, server: 0.0.0.0:39999

It seems that there are some bugs in `balancer.set_more_tries`

My nginx config file is:

user              root;
worker_processes  1;  

error_log  logs/error.log;
pid        logs/nginx.pid;
daemon     on; 

events {
    use epoll;
    worker_connections  1024;
}

http {
    upstream backend {
        # only a fake server, set it to an arbitrary value
        server 127.0.0.1:8080;
        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            local state_name, status_code = balancer.get_last_failure()
            if state_name == nil then
                ngx.log(ngx.ERR, "this is the first attempt")
            else
                ngx.log(ngx.ERR, "retrying because state_name: "..state_name,
                    ", status_code: "..status_code)
            end
            local ok, err = balancer.set_more_tries(3) --XXX: !!!note here!!!
            if not ok then
                ngx.log(ngx.ERR, "set_more_tries failed, because: "..tostring(err))
            end

            ok, err = balancer.set_current_peer("127.0.0.1", 80) --XXX: !!!hard code here!!!
            if not ok then
                ngx.log(ngx.ERR, "set_current_peer failed, because: "..tostring(err))
                return ngx.exit(500)
            end
        }
    }

    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$request_time $upstream_response_time $remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  logs/access.log  main;

    keepalive_timeout  0;

    server {
        listen       8080;
        server_name  localhost;

        location / {
            default_type text/plain;

            proxy_next_upstream error timeout http_502 http_503 http_504 http_404;
            proxy_next_upstream_tries 4;
            proxy_next_upstream_timeout 5s;

            proxy_set_header Host $host;
            add_header Upstream-Addr $upstream_addr always;
            proxy_pass http://backend;
        }
    }
}

Note the parameters below:

  • balancer.set_more_tries(3)
  • proxy_next_upstream_tries 4
  • add_header Upstream-Addr $upstream_addr always

Then, start nginx:
/usr/local/openresty/nginx/sbin/nginx -p . -c nginx2.conf
At last, test it:
curl -I -XGET http://127.0.0.1:8080/this_is_a_404_page/
I got these headers:

HTTP/1.1 404 Not Found
Server: openresty/1.11.2.1
Date: Tue, 08 Nov 2016 05:03:52 GMT
Content-Type: text/html
Content-Length: 175
Connection: close
Upstream-Addr: 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 
127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80

while I change balancer.set_more_tries(3) to balancer.set_more_tries(1), the test result is:

HTTP/1.1 404 Not Found
Server: openresty/1.11.2.1
Date: Tue, 08 Nov 2016 05:07:44 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 1845
Connection: close
X-Powered-By: Express
Cache-Control: no-cache, private, no-store, must-revalidate, max-stale=0, post-check=0, pre-check=0
ETag: W/"ubl3wwo7a8ZJ5opujSFyhQ=="
Vary: Accept-Encoding
Upstream-Addr: 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80

So, how do I set the count for set_more_tries? Or is this a bug?

Thank you.

`signal_graceful_exit` just sets a flag for graceful exit?

It seems that signal_graceful_exit only sets the ngx_quit flag instead of triggering a real graceful exit...

Refer from the Nginx source:

sigsuspend(&set);
...
if (ngx_quit) {

I guess that if a graceful exit is needed, a signal must be sent to resume the master process after sigsuspend.

Maybe I am missing something.

How can I use a password file

We can use ssl_password_file in nginx's *.conf files to set password files. How can we do the same thing in Lua?

ssl_certificate_by_lua* compatibility with lua-resty-redis, connect blocking

Hi there, I've done the following PoC to use ssl_certificate_by_lua* and load a certificate dynamically from a redis server. However, it doesn't work: I can't connect to the database.

The first log line appears correctly in my error log, but the second is never reached. So I assume the red:connect statement is blocking.

Is it by design? Can't I use the coroutine-based TCP API here? What is the workaround?

Versions installed are

  • nginx 1.10.1
  • lua-resty-core v0.1.8
  • lua-resty-redis v0.25
  • lua-nginx-module v0.10.6

Nginx is stopping client connection, curl is exiting with the following error:

  • Unknown SSL protocol error in connection to :443

The nginx configuration:

server  {
  listen 80 default_server;
  listen 443 ssl default_server;

  server_name  default;

  access_log /var/log/nginx/app-access.log;
  error_log /var/log/nginx/app-error.log;

  ssl_certificate /etc/ssl/web/default.crt;
  ssl_certificate_key /etc/ssl/web/default.key;

  ssl_certificate_by_lua_block {
    local ssl = require "ngx.ssl"
    local redis           = require "resty.redis"
    local red             = redis:new()
    ngx.log(ngx.ERR, "Before connection")
    local ok, err         = red:connect("127.0.0.1", 6379)
    ngx.log(ngx.ERR, tostring(ok), " ", tostring(err))
    ... More logic (clean old cert, setup new)
  }
}

proxy_next_upstream has no effect on an upstream using balancer_by_lua_block

With the configuration below, requesting http://127.0.0.1:8800/test2 results in a 404:

proxy_next_upstream error timeout invalid_header http_504 http_404; # which kinds of errors trigger an automatic retry on another server
proxy_http_version 1.1;
proxy_set_header Connection "";

server {access_log logs/access.log;server_name up_1;listen 8801;
location /test1 {content_by_lua 'ngx.say(ngx.var.server_port,":",ngx.var.uri)';}
}
server {access_log logs/access.log;server_name up_2;listen 8802;
location /test2 {content_by_lua 'ngx.say(ngx.var.server_port,":",ngx.var.uri)';}
}
server {server_name up_0;listen 8800;
location =/favicon.ico { return 200;}
location / {
proxy_pass http://up_test;
log_by_lua '
ngx.log(ngx.ERR,"\nuri=",ngx.var.uri,"upstream_addr:",ngx.var.upstream_addr,"upstream_status:",ngx.var.upstream_status)
';
}
}
upstream up_test {
server 0.0.0.1; # just an invalid address as a place holder
balancer_by_lua_block {
local balancer = require "ngx.balancer"
local upservers={3,
{"127.0.0.1",8801},
{"127.0.0.1",8802},
{"127.0.0.1",8803}
}
local state_name, status_code = balancer.get_last_failure()
ngx.log(ngx.ERR,"\nstate_name:",state_name, "status_code:",status_code)

local ibf=require "randombuff"
local index=ibf.index() -- increments by 1 on each call, simulating a round-robin load-balancing policy
index=math.mod(index,upservers[1])+2
local ok, err = balancer.set_current_peer(upservers[index][1], upservers[index][2])
ngx.log(ngx.ERR,"\nuri:",ngx.var.uri,"index:",index,"ok:",ok, err)
if not ok then
    ngx.log(ngx.ERR, "failed to set the current peer: ", err)
    return ngx.exit(500)
end          
}
keepalive 10;  # connection pool

}
With the following upstream configuration, there is no 404:
upstream up_test2 {
server 127.0.0.1:8801;
server 127.0.0.1:8802;
server 127.0.0.1:8803;
keepalive 10;
}
Is this a bug, or is something wrong with my configuration?

balancer_by_lua_file ignores the lua_code_cache setting?

Hi,

I'm testing the ngx.balancer module and I believe the lua_code_cache directive is being ignored ?

I have the following backend configuration :

http {

  lua_code_cache off;
  upstream backend {
      server 127.0.0.1; 
      balancer_by_lua_file my_balancer.lua;
  }

}

Code changes in the my_balancer.lua file are reflected only when I restart nginx. Other blocks, for example content_by_lua_file, work as expected (no restart required).

I'm testing all this on a fresh 1.9.7.3 OpenResty installation.

ngx.balancer: get_last_failure shows 503 errors as 502

Hi,

I've tried to figure out where this would be coming from, but my understanding of Openresty is not good enough and of course it could be my config.

Below is the configuration I'm using along with the logs and requests.

worker_processes 1;
daemon off;

events {
    worker_connections 1024;
}

http {

    error_log /dev/stdout;
    access_log /dev/stdout;

    upstream backend {
        server 0.0.0.1;

        balancer_by_lua_block {
            local balancer = require "ngx.balancer"

            if not ngx.ctx.tries then
                ngx.ctx.tries = 0
            end

            if ngx.ctx.tries < 5 then
                local ok, err = balancer.set_more_tries(1)
                    if not ok then
                    ngx.log(ngx.ERR, "failed to set more tries: ", err)
                elseif err then
                    ngx.log(ngx.ERR, "set more tries: ", err)
                end
            end
            ngx.ctx.tries = ngx.ctx.tries + 1

            local host = "127.0.0.1"
            local port = 8080

            local state, code, err = balancer.get_last_failure()
            ngx.log(ngx.ERR, "state: ", state, ", code: ", code, ", err: ", err)

            local ok, err = balancer.set_current_peer(host, port)
            if not ok then
                ngx.log(ngx.ERR, "failed to set the peer: ", err)
                return ngx.exit(505)
            end
        }
    }

    server {
        listen 80;

        location / {
            proxy_next_upstream_tries 2;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_403 http_404;
            proxy_pass http://backend;
        }
    }

    server {

        listen 127.0.0.1:8080;

        location / {
            return 503;
        }
    }
}

curling

$ curl localhost:80 -v
* Rebuilt URL to: localhost:80/
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: localhost
> User-Agent: curl/7.43.0
> Accept: */*
> 
< HTTP/1.1 503 Service Temporarily Unavailable
< Server: openresty/1.9.7.2
< Date: Mon, 25 Jan 2016 17:15:56 GMT
< Content-Type: text/html
< Content-Length: 218
< Connection: keep-alive
< 
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body bgcolor="white">
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>openresty/1.9.7.2</center>
</body>
</html>
* Connection #0 to host localhost left intact

The logs produced by balancer_by_lua_block should show the 503 error code, but instead they show 502:

2016/01/25 17:15:56 [error] 6#0: *1 [lua] balancer_by_lua:22: state: nil, code: nil, err: nil while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost"
2016/01/25 17:15:56 [error] 6#0: *1 [lua] balancer_by_lua:13: set more tries: reduced tries due to limit while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost"
2016/01/25 17:15:56 [error] 6#0: *1 [lua] balancer_by_lua:22: state: failed, code: 502, err: nil while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost"
127.0.0.1 - - [25/Jan/2016:17:15:56 +0000] "GET / HTTP/1.0" 503 218 "-" "curl/7.43.0"
127.0.0.1 - - [25/Jan/2016:17:15:56 +0000] "GET / HTTP/1.1" 503 218 "-" "curl/7.43.0"
127.0.0.1 - - [25/Jan/2016:17:15:56 +0000] "GET / HTTP/1.0" 503 218 "-" "curl/7.43.0"

RSA 2048

Hi,
I am currently using lua-resty-auto-ssl and it's working like a charm, thank you.
I just noticed that it's generating 4096-bit keys and I would like them to be 2048-bit keys (in openssl.cnf I have 2048 as the default). I've looked through the config files, but I haven't found where I should set this.
Can you help me?
Thanks

Recommendations for Usage

A great piece of documentation (especially if you have already performed the analysis) would be an explanation of when it is best to use the FFI methods (i.e. loops etc. where the JIT compiler can do a good job) and when, if ever, the overhead of the small amounts of Lua code exceeds that of the standard Lua C API.

I am guessing you did this research before starting the module.

error 'not a semaphore instance' when semaphore.post

The code is as below:
sema.lua

local _M = {}

local semaphore = require("ngx.semaphore")

_M.sema = semaphore.new(1)

return _M

core.lua

local ok, err = sema:wait(0.01)
if not ok then
    -- failed to acquire the lock,
    -- but go on. This action just sets a fence for all requests but this one
end
local ups = upstreamCache:getUpstream(runtime, userInfo)
if ups ~= nil then
    if ok then upsSema.post(1) end
    return ups, info
end

I really want to know when the error will occur, thanks
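For what it's worth (my guess, not confirmed in the thread): in the snippet above, `upsSema.post(1)` uses a dot rather than a colon, so the number 1 is passed as `self`, and ngx.semaphore raises "not a semaphore instance" when it checks the receiver. A minimal sketch of the difference, assuming an OpenResty context where ngx.semaphore is available:

```lua
local semaphore = require "ngx.semaphore"

local sema = semaphore.new(1)

-- colon call: `sema` is passed as `self`; releases one resource
sema:post(1)

-- dot call: the number 1 is passed as `self` instead of the semaphore,
-- so this raises "not a semaphore instance"
-- sema.post(1)
```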

Sometimes ssl.server_name() may return a nil value

Hello.
It's possibly not a bug, but it's our current problem.

Sometimes, about 1 in 1000 requests hits an error in this function. As far as I understand, it happens on this line: https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/ssl.lua#L160 because no error string is returned.

Of course, part of these errors could be triggered by genuinely malformed requests. But yesterday we checked with a 100% trusted request (from Yandex.Money), and the problem really exists. Part of the requests from ya.money come to one listen IP with no problem, and part come to another IP and hit this problem. The nginx config is the same for both.

Is there any way to check what's wrong? And also, are there any nginx options that can affect this code?

set_ssl_certificate_by_lua* problem with lua-resty-redis, connect blocking

Hi there,
I'm having the same issue with the latest version of openresty 1.11.2 that includes:
lua-resty-core v0.1.11
lua-resty-redis v0.26
I did the exact same troubleshooting procedure, and the code stops in the connect() function.

local red = redis:new()

ngx.log(ngx.ERR, "before... ")
local ok, err = red:connect("127.0.0.1", 6379)
ngx.log(ngx.ERR, "after...")

Also, I should mention that I can connect successfully to Redis in the access_by_lua_* section.

My error log:

2017/08/08 15:53:08 [error] 7#7: *2 [lua] ssl_certificate_by_lua:14: REDIS: before... : nil, context: ssl_certificate_by_lua*, client: X.X.X.X, server: 0.0.0.0:443
2017/08/08 15:53:08 [info] 7#7: *1 peer closed connection in SSL handshake while loading SSL certificate by lua, client: X.X.X.X, server: 0.0.0.0:443

I use ngx.re.split(), but receive HTTP 500

example:
local ngx_re = require "ngx.re"
local res, err = ngx_re.split("c:574|576|20170424155217","|")

log:
lua entry thread aborted: memory allocation error: not enough memory

Using this alternative implementation instead:

function split(str, split_char)
    local sub_str_tab = {};
    while (true) do
        local pos = string.find(str, split_char, 1, true);
        if (not pos) then
            sub_str_tab[#sub_str_tab + 1] = str;
            break;
        end
        local sub_str = string.sub(str, 1, pos - 1);
        sub_str_tab[#sub_str_tab + 1] = sub_str;
        str = string.sub(str, pos + 1, #str);
    end

    return sub_str_tab;
end

local res, err = split("c:574|576|20170424155217","|")
ngx.say(res[1])

result:
c:574

I do not know if I have used it correctly.
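An assumption on my part, not confirmed in the thread: the second argument of ngx.re.split() is a Perl-compatible regex, and a bare "|" is an alternation of two empty patterns, which matches the empty string at every position and could explain the runaway memory use. Escaping the delimiter may be all that is needed:

```lua
local ngx_re = require "ngx.re"

-- "\\|" escapes the alternation metacharacter so it is a literal "|"
local res, err = ngx_re.split("c:574|576|20170424155217", "\\|")
if res then
    ngx.say(res[1])  -- should print "c:574", matching the hand-rolled split above
end
```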

Balancer/redis issues

I'm a little confused about requiring some modules. I was using the latest OpenResty bundle, and while working with it I found I could use require "redis" but not require "resty.redis" as documented, and require "ngx.balancer" doesn't work no matter what I do. Please advise.

balancer_by_lua_block : Reusing stale connection

Hello,
I am using balancer_by_lua for upstream routing, which works great. We are seeing intermittent errors where nginx reuses a cached connection that the upstream has already terminated.

See the below image. xxx.xxx.xxx.159 is OpenResty server and xxx.xxx.xxx.27 is upstream server.

[image: upstream_stale_connection]

Thanks,
Rohit Joshi

regex is not jit enabled

I apologize if this is the wrong repo (please point me to the correct one where I can raise this issue).

When I profile OpenResty with the nginx-systemtap-toolkit tools, specifically the ngx-pcrejit tool, it tells me that PCRE JIT is not enabled.

root@myhost:~/nginx-systemtap-toolkit# ./ngx-pcrejit -p 6018
Tracing 6018 (/usr/sbin/nginx)...
Hit Ctrl-C to end.
^C
ngx_http_lua_ffi_exec_regex: 0 of 225968 are PCRE JITted.
ngx_http_regex_exec: 0 of 112985 are PCRE JITted.

Looks like JIT support is missing in your PCRE build.

However, I have enabled pcre jit support. I used the standard build script from docker-openresty/trusty
wherein I enabled --with-pcre and --with-pcre-jit. The nginx config also shows that pcre jit is enabled.

Am I supposed to enable something else for jit compiling PCRE regexes?
I even tried the nginx directive pcre_jit on;. No use.

Am I missing something?

Is it safe to use ngx.balancer with the latest stable LuaJIT 2.0?

Hi
Is it safe to use ngx.balancer with the latest stable version of LuaJIT, LuaJIT-2.0.4?
When I use it, I am getting a warning:
[warn] 2403#0: *5 [lua] base.lua:25: use of lua-resty-core with LuaJIT 2.0 is not recommended; use LuaJIT 2.1+ instead
However, LuaJIT 2.1+ is still in beta stage.
Please let me know if it is safe to use the balancer with LuaJIT-2.0.4.
Thanks

configure error : no lua-resty-core/config was found

Hi all,

I am now using nginx and want to compile lua-resty-core into nginx, using the command below:

./configure --prefix=/...../nginx
--with-ld-opt="-Wl,-rpath,/....../luagit210b2/lib"
--add-module=/.../ngx_devel_kit-0.3.0
--add-module=/.../lua-nginx-module-0.10.6
--add-module=/.../lua-upstream-nginx-module
--add-module=/..../lua-resty-core

but this error happened:

./configure: error: no /..../config was found

btw, I am using git clone git@github.com:openresty/lua-resty-core.git to get the lua-resty-core code.

Thanks all.

Can I use content_by_lua_file when using ngx.balancer?

stream {
    upstream backend {
        server 0.0.0.1:1234;   # just an invalid address as a place holder
        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            local host = "127.0.0.2"
            local port = 8080
            local ok, err = balancer.set_current_peer(host, port)
            if not ok then
                ngx.log(ngx.ERR, "failed to set the current peer: ", err)
                return ngx.exit(ngx.ERR)
            end
        }
    }

    server {
        # this is the real entry point
        listen 10000;

        location / {

            content_by_lua_file xxxx;

            proxy_pass backend;
        }
    }
}

God Zhang, I want to do some processing on the request URL in the content phase; however, I found this didn't work when I put content_by_lua_file before proxy_pass.

So, how can I deal with the request before the balancer runs?
Thx

[Feature Proposal] New API to get Nginx log_level

It would be quite useful if we could get the configured error_log level via Lua code.
For instance, we could do something like this:

-- config.lua
_M.log_level = ngx.config.log_level()

-- req.lua
if log_level >= ngx.WARN then
    -- Now we save an `encode` operation if we configure error_log level to `error`.
    ngx.log(ngx.WARN, cjson.encode(obj), ...)
end

Currently, ngx.log processes the Lua parameters only when the given level passes the configured threshold. And maybe we don't need to care about log levels beyond debug in Lua land. So what we need to do is just extract some existing code into a new function.

Here is a question: since we could implement this feature in a separate Nginx C module, will you still accept it as a part of OpenResty's feature list?
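If I'm not mistaken, lua-resty-core later gained an API along these lines: ngx.errlog.get_sys_filter_level() returns the configured error_log filter level, so the caching pattern above can be written as follows (a sketch, assuming a recent OpenResty that ships the ngx.errlog module):

```lua
local errlog = require "ngx.errlog"
local cjson = require "cjson"

-- cache the configured error_log level once, e.g. in init_by_lua
local log_level = errlog.get_sys_filter_level()

local function log_warn_json(obj)
    -- only pay for cjson.encode() when a WARN line would actually be written
    if log_level >= ngx.WARN then
        ngx.log(ngx.WARN, cjson.encode(obj))
    end
end
```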

enable retry in balancer_by_lua_block: need to set proxy_next_upstream_tries

Hello, I find that, in order to enable retries, one needs to explicitly set the proxy_next_upstream_tries directive to a value greater than its default, zero. I think this is a bug, or at least incompatible with the original upstream failover policy, because the documentation says the default zero value means nginx won't limit the number of retries, not that retries are disabled. If it's by design, it would be better to state this clearly in the documentation.
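A minimal sketch of the workaround described above (the values are illustrative):

```nginx
location / {
    # without this, the extra tries requested via set_more_tries()
    # in balancer_by_lua_block did not appear to take effect
    proxy_next_upstream_tries 5;
    proxy_next_upstream error timeout http_502 http_503;
    proxy_pass http://backend;
}
```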

Unix sockets work but are not documented

The following configuration using a unix socket as the current peer works, but it is not documented.

Adding the line:
local ok, err = balancer.set_current_peer('some_unix_socket_name');
would suffice

http {
    upstream backend {
        server 0.0.0.1; # just an invalid address as a place holder
        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            local suname = 'unix:/tmp/nginx_socket';
            local ok, err = balancer.set_current_peer(suname);
            if not ok then
                ngx.log(ngx.ERR, "failed to set the current peer: ", err)
                return ngx.exit(500)
            end
        }
    }
    server {
        listen 8080;
        location /S/ {
            proxy_pass http://backend/S/;
        }
    }
    server {
        listen unix:/tmp/nginx_socket;
        location /S/__debug {
            content_by_lua_file ./locations/debug;
        }
    }
}

pem_to_der functions break handshake on error

Hi Yichun,

If you call either of the pem_to_der conversion functions in ssl_certificate_by_lua and they throw an error it appears to break the handshake, even though the default certs haven't been cleared and no new cert has been set.

Gist with example config and output: https://gist.github.com/hamishforbes/402c4cebef665969cb34

It would seem preferable to be able to continue the handshake, albeit with a probably incorrect cert/key, than to break entirely?

Hamish

x64: consistent SEGV when using LuaJIT GC64 mode

See

t/request.t .. 1/180
#   Failed test 'TEST 14: ngx.req.set_header (single number value) - status code ok'
#   at /home/agentzh/git/lua-resty-core/../test-nginx/lib/Test/Nginx/Socket.pm line 936.
#          got: ''
#     expected: '200'

#   Failed test 'TEST 14: ngx.req.set_header (single number value) - response_body - response is expected (repeated req 0, req 0)'
#   at /home/agentzh/git/lua-resty-core/../test-nginx/lib/Test/Nginx/Socket.pm line 1501.
#          got: ""
#       length: 0
#     expected: "header foo: 500\x{0a}"
#       length: 16
#     strings begin to differ at char 1 (line 1 column 1)
TEST 14: ngx.req.set_header (single number value) - Can't connect to 127.0.0.1:5855: Connection refused
	Retry connecting after 0.63 sec
TEST 14: ngx.req.set_header (single number value) - Can't connect to 127.0.0.1:5855: Connection refused
	Retry connecting after 0.693 sec
TEST 14: ngx.req.set_header (single number value) - Can't connect to 127.0.0.1:5855: Connection refused
	Retry connecting after 0.759 sec
TEST 14: ngx.req.set_header (single number value) - Can't connect to 127.0.0.1:5855: Connection refused
	Retry connecting after 0.828 sec
TEST 14: ngx.req.set_header (single number value) - Can't connect to 127.0.0.1:5855: Connection refused
	Retry connecting after 0.9 sec
...

GDB found no C backtraces:

(gdb) c
Continuing.

Program received signal SIGSEGV, Segmentation fault.
0x00007f9f846dfece in ?? ()
(gdb) bt full
#0  0x00007f9f846dfece in ?? ()
No symbol table info available.
#1  0x00007f9f5aec1d08 in ?? ()
No symbol table info available.
#2  0x000000000041c3b0 in ?? ()
No symbol table info available.
#3  0x00007ffde7944390 in ?? ()
No symbol table info available.
#4  0x00007f9f5aeab330 in ?? ()
No symbol table info available.
#5  0x00007f9f5aec56c8 in ?? ()
No symbol table info available.
#6  0x00007ffde7944390 in ?? ()
No symbol table info available.
#7  0x000000000041c3b0 in ?? ()
No symbol table info available.
#8  0x0000000000000000 in ?? ()
No symbol table info available.

ssl.set_der_priv_key() raises an error

Hi,
when I call ssl.set_der_priv_key(), I got an error like this.

2017/07/20 11:27:00 [error] 12735#12735: *4 failed to load external Lua file "/danbi/nginx/conf.d/route.lua": /danbi/nginx/conf.d/route.lua:45: unfinished string near '")', client: 124.53.127.11, server: _, request: "GET / HTTP/1.1", host: "speakingsolution.com"

ssl.set_priv_key(), too.

I'm using openresty/1.11.2.4.

My code here.

-- set key
local der_pkey, err = ssl.priv_key_pem_to_der(pkey_data)
if not der_pkey then
    ngx.log(ngx.ERR, "failed to convert private key ", "from PEM to DER: ", err)
    return ngx.exit(ngx.ERROR)
end

local ok, err = ssl.set_der_priv_key(der_pkey)
if not ok then
    ngx.log(ngx.ERR, "failed to set DER private key: ", err)
    return ngx.exit(ngx.ERROR)
end

Thanks in advance.

Behavior limitation in ngx.re.(g)sub

I believe I found a limitation in the resty.core implementation of ngx.re.(g)sub. Examine the following (very contrived) example, without loading resty.core:

content_by_lua '
      local lookup = function(m)
          -- note we are returning a number type here
          return 5
      end

      local newstr, n, err = ngx.re.sub("hello, 1234", "([0-9])[0-9]", lookup, "oij")
      ngx.say(newstr)

';

This performs as expected:

$ curl localhost/re-test
hello, 55

However, when we enable resty.core, this fails with the following:

2016/02/01 08:49:52 [debug] 4659#0: pcre JIT compiling result: 1
2016/02/01 08:49:52 [debug] 4659#0: *1 [lua] content_by_lua(nginx.conf:271):3: replace(): m[0] is 12
2016/02/01 08:49:52 [debug] 4659#0: *1 lua resume returned 2
2016/02/01 08:49:52 [error] 4659#0: *1 lua entry thread aborted: runtime error: /usr/local/openresty/lualib/resty/core/regex.lua:641: attempt to get length of local 'bit' (a number value)
stack traceback:
coroutine 0:
    /usr/local/openresty/lualib/resty/core/regex.lua: in function 'gsub'
    content_by_lua(nginx.conf:271):7: in function <content_by_lua(nginx.conf:271):1>, client: 127.0.0.1, server: localhost, request: "GET /re-test HTTP/1.1", host: "localhost"

Obviously, we can see why in regex.lua this fails:

587 local function re_sub_func_helper(subj, regex, replace, opts, global)
[...snip...]
638         local res = collect_captures(compiled, rc, subj, flags)
639 
640         local bit = replace(res)
641         local bit_len = #bit

When replace returns a number value, we see our issue.

There are other locations where the # operator is called, that could lead a similar issue. For example:

347 local function re_match_helper(subj, regex, opts, ctx, want_caps, res, nth)
[...snip...]
374         rc = C.ngx_http_lua_ffi_exec_regex(compiled, flags, subj, #subj, pos)

If subj is a number, we end up with a thread abort. I could make this work by casting the return value in my lookup function via tostring(), but later in my actual use case I need to perform numeric comparison, so forcing two casts as a workaround seems wasteful. Any thoughts here? For reference, this is using a fresh download of openresty 1.9.7.3, with a fairly vanilla configuration (compiled with PCRE 8.38 w/ JIT):

root@soter:~# /usr/local/openresty/nginx/sbin/nginx -V
nginx version: openresty/1.9.7.3
built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) 
built with OpenSSL 1.0.1f 6 Jan 2014
TLS SNI support enabled
configure arguments: --prefix=/usr/local/openresty/nginx --with-debug --with-cc-opt='-DNGX_LUA_USE_ASSERT -DNGX_LUA_ABORT_AT_PANIC -O2' --add-module=../ngx_devel_kit-0.2.19 --add-module=../echo-nginx-module-0.58 --add-module=../xss-nginx-module-0.05 --add-module=../ngx_coolkit-0.2rc3 --add-module=../set-misc-nginx-module-0.29 --add-module=../form-input-nginx-module-0.11 --add-module=../encrypted-session-nginx-module-0.04 --add-module=../srcache-nginx-module-0.30 --add-module=../ngx_lua-0.10.0 --add-module=../ngx_lua_upstream-0.04 --add-module=../headers-more-nginx-module-0.29 --add-module=../array-var-nginx-module-0.04 --add-module=../memc-nginx-module-0.16 --add-module=../redis2-nginx-module-0.12 --add-module=../redis-nginx-module-0.3.7 --add-module=../rds-json-nginx-module-0.14 --add-module=../rds-csv-nginx-module-0.07 --with-ld-opt=-Wl,-rpath,/usr/local/openresty/luajit/lib --with-pcre=/usr/local/src/pcre-8.38 --with-pcre-jit --with-pcre-opt=-g --with-http_ssl_module
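For anyone hitting the same wall, the tostring() workaround mentioned above would look roughly like this (a sketch; the numeric logic is a placeholder):

```lua
local lookup = function(m)
    local n = 5  -- a numeric result we may also need for comparisons
    -- coerce to a string so the FFI sub/gsub `#bit` length check works;
    -- callers needing the number again pay a tonumber() on the other side
    return tostring(n)
end

local newstr, n, err = ngx.re.sub("hello, 1234", "([0-9])[0-9]", lookup, "oij")
```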

ssl_certificate_by_lua_file - set ciphers too?

Hello
Currently using ssl_certificate_by_lua_file with great success, thank you!
We have a request from one customer to only allow very secure ciphers, but the other customers would not like this.
Is there a way to set the ciphers dynamically too?

Thanks
Richard

Problems when the hostname is not an ip but a name defined in /etc/hosts (Docker)

Given this config for upstream

upstream postgrest {
        server remotecomputer:3000;
        balancer_by_lua_block {
            local balancer = require 'ngx.balancer'
            local host = 'remotecomputer'
            local port = 3000
            local ok, err = balancer.set_current_peer(host, port)
            if not ok then
                ngx.log(ngx.ERR, 'failed to set the current peer ', err)
                return ngx.exit(500)
            end
        }
        keepalive 64;
}

I get the error

2016/06/13 10:47:54 [error] 5#5: *1 [lua] balancer_by_lua:8: failed to set the current peer no host allowed while connecting to upstream, client: 192.168.99.1, server: localhost, request: "GET /.... HTTP/1.1", host: "192.168.99.100:8080"

If I comment out the whole balancer_by_lua_block, everything works.

Any idea what I'm doing wrong?
Thank you

unable to return ngx error code other than 500 in balancer_by_lua block

From the example given in https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/balancer.md#synopsis

Changing ngx.exit(500) to ngx.exit(503) or another error code still returns 500 (host set to google.com intentionally)

upstream backend {
    server 0.0.0.1;   # just an invalid address as a place holder

    balancer_by_lua_block {
        local balancer = require "ngx.balancer"

        -- well, usually we calculate the peer's host and port
        -- according to some balancing policies instead of using
        -- hard-coded values like below
        local host = "google.com"
        local port = 8080

        local ok, err = balancer.set_current_peer(host, port)
        if not ok then
            ngx.log(ngx.ERR, "failed to set the current peer: ", err)
            return ngx.exit(503)
        end
    }

    keepalive 10;  # connection pool
}

Does the Lua “ngx.balancer” support session re-use?

I am doing NGINX upstream with lua “ngx.balancer”. Balancing HTTPS requests.

In my scenario, HTTP requests are sent every 5 seconds to NGINX, which then balances HTTPS requests to the upstream server.

At this point, lua balancer has only 1 upstream server to load balance from.

All requests use the same “Host”, which means that once the SSL handshake is done, the same session ID can be re-used between NGINX and the upstream server (which is also running NGINX).

    location /one {
       proxy_pass https://upstream;
       proxy_http_version 1.1;

       proxy_ssl_trusted_certificate /my/trusted/certificate.pem;
       proxy_ssl_session_reuse on;
       proxy_ssl_verify on;
       proxy_ssl_verify_depth 2;
       proxy_ssl_name $host;
       proxy_ssl_server_name on;
    }

When using NGINX upstream, without the Lua “ngx.balancer”, every upstream HTTPS request establishes a new connection, but the SSL session is re-used. In other words, the Client Hello contains the Session ID from the previous connection and the previous session resumes. A full SSL handshake is not performed.

When using NGINX upstream, with the Lua “ngx.balancer”, the Client Hello never contains the Session ID from the previous connection. A full SSL handshake is needed.

Does the Lua “ngx.balancer” support session re-use?

balancer_by_lua_block: getting errors: attempt to send data on a closed socket

Hello,
I am running a performance test using balancer_by_lua_block. Periodically, I am seeing the following errors.

2016/07/07 03:17:04 [error] 2346#0: *200075 attempt to send data on a closed socket: u:0000000040095EA8, c:0000000000000000, client: xx.xxx.xxx.44, server: 127.0.0.1, request: "GET /perftest_category/success?http_status=201&body_size=100 HTTP/1.1", host: "xx.xxx.xxx.64:11111"
2016/07/07 03:17:04 [error] 2352#0: *207819 attempt to send data on a closed socket: u:00000000410F5058, c:0000000000000000, client: xx.xxx.xxx.44, server: 127.0.0.1, request: "GET /perftest_category/success?http_status=201&body_size=100 HTTP/1.1", host: "xx.xxx.xxx.64:11111"
2016/07/07 03:17:04 [error] 2346#0: *200075 attempt to send data on a closed socket: u:0000000040096068, c:0000000000000000, client: xx.xxx.xxx.44, server: 127.0.0.1, request: "GET /perftest_category/success?http_status=201&body_size=100 HTTP/1.1", host: "xx.xxx.xxx.64:11111"
2016/07/07 03:17:04 [error] 2352#0: *207819 attempt to send data on a closed socket: u:00000000410F4CB8, c:0000000000000000, client: xx.xxx.xxx.44, server: 127.0.0.1, request: "GET /perftest_category/success?http_status=201&body_size=100 HTTP/1.1", host: "xx.xxx.xxx.64:11111"
2016/07/07 03:17:04 [error] 2346#0: *200075 attempt to send data on a closed socket: u:0000000040095CC8, c:0000000000000000, client: xx.xxx.xxx.44, server: 127.0.0.1, request: "GET /perftest_category/success?http_status=201&body_size=100 HTTP/1.1", host: "xx.xxx.xxx.64:11111"
2016/07/07 03:17:04 [error] 2346#0: *200075 attempt to send data on a closed socket: u:0000000040095EA8, c:0000000000000000, client: xx.xxx.xxx.44, server: 127.0.0.1, request: "GET /perftest_category/success?http_status=201&body_size=100 HTTP/1.1", host: "xx.xxx.xxx.64:11111"
2016/07/07 03:17:04 [error] 2357#0: *203392 attempt to send data on a closed socket: u:00000000410F4E98, c:0000000000000000, client: xx.xxx.xxx.44, server: 127.0.0.1, request: "GET /perftest_category/success?http_status=201&body_size=100 HTTP/1.1", host: "xx.xxx.xxx.64:11111"

My code:

upstream balancer_upstream {

      server 0.0.0.1 max_fails=0 fail_timeout=5s;   # just an invalid address as a place holder

      balancer_by_lua_block {

          local balancer = require "ngx.balancer"  -- required; missing from the original snippet

          local upstream_servers_str = ngx.var.upstream_servers

          local upstream_servers = capi_util.json_to_lua(upstream_servers_str)
          local set_peer = false
          if not ngx.ctx.upstream_retries then
              ngx.ctx.upstream_retries = 0
          end

          if #upstream_servers == 1 then
              local ip_port = upstream_servers[1]
              local ok, err = balancer.set_current_peer(ip_port["ip"], ip_port["port"])
              set_peer = true
          else
              if ngx.ctx.upstream_retries < #upstream_servers then
                  local ok, err = balancer.set_more_tries(1)
                  local ip_port = upstream_servers[ngx.ctx.upstream_retries + 1]
                  ngx.ctx.upstream_retries = ngx.ctx.upstream_retries + 1
                  ok, err = balancer.set_current_peer(ip_port["ip"], ip_port["port"])
                  set_peer = true
              end
          end

          if not set_peer then
              ngx.status = 500
              return ngx.exit(500)
          end
      }

      keepalive 1000;  # connection pool
}

Don't blindly intern error strings

Some low-level functions don't set the errmsg pointer on every possible error. Case in point: if the shdict FFI functions are called with a NULL zone, they just return NGX_ERROR, and the Lua part calls ffi_string(errmsg[0]) with a stale message, or even a NULL value. (https://github.com/openresty/lua-nginx-module/blob/master/src/ngx_http_lua_shdict.c#L2639)

This could be fixed either on the Lua side with a more paranoid style (acknowledging that errmsg "might be set up or not") or on the C side by making sure that errmsg is always set before returning.
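On the Lua side, the "more paranoid style" could look roughly like this (a hypothetical helper modeled on resty.core's usual char *errmsg[1] out-parameter pattern; safe_err is my name, not an existing API):

```lua
local ffi = require "ffi"
local ffi_string = ffi.string

-- errmsg: the usual `char *[1]` out-parameter handed to the FFI call
local function safe_err(errmsg, default)
    -- only intern the C string when the C side actually set the pointer
    if errmsg[0] ~= nil then
        return ffi_string(errmsg[0])
    end
    return default or "unknown error"
end
```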

ssl_certificate_by_lua_* - ability to read client cipher suite

Since ngx.var.ssl_ciphers (the client-supported cipher suite) is not available in ssl_certificate_by_lua_*, is it possible to export this value in the ssl_certificate_by_lua context for cipher-based negotiation (EC for newer clients, RSA for older ones)?
