openresty / lua-resty-core
New FFI-based API for lua-nginx-module
https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/balancer.md#get_last_failure
Retrieves the failure details about the previous failed attempt (if any) when the next_upstream retrying mechanism is in action. When there was indeed a failed previous attempt, it returns a string describing that attempt's state name, as well as an integer describing the status code of that attempt.
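For context, a minimal usage sketch inside a balancer_by_lua_block (the hard-coded peer below is only a placeholder and is not part of the quoted documentation):
balancer_by_lua_block {
    local balancer = require "ngx.balancer"
    -- a nil state_name means this is the first attempt for this request
    local state_name, status_code = balancer.get_last_failure()
    if state_name then
        ngx.log(ngx.WARN, "previous attempt ", state_name,
                ", status: ", tostring(status_code))
    end
    local ok, err = balancer.set_current_peer("127.0.0.1", 8080)
    if not ok then
        ngx.log(ngx.ERR, "failed to set the current peer: ", err)
        return ngx.exit(500)
    end
}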
Hi all,
I am using nginx and want to compile lua-resty-core into nginx using the command below:
./configure --prefix=/...../nginx
--with-ld-opt="-Wl,-rpath,/....../luagit210b2/lib"
--add-module=/.../ngx_devel_kit-0.3.0
--add-module=/.../lua-nginx-module-0.10.6
--add-module=/.../lua-upstream-nginx-module
--add-module=/..../lua-resty-core
but this error occurs:
./configure: error: no /..../config was found
BTW, I am using git clone git@github.com:openresty/lua-resty-core.git to get the lua-resty-core code.
Thanks all .
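For reference, a hedged note (not from the original post): lua-resty-core is a pure-Lua library rather than an nginx C module, which is consistent with ./configure finding no config file for it. The usual setup is to point lua_package_path at its lib/ directory and require it from Lua; the paths below are placeholders:
http {
    # point this at wherever lua-resty-core's lib/ directory was cloned
    lua_package_path "/path/to/lua-resty-core/lib/?.lua;;";
    init_by_lua_block {
        require "resty.core"
    }
}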
Here is my nginx.conf
worker_processes 1;
error_log logs/error.log;
events {
worker_connections 1024;
}
http {
server {
listen 38000;
server_name localhost;
location = /t {
content_by_lua_file conf/tcp_demo.lua;
}
}
}
stream {
server {
listen 39999;
lua_socket_read_timeout 300s;
content_by_lua_file conf/tcp_demo.lua;
}
}
And the tcp_demo.lua code:
local semaphore = require "ngx.semaphore"
local sema = semaphore.new()
local function handler()
ngx.say("sub thread: waiting on sema...")
local ok, err = sema:wait(1) -- wait for a second at most
if not ok then
ngx.say("sub thread: failed to wait on sema: ", err)
else
ngx.say("sub thread: waited successfully.")
end
end
local co = ngx.thread.spawn(handler)
ngx.say("main thread: sleeping for a little while...")
ngx.sleep(0.1) -- wait a bit
ngx.say("main thread: posting to sema...")
sema:post(1)
ngx.say("main thread: end.")
The HTTP server works well, but the TCP server gets errors:
2017/03/29 11:39:02 [error] 77432#0: *24 stream lua entry thread aborted: runtime error: /data0/openresty/lualib/resty/core/base.lua:20: ngx_lua 0.10.7+ required
stack traceback:
coroutine 0:
[C]: in function 'require'
/data0/tcp_demo/conf/tcp_demo.lua:1: in function </data0/tcp_demo/conf/tcp_demo.lua:1> while handling client connection, client: 127.0.0.1, server: 0.0.0.0:39999
Hi there,
I'm having the same issue with the latest version of openresty 1.11.2 that includes:
lua-resty-core v0.1.11
lua-resty-redis v0.26
I did the exact same troubleshooting procedure and the code stops in the connect() function.
local red = redis:new()
ngx.log(ngx.ERR, "before... ")
local ok, err = red:connect("127.0.0.1", 6379)
ngx.log(ngx.ERR, "after...")```
also, i should mention that I can connect successfully to Redis in the access_by_lua_* section.
My Error_log:
```2017/08/08 15:53:08 [error] 7#7: *2 [lua] ssl_certificate_by_lua:14: REDIS: before... : nil, context: ssl_certificate_by_lua*, client: X.X.X.X, server: 0.0.0.0:443
2017/08/08 15:53:08 [info] 7#7: *1 peer closed connection in SSL handshake while loading SSL certificate by lua, client: X.X.X.X, server: 0.0.0.0:443```
stream {
upstream backend {
server 0.0.0.1:1234; # just an invalid address as a place holder
balancer_by_lua_block {
local balancer = require "ngx.balancer"
local host = "127.0.0.2"
local port = 8080
local ok, err = balancer.set_current_peer(host, port)
if not ok then
ngx.log(ngx.ERR, "failed to set the current peer: ", err)
return ngx.exit(ngx.ERR)
end
}
}
server {
# this is the real entry point
listen 10000;
location / {
content_by_lua_file xxxx;
proxy_pass backend;
}
}
}
God Zhang, I want to do some processing on the request URL in the content phase, but I found that this does not work when I put content_by_lua_file before proxy_pass.
So, how can I deal with the request before the balancer?
Thanks
From openresty/lua-upstream-nginx-module#1, I found this: https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/balancer.md.
But this still does not implement the feature of adding/deleting upstream servers from Lua code for more flexibility.
Do you have plans to do this?
Without going into too much detail, I
return ngx.exit(503)
in the balancer_by_lua block. However, I still get HTTP 500.
Is there a specific API that I need to use?
Given this config for upstream
upstream postgrest {
server remotecomputer:3000;
balancer_by_lua_block {
local balancer = require 'ngx.balancer'
local host = 'remotecomputer'
local port = 3000
local ok, err = balancer.set_current_peer(host, port)
if not ok then
ngx.log(ngx.ERR, 'failed to set the current peer ', err)
return ngx.exit(500)
end
}
keepalive 64;
}
I get the error
2016/06/13 10:47:54 [error] 5#5: *1 [lua] balancer_by_lua:8: failed to set the current peer no host allowed while connecting to upstream, client: 192.168.99.1, server: localhost, request: "GET /.... HTTP/1.1", host: "192.168.99.100:8080"
If I comment out the whole balancer_by_lua_block,
everything works.
Any idea what I'm doing wrong?
Thank you
Since ngx.var.ssl_ciphers (the client-supported cipher suite) is not available in ssl_certificate_by_lua_*, is it possible to export this value in the ssl_certificate_by_lua context for cipher-based negotiation (EC for newer clients, RSA for older ones)?
My nginx config file is:
user root;
worker_processes 1;
error_log logs/error.log;
pid logs/nginx.pid;
daemon on;
events {
use epoll;
worker_connections 1024;
}
http {
upstream backend {
# only a fake server, set it to an arbitrary value
server 127.0.0.1:8080;
balancer_by_lua_block {
local balancer = require "ngx.balancer"
local state_name, status_code = balancer.get_last_failure()
if state_name == nil then
ngx.log(ngx.ERR, "this is the first attempt")
else
ngx.log(ngx.ERR, "retrying because state_name: "..state_name,
", status_code: "..status_code)
end
local ok, err = balancer.set_more_tries(3) --XXX: !!!note here!!!
if not ok then
ngx.log(ngx.ERR, "set_more_tries failed, because: "..tostring(err))
end
ok, err = balancer.set_current_peer("127.0.0.1", 80) --XXX: !!!hard code here!!!
if not ok then
ngx.log(ngx.ERR, "set_current_peer failed, because: "..tostring(err))
return ngx.exit(500)
end
}
}
include mime.types;
default_type application/octet-stream;
log_format main '$request_time $upstream_response_time $remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/access.log main;
keepalive_timeout 0;
server {
listen 8080;
server_name localhost;
location / {
default_type text/plain;
proxy_next_upstream error timeout http_502 http_503 http_504 http_404;
proxy_next_upstream_tries 4;
proxy_next_upstream_timeout 5s;
proxy_set_header Host $host;
add_header Upstream-Addr $upstream_addr always;
proxy_pass http://backend;
}
}
}
Note the parameters below:
balancer.set_more_tries(3)
proxy_next_upstream_tries 4
add_header Upstream-Addr $upstream_addr always
Then, start nginx:
/usr/local/openresty/nginx/sbin/nginx -p . -c nginx2.conf
At last, test it:
curl -I -XGET http://127.0.0.1:8080/this_is_a_404_page/
I got these headers:
HTTP/1.1 404 Not Found
Server: openresty/1.11.2.1
Date: Tue, 08 Nov 2016 05:03:52 GMT
Content-Type: text/html
Content-Length: 175
Connection: close
Upstream-Addr: 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80
When I change balancer.set_more_tries(3) to balancer.set_more_tries(1), the test result is:
HTTP/1.1 404 Not Found
Server: openresty/1.11.2.1
Date: Tue, 08 Nov 2016 05:07:44 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 1845
Connection: close
X-Powered-By: Express
Cache-Control: no-cache, private, no-store, must-revalidate, max-stale=0, post-check=0, pre-check=0
ETag: W/"ubl3wwo7a8ZJ5opujSFyhQ=="
Vary: Accept-Encoding
Upstream-Addr: 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80, 127.0.0.1:80
So, how should the count passed to set_more_tries be set, or is this a bug?
Thank you.
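For what it's worth, a hedged sketch of calling set_more_tries only on the first attempt; my reading of the output above is that the balancer block runs again for every retry, so the extra tries keep being re-added:
balancer_by_lua_block {
    local balancer = require "ngx.balancer"
    local state_name, status_code = balancer.get_last_failure()
    if state_name == nil then
        -- first attempt only: allow up to 3 extra tries in total
        local ok, err = balancer.set_more_tries(3)
        if not ok then
            ngx.log(ngx.ERR, "set_more_tries failed: ", err)
        end
    end
    local ok, err = balancer.set_current_peer("127.0.0.1", 80)
    if not ok then
        ngx.log(ngx.ERR, "set_current_peer failed: ", err)
        return ngx.exit(500)
    end
}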
Hi, yichun,
At present I want to define my own load balancing policy with balancer_by_lua while session stickiness is enabled at the same time. Can these two features work together?
Thanks
With the following configuration, accessing http://127.0.0.1:8800/test2 returns a 404:
`
proxy_next_upstream error timeout invalid_header http_504 http_404; # which errors trigger an automatic retry
proxy_http_version 1.1;
proxy_set_header Connection "";
server {
access_log logs/access.log;
server_name up_1;
listen 8801;
location /test1 {content_by_lua 'ngx.say(ngx.var.server_port,":",ngx.var.uri)';}
}
server {
access_log logs/access.log;
server_name up_2;
listen 8802;
location /test2 {content_by_lua 'ngx.say(ngx.var.server_port,":",ngx.var.uri)';}
}
server {
server_name up_0;
listen 8800;
location = /favicon.ico { return 200; }
location / {
proxy_pass http://up_test;
log_by_lua '
ngx.log(ngx.ERR,"\nuri=",ngx.var.uri,"upstream_addr:",ngx.var.upstream_addr,"upstream_status:",ngx.var.upstream_status)
';
}
}
upstream up_test {
server 0.0.0.1; # just an invalid address as a place holder
balancer_by_lua_block {
local balancer = require "ngx.balancer"
local upservers={3,
{"127.0.0.1",8801},
{"127.0.0.1",8802},
{"127.0.0.1",8803}
}
local state_name, status_code = balancer.get_last_failure()
ngx.log(ngx.ERR,"\nstate_name:",state_name, "status_code:",status_code)
local ibf=require "randombuff"
local index=ibf.index() -- increments by 1 on each call, simulating a round-robin balancing policy
index=math.mod(index,upservers[1])+2
local ok, err = balancer.set_current_peer(upservers[index][1], upservers[index][2])
ngx.log(ngx.ERR,"n\uri:",ngx.var.uri,"index;",index,"ok:",ok, err)
if not ok then
ngx.log(ngx.ERR, "failed to set the current peer: ", err)
return ngx.exit(500)
end
}
keepalive 10; # connection pool
}`
If the upstream is configured as a plain cluster like the following, there is no 404:
upstream up_test2 {
server 127.0.0.1:8801;
server 127.0.0.1:8802;
server 127.0.0.1:8803;
keepalive 10;
}
Is this a bug, or is something wrong with my configuration?
Hi,
I am currently using lua-resty-auto-ssl and it's working like a charm, thank you.
I just noticed that it's generating 4096-bit keys and I would like them to be 2048-bit (in openssl.cnf I have 2048 as the default). I've looked through the config files, but I haven't found where I should set this.
Can you help me?
Thanks
example:
local ngx_re = require "ngx.re"
local res, err = ngx_re.split("c:574|576|20170424155217","|")
log:
lua entry thread aborted: memory allocation error: not enough memory
Using a plain Lua split function instead:
function split(str, split_char)
local sub_str_tab = {};
while (true) do
local pos = string.find(str, split_char, 1, true);
if (not pos) then
sub_str_tab[#sub_str_tab + 1] = str;
break;
end
local sub_str = string.sub(str, 1, pos - 1);
sub_str_tab[#sub_str_tab + 1] = sub_str;
str = string.sub(str, pos + 1, #str);
end
return sub_str_tab;
end
local res, err = split("c:574|576|20170424155217","|")
ngx.say(res[1])
result:
c:574
I do not know if I have used it correctly.
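For reference, a hedged guess: ngx.re.split treats the separator as a PCRE pattern, and a bare "|" can match the empty string at every position, which may explain the blow-up above; escaping it might behave as intended (untested sketch):
local ngx_re = require "ngx.re"
-- escape the pipe so it is a literal separator rather than an empty alternation
local res, err = ngx_re.split("c:574|576|20170424155217", "\\|")
if not res then
    ngx.log(ngx.ERR, "split failed: ", err)
else
    ngx.say(res[1])  -- expected: c:574
end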
The following configuration using a unix socket as the current peer works, but it is not documented.
Adding the line:
local ok, err = balancer.set_current_peer('some_unix_socket_name');
would suffice
http {
upstream backend {
server 0.0.0.1; # just an invalid address as a place holder
balancer_by_lua_block {
local balancer = require "ngx.balancer"
local suname = 'unix:/tmp/nginx_socket';
local ok, err = balancer.set_current_peer(suname);
if not ok then
ngx.log(ngx.ERR, "failed to set the current peer: ", err)
return ngx.exit(500)
end
}
}
server {
listen 8080;
location /S/ {
proxy_pass http://backend/S/;
}
}
server {
listen unix:/tmp/nginx_socket;
location /S/__debug {
content_by_lua_file ./locations/debug;
}
}
}
}
It seems that signal_graceful_exit
only sets the ngx_quit flag
instead of triggering a real graceful exit...
Referring to the Nginx source:
sigsuspend(&set);
...
if (ngx_quit) {
I guess that if a graceful exit is needed, a signal must be sent to resume the master process after sigsuspend.
Maybe I am missing something.
A great piece of documentation (especially if you have already performed the analysis) would be an explanation of when it is best to use the FFI-based methods (i.e. loops etc. where the JIT compiler can do a good job), and when (if ever) the overhead of the small amounts of Lua code exceeds that of the standard Lua module API.
I am guessing you did this research before starting the module.
Have a look; it fails with something related to semaphores:
https://travis-ci.org/chipitsine/lua-resty-core/builds/128364836
ok 162 - TEST 1: clear certs - pattern "[emerg]" does not match a line in error.log (req 1)
WARNING: TEST 1: clear certs - 2016/05/06 18:15:59 [crit] 25576#0: *4 SSL_shutdown() failed (SSL: error:140E0197:SSL routines:SSL_shutdown:shutdown while in init), client: 127.0.0.1, server: localhost, request: \"GET /t HTTP/1.1\", host: \"localhost\" at /usr/local/share/perl/5.18.2/Test/Nginx/Socket.pm line 1192.
ok 163 - TEST 14: ngx.semaphore in ssl_certificate_by_lua* - status code ok
not ok 164 - TEST 14: ngx.semaphore in ssl_certificate_by_lua* - response_body - response is expected (req 0)
# Failed test 'TEST 14: ngx.semaphore in ssl_certificate_by_lua* - response_body - response is expected (req 0)'
# at /usr/local/share/perl/5.18.2/Test/Nginx/Socket.pm line 1277.
# @@ -1,2 +1,2 @@
# connected: 1
# -ssl handshake: boolean
# +failed to do SSL handshake: handshake failed
not ok 165 - TEST 14: ngx.semaphore in ssl_certificate_by_lua* - grep_error_log_out (req 0)
The version number check fails in the stream lua module.
We can use ssl_password_file
in nginx's *.conf
files to set password files. How can we do the same thing in Lua?
See
t/request.t .. 1/180
# Failed test 'TEST 14: ngx.req.set_header (single number value) - status code ok'
# at /home/agentzh/git/lua-resty-core/../test-nginx/lib/Test/Nginx/Socket.pm line 936.
# got: ''
# expected: '200'
# Failed test 'TEST 14: ngx.req.set_header (single number value) - response_body - response is expected (repeated req 0, req 0)'
# at /home/agentzh/git/lua-resty-core/../test-nginx/lib/Test/Nginx/Socket.pm line 1501.
# got: ""
# length: 0
# expected: "header foo: 500\x{0a}"
# length: 16
# strings begin to differ at char 1 (line 1 column 1)
TEST 14: ngx.req.set_header (single number value) - Can't connect to 127.0.0.1:5855: Connection refused
Retry connecting after 0.63 sec
TEST 14: ngx.req.set_header (single number value) - Can't connect to 127.0.0.1:5855: Connection refused
Retry connecting after 0.693 sec
TEST 14: ngx.req.set_header (single number value) - Can't connect to 127.0.0.1:5855: Connection refused
Retry connecting after 0.759 sec
TEST 14: ngx.req.set_header (single number value) - Can't connect to 127.0.0.1:5855: Connection refused
Retry connecting after 0.828 sec
TEST 14: ngx.req.set_header (single number value) - Can't connect to 127.0.0.1:5855: Connection refused
Retry connecting after 0.9 sec
...
GDB found no C backtraces:
(gdb) c
Continuing.
Program received signal SIGSEGV, Segmentation fault.
0x00007f9f846dfece in ?? ()
(gdb) bt full
#0 0x00007f9f846dfece in ?? ()
No symbol table info available.
#1 0x00007f9f5aec1d08 in ?? ()
No symbol table info available.
#2 0x000000000041c3b0 in ?? ()
No symbol table info available.
#3 0x00007ffde7944390 in ?? ()
No symbol table info available.
#4 0x00007f9f5aeab330 in ?? ()
No symbol table info available.
#5 0x00007f9f5aec56c8 in ?? ()
No symbol table info available.
#6 0x00007ffde7944390 in ?? ()
No symbol table info available.
#7 0x000000000041c3b0 in ?? ()
No symbol table info available.
#8 0x0000000000000000 in ?? ()
No symbol table info available.
I just noticed that the below code is valid but throws a lot of runtime errors, incl. segfaults.
From the docs, ssl_certificate_by_lua should not be valid in http{}, only in server{}, right?
http {
ssl_certificate_by_lua_block { print("hello") }
server {
listen 443 ssl;
}
}
Two sample errors:
SSL_do_handshake() failed (SSL: error:140A1175:SSL routines:ssl_bytes_to_cipher_list:inappropriate fallback) while SSL handshaking
worker process 45145 exited on signal 11 (core dumped)
Hi Yichun,
If you call either of the pem_to_der conversion functions in ssl_certificate_by_lua and they throw an error, it appears to break the handshake, even though the default certs haven't been cleared and no new cert has been set.
Gist with example config and output: https://gist.github.com/hamishforbes/402c4cebef665969cb34
It would seem preferable to be able to continue the handshake, albeit with a probably incorrect cert/key, than to break entirely?
Hamish
I believe I found a limitation in the resty.core implementation of ngx.re.(g)sub. Examine the following (very contrived) example, without loading resty.core:
content_by_lua '
local lookup = function(m)
-- note we are returning a number type here
return 5
end
local newstr, n, err = ngx.re.sub("hello, 1234", "([0-9])[0-9]", lookup, "oij")
ngx.say(newstr)
';
This performs as expected:
$ curl localhost/re-test
hello, 55
However, when we enable resty.core, this fails with the following:
2016/02/01 08:49:52 [debug] 4659#0: pcre JIT compiling result: 1
2016/02/01 08:49:52 [debug] 4659#0: *1 [lua] content_by_lua(nginx.conf:271):3: replace(): m[0] is 12
2016/02/01 08:49:52 [debug] 4659#0: *1 lua resume returned 2
2016/02/01 08:49:52 [error] 4659#0: *1 lua entry thread aborted: runtime error: /usr/local/openresty/lualib/resty/core/regex.lua:641: attempt to get length of local 'bit' (a number value)
stack traceback:
coroutine 0:
/usr/local/openresty/lualib/resty/core/regex.lua: in function 'gsub'
content_by_lua(nginx.conf:271):7: in function <content_by_lua(nginx.conf:271):1>, client: 127.0.0.1, server: localhost, request: "GET /re-test HTTP/1.1", host: "localhost"
Obviously, we can see why in regex.lua this fails:
587 local function re_sub_func_helper(subj, regex, replace, opts, global)
[...snip...]
638 local res = collect_captures(compiled, rc, subj, flags)
639
640 local bit = replace(res)
641 local bit_len = #bit
When replace
returns a number value, we see our issue.
There are other locations where the #
operator is called that could lead to a similar issue. For example:
347 local function re_match_helper(subj, regex, opts, ctx, want_caps, res, nth)
[...snip...]
374 rc = C.ngx_http_lua_ffi_exec_regex(compiled, flags, subj, #subj, pos)
If subj
is a number, we end up with a thread abort. I could make this work by casting the return value in my lookup function via tostring(), but later in my actual use case I need to perform numeric comparisons, so forcing two casts as a workaround seems wasteful. Any thoughts here? For reference, this is using a fresh download of openresty 1.9.7.3, with a fairly vanilla configuration (compiled with PCRE 8.38 w/ JIT):
root@soter:~# /usr/local/openresty/nginx/sbin/nginx -V
nginx version: openresty/1.9.7.3
built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04)
built with OpenSSL 1.0.1f 6 Jan 2014
TLS SNI support enabled
configure arguments: --prefix=/usr/local/openresty/nginx --with-debug --with-cc-opt='-DNGX_LUA_USE_ASSERT -DNGX_LUA_ABORT_AT_PANIC -O2' --add-module=../ngx_devel_kit-0.2.19 --add-module=../echo-nginx-module-0.58 --add-module=../xss-nginx-module-0.05 --add-module=../ngx_coolkit-0.2rc3 --add-module=../set-misc-nginx-module-0.29 --add-module=../form-input-nginx-module-0.11 --add-module=../encrypted-session-nginx-module-0.04 --add-module=../srcache-nginx-module-0.30 --add-module=../ngx_lua-0.10.0 --add-module=../ngx_lua_upstream-0.04 --add-module=../headers-more-nginx-module-0.29 --add-module=../array-var-nginx-module-0.04 --add-module=../memc-nginx-module-0.16 --add-module=../redis2-nginx-module-0.12 --add-module=../redis-nginx-module-0.3.7 --add-module=../rds-json-nginx-module-0.14 --add-module=../rds-csv-nginx-module-0.07 --with-ld-opt=-Wl,-rpath,/usr/local/openresty/luajit/lib --with-pcre=/usr/local/src/pcre-8.38 --with-pcre-jit --with-pcre-opt=-g --with-http_ssl_module
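For completeness, a minimal sketch of the tostring() workaround mentioned above (the extra casts it forces are exactly what feels wasteful):
content_by_lua_block {
    local lookup = function(m)
        -- cast the numeric result to a string so the length operator in
        -- resty/core/regex.lua receives a string instead of a number
        return tostring(5)
    end
    local newstr, n, err = ngx.re.gsub("hello, 1234", "([0-9])[0-9]", lookup, "oij")
    if not newstr then
        ngx.log(ngx.ERR, "gsub failed: ", err)
        return
    end
    ngx.say(newstr)
}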
Hello,
I am using balancer_by_lua for upstream routing, which works great. We are seeing intermittent errors where nginx is caching the connection while the upstream has already terminated it.
See the image below: xxx.xxx.xxx.159 is the OpenResty server and xxx.xxx.xxx.27 is the upstream server.
Thanks,
Rohit Joshi
When I compile nginx with the new ssl_session branch and implement ssl_session_fetch_by_lua,
I get the error: cannot yield in sess get cb: missing async sess get cb support in OpenSSL
sysinfo: Linux version 3.10.0-123.el7.x86_64 ([email protected]) (gcc version 4.8.2 20140120 (Red Hat 4.8.2-16) (GCC) ) #1 SMP Mon Jun 30 12:09:22 UTC 2014
Hi,
When I call ssl.set_der_priv_key(), I get an error like this:
2017/07/20 11:27:00 [error] 12735#12735: *4 failed to load external Lua file "/danbi/nginx/conf.d/route.lua": /danbi/nginx/conf.d/route.lua:45: unfinished string near '")', client: 124.53.127.11, server: _, request: "GET / HTTP/1.1", host: "speakingsolution.com"
ssl.set_priv_key() gives the same error.
I'm using openresty/1.11.2.4.
My code here.
`
-- set key
local der_pkey, err = ssl.priv_key_pem_to_der(pkey_data)
if not der_pkey then
ngx.log(ngx.ERR, "failed to convert private key ", "from PEM to DER: ", err)
return ngx.exit(ngx.ERROR)
end
local ok, err = ssl.set_der_priv_key(der_pkey)
if not ok then
ngx.log(ngx.ERR, "failed to set DER private key: ", err)
return ngx.exit(ngx.ERROR)
end
`
Thanks in advance.
The problem is this line:
https://github.com/openresty/lua-resty-core/blob/master/t/balancer-timeout.t#L19
t/balancer-timeout.t ... nginx: [emerg] invalid event type "poll" in /Users/bungle/Sources/vendor/lua-resty-core/t/servroot/conf/nginx.conf:74
Is there anything I can do about it on my side?
Can I configure the number of privileged agent processes?
Hi,
I'm testing the ngx.balancer
module and I believe the lua_code_cache
directive is being ignored?
I have the following backend configuration:
http {
lua_code_cache off;
upstream backend {
server 127.0.0.1;
balancer_by_lua_file my_balancer.lua;
}
}
Code changes in the my_balancer.lua
file are reflected only when I restart nginx. Other blocks, like for example content_by_lua_file
are working as expected (no restart required).
I'm testing all this on a fresh 1.9.7.3 OpenResty installation.
When proxy_next_upstream_timeout is set to 0 (the default), nginx will try all upstream servers.
When proxy_next_upstream_timeout is set to 0 (the default) and balancer.set_more_tries(x > 0) is used, we get "reduced tries due to limit" and the retries stop.
Here balancer.set_more_tries() seems not to do what we want?
Hello,
I am running a performance test using balancer by lua block module. Periodically, I am seeing following errors.
2016/07/07 03:17:04 [error] 2346#0: *200075 attempt to send data on a closed socket: u:0000000040095EA8, c:0000000000000000, client: xx.xxx.xxx.44, server: 127.0.0.1, request: "GET /perftest_category/success?http_status=201&body_size=100 HTTP/1.1", host: "xx.xxx.xxx.64:11111"
2016/07/07 03:17:04 [error] 2352#0: *207819 attempt to send data on a closed socket: u:00000000410F5058, c:0000000000000000, client: xx.xxx.xxx.44, server: 127.0.0.1, request: "GET /perftest_category/success?http_status=201&body_size=100 HTTP/1.1", host: "xx.xxx.xxx.64:11111"
2016/07/07 03:17:04 [error] 2346#0: *200075 attempt to send data on a closed socket: u:0000000040096068, c:0000000000000000, client: xx.xxx.xxx.44, server: 127.0.0.1, request: "GET /perftest_category/success?http_status=201&body_size=100 HTTP/1.1", host: "xx.xxx.xxx.64:11111"
2016/07/07 03:17:04 [error] 2352#0: *207819 attempt to send data on a closed socket: u:00000000410F4CB8, c:0000000000000000, client: xx.xxx.xxx.44, server: 127.0.0.1, request: "GET /perftest_category/success?http_status=201&body_size=100 HTTP/1.1", host: "xx.xxx.xxx.64:11111"
2016/07/07 03:17:04 [error] 2346#0: *200075 attempt to send data on a closed socket: u:0000000040095CC8, c:0000000000000000, client: xx.xxx.xxx.44, server: 127.0.0.1, request: "GET /perftest_category/success?http_status=201&body_size=100 HTTP/1.1", host: "xx.xxx.xxx.64:11111"
2016/07/07 03:17:04 [error] 2346#0: *200075 attempt to send data on a closed socket: u:0000000040095EA8, c:0000000000000000, client: xx.xxx.xxx.44, server: 127.0.0.1, request: "GET /perftest_category/success?http_status=201&body_size=100 HTTP/1.1", host: "xx.xxx.xxx.64:11111"
2016/07/07 03:17:04 [error] 2357#0: *203392 attempt to send data on a closed socket: u:00000000410F4E98, c:0000000000000000, client: xx.xxx.xxx.44, server: 127.0.0.1, request: "GET /perftest_category/success?http_status=201&body_size=100 HTTP/1.1", host: "xx.xxx.xxx.64:11111"
My code:
upstream balancer_upstream {
server 0.0.0.1 max_fails=0 fail_timeout=5s; # just an invalid address as a place holder
balancer_by_lua_block {
local balancer = require "ngx.balancer"
local upstream_servers_str = ngx.var.upstream_servers
local upstream_servers = capi_util.json_to_lua(upstream_servers_str)
local set_peer = false
if not ngx.ctx.upstream_retries then
ngx.ctx.upstream_retries = 0
end
if #upstream_servers == 1 then
local ip_port = upstream_servers[1]
local ok, err = balancer.set_current_peer(ip_port["ip"], ip_port["port"])
set_peer = true
else
if ngx.ctx.upstream_retries < #upstream_servers then
local ok, err = balancer.set_more_tries(1)
local ip_port = upstream_servers[ngx.ctx.upstream_retries + 1]
ngx.ctx.upstream_retries = ngx.ctx.upstream_retries + 1
ok, err = balancer.set_current_peer(ip_port["ip"], ip_port["port"])
set_peer = true
end
end
if not set_peer then
ngx.status = 500
return ngx.exit(500)
end
}
keepalive 1000; # connection pool
}
Can I use balancer.lua with hash- or weight-based routing to pick the upstream server?
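For what it's worth, a rough sketch of hash-based peer selection in a balancer_by_lua_block; the server list and the hash key are hypothetical, and weighted selection would need its own logic on top:
balancer_by_lua_block {
    local balancer = require "ngx.balancer"
    -- hypothetical hard-coded peers; real code would load these from a shared dict or config
    local peers = {
        { "127.0.0.1", 8081 },
        { "127.0.0.1", 8082 },
        { "127.0.0.1", 8083 },
    }
    -- hash on the client address so the same client tends to hit the same peer
    local key = ngx.var.remote_addr or ""
    local idx = (ngx.crc32_long(key) % #peers) + 1
    local ok, err = balancer.set_current_peer(peers[idx][1], peers[idx][2])
    if not ok then
        ngx.log(ngx.ERR, "failed to set the current peer: ", err)
        return ngx.exit(500)
    end
}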
env
host
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.2 LTS"
logstash version 5.5.2
openresty version 1.11.2.5
openresty configure params
sudo ./configure --prefix=/etc/openresty \
--user=nginx \
--group=nginx \
--with-cc-opt='-O2 -I/usr/local/openresty/zlib/include -I/usr/local/openresty/pcre/include -I/usr/local/openresty/openssl/include' \
--with-ld-opt='-Wl,-rpath,/usr/local/openresty/luajit/lib -L/usr/local/openresty/zlib/lib -L/usr/local/openresty/pcre/lib -L/usr/local/openresty/openssl/lib -Wl,-rpath,/usr/local/openresty/zlib/lib:/usr/local/openresty/pcre/lib:/usr/local/openresty/openssl/lib' \
--with-pcre-jit \
--with-stream \
--with-stream_ssl_module \
--with-http_v2_module \
--with-http_stub_status_module \
--with-http_realip_module \
--with-http_gzip_static_module \
--with-http_sub_module \
--with-http_gunzip_module \
--with-threads \
--with-file-aio \
--with-http_ssl_module \
--with-http_auth_request_module \
--without-mail_pop3_module \
--without-mail_imap_module \
--without-mail_smtp_module \
--without-http_fastcgi_module \
--without-http_uwsgi_module \
--without-http_scgi_module \
--without-http_autoindex_module \
--without-http_memcached_module \
--without-http_empty_gif_module \
--without-http_ssi_module \
--without-http_userid_module \
--without-http_browser_module \
--without-http_rds_json_module \
--without-http_rds_csv_module \
--without-http_memc_module \
--without-http_redis2_module \
--without-lua_resty_memcached \
--without-lua_resty_mysql \
-j4
sudo make -j4
sudo make install
nginx.conf
error_log logs/error.log error;
pid /var/run/nginx.pid;
worker_rlimit_nofile 10240;
events {
worker_connections 10240;
}
http {
include mime.types;
server {
listen 80;
location / {
content_by_lua '
local sock,err = ngx.socket.tcp()
if not sock then
ngx.say("init socket has error : ",err)
else
ngx.say("init socket is ok")
end
local ok, err = sock:connect("127.0.0.1", 5044)
if not ok then
ngx.say("create connect has error : ",err)
else
ngx.say("create connect is ",ok)
end
local bytes, err = sock:send("this is test msg")
if not bytes then
ngx.say("socket send msg has error : ",err)
else
ngx.say("sended bytes size: " ,bytes)
end
local ok, err = sock:setkeepalive(0, 100)
if not ok then
ngx.say("set keepalive has error : ",err)
else
ngx.say("set keepalive is ",ok)
end
';
}
}
}
logstash conf named demo.conf
input {
tcp {
port => "5044"
codec => "plain"
}
}
output {
stdout { codec => rubydebug }
}
./bin/logstash -f demo.conf
wait for output Successfully started Logstash API endpoint {:port=>9600}
curl localhost
# openresty output
init socket is ok
create connect is 1
sended bytes size: 16
set keepalive is 1
The logstash console does not output anything.
sudo tcpdump -i any -vvv -n -A port 5044
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
18:36:40.188829 IP (tos 0x0, ttl 64, id 23893, offset 0, flags [DF], proto TCP (6), length 68)
127.0.0.1.49714 > 127.0.0.1.5044: Flags [P.], cksum 0xfe38 (incorrect -> 0x8862), seq 2638081705:2638081721, ack 3382832894, win 342, options [nop,nop,TS val 2222059870 ecr 2222058789], length 16
E..D]U@.@..\.........2...=.........V.8.....
.q.^.q.%this is test msg
18:36:40.188838 IP (tos 0x0, ttl 64, id 23861, offset 0, flags [DF], proto TCP (6), length 52)
127.0.0.1.5044 > 127.0.0.1.49714: Flags [.], cksum 0xfe28 (incorrect -> 0x7145), seq 1, ack 16, win 342, options [nop,nop,TS val 2222059870 ecr 2222059870], length 0
E..4]5@[email protected].....=.....V.(.....
.q.^.q.^
^C
2 packets captured
4 packets received by filter
0 packets dropped by kernel
Wireshark shows many TCP Dup ACK and TCP Out-Of-Order entries.
curl localhost:5044
logstash console output
{
"@timestamp" => 2017-09-08T10:33:19.254Z,
"port" => 49710,
"@version" => "1",
"host" => "127.0.0.1",
"message" => "GET / HTTP/1.1\r"
}
{
"@timestamp" => 2017-09-08T10:33:19.257Z,
"port" => 49710,
"@version" => "1",
"host" => "127.0.0.1",
"message" => "Host: localhost:5044\r"
}
{
"@timestamp" => 2017-09-08T10:33:19.258Z,
"port" => 49710,
"@version" => "1",
"host" => "127.0.0.1",
"message" => "User-Agent: curl/7.47.0\r"
}
{
"@timestamp" => 2017-09-08T10:33:19.259Z,
"port" => 49710,
"@version" => "1",
"host" => "127.0.0.1",
"message" => "Accept: */*\r"
}
{
"@timestamp" => 2017-09-08T10:33:19.259Z,
"port" => 49710,
"@version" => "1",
"host" => "127.0.0.1",
"message" => "\r"
}
telnet 127.0.0.1 5044
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
this is test msg
^]
telnet> Connection closed.
Ctrl+] and Ctrl+D exit telnet
logstash output msg
{
"@timestamp" => 2017-09-08T10:34:28.588Z,
"port" => 49712,
"@version" => "1",
"host" => "127.0.0.1",
"message" => "this is test msg\r"
}
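For reference, a hedged guess about why the message sent from content_by_lua never shows up: the Logstash tcp input frames events on newlines, and both the curl and telnet tests end their lines with CRLF while the Lua snippet sends no line terminator. A minimal sketch with the payload newline-terminated (otherwise the same connection sequence as above):
local sock = ngx.socket.tcp()
local ok, err = sock:connect("127.0.0.1", 5044)
if not ok then
    ngx.say("create connect has error : ", err)
    return
end
-- the trailing "\n" lets the line-oriented input flush the event
local bytes, err = sock:send("this is test msg\n")
if not bytes then
    ngx.say("socket send msg has error : ", err)
    return
end
sock:setkeepalive(0, 100)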
Some low-level functions don't set the errmsg
pointer on every possible error. Case in point: if the shdict FFI functions are called with a NULL zone, they just return NGX_ERROR, and the Lua part calls ffi_string(errmsg[0])
with a stale message, or even a NULL value. (https://github.com/openresty/lua-nginx-module/blob/master/src/ngx_http_lua_shdict.c#L2639)
This could be fixed either in Lua with a more paranoid style (acknowledging that errmsg "might be set up or not"), or on the C side by making sure that errmsg is always set before returning.
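For illustration, a rough sketch of the "more paranoid" Lua-side handling suggested above (hypothetical helper; the real code lives in the resty.core modules):
local ffi = require "ffi"
local ffi_string = ffi.string
local function shdict_error(errmsg, default_msg)
    -- only dereference errmsg when the C side actually set it
    if errmsg ~= nil and errmsg[0] ~= nil then
        return ffi_string(errmsg[0])
    end
    return default_msg or "unknown error"
end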
Hello, I find that in order to enable retries, the proxy_next_upstream_tries directive needs to be explicitly set to a value greater than its default of zero. I think this is a bug, or at least it is incompatible with the original upstream failover policy, because the documentation says the default zero value means nginx won't limit the number of retries, not that retries are disabled. If it is by design, it would be better to state this clearly in the documentation.
I am doing NGINX upstream with lua “ngx.balancer”. Balancing HTTPS requests.
In my scenario, HTTP requests are sent every 5 seconds to NGINX, which then balances HTTPS requests to the upstream server.
At this point, lua balancer has only 1 upstream server to load balance from.
All requests use the same “Host”, which means that once the SSL handshake is done, the same session ID can be re-used between NGINX and the upstream server (which is also running NGINX).
location /one {
proxy_pass https://upstream;
proxy_http_version 1.1;
proxy_ssl_trusted_certificate /my/trusted/certificate.pem;
proxy_ssl_session_reuse on;
proxy_ssl_verify on;
proxy_ssl_verify_depth 2;
proxy_ssl_name $host;
proxy_ssl_server_name on;
}
When using NGINX upstream without lua "ngx.balancer", every upstream HTTPS request establishes a new connection, but the SSL session is re-used. In other words, the Client Hello contains the Session ID from the previous connection and the previous session resumes. A full SSL handshake is not performed.
When using NGINX upstream with lua "ngx.balancer", the Client Hello never contains the Session ID from the previous connection. A full SSL handshake is needed.
Does the lua “ngx.balancer” support session re-use ?
balancer_by_lua
in the latest code has support for connect/read timeouts. Is it safe to replace the lua-resty-core-0.1.6
version with the latest code, or should I just replace the lib/ngx/balancer.lua
file?
We have been playing with ngx.ocsp. Ideally, we'd like to cache the CA's responses. Is there a way to find out for how long the responses are valid, similar to https://github.com/indutny/ocsp/blob/master/lib/ocsp/cache.js#L81-L117?
From the example given in https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/balancer.md#synopsis
Changing ngx.exit(500) to ngx.exit(503) or another error code still returns 500 (host set to google.com intentionally)
upstream backend {
server 0.0.0.1; # just an invalid address as a place holder
balancer_by_lua_block {
local balancer = require "ngx.balancer"
-- well, usually we calculate the peer's host and port
-- according to some balancing policies instead of using
-- hard-coded values like below
local host = "google.com"
local port = 8080
local ok, err = balancer.set_current_peer(host, port)
if not ok then
ngx.log(ngx.ERR, "failed to set the current peer: ", err)
return ngx.exit(503)
end
}
keepalive 10; # connection pool
}
Hi there, I've done the following PoC using ssl_certificate_by_lua to dynamically load a certificate from a Redis server. However, it doesn't work: I can't connect to the database.
The first log line appears correctly in my error log, but the second is never reached. So I assume the red:connect statement is blocking.
Is it by design? Can't I use the coroutine/TCP-related API here? What is the workaround?
Versions installed are
Nginx is stopping client connection, curl is exiting with the following error:
The nginx configuration:
server {
listen 80 default_server;
listen 443 ssl default_server;
server_name default;
access_log /var/log/nginx/app-access.log;
error_log /var/log/nginx/app-error.log;
ssl_certificate /etc/ssl/web/default.crt;
ssl_certificate_key /etc/ssl/web/default.key;
ssl_certificate_by_lua_block {
local ssl = require "ngx.ssl"
local redis = require "resty.redis"
local red = redis:new()
ngx.log(ngx.ERR, "Before connection")
local ok, err = red:connect("127.0.0.1", 6379)
ngx.log(ngx.ERR, ok..""..err)
... More logic (clean old cert, setup new)
}
}
It would be quite useful if we could get the configured error_log
level via Lua code.
For instance, we could do something like this:
-- config.lua
local _M = {}
_M.log_level = ngx.config.log_level()
return _M
-- req.lua
local config = require "config"
if config.log_level >= ngx.WARN then
-- Now we save an `encode` operation if we configure the error_log level to `error`.
ngx.log(ngx.WARN, cjson.encode(obj), ...)
end
Currently, ngx.log
handles the lua parameters only when the given level is higher than the configured one. And maybe we don't need to care about those log levels higher than debug
in Lua land. So what we need to do is just extract some existing code into a new function.
Here is a question: since we could implement this feature in a separate Nginx C module, will you still accept it as a part of OpenResty's feature list?
Hello.
It's possible not a bug, but it's our current problem.
Sometimes, about 1 in 1000 requests hits an error in this function, as I understand it on this line: https://github.com/openresty/lua-resty-core/blob/master/lib/ngx/ssl.lua#L160 because no error string is returned.
Of course, part of these errors may be caused by genuinely malformed requests. But yesterday we checked with a 100% trusted request (from Yandex Money), and the problem really exists. Part of the requests from ya.money come to one listen IP and have no problem, and part come to another IP and hit this problem. The nginx config is the same for both.
Is there any way to check what's wrong? And also, are there any nginx options that can affect this code?
I'm a little confused about requiring some modules. I was using the latest OpenResty bundle, and while working with it I found I could use require "redis"
but not require "resty.redis"
as documented, and require "ngx.balancer"
doesn't work no matter what I do. Please advise.
I apologize if this is the wrong repo (please point me to the correct one where I can raise this issue).
When I profile open resty with the nginx-systemtap-toolkit tools, especially the ngx-pcrejit tool, it tells me that PCRE JIT is not enabled.
root@myhost:~/nginx-systemtap-toolkit# ./ngx-pcrejit -p 6018
Tracing 6018 (/usr/sbin/nginx)...
Hit Ctrl-C to end.
^C
ngx_http_lua_ffi_exec_regex: 0 of 225968 are PCRE JITted.
ngx_http_regex_exec: 0 of 112985 are PCRE JITted.
Looks like JIT support is missing in your PCRE build.
However, I have enabled pcre jit support. I used the standard build script from docker-openresty/trusty
wherein I enabled --with-pcre
and --with-pcre-jit
. The nginx config also shows that pcre jit is enabled.
Am I supposed to enable something else for jit compiling PCRE regexes?
I even tried the nginx directive pcre_jit on;
. No use.
Am I missing something?
The code is like below.
sema.lua:
local _M = {}
local semaphore = require("ngx.semaphore")
_M.sema = semaphore.new(1)
return _M
core.lua:
local upsSema = require "sema"
local sema = upsSema.sema
local ok, err = sema:wait(0.01)
if not ok then
-- lock failed acquired
-- but go on. This action just set a fence for all but this request
end
local ups = upstreamCache:getUpstream(runtime, userInfo)
if ups ~= nil then
if ok then sema:post(1) end
return ups, info
end
I really want to know when that error can occur. Thanks.
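For reference, a hedged note on when that wait can fail: as far as I know, sema:wait(0.01) returns nil plus the string "timeout" when nothing posts the semaphore within the wait window, and the semaphore API is simply unavailable in contexts that cannot yield. A tiny sketch of treating the timeout as a soft failure:
local upsSema = require "sema"
local ok, err = upsSema.sema:wait(0.01)
if not ok then
    -- err is typically "timeout" here; treat it as "someone else holds the fence"
    ngx.log(ngx.INFO, "wait on sema failed: ", err)
end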
Hello
Currently using ssl_certificate_by_lua_file with great success, thank you!
We have a request from one customer to only allow very secure ciphers, but the other customers would not like this.
Is there a way to set the ciphers dynamically too?
Thanks
Richard
Hi,
I've tried to figure out where this would be coming from, but my understanding of Openresty is not good enough and of course it could be my config.
Below is the configuration I'm using along with the logs and requests.
worker_processes 1;
daemon off;
events {
worker_connections 1024;
}
http {
error_log /dev/stdout;
access_log /dev/stdout;
upstream backend {
server 0.0.0.1;
balancer_by_lua_block {
local balancer = require "ngx.balancer"
if not ngx.ctx.tries then
ngx.ctx.tries = 0
end
if ngx.ctx.tries < 5 then
local ok, err = balancer.set_more_tries(1)
if not ok then
ngx.log(ngx.ERR, "failed to set more tries: ", err)
elseif err then
ngx.log(ngx.ERR, "set more tries: ", err)
end
end
ngx.ctx.tries = ngx.ctx.tries + 1
local host = "127.0.0.1"
local port = 8080
local state, code, err = balancer.get_last_failure()
ngx.log(ngx.ERR, "state: ", state, ", code: ", code, ", err: ", err)
local ok, err = balancer.set_current_peer(host, port)
if not ok then
ngx.log(ngx.ERR, "failed to set the peer: ", err)
return ngx.exit(505)
end
}
}
server {
listen 80;
location / {
proxy_next_upstream_tries 2;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_403 http_404;
proxy_pass http://backend;
}
}
server {
listen 127.0.0.1:8080;
location / {
return 503;
}
}
}
curling
$ curl localhost:80 -v
* Rebuilt URL to: localhost:80/
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: localhost
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 503 Service Temporarily Unavailable
< Server: openresty/1.9.7.2
< Date: Mon, 25 Jan 2016 17:15:56 GMT
< Content-Type: text/html
< Content-Length: 218
< Connection: keep-alive
<
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body bgcolor="white">
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>openresty/1.9.7.2</center>
</body>
</html>
* Connection #0 to host localhost left intact
Logs created by balancer_by_lua_block
should show 503 error code but instead show 502:
2016/01/25 17:15:56 [error] 6#0: *1 [lua] balancer_by_lua:22: state: nil, code: nil, err: nil while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost"
2016/01/25 17:15:56 [error] 6#0: *1 [lua] balancer_by_lua:13: set more tries: reduced tries due to limit while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost"
2016/01/25 17:15:56 [error] 6#0: *1 [lua] balancer_by_lua:22: state: failed, code: 502, err: nil while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost"
127.0.0.1 - - [25/Jan/2016:17:15:56 +0000] "GET / HTTP/1.0" 503 218 "-" "curl/7.43.0"
127.0.0.1 - - [25/Jan/2016:17:15:56 +0000] "GET / HTTP/1.1" 503 218 "-" "curl/7.43.0"
127.0.0.1 - - [25/Jan/2016:17:15:56 +0000] "GET / HTTP/1.0" 503 218 "-" "curl/7.43.0"
Hi
Is it safe to use only ngx.balancer with the latest stable version of LuaJIT, LuaJIT 2.0.4?
When I use it, I am getting a warning:
[warn] 2403#0: *5 [lua] base.lua:25: use of lua-resty-core with LuaJIT 2.0 is not recommended; use LuaJIT 2.1+ instead
However, LuaJIT 2.1+ is still in beta.
Please let me know if it is safe to use the balancer with LuaJIT 2.0.4.
Thanks
Currently this library uses the return error()
form. This triggers tail call optimization in the Lua VM and makes the resulting backtrace not so useful.
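A tiny plain-Lua sketch for experimenting with the difference (hypothetical function names); the tail-called frame may be elided from the traceback, which is the readability cost described above:
local function fail_tail()
    return error("boom")  -- tail call: this frame can disappear from the traceback
end
local function fail_plain()
    error("boom")         -- ordinary call: the frame stays visible
end
print(select(2, xpcall(fail_tail, debug.traceback)))
print(select(2, xpcall(fail_plain, debug.traceback)))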