liseen / lua-resty-http

Lua HTTP client driver for ngx_lua, based on the cosocket API
local http = require("resty.http")
local hc = http:new()
local ok, code, headers, status, body = hc:request({
url = "http://www.tokoled.net/uploads/1920.jpg",
method = "GET",
headers = { },
body = ""
})
return ngx.say(body)
Is my code correct? I have a new project migrating another website to Lua + OpenResty, but it needs php-curl, so I think this package is suitable for replacing the php-curl functionality.
A suggestion for the maintainers: please add a GET example to the Synopsis, so that newcomers have a good example to start from!
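A hedged sketch of such a GET example, using the request() API as it appears in other snippets in these issues (the URL is a placeholder, and the five-value return is assumed from those snippets):

```lua
-- Minimal GET sketch for the Synopsis; assumes the (ok, code, headers,
-- status, body) return shown in other examples here. URL is a placeholder.
local http = require("resty.http")

local hc = http:new()
local ok, code, headers, status, body = hc:request({
    url = "http://example.com/",
    method = "GET",
})
if not ok then
    ngx.log(ngx.ERR, "request failed: ", code)
    return ngx.exit(ngx.HTTP_BAD_GATEWAY)
end
ngx.status = code
ngx.say(body)
```

This runs inside an nginx `content_by_lua` context, so it cannot be executed standalone.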
[error] 4733#0: *1 peer closed connection in SSL handshake, client: 172.16.48.28, server: localhost, request: "POST /lua/index HTTP/1.1", host: "172.16.48.31"
I'm receiving the following error when trying to consume an SSL endpoint (the GitHub API):
2015/02/18 11:06:01 [error] 3087#0: *28889 SSL_do_handshake() failed (SSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol), client: 52.1.6.14, server: , request: "POST /oauth_url HTTP/1.1", host: "mashaper-github.p.mashape.com"
I found somewhere that this problem could be related to this:
This error happens when OpenSSL receives something other than a ServerHello in a protocol version it understands from the server. It can happen if the server answers with a plain (unencrypted) HTTP. It can also happen if the server only supports e.g. TLS 1.2 and the client does not understand that protocol version. Normally, servers are backwards compatible to at least SSL 3.0 / TLS 1.0, but maybe this specific server isn't (by implementation or configuration).
Hi,
First, let me congratulate you on a great lib. I was just wondering whether HTTPS is supported? I don't seem to have any luck with it; I managed to get plain HTTP working fine.
thanks
-seb
local hc = http:new()
local ok, code, headers, status, body = hc:request {
url = "https://secure.com/auth",
method = "POST",
scheme = "https",
port = "443",
headers = { ["Content-Length"] = string.len(post_args_escaped), ["Content-Type"] = "application/x-www-form-urlencoded" },
body = post_args_escaped,
}
Hi, I didn't see that there was already a lua-resty-http, so I implemented our own: https://github.com/bsm/lua-resty-http. My version includes chunked response handling. It would be really great to merge our code bases into one.
I'm using Lua with nginx and I need to do some piping. I need to authenticate in Lua against a 3rd-party website and then download a file, not to my server, but by piping it directly to the user.
nginx ==> 3rdparty/auth
nginx <== 3rdparty: You are logged in
nginx ==> 3rdparty/get/file/x.tar.gz;
nginx <== 3rdparty: Here you are... binary data
nginx.on_3rdparty_chunk(data) ==> nginx.send_to_client(data)
Is it possible? Can you give a simple example?
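A hedged sketch of how this piping might look with the body_callback hook seen in other issues here; the URLs, credentials, and callback arguments are placeholders and assumptions, not a tested recipe:

```lua
-- Sketch: authenticate, then stream the download chunk-by-chunk to the
-- client instead of buffering it all in memory. body_callback and its
-- arguments follow usage shown elsewhere in these issues (assumption).
local http = require("resty.http")
local hc = http:new()

-- step 1: log in on the 3rd-party site (cookie handling elided)
local ok, code = hc:request({
    url = "http://3rdparty.example/auth",
    method = "POST",
    body = "user=me&pass=secret",
})

-- step 2: fetch the file, forwarding every chunk as it arrives
local ok2, code2 = hc:request({
    url = "http://3rdparty.example/get/file/x.tar.gz",
    method = "GET",
    body_callback = function(data, chunked_header)
        if chunked_header then return end  -- skip raw chunk-size lines
        ngx.print(data)   -- send this chunk to the client now
        ngx.flush(true)   -- wait until it is actually flushed out
    end,
})
```

Because each chunk is printed and flushed immediately, the file is never held in memory as a whole.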
line 277~286 in http.lua:
local h = "\r\n"
for i, v in pairs(nreqt.headers) do
-- fix cookie is a table value
if type(v) == "table" and i == "cookie" then
v = table.concat(v, "; ")
end
h = i .. ": " .. v .. "\r\n" .. h
end
h = h .. '\r\n' -- close headers
This code snippet effectively prepends "\r\n" to the request body. My recommendation is as follows:
old code:
local h = "\r\n"
new code:
local h = ""
When http.lua receives a chunked body and the upstream peer closes the socket, there can be an infinite loop in the receivebody() function:
local function receivebody(sock, headers, nreqt)
...
if t and t ~= "identity" then
-- chunked
while true do
local chunk_header = sock:receiveuntil("\r\n")
local data, err, partial = chunk_header() <<< chunk_header may return nil, "closed"
if not err then
if data == "0" then
return body -- end of chunk
else
local length = tonumber(data, 16)
-- TODO check nreqt.max_body_size !!
local ok, err = read_body_data(sock,length, nreqt.fetch_size, callback)
if err then
return nil,err
end
end
end
end
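A hedged sketch of a possible fix, keeping the names from the quoted snippet (sock, nreqt, read_body_data, callback, body): return the error when the chunk-size read fails instead of looping forever:

```lua
-- fixed loop: propagate the read error instead of retrying indefinitely
while true do
    local chunk_header = sock:receiveuntil("\r\n")
    local data, err, partial = chunk_header()
    if err then
        return nil, err          -- e.g. nil, "closed": peer went away, stop
    end
    if data == "0" then
        return body              -- end of chunked stream
    end
    local length = tonumber(data, 16)
    -- TODO check nreqt.max_body_size !!
    local ok, rerr = read_body_data(sock, length, nreqt.fetch_size, callback)
    if rerr then
        return nil, rerr
    end
end
```

This relies on the cosocket API, so it only runs inside the nginx/OpenResty runtime.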
hi there,
the resolver in the request function is redundant, IMHO.
The tcpsock:connect API will do the resolver-related work itself, using a more efficient mechanism provided by nginx: its resolver caches results in an rbtree and handles expiration.
So maybe it's not needed? Correct me if I'm wrong, thanks. :)
OS: macOS
Docker image built with:
FROM alpine
MAINTAINER edwin_uestc <[email protected]>
ENV LUA_SUFFIX=jit-2.1.0-beta1 \
LUAJIT_VERSION=2.1 \
NGINX_PREFIX=/opt/openresty/nginx \
OPENRESTY_PREFIX=/opt/openresty \
OPENRESTY_SRC_SHA1=1a2029e1c854b6ac788b4d734dd6b5c53a3987ff \
OPENRESTY_VERSION=1.9.7.3 \
VAR_PREFIX=/var/nginx
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.ustc.edu.cn/' /etc/apk/repositories
RUN set -ex \
&& apk --no-cache add --virtual .build-dependencies \
curl \
make \
musl-dev \
gcc \
ncurses-dev \
openssl-dev \
pcre-dev \
perl \
readline-dev \
zlib-dev \
\
&& curl -fsSL http://openresty.org/download/openresty-${OPENRESTY_VERSION}.tar.gz -o /tmp/openresty.tar.gz \
\
&& cd /tmp \
&& echo "${OPENRESTY_SRC_SHA1} *openresty.tar.gz" | sha1sum -c - \
&& tar -xzf openresty.tar.gz \
\
&& cd openresty-* \
&& readonly NPROC=$(grep -c ^processor /proc/cpuinfo 2>/dev/null || echo 1) \
&& ./configure \
--prefix=${OPENRESTY_PREFIX} \
--http-client-body-temp-path=${VAR_PREFIX}/client_body_temp \
--http-proxy-temp-path=${VAR_PREFIX}/proxy_temp \
--http-log-path=${VAR_PREFIX}/access.log \
--error-log-path=${VAR_PREFIX}/error.log \
--pid-path=${VAR_PREFIX}/nginx.pid \
--lock-path=${VAR_PREFIX}/nginx.lock \
--with-luajit \
--with-pcre-jit \
--with-ipv6 \
--with-http_ssl_module \
--without-http_ssi_module \
--with-http_realip_module \
--without-http_scgi_module \
--without-http_uwsgi_module \
--without-http_userid_module \
-j${NPROC} \
&& make -j${NPROC} \
&& make install \
\
&& rm -rf /tmp/openresty-* \
&& apk del .build-dependencies
RUN ln -sf ${NGINX_PREFIX}/sbin/nginx /usr/local/bin/nginx \
&& ln -sf ${NGINX_PREFIX}/sbin/nginx /usr/local/bin/openresty \
&& ln -sf ${OPENRESTY_PREFIX}/bin/resty /usr/local/bin/resty \
&& ln -sf ${OPENRESTY_PREFIX}/luajit/bin/luajit-* ${OPENRESTY_PREFIX}/luajit/bin/lua \
&& ln -sf ${OPENRESTY_PREFIX}/luajit/bin/luajit-* /usr/local/bin/lua
RUN apk --no-cache add \
libgcc \
libpcrecpp \
libpcre16 \
libpcre32 \
libssl1.0 \
libstdc++ \
openssl \
pcre
WORKDIR $NGINX_PREFIX
CMD ["nginx", "-g", "daemon off; error_log /dev/stderr info;"]
I am trying to build a POST request using the following code:
local http = require "resty.http"
local hc = http:new()
local ok, code, headers, status, body = hc:request {
url = "http://wi.hit.edu.cn/cemr/",
-- proxy = "202.118.253.110:80",
timeout = 3000,
method = "POST", -- POST or GET
headers = { ["Upgrade-Insecure-Requests"]="1",["referer"]="http://wi.hit.edu.cn/cemr/",["cache-control"]="max-age=0",["connection"]="keep-alive",["user-agent"]="Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:48.0) Gecko/20100101 Firefox/48.0",["host"]="wi.hit.edu.cn",["Content-Length"]="705",["Content-Type"] = "application/x-www-form-urlencoded" },
-- body = "source=1. 患者既往有高血压病史,最高达180/100mmHg,口服北京降压灵、利血平治疗,有脑梗死病史,遗留语笨及左侧肢体无力.2. 门诊行头CT检查,显示左侧侧脑室体旁低密度病灶,以脑梗死收入我科."
-- -- headers = { UserAgent = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"}
}
ngx.say(ok)
ngx.say(code)
ngx.say(status)
ngx.say(body)
The outcome I'm looking for is to perform a POST request against the following page:
http://wi.hit.edu.cn/cemr/
The full request info I got from the Firefox debug tools is:
Host: wi.hit.edu.cn
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:48.0) Gecko/20100101 Firefox/48.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Referer: http://wi.hit.edu.cn/cemr/
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Cache-Control: max-age=0
form-data
source:"1.+患者既往有高血压病史,最高达180/100mmHg,口服"北京降压灵、利血平"治疗,有脑梗死病史,遗留语笨及左侧肢体无力.
2.+门诊行头CT检查,显示左侧侧脑室体旁低密度病灶,以"脑梗死"收入我科."
The full error is:
2016/09/17 05:53:47 [error] 7#0: *10 lua tcp socket read timed out, client: 172.17.0.1, server: , request: "GET /cemr HTTP/1.1", host: "localhost:8080"
but curl works
curl -d "source=患者既往有高血压病史,最高达180/100mmHg,口服北京降压灵、 利血平治疗,有脑梗死病史,遗留语笨及左侧肢体无力." http://wi.hit.edu.cn/cemr/ > result.html
The example shows that you can add headers = { Cookie = "ABCDEFG", ["Content-Type"] = "application/x-www-form-urlencoded" } to the request. Clearly, the cookie added here is not a table. But looking at line 296 of the http.lua source, a cookie entry must be a table to be parsed correctly. Is my understanding right?
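As a standalone check of that logic (the helper name is mine; the branch mirrors the quoted source): a table value is joined with "; ", while a plain string passes through the loop unchanged, so both forms should work:

```lua
-- mirrors the type(v) == "table" branch from http.lua's header loop
local function normalize_cookie(v)
    if type(v) == "table" then
        return table.concat(v, "; ")  -- join multiple cookies
    end
    return v                          -- plain string: used as-is
end

print(normalize_cookie({ "session=abc123", "lang=en" }))  -- session=abc123; lang=en
print(normalize_cookie("session=abc123"))                 -- session=abc123
```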
The code I use is as follows:
local ok, code, headers, status, body =hc:request {
url = url,
method = "POST",
body = postBody,
headers = {["Content-Type"] = "application/x-www-form-urlencoded" }
}
Returned info:
code ='read status line failed read status line failed timeout'
The server-side HTTP status code was 499.
I saw this asked earlier, but I didn't understand the reply.
Please take a look at this problem, thanks!
Hello
I'm very new, and maybe that is the issue :)
I'm trying to run a simple test, following the example and changing it a little bit:
location /test {
content_by_lua '
local http = require("resty.http")
local hc = http:new()
local ok, code, headers, status, body = hc:request {
url = "http://www.google.com",
method = "GET",
headers = { ["Content-Type"] = "application/json" },
body = "",
}
ngx.say(ok)
ngx.say(code)
ngx.say(body)
';
}
but I'm getting this error:
nil
sock connected failed no resolver defined to resolve "www.google.com"
nil
What is wrong, or what am I missing?
thanks
Yet another multipart/form-data error...
I have now assembled a request:
[binary JPEG data omitted]
but when I send it:
local ok, code, headers, status, body = hc:request {
url = weedfsurl.submit,
timeout = 100,
method = "POST",
headers = rq.headers,
body = rq.source
}
The returned result is , and my storage backend here is weed-fs, using 127.0.0.1:9333.
Yet the socket cannot even connect:
sock connected failed localhost could not be resolved (110: Operation timed out)
Sending requests this way doesn't seem reliable... For multipart/form-data requests, please share some ideas.
Thanks.
Lua code:
local url_string = 'http://localhost/test.php'
local http = require("resty.http")
local hc = http:new()
local ok, code, headers, status, body = hc:request {
url = url_string,
timeout = 2000,
method = "POST",
keepalive = false,
body = "1234567890"
}
PHP code:
<?php
file_put_contents('/tmp/t.txt', file_get_contents('php://input'));
The body sent was 1234567890, but what was received was only 12345678.
Can you start making releases for every new version?
It makes deployment a lot easier in installation scripts.
I am using NGINX 1.9.13 with lua-resty-http.
The CPU load is abnormally high when making HTTP requests via the lua-resty-http module (https://github.com/liseen/lua-resty-http):
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8446 root 20 0 209m 142m 1748 R 99.1 3.7 0:31.27 nginx
The bigger the request body, the higher the CPU load. In this case the body was 20MB. With multiple concurrent requests, it will max out the CPU.
QUESTION: Is there a way to reduce the CPU load when using lua-resty-http?
I added below the NGINX Lua code as well as the NGINX configuration:
NGINX is used as a front-end proxy towards an internal web server.
The Lua code below (content.lua) is called multiple times by the NGINX server, once for each curl HTTP request:
local ok, code, headers, status, body = nil, 0, nil, 0, nil
local Http = require("resty.http") -- https://github.com/liseen/lua-resty-http
local Httpc = Http.new()
local req_headers = ngx.req.get_headers()
req_headers["range"] = "bytes=0-20000000" -- example of range
req_headers["connection"] = "keep-alive"
req_headers["if-range"] = nil
local req_params = {
url = "http://127.0.0.1:8099" .. ngx.var.request_uri,
method = 'GET',
headers = req_headers,
keepalive = 30000 -- 30 seconds
}
req_params["code_callback"] =
function(statuscode)
code = statuscode
end
req_params["header_callback"] =
function(headers)
if headers and headers["x-custom"] and ngx.re.match(headers["x-custom"], "CUSTOM_STR") then
--increment a counter
else
--increment another counter
end
end
req_params["body_callback"] =
function(data, chunked_header, ...)
if chunked_header or code ~= 206 then return end
ngx.print(data)
ngx.flush(true)
end
ok, code, headers, status, body = Httpc:request(req_params)
local start, stop = 0, 20000000 -- example of value
if (code == 206) and body then
ngx.print(string.sub(body, start, stop))
end
Below is my NGINX configuration:
worker_processes auto;
events {
worker_connections 2048;
multi_accept on;
use epoll;
}
http {
include ./mime.types;
default_type application/octet-stream;
proxy_cache_bypass 1;
proxy_no_cache 1;
proxy_set_header Host $http_host;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_buffering off;
proxy_buffer_size 128k;
proxy_buffers 100 128k;
sendfile on;
tcp_nopush on;
keepalive_timeout 10;
reset_timedout_connection on;
upstream backend {
server 127.0.0.1:8090;
keepalive 20;
}
server {
listen 192.168.0.10:8080;
location ~ \.(mp4|jpg)$ {
lua_check_client_abort on;
lua_code_cache on;
lua_http10_buffering off;
#request the lua code
content_by_lua_file '/opt/location/content.lua';
}
}
server {
listen 127.0.0.1:8099;
access_log off;
location / {
proxy_pass http://backend;
}
}
}
How do I configure a connection to, e.g., a Squid proxy running on localhost? I need to send requests through this proxy to an endpoint on the internet. In the proxy example, it seems that httpc:connect(HOST, PORT)
is for configuring the endpoint. So how should I tell Lua to use the proxy on a specific port on localhost? Is that possible?
local http = require "resty.http"
local httpc = http.new()
httpc:set_timeout(500)
local ok, err = httpc:connect(HOST, PORT)
if not ok then
ngx.log(ngx.ERR, err)
return
end
httpc:set_timeout(2000)
httpc:proxy_response(httpc:proxy_request())
httpc:set_keepalive()
https://github.com/liseen/lua-resty-http/blob/master/lib/resty/http.lua#L193 here, is the handling missing for the case where the error is something other than nil or false (i.e. the connection was closed remotely by accident)?
hey, hello.
Does this library support sending asynchronous HTTP requests?
Current situation:
Since the adjustheaders function at https://github.com/liseen/lua-resty-http/blob/master/lib/resty/http.lua#L46 already lowercases header field names where possible, why does the Content-Length appearing at https://github.com/liseen/lua-resty-http/blob/master/lib/resty/http.lua#L277 still contain uppercase letters?
Suggestion:
Change Content-Length to content-length for consistency.
Situation:
The table.getn function used at https://github.com/liseen/lua-resty-http/blob/master/lib/resty/url.lua#L257 and https://github.com/liseen/lua-resty-http/blob/master/lib/resty/url.lua#L276 is no longer supported from Lua 5.1 onwards.
Suggestion:
Replace it with the # operator.
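A quick standalone illustration of the suggested replacement:

```lua
-- Lua 5.1+: the # length operator replaces table.getn for array-like tables
local parts = { "http", "example.com", "path" }
print(#parts)  -- 3, the value table.getn(parts) returned in Lua 5.0
```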
Can parameters like these be set directly? I don't know how to set them.
local h = ""
for i, v in pairs(nreqt.headers) do
-- fix cookie is a table value
if type(v) == "table" and i == "cookie" then
v = table.concat(v, "; ")
end
h = i .. ": " .. v .. "\r\n" .. h
end
Regarding h = i .. ": " .. v .. "\r\n" .. h: if there are many headers, will this cause performance problems? Why not use table.concat?
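For what it's worth, a standalone sketch of the table.concat variant (the function name is mine, not the library's): each `..` on a growing string copies the whole accumulated buffer, so collecting the lines in a table and joining once at the end avoids that quadratic cost:

```lua
-- Build the header block with one table.concat instead of repeated ".."
local function build_headers(headers)
    local t = {}
    for k, v in pairs(headers) do
        if type(v) == "table" and k == "cookie" then
            v = table.concat(v, "; ")  -- same table-cookie fix as the original
        end
        t[#t + 1] = k .. ": " .. v
    end
    t[#t + 1] = ""  -- first "\r\n" after the last header line
    t[#t + 1] = ""  -- second "\r\n" closing the header block
    return table.concat(t, "\r\n")
end

print(build_headers({ host = "example.com" }))
```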
Now I have a problem on my hands.
When using this component to request an https address through an http proxy, I keep getting "handshake failed". How can I solve this?
When I try to use
local ok, code, headers, status, body = httpClient:request {
url = url,
timeout = 3000,
scheme = 'http',
method = "POST",
headers = {["Content-Type"] = "application/x-www-form-urlencoded" },
body = postBody
}
one request returned the following result:
ok = nil
code ='read status line failed read status line failed timeout'
As the code describes, I presume it happened because the server didn't respond within the 3 seconds I set in the code. But I'm not sure, and I can't check how the server handled my request, since it belongs to one of our clients. According to the client, though, they did respond to the request. So I've come here to ask for help. Is there any documentation of this error? Or could anyone help me find the reason? Thanks!