mod_zip assembles ZIP archives dynamically. It can stream component files from upstream servers with nginx's native proxying code, so that the process never takes up more than a few KB of RAM at a time, even while assembling archives that are (potentially) gigabytes in size.
mod_zip supports a number of "modern" ZIP features, including large files, UTC timestamps, and UTF-8 filenames. It allows clients to resume large downloads using the "Range" and "If-Range" headers, although these features require the server to know the file checksums (CRC-32s) in advance. See "Usage" for details.
To unzip files on the fly, check out nginx-unzip-module.
To install, compile nginx with the following option:
--add-module=/path/to/mod_zip
- nginx 1.10.0 or later is required
- (optional) to enable the X-Archive-Charset header, libiconv is required
- http_postpone must be enabled by including at least one of the http_addition, http_slice, or http_ssi modules
The module is activated when the original response (presumably from an upstream) includes the following HTTP header:
X-Archive-Files: zip
It then scans the response body for a list of files. The syntax is a space-separated list of the file checksum (CRC-32), size (in bytes), location (properly URL-encoded), and file name, one file per line. The file location corresponds to a location in your nginx.conf; the file can be on disk, from an upstream, or from another module. The file name can include a directory path, and is the name the entry will have inside the ZIP file. Example:
1034ab38 428 /foo.txt My Document1.txt
83e8110b 100339 /bar.txt My Other Document1.txt
0 0 @directory My empty directory
Files are retrieved and encoded in order. If a file cannot be found or the file request returns any sort of error, the download is aborted.
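As a concrete illustration, here is a minimal backend sketch, written as a Python WSGI app, that would trigger mod_zip; the paths, sizes, and checksums are illustrative placeholders, not real files:

import wsgiref.simple_server

def application(environ, start_response):
    # One file per line: <CRC-32> <size> <URL-encoded location> <file name>
    file_list = (
        "1034ab38 428 /foo.txt My Document1.txt\n"
        "83e8110b 100339 /bar.txt My Other Document1.txt\n"
    )
    body = file_list.encode("utf-8")
    start_response("200 OK", [
        ("X-Archive-Files", "zip"),  # activates mod_zip
        ("Content-Disposition", "attachment; filename=docs.zip"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

if __name__ == "__main__":
    # nginx would proxy_pass to this upstream on port 8000.
    wsgiref.simple_server.make_server("", 8000, application).serve_forever()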
The CRC-32 is optional. Put "-" if you don't know the CRC-32; note that in this case mod_zip will disable support for the Range header.
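If you do want Range support, you can precompute the checksums yourself. A minimal sketch in Python (the helper name is hypothetical) that produces the 8-digit hex value mod_zip expects:

import zlib

def crc32_hex(path, chunk_size=64 * 1024):
    # Stream the file through zlib.crc32 so the whole file
    # never has to be held in memory at once.
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            crc = zlib.crc32(chunk, crc)
    return format(crc & 0xFFFFFFFF, "08x")

In practice you would compute this once per file (e.g. at upload time) and store it, rather than hashing on every download.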
A special URL marker @directory can be used to declare a directory entry within an archive. This is very convenient when you have to package a tree of files that includes some empty directories, since these must be declared explicitly.
If you want mod_zip to forward certain HTTP headers of the original request to the subrequests that fetch the component files, pass the list of header names in the following HTTP header:
X-Archive-Pass-Headers: <header-name>[:<header-name>]*
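For example, to forward the client's credentials to the locations serving the component files (the header names here are only an illustration):
X-Archive-Pass-Headers: Authorization:Cookie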
To re-encode the filenames as UTF-8, add the following header to the upstream response:
X-Archive-Charset: [original charset name]
The original charset name should be something that iconv understands. (This feature only works if iconv is present.)
If you set the original charset to native:
X-Archive-Charset: native;
then filenames from the file list are treated as already being in the system's native charset. Consequently, the ZIP general-purpose flag (bit 11) that indicates UTF-8 encoded names will not be set, and archivers will know the names are in a native charset.
Sometimes there is a problem converting UTF-8 names to the native (CP866) charset, which causes popular archivers to fail to recognize them. At the same time, you may want no data to be lost, so that smart archivers can use the Unicode Path extra field. You can provide your own adapted representation of the filename in the native charset along with the original UTF-8 name in one string. You just need to add the following header:
X-Archive-Name-Sep: [separator];
So your file list should look like:
<CRC-32> <size> <path> <native-filename><separator><utf8-filename>
...
The filename field will then contain native-filename, and the Unicode Path extra field will contain utf8-filename.
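For example, declaring "|" as the separator (the names below are only an illustration, with a transliterated native-charset fallback alongside the UTF-8 name):
X-Archive-Name-Sep: |
1034ab38 428 /foo.txt Moj_dokument.txt|Мой документ.txt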
- Add a header "Content-Disposition: attachment; filename=foobar.zip" to the upstream response if you would like the client to name the file "foobar.zip".
- To save bandwidth, add a "Last-Modified" header to the upstream response; mod_zip will then honor the "If-Range" header from clients.
- To wipe the X-Archive-Files header from the response sent to the client, use the headers_more module: http://wiki.nginx.org/NginxHttpHeadersMoreModule
- To improve performance, ensure the backends are not returning gzipped files. You can achieve this with proxy_set_header Accept-Encoding ""; in the location blocks for the component files, as in the sketch below.
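A minimal location sketch for that last tip (the upstream name is hypothetical):

location /files/ {
    proxy_pass http://backend;
    # Ask the upstream for identity-encoded bytes; mod_zip needs the
    # raw file contents to match the sizes declared in the file list.
    proxy_set_header Accept-Encoding "";
}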
Questions/patches may be directed to Evan Miller, [email protected].
mod_zip's Issues
Support HEAD requests
We witness a strange behaviour of mod_zip when different HTTP methods are used.
If I make a GET request to our application, it correctly generates the file list and sends it back to nginx, which then generates the ZIP file and sends it back to the client. Everything works as expected and nginx does not log any errors.
Now, I'll do the same request, but this time as a HEAD request. The request hits our application, it generates the exact same file list and sends it back to nginx. But this time nginx logs the following error:
2013/12/12 08:21:49 [error] 25049#0: *45414 mod_zip: invalid file list from upstream while sending to client, client: 192.168.1.10, server: _, request: "HEAD /abc/download HTTP/1.1", upstream: "http://unix:/var/run/myapp/server.socket:/abc/download", host: "myapp.com"
The interesting thing is that our application is not able to differentiate between HEAD and GET requests, so the HEAD request is handled exactly the same way as the (working) GET request.
Double URL encoding of upstream file requests
When requesting files for zipping, if the url of the requested file is url-encoded, then the mod_zip module double-encodes the URL, resulting in a 404 from the upstream proxy. Here's a request to zip a single file that will fail due to URL encoding:
- - /flv/contains%20space.flv contains space.flv
The error log shows the upstream proxy being asked for /flv/contains%2520space.flv rather than /flv/contains%20space.flv
A simple workaround for this is to not url-encode spaces and use + instead:
- - /flv/contains+space.flv contains space.flv
However, this is only suitable when your files contain alphanumerics and spaces. You'll still probably have issues with characters like &, # etc.
Dynamic Module
Is there a dynamic module release planned?
mod_zip does not work when ngx_pagespeed is enabled
The mod_zip extension does not seem to function properly when the ngx_pagespeed module is enabled,
https://developers.google.com/speed/pagespeed/module/
Even when trying to disable pagespeed on specific zones using the Disallow directive, mod_zip requests still fail as long as pagespeed is enabled at all.
Turning pagespeed off globally resolves the issue.
Range in subrequests
If the original request is a Range request, the subrequests don't use a Range header.
So if a zip contains, for example, two 10G files and a Range request was made for the last 1G of the second file, the module will subrequest the whole file and just skip its first 9G.
To the client it looks like the download hasn't started for 2-3 minutes, and since most download managers have only a 60-second timeout, they fail on such files.
Compile error on linux with glibc version lower than 2.9
Hi, I'm trying to get nginx, mod_zip, and Lua working together, but I encountered errors when building nginx with the mod_zip module.
My nginx version is nginx-1.2.9, and here is my linux version.
$ cat /proc/version
Linux version 2.6.32_1-11-0-0 (gcc version 4.4.4 20100726 (Red Hat 4.4.4-13) (GCC) ) #1 SMP Mon May 20 14:01:01 CST 2013
Here is my building steps:
$ ./configure --prefix=/home/qiaoba/nginx/mynginx --with-debug --with-http_stub_status_module --with-http_realip_module --with-http_image_filter_module --add-module=/home/qiaoba/nginx/mod_zip-master --add-module=/home/soft/output/nginx/src/lua-nginx-module-0.9.7 --add-module=/home/qiaoba/nginx/echo-nginx-module-0.56
$ make
The errors occur after the 'make' command; part of the output is pasted here.
make -f objs/Makefile
make[1]: Entering directory '/home/qiaoba/nginx/nginx-1.2.9'
gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -DNDK_SET_VAR -I src/core -I src/event -I src/event/modules -I src/os/unix -I /home/local/include/luajit-2.0 -I /home/soft/output/nginx/src/lua-nginx-module-0.9.7/src/api -I objs -I src/http -I src/http/modules -I src/mail
-o objs/addon/mod_zip-master/ngx_http_zip_file.o
/home/qiaoba/nginx/mod_zip-master/ngx_http_zip_file.c
/home/qiaoba/nginx/mod_zip-master/ngx_http_zip_file.c: In function ‘ngx_http_zip_file_header_chain_link’:
/home/qiaoba/nginx/mod_zip-master/ngx_http_zip_file.c:440:5: error: implicit declaration of function ‘htole32’ [-Werror=implicit-function-declaration]
/home/qiaoba/nginx/mod_zip-master/ngx_http_zip_file.c:441:5: error: implicit declaration of function ‘htole16’ [-Werror=implicit-function-declaration]
/home/qiaoba/nginx/mod_zip-master/ngx_http_zip_file.c:455:9: error: implicit declaration of function ‘htole64’ [-Werror=implicit-function-declaration]
cc1: all warnings being treated as errors
make[1]: *** [objs/addon/mod_zip-master/ngx_http_zip_file.o] Error 1
make[1]: Leaving directory '/home/qiaoba/nginx/nginx-1.2.9'
make: *** [build] Error 2
Intermittent segfault caused by #26
After 8ffc053 I am observing regular segmentation faults. See attached stack trace:
#0 0x00007f33a938b029 in ngx_http_variable_unknown_header ()
#1 0x00007f33a93bcacb in ngx_http_zip_header_filter ()
#2 0x00007f33a937a9fe in ngx_http_send_special_response.isra.0 ()
#3 0x00007f33a937ad64 in ngx_http_special_response_handler ()
#4 0x00007f33a937dfd1 in ngx_http_finalize_request ()
#5 0x00007f33a9377d66 in ngx_http_core_access_phase ()
#6 0x00007f33a9373205 in ngx_http_core_run_phases ()
#7 0x00007f33a937f174 in ngx_http_process_request ()
#8 0x00007f33a937ffff in ngx_http_process_request_line ()
#9 0x00007f33a936072a in ngx_event_process_posted ()
#10 0x00007f33a9367058 in ngx_worker_process_cycle ()
#11 0x00007f33a9365793 in ngx_spawn_process ()
#12 0x00007f33a9368291 in ngx_master_process_cycle ()
Cannot access memory at address 0x7fff445af018
Compile errors on Mac OS X 10.7
Hi guys,
I get the following when trying to compile the latest (dd2ff19) version on OS X 10.7:
Undefined symbols for architecture x86_64:
"_iconv_open", referenced from:
_ngx_http_zip_generate_pieces in ngx_http_zip_file.o
"_iconv_close", referenced from:
_ngx_http_zip_generate_pieces in ngx_http_zip_file.o
"_iconv", referenced from:
_ngx_http_zip_generate_pieces in ngx_http_zip_file.o
Anyone got any ideas for a fix or workaround?
Thank you.
Unused variable on system without (usable) iconv
This variable in ngx_http_zip_file.c
static ngx_str_t ngx_http_zip_header_charset_name = ngx_string("upstream_http_x_archive_charset");
causes a compilation error on systems without a usable iconv, thanks to -Werror and -Wunused-variable. It should be wrapped in a proper ifdef.
Problem with + in file name
We have a file name: vm068 - 11 - z + 3.mp3
After URL-encoding it, we have:
vm068%20-%2011%20-%20z%20%2B%203.mp3
But mod_zip does not work with that.
Downloading stops after several KB on CentOS 6 with nginx 1.10.2
I am using mod_zip for mp3 files.
On one server with CentOS 7 and nginx 1.10.2, the module works cleanly.
On another with CentOS 6 and nginx 1.10.2, the response sometimes stops after several KB have been downloaded. It depends on the file count.
Does anybody know why this error occurs?
Generated zips store the current time, so they aren't reproducible
When you generate a .zip file using mod_zip, the current time (that of the nginx host) is always stored inside the .zip. Here is where the time is read:
Line 296 in 255cf54
This is later stored in the File Header/Last Modification Time/Date fields, the File Header/Extra Field parameter and in the .zip Central Directory. So the generated .zip file is always different unless you download it in the exact same second.
This affects resumability, because you can get inconsistent datetimes if you download the .zip file using multiple (Range) requests and some request starts from a middle offset of a datetime field's bytes. It is also very troublesome when you need the generated .zip to always be the same (because you are check-summing the entire .zip file in a client application).
Possible fixes:
A- Modify mod_zip to store NULL/1970 dates in zips, either as the default behaviour or behind an option.
B- Get the file dates from the upstream files' "Last-Modified" headers
C- Add a new date field to the space-separated file list.
implicit declaration of function ‘ngx_http_upstream_header_variable’
Trying to build Nginx 1.11.10 with mod_zip on Debian Jessie fails with the following error,
/usr/local/src/nginx/extensions/mod_zip/ngx_http_zip_file.c: In function ‘ngx_http_zip_generate_pieces’:
/usr/local/src/nginx/extensions/mod_zip/ngx_http_zip_file.c:253:5: error: implicit declaration of function ‘ngx_http_upstream_header_variable’ [-Werror=implicit-function-declaration]
if(ngx_http_upstream_header_variable(r, vv, (uintptr_t)(&ngx_http_zip_header_name_separator)) == NGX_OK && !vv->not_found) {
^
/usr/local/src/nginx/extensions/mod_zip/ngx_http_zip_module.c: In function ‘ngx_http_zip_main_request_header_filter’:
/usr/local/src/nginx/extensions/mod_zip/ngx_http_zip_module.c:197:9: error: implicit declaration of function ‘ngx_http_upstream_header_variable’ [-Werror=implicit-function-declaration]
variable_header_status = ngx_http_upstream_header_variable(r, vv,
cc1: all warnings being treated as errors
objs/Makefile:1535: recipe for target 'objs/addon/mod_zip/ngx_http_zip_module.o' failed
make[1]: *** [objs/addon/mod_zip/ngx_http_zip_module.o] Error 1
make[1]: *** Waiting for unfinished jobs....
cc1: all warnings being treated as errors
objs/Makefile:1549: recipe for target 'objs/addon/mod_zip/ngx_http_zip_file.o' failed
make[1]: *** [objs/addon/mod_zip/ngx_http_zip_file.o] Error 1
make[1]: Leaving directory '/usr/local/src/nginx/source/nginx-1.11.10'
Makefile:8: recipe for target 'build' failed
make: *** [build] Error 2
without upstream, mod_zip does not work
I make the response body with ngx_lua; my code:
ngx.header['X-Archive-Files'] = 'zip'
ngx.say('- 5866317 /disk/20140125/462b4ae8.apk /ac/test.apk')
mod_zip does not work,
but using PHP via FastCGI to make the response works fine.
mod_zip seems to support only response bodies generated by an upstream, which is not friendly!
mod_zip can't support compressed file downloads
Example:
The output:
- 243410 /1.jpg 1.jpg
- 13524697 /2.zip 2.zip
The response headers:
X-Archive-Files:zip;
Content-Disposition:attachment; filename=all.zip;
Content-Type:application/octet-stream
When the all.zip file has been downloaded, I unzip all.zip and get the 2.zip. But I can't unzip the 2.zip. The system error is: The document “2.zip” could not be opened. The file isn't in the correct format.
Please help me, thanks.
Add option: separator for all fields (e.g. tab)
Not a mission-critical enhancement, but some of the advantages are:
- Lists will look a bit more aligned without the need to count symbols.
- No other separator would be needed, only a way to specify which optional fields are enabled.
I normalize all whitespace to a single plain space in my served filenames anyway.
And I don't think I've ever seen a file with a tab in filename, not fabricated specifically to show off some bug somewhere.
Create zip by concatenating pre-compressed files
Our websites offer generated downloads of user-filtered data. Data is generated by workers in parallel while the user is waiting, and the user clicks a link to download it once the workers have completed processing.
At present the workers can generate files in parallel, but the final task of creating a zip is non-parallelisable. We'd like to fix that so the users can start downloading data faster.
To stream zips to the user, we'd want:
- workers generating DEFLATE-compressed files rather than non-compressed files, i.e. all files produced would be run through gzip
- mod_zip (or a similar project I haven't found yet) would stream those files into a coherent zip, without re-compressing
- correct zip headers
My understanding is that mod_zip can't currently create zips from pre-compressed files. Is that correct? What is your feeling about how difficult it would be to implement?
I also gather from #23 that mod_zip generates non-compressed zip files, so it can produce correct filesize headers. Is that still correct? This change would obviously also fix that.
Reminder: if requesting a URL from an upstream server, make sure the Accept-Encoding header is known
mod_zip doesn't perform well when the upstream server(s) return content that is compressed.
Here's what you DO NOT WANT to be sending to your upstream: Accept-Encoding: gzip,deflate,sdch
To avoid this, in your upstream subrequest, set the header in a location using:
proxy_set_header Accept-Encoding "";
Output chains are mangled when SSI module is not included in nginx
I have encountered a bug where mod_zip delivers the raw content, then the HTTP headers, then the zip file header, in that order, producing an invalid zip file.
I would like to request additional instruction for how to debug this further and produce a helpful bug report.
versioning and releases
Any plans to use versioning and releases for this module?
We build it with a CI system along with our nginx build and currently track the master branch, but this might lead to unstable builds as new code is included automatically. I just switched to a commit-hash based download URI to prevent this, but it feels a bit ugly.
Using releases with a versioning scheme would be great! 🙏
Option to use other hash algorithm
Hi,
According to the ZIP specification, in addition to CRC-32, ZIP works with:
0x8003 MD5
0x8004 SHA1
0x8007 RIPEMD160
0x800C SHA256
0x800D SHA384
0x800E SHA512
Could mod_zip add the ability to work with these, or at least the commonly used MD5/SHA1? Computing a CRC-32 for third-party files cannot be done without downloading the entire file beforehand, but the third-party websites provide MD5/SHA1 information in my case.
Or maybe it's possible to generate a CRC-32 by just loading a few bytes of a given file (sounds unlikely...)?
Failing tests
1..93
ok 1 - download file1.txt
ok 2 - file1.txt file size
ok 3 - download file1.txt partial
ok 4 - download file2.txt
ok 5 - file2.txt file size
not ok 6 - Returns OK with missing CRC
# Failed test 'Returns OK with missing CRC'
# at ./ziptest.pl line 108.
# got: '502'
# expected: '200'
ok 7 - Content-Length header when missing CRC
ok 8 - No Accept-Ranges header when missing CRC (fails with nginx 0.7.44 - 0.8.6)
format error: can't find EOCD signature
at /usr/lib64/perl5/vendor_perl/5.24.1/Archive/Zip/Archive.pm line 718.
Archive::Zip::Archive::_findEndOfCentralDirectory(Archive::Zip::Archive=HASH(0x1e76458), IO::File=GLOB(0x2be8830)) called at /usr/lib64/perl5/vendor_perl/5.24.1/Archive/Zip/Archive.pm line 591
Archive::Zip::Archive::readFromFileHandle(Archive::Zip::Archive=HASH(0x1e76458), IO::File=GLOB(0x2be8830), "/tmp/mod_zip.zip") called at /usr/lib64/perl5/vendor_perl/5.24.1/Archive/Zip/Archive.pm line 559
Archive::Zip::Archive::read(Archive::Zip::Archive=HASH(0x1e76458), "/tmp/mod_zip.zip") called at /usr/lib64/perl5/vendor_perl/5.24.1/Archive/Zip/Archive.pm line 55
Archive::Zip::Archive::new("Archive::Zip::Archive", "/tmp/mod_zip.zip") called at /usr/lib64/perl5/vendor_perl/5.24.1/Archive/Zip.pm line 316
Archive::Zip::new("Archive::Zip", "/tmp/mod_zip.zip") called at ./ziptest.pl line 56
main::write_temp_zip("<html>\x{a}<head>\x{a}<title>The page is temporarily unavailable</tit"...) called at ./ziptest.pl line 62
main::test_zip_archive("<html>\x{a}<head>\x{a}<title>The page is temporarily unavailable</tit"..., "when missing CRC") called at ./ziptest.pl line 112
Can't call method "contents" on an undefined value at ./ziptest.pl line 64.
# Looks like you planned 93 tests but ran 8.
# Looks like you failed 1 test of 8 run.
# Looks like your test exited with -1 just after 8.
2017/06/22 16:34:21 [error] 8831#8831: *12 mod_zip: invalid file list from upstream while sending response to client, client: 127.0.0.1, server: localhost, request: "GET /zip-missing-crc.txt HTTP/1.0", host: "ziplist"
2017/06/22 16:34:21 [error] 8831#8831: *10 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET /zip-missing-crc.txt HTTP/1.1", upstream: "http://127.0.0.1:8082/zip-missing-crc.txt", host: "localhost:8081"
2017/06/22 16:34:21 [warn] 8831#8831: *10 upstream server temporarily disabled while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET /zip-missing-crc.txt HTTP/1.1", upstream: "http://127.0.0.1:8082/zip-missing-crc.txt", host: "localhost:8081"
2017/06/22 16:34:21 [error] 8831#8831: *14 mod_zip: invalid file list from upstream while sending response to client, client: 127.0.0.1, server: localhost, request: "GET /zip-missing-crc.txt HTTP/1.0", host: "ziplist"
2017/06/22 16:34:21 [error] 8831#8831: *10 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET /zip-missing-crc.txt HTTP/1.1", upstream: "http://127.0.0.1:8082/zip-missing-crc.txt", host: "localhost:8081"
2017/06/22 16:34:21 [warn] 8831#8831: *10 upstream server temporarily disabled while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET /zip-missing-crc.txt HTTP/1.1", upstream: "http://127.0.0.1:8082/zip-missing-crc.txt", host: "localhost:8081"
I'm using @anthonyryan1's fork and nginx 1.13.1.
Binary files (images, mp3 etc) are zipped successfully but not plain text files :(
Please help and thank you!
Support Unicode Path Extra Field
Different ZIP software expects filenames with non-ASCII characters to be encoded in different charsets. For example, to make files with Cyrillic filenames visible in Windows Explorer ("compressed folders") I have to encode them in cp866. But that makes them unreadable by other archivers like 7-Zip and Info-ZIP, which expect the windows-1251 charset (or the Unicode field, see below) to be used.
A solution was proposed by the Info-ZIP developers: it allows specifying the filename in Unicode (UTF-8) in an extra header field. This field is called the Unicode Path Extra Field (0x7075) in the .ZIP File Format Specification (http://www.pkware.com/documents/casestudies/APPNOTE.TXT).
Many archivers (like the two mentioned above) support this feature. It's also used by GMail's "download all attachments" to encode attachments with non-ASCII names. So such file names are specified twice: first in the regular header field in some 8-bit charset (for old software), and second in the extra field.
It would be great if mod_zip could support Unicode Paths.
Zips are not compressed
Looks like the produced zip files are not compressed; the compression ratio is 100%. Is this intentional, so that the zip file size is known even before the content is downloaded?
Excessive memory usage
This is with nginx 1.8.1 and mod_zip 1.1.6.
I'm trying to download a zip package for around 170,000 files and 100GB.
I use the workaround I described in issue #42 because of the large number of files we sometimes deal with, but this doesn't seem related.
When I watch it under top, nginx just uses all the memory of the machine. I tried this with a machine with as much as 160GB of memory, and nginx used it all in less than a minute. This seems to be happening while it is dealing with the manifest file; the second nginx server in the above workaround never gets hit before memory is exhausted.
I don't see anything in the configuration that I think would be causing this problem. I will attach a minorly scrubbed version:
From some other trials, the problem seems to be with the number of files going into the zip, not the overall size in bytes.
When I was developing the code that uses this (beginning of 2016) I had experimented with some even larger (in terms of number of files) downloads than this, and they were fine, but I don't really have good records to suggest what might be different.
Honestly I'm at a loss to think how nginx even could consume that much memory doing this, but I watched it happen.
When nginx serves files to be zipped instead of the upstream proxy, files are open()ed with URL encoding
Some of the zips I've been building contain a significant number of small files. When I realised that the upstream proxy would be responsible for serving these, I thought I'd copy them to a temporary location and let nginx serve them itself, so I don't tie up my application servers for any longer than necessary.
Here's part of my nginx.conf:
location /zips { root /tmp; }
Files are copied into /tmp/zips and then passed to mod_zip with a line like:
- - /zips/file_to_zip.doc file_to_zip.doc
This works fine for files with no spaces or other characters that need URL encoding, but a request like:
- - /zips/contains%20space.txt contains space.txt
results in a 404 from nginx, as it attempts to open() the file with the URL encoding still present. Here's an extract from the nginx debug log:
2010/07/27 12:29:21 [debug] 18887#0: *6 http filename: "/tmp/zips/contains%20space.txt" 2010/07/27 12:29:21 [error] 18887#0: *6 open() "/tmp/zips/contains%20space.txt" failed (2: No such file or directory), client: 127.0.0.1, server: 127.0.0.1, request: "GET /zip HTTP/1.1", subrequest: "/zips/contains%20space.txt", host: "localhost"
Need a release
There seems to be a lot of changes since the last release. Is it possible to tag one here?
This would help with packaging the module as an RPM in the nginx extras RPM repository.
No Content-Length in headers via ngx_lua
Hi, the headers are different between nginx+PHP and ngx_lua when I try to zip files with mod_zip. Is there anything wrong in my code? Does anybody know why 'Content-Length' doesn't exist in the response headers with ngx_lua? @evanmiller
code by nginx + php
header('X-Accel-Chareset: utf-8');
header('Content-Type: application/zip');
header('Content-Disposition: attachment; filename=test.zip');
header('X-Archive-Files: zip');
$crc32 = "-";
$downloadPath1 = 'http://fileserver/file1';
$downloadPath2 = 'http://fileserver/file2';
$fileSize1 = 32704424;
$fileSize2 = 4849469;
$str1 = sprintf("%s %d %s %s\n", $crc32, $fileSize1, $downloadPath1, 'file1');
$str2 = sprintf("%s %d %s %s\n", $crc32, $fileSize2, $downloadPath2, 'file2');
echo $str1;
echo $str2;
http response headers:
HTTP/1.1 200 OK
Server: nginx/1.2.9
Date: Fri, 14 Nov 2014 03:24:05 GMT
Content-Type: application/zip
Content-Length: 37554209
Connection: keep-alive
X-Accel-Chareset: utf-8
Content-Disposition: attachment; filename=test.zip
X-Archive-Files: zip
code by ngx_lua
ngx.header['X-Accel-Chareset'] = 'utf-8'
ngx.header['Content-Type'] = 'application/zip'
ngx.header['Content-Disposition'] = 'attachment; filename=test.zip'
ngx.header['X-Archive-Files'] = 'zip'
local crc32 = '-'
local downloadPath1 = 'http://fileserver/file1'
local downloadPath2 = 'http://fileserver/file2'
local fileSize1 = 32704424
local fileSize2 = 4849469
ngx.print(crc32 .. ' ' .. fileSize1 .. ' ' .. downloadPath1 .. ' ' .. 'file1' .. '\r\n')
ngx.print(crc32 .. ' ' .. fileSize2 .. ' ' .. downloadPath2 .. ' ' .. 'file2' .. '\r\n')
http response headers
HTTP/1.1 200 OK
Server: nginx/1.2.9
Date: Fri, 14 Nov 2014 03:25:51 GMT
Content-Type: application/zip
Transfer-Encoding: chunked
Connection: keep-alive
X-Accel-Chareset: utf-8
Content-Disposition: attachment; filename=test.zip
Content-Encoding: none
X-Archive-Files: zip
transfer closed with 16467 bytes remaining to read
We use mod_zip with nginx 1.6.3, serving zip files over HTTPS (openssl 1.0.1k).
For a list of 3 files, we can reproduce an error where the client does not receive the full zip archive but experiences a timeout with the error above, as not all of the data announced in "Content-Length" is sent by the server.
The response body (as generated by a Rails application) is:
a6f0fd29 28763 /data/X/200%20NM/200%20NM.xml 200 NM/200 NM.xml
c43dfa28 1386 /data/X/Media/foo.png Media/foo.png
32b4e3ce 1203 /data/X/66C9EB32CB5F4A1B95F3CA1D6BE0C5B4.png 66C9EB32CB5F4A1B95F3CA1D6BE0C5B4.png
The response header is (curl output)
< Date: Thu, 09 Apr 2015 22:51:43 GMT
< Content-Type: application/zip
< Content-Length: 31904
< Connection: keep-alive
< Status: 200 OK
< Content-Disposition: attachment; filename=bulletin-1428619903-7729.zip
The transfer is stalled after 15437 bytes.
The strangest thing is that the size of the second file matters (the 1386 bytes) but not its content. If we reorder the body lines for mod_zip so that "Media/foo.png" is at the end, it works without any issues.
When we change the size of the file (by adding random bytes), it works again. When the file size (random data) is between 1285 and 1386 bytes, the transfer is always stalled.
The error log of nginx (even with debugging enabled) does not show errors but in turn complains about the client having timed out (after 60 seconds, the keep-alive timeout) with "client timed out (110: Connection timed out) while sending response to client"
So both client and server seem to wait for each other and then timeout.
Dynamic modules broke --without-http_ssi_module
A quick bisect confirmed the regression started with: nginx/nginx@0805ba1
I'll continue debugging this and prepare a patch soon.
File modification dates
Nginx is logging in my server's local timezone, but after extraction the files within the zip carry the download time in GMT. Is there any way to set this so that the dates aren't (for me) five hours in the future?
Can't compile under FreeBSD
nginx version: tip
FreeBSD version: 12.0-RELEASE
compile options (from hg repository):
./auto/configure --add-module="${PWD}/../../git/mod_zip"
error:
In file included from /home/edho/hg/nginx/../../git/mod_zip/ngx_http_zip_file.c:5:
/home/edho/hg/nginx/../../git/mod_zip/ngx_http_zip_endian.h:46:10: error: 'be16toh' macro redefined [-Werror,-Wmacro-redefined]
# define be16toh betoh16
^
/usr/include/sys/endian.h:77:9: note: previous definition is here
#define be16toh(x) bswap16((x))
^
In file included from /home/edho/hg/nginx/../../git/mod_zip/ngx_http_zip_file.c:5:
/home/edho/hg/nginx/../../git/mod_zip/ngx_http_zip_endian.h:47:10: error: 'le16toh' macro redefined [-Werror,-Wmacro-redefined]
# define le16toh letoh16
^
/usr/include/sys/endian.h:80:9: note: previous definition is here
#define le16toh(x) ((uint16_t)(x))
^
In file included from /home/edho/hg/nginx/../../git/mod_zip/ngx_http_zip_file.c:5:
/home/edho/hg/nginx/../../git/mod_zip/ngx_http_zip_endian.h:49:10: error: 'be32toh' macro redefined [-Werror,-Wmacro-redefined]
# define be32toh betoh32
^
/usr/include/sys/endian.h:78:9: note: previous definition is here
#define be32toh(x) bswap32((x))
^
In file included from /home/edho/hg/nginx/../../git/mod_zip/ngx_http_zip_file.c:5:
/home/edho/hg/nginx/../../git/mod_zip/ngx_http_zip_endian.h:50:10: error: 'le32toh' macro redefined [-Werror,-Wmacro-redefined]
# define le32toh letoh32
^
/usr/include/sys/endian.h:81:9: note: previous definition is here
#define le32toh(x) ((uint32_t)(x))
^
In file included from /home/edho/hg/nginx/../../git/mod_zip/ngx_http_zip_file.c:5:
/home/edho/hg/nginx/../../git/mod_zip/ngx_http_zip_endian.h:52:10: error: 'be64toh' macro redefined [-Werror,-Wmacro-redefined]
# define be64toh betoh64
^
/usr/include/sys/endian.h:79:9: note: previous definition is here
#define be64toh(x) bswap64((x))
^
In file included from /home/edho/hg/nginx/../../git/mod_zip/ngx_http_zip_file.c:5:
/home/edho/hg/nginx/../../git/mod_zip/ngx_http_zip_endian.h:53:10: error: 'le64toh' macro redefined [-Werror,-Wmacro-redefined]
# define le64toh letoh64
^
/usr/include/sys/endian.h:82:9: note: previous definition is here
#define le64toh(x) ((uint64_t)(x))
^
6 errors generated.
gmake[1]: *** [objs/Makefile:1225: objs/addon/mod_zip/ngx_http_zip_file.o] Error 1
gmake[1]: Leaving directory '/home/edho/hg/nginx'
gmake: *** [Makefile:8: build] Error 2
Removing the definitions in ngx_http_zip_endian.h seems to fix it...? I'm not even sure which FreeBSD version they were for, as the {be,le}{16,32,64}toh functions have been available since 5.0, which was released centuries ago.
Note I only tested whether or not it compiles; I haven't tested whether the module actually works or not.
Resumability without explicit checksums
I have an idea for how to achieve resumability without specifying CRCs. Currently it doesn't work, as the Range header is supported only when CRC-32s are explicitly specified in the archive contents list.
The module could simply remember the state of the checksum calculation. If the transfer is aborted, the user could resume only from exactly the breakpoint or earlier (Range + Content-Range). The calculation can be resumed once the user is past the last calculation point.
That would require temporary storage for the calculations and ZIP "session" identification.
The calculation storage could be shared memory, a local file, or redis. The "session" key could be the SHA1 of the upstream archive contents specification.
Completed and stored checksum calculations could be reused for subsequent downloads of the same ZIP.
compile error with nginx openresty 1.9.7
openresty compiles without mod_zip (latest github) but with the module, make fails with this error:
objs/src/http/ngx_http_postpone_filter_module.o:(.data+0x0): multiple definition of `ngx_http_postpone_filter_module'
objs/src/http/ngx_http_postpone_filter_module.o:(.data+0x0): first defined here
collect2: error: ld returned 1 exit status
objs/Makefile:362: recipe for target 'objs/nginx' failed
make[2]: *** [objs/nginx] Error 1
make[2]: Leaving directory '/root/openresty-1.9.7.3/build/nginx-1.9.7'
Makefile:8: recipe for target 'build' failed
make[1]: *** [build] Error 2
make[1]: Leaving directory '/root/openresty-1.9.7.3/build/nginx-1.9.7'
Makefile:4: recipe for target 'all' failed
make: *** [all] Error 2
compile options:
./configure --without-http_uwsgi_module --without-http_userid_module --without-http_autoindex_module --without-http_auth_basic_module --without-http_ssi_module --without-http_geo_module --with-pcre-jit --with-http_stub_status_module --with-http_mp4_module --add-module=/root/mod_zip
Connection closes with a large file list on Nginx 1.5.3 and later
We use the mod_zip module. After setting up a new server (under FreeBSD) we installed a fresh nginx (1.10.1) and found that there's a limit on the number of rules, and also a limit on the total size of the rules. If the limit is exceeded, the connection closes and nginx reports an error:
2016/06/08 17:41:33 [error] 82841#101370: *1646064 upstream prematurely closed connection while reading upstream, client: 10.118.254.242, server: , request: "GET /test.php HTTP/1.1", upstream: "http://176.xx.xx.xx:8092/test.php", host: "xxxxx.com".
Two examples:
- 78 zipping rules, ~11000 bytes
- 99 rules (with a shorter path), ~8000 bytes
If we add one more rule in either of these cases, the connection closes with an error.
Could you give us a hint about the possible reason for this behaviour? What nginx settings might affect and fix it? Or is it an nginx bug?
Everything worked fine with nginx 1.4.4.
mod_zip version is 1.1.6
The nginx configs are the same for both versions.
...
server {
listen 176.xx.xx.xx:80;
...
location ~ /upltest/(.*) {
root /usr/home/test;
index index.html;
}
...
}
...
Rule example:
- 6523091 /upltest/A/ABCvJOqZdlmF/ABCo_o_1ak7ouul5gcob0i1jk1qvtko8rm DSC_1234.JPG
mod_zip uses a lot of file descriptors
Hi, we use mod_zip every day and we love it.
Since a couple of weeks ago we've found that a lot of our zips don't get delivered properly to our customers, because mod_zip uses a lot of file descriptors, which causes our servers to produce a lot of these lines in our error log:
2012/06/04 11:25:34 [alert] 8146#0: accept4() failed (24: Too many open files)
2012/06/04 11:25:34 [alert] 8146#0: accept4() failed (24: Too many open files)
2012/06/04 11:25:34 [alert] 8146#0: accept4() failed (24: Too many open files)
We use mod_zip with a remote upstream.
Did anyone encounter the same issue?
Is there a problem in nginx or mod_zip?
I want to zip files from the local path /mnt/lelink/movies/temp/139392056522454729/139400226980221046;
nginx location config below:
location /v2/zip/download {
root /mnt/lelink/movies/temp/139392056522454729/139400226980221046;
proxy_pass http://api-server/v1/clip/zip/download;
}
upstream return content:
- 140279 /00001.mp4 00001.mp4
- 98657 /00002.mp4 00002.mp4
- 47956 /00003.mp4 00003.mp4
- 89650 /00004.mp4 00004.mp4
- 129491 /00005.mp4 00005.mp4
mod_zip does not read files from the root path I configured in my location; it reads from the default root path "/opt/srv/nginx-1.4.5/html".
Is there a problem?
Incorrect file modification times in archives when using Range downloading.
Add this to ngx_http_zip_file.c
central_directory_file_header = ngx_zip_central_directory_file_header_template;
file->unix_time = time(NULL);
file->dos_time = ngx_dos_time(file->unix_time);
central_directory_file_header.mtime = file->dos_time;
central_directory_file_header.crc32 = file->crc32;
UTF8 filenames don't quite work
Even though the 11th general-purpose bit is set and all filenames are in UTF-8, the resulting archive still shows unreadable names when non-Latin symbols are used.
Everything seems to be done according to the docs, so this is more a complaint than a report, as I don't quite understand how that is possible myself.
Do you have any idea, by any chance?
Let me know if you need any examples and/or testing.
Tmp file or actual stream
My understanding was that this module streams a zip (while it is being created) to the browser, in which case we wouldn't know the Content-Length. In practice, as we've been using it, the Content-Length is available. So does mod_zip create a temporary zip file behind the scenes to get the Content-Length?
Thank you
Compile Error on Mac OS X 10.9
The following takes place during make when I configure nginx with mod_zip:
ngx_http_zip_parsers.c:158:18: error: unused variable 'request_error'
[-Werror,-Wunused-const-variable]
static const int request_error = 0;
^
ngx_http_zip_parsers.c:160:18: error: unused variable 'request_en_main'
[-Werror,-Wunused-const-variable]
static const int request_en_main = 1;
^
ngx_http_zip_parsers.c:418:18: error: unused variable 'range_error'
[-Werror,-Wunused-const-variable]
static const int range_error = 0;
^
ngx_http_zip_parsers.c:420:18: error: unused variable 'range_en_main'
[-Werror,-Wunused-const-variable]
static const int range_en_main = 1;
^
The default compiler is clang if it matters.
Missing LICENSE file
I could not find any license file in this git repository nor any mention in the readme. Which license does the project use? Could you add the LICENSE file?
Dynamically generating file list
From the README:
It then scans the response body for a list of files. The syntax is a space-separated list of the file checksum (CRC-32), size (in bytes), location (properly URL-encoded), and file name. One file per line.
Regarding the list of files, is there a module that you know of, or some other easy way, to generate the list in the correct format, given some path? E.g. given the path /some/dir, generate a list of all files under that path (recursively)?
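No such module ships with mod_zip as far as I know, but a small script suffices. A hedged sketch in Python (the /zips URL prefix and the on-disk root are assumptions about your setup) that walks a directory and prints mod_zip-ready lines, including CRC-32s so Range support stays enabled:

import os
import zlib
from urllib.parse import quote

def file_list(root, url_prefix="/zips"):
    # Yield one "<crc32> <size> <location> <name>" line per file.
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            crc = 0
            with open(full, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    crc = zlib.crc32(chunk, crc)
            yield "%08x %d %s/%s %s" % (crc & 0xFFFFFFFF,
                                        os.path.getsize(full),
                                        url_prefix, quote(rel), rel)

if __name__ == "__main__":
    print("\n".join(file_list("/some/dir")))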
File descriptors are not closed until all subrequests have finished
When I want to deliver a zip that contains a lot of files
# find . | wc -l
2192
I get this error:
2015/03/21 14:04:11 [crit] 2312#0: *2213 open() "/home/XXX" failed (24: Too many open files), client: XXX.XXX.XXX.XXX, server: ~^XXX$, request: "GET /XXX HTTP/1.1", subrequest: "/XXX", host: "XXX"
I am able to download the whole zip when I resume the download. With a fast connection I am able to download more bytes per retry than with a slow one.
End-of-central-directory signature not found
I continue to get the above error, with a .zip.cpgz file being produced when unzipping. I'm doing this with a very simple test.
- A node server is running behind nginx as a reverse proxy
- I'm sending this line item in the response:
519956fa 8 /storage/test.txt test.txt
The file is downloading, the crc32 hash is right, the byte size is right, I can access the file directly.
Any ideas?
Thank you
zip files created cannot be unzipped on Mac OS X (10.6) with default unarchiver (Error- 1 Operation not permitted)
I have a web app where I use nginx + mod_zip (HEAD version of repo).
Every zip created with mod_zip gives me an 'Error- 1 Operation not permitted' error when I use the default unarchiver from Mac OS; using StuffIt Expander solves this issue.
Any reason why these zips cannot be unarchived using the default tool?
TIA
Christiaan
Symlinks do not work
Symlinks are stored as files, not as symlinks.
Some documentation for the pros:
Header 0x756e
Java implementation: http://svn.apache.org/repos/asf/ant/core/trunk/src/main/org/apache/tools/zip/AsiExtraField.java
Zip Doku: http://www.pkware.com/documents/casestudies/APPNOTE.TXT and
http://mdfs.net/Docs/Comp/Archiving/Zip/ExtraField
Add option: actual file modification dates
Please add support for storing the actual file modification dates inside the zip.
Possibly as an option in config:
- None (default, give request date).
- From text file list content (I have the dates already in DB for other purposes, so I'd prefer this).
- From the filesystem, using the text file list's own last-modification date.
- From filesystem, each file's own.
Invalid archive produced when upstream connection is reset
I'm wondering if anyone can help answer this question; there seems to be limited information about it on the web in general. We have a Ruby back-end producing the correct manifest for mod_zip. The files themselves reside on AWS S3 and are a mix of image stills and motion files, which can get quite large.
Given this setup, mod_zip will produce an invalid archive as often as it produces a valid one.
I compiled NGINX 1.10.1 with debug mode, along with the latest version of the mod_zip module.
From what I can glean about proxying files from an upstream, our manifest entries look like /amazons3/path/to/file.suffix. The relevant bit of the NGINX config looks like:
location ~ "^/(amazons3)/(?<file>.*)" {
resolver 8.8.8.8 8.8.4.4; # Google public DNS
set $s3_domain s3.amazonaws.com;
proxy_http_version 1.1;
proxy_set_header Host $s3_domain;
proxy_set_header Authorization '';
# We DO NOT want gzip content returned to us.
# See: https://github.com/evanmiller/mod_zip/issues/21
#
# Compiled --with-debug and debug logging turned on, we were
# sending to S3: Accept-Encoding: gzip, deflate
#
proxy_set_header Accept-Encoding '';
proxy_hide_header x-amz-id-2;
proxy_hide_header x-amz-request-id;
proxy_hide_header x-amz-server-side-encryption;
proxy_hide_header Set-Cookie;
proxy_ignore_headers "Set-Cookie";
proxy_intercept_errors on;
proxy_buffering off;
# NOTE HTTP not HTTPS because mod_zip resolves domain to ip address
proxy_pass http://$s3_domain/bucket/$file;
}
After seeing a number of [error] recv() failed (104: Connection reset by peer) while sending to client in the NGINX error.log, I decided to turn on debug to see what was going on. The relevant bit of the log file looks like this:
[debug] recv: fd:12 4096 of 4096
[debug] http output filter "/amazons3/path/to/file_1.mov?"
[debug] http copy filter: "/amazons3/path/to/file_1.mov?"
[debug] mod_zip: entering subrequest body filter
[debug] mod_zip: No range for subrequest to satisfy
[debug] http postpone filter "/amazons3/path/to/file_1.mov?" 00000000022CD1A8
[debug] write new buf t:0 f:0 0000000000000000, pos 00000000022CE260, size: 4096 file: 0, size: 0
[debug] http write filter: l:0 f:1 s:4096
[debug] http write filter limit 0
[debug] SSL buf copy: 4096
[debug] SSL to write: 4096
[debug] SSL_write: 4096
[debug] http write filter 0000000000000000
[debug] http copy filter: 0 "/amazons3/path/to/file_1.mov?"
[debug] recv: fd:12 4096 of 4096
[debug] http output filter "/amazons3/path/to/file_1.mov?"
[debug] http copy filter: "/amazons3/path/to/file_1.mov?"
[debug] mod_zip: entering subrequest body filter
[debug] mod_zip: No range for subrequest to satisfy
[debug] http postpone filter "/amazons3/path/to/file_1.mov?" 00000000022CD1A8
[debug] write new buf t:0 f:0 0000000000000000, pos 00000000022CE260, size: 4096 file: 0, size: 0
[debug] http write filter: l:0 f:1 s:4096
[debug] http write filter limit 0
[debug] SSL buf copy: 4096
[debug] SSL to write: 4096
[debug] SSL_write: 4096
[debug] http write filter 0000000000000000
[debug] http copy filter: 0 "/amazons3/path/to/file_1.mov?"
[debug] recv: fd:12 2800 of 4096
[debug] http output filter "/amazons3/path/to/file_1.mov?"
[debug] http copy filter: "/amazons3/path/to/file_1.mov?"
[debug] mod_zip: entering subrequest body filter
[debug] mod_zip: No range for subrequest to satisfy
[debug] http postpone filter "/amazons3/path/to/file_1.mov?" 00000000022CD1A8
[debug] write new buf t:0 f:0 0000000000000000, pos 00000000022CE260, size: 2800 file: 0, size: 0
[debug] http write filter: l:0 f:1 s:2800
[debug] http write filter limit 0
[debug] SSL buf copy: 2800
[debug] SSL to write: 2800
[debug] SSL_write: 2800
[debug] http write filter 0000000000000000
[debug] http copy filter: 0 "/amazons3/path/to/file_1.mov?"
[debug] recv: fd:12 -1 of 4096
[error] recv() failed (104: Connection reset by peer) while sending to client, client: 999.999.999.999, server: _, request: "POST /streaming-zipper HTTP/1.1", subrequest: "/amazons3/path/to/file_1.mov", upstream: "http://54.231.114.116:80/bucket/path/to/file_1.mov", host: "sts.d.c", referrer: "https://sts.d.c/streaming-zipper"
[debug] finalize http upstream request: 502
[debug] finalize http proxy request
[debug] free rr peer 1 0
[debug] close http upstream connection: 12
[debug] free: 0000000002247600, unused: 48
[debug] reusable connection: 0
[debug] http output filter "/amazons3/path/to/file_1.mov?"
[debug] http copy filter: "/amazons3/path/to/file_1.mov?"
[debug] mod_zip: entering subrequest body filter
[debug] mod_zip: No range for subrequest to satisfy
[debug] http postpone filter "/amazons3/path/to/file_1.mov?" 00007FFFCE7BC590
[debug] write new buf t:0 f:0 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 0
[debug] http write filter: l:0 f:1 s:0
[debug] http copy filter: 0 "/amazons3/path/to/file_1.mov?"
[debug] http finalize request: 0, "/amazons3/path/to/file_1.mov?" a:1, c:2
[debug] http wake parent request: "/streaming-zipper?"
[debug] http posted request: "/streaming-zipper?"
[debug] http writer handler: "/streaming-zipper?"
[debug] http output filter "/streaming-zipper?"
[debug] http copy filter: "/streaming-zipper?"
[debug] mod_zip: entering main request body filter
[debug] mod_zip: restarting subrequests
[debug] mod_zip: sending pieces, starting with piece 8 of total 15
[debug] mod_zip: no ranges / sending piece type 0
[debug] http postpone filter "/streaming-zipper?" 00000000022CD198
[debug] write new buf t:0 f:0 0000000000000000, pos 00000000022CF5E0, size: 99 file: 0, size: 0
[debug] http write filter: l:0 f:1 s:99
[debug] http write filter limit 0
[debug] SSL buf copy: 99
[debug] SSL to write: 99
[debug] SSL_write: 99
[debug] http write filter 0000000000000000
[debug] mod_zip: no ranges / sending piece type 1
[debug] mod_zip: subrequest for "/amazons3/path/to/file_2.jpg?"
[debug] mod_zip: have a wait context for "/amazons3/path/to/file_1.mov?"
[debug] mod_zip: wait "/amazons3/path/to/file_1.mov?" done
[debug] http subrequest "/amazons3/path/to/file_2.jpg?"
[debug] mod_zip: subrequest for "/amazons3/path/to/file_2.jpg?" result 0, allocating some mem on main request's pool
[debug] mod_zip: subrequest for "/amazons3/path/to/file_2.jpg?" result 0
[debug] mod_zip: sent 2 pieces, last rc = -2
[debug] http copy filter: -2 "/streaming-zipper?"
[debug] http writer output filter: -2, "/streaming-zipper?"
[debug] event timer: 3, old: 1481918834155, new: 1481918834368
[debug] http posted request: "/amazons3/path/to/file_2.jpg?"
[debug] rewrite phase: 0
[debug] test location: "/"
[debug] test location: ~ "\.(php|aspx?|jsp|cfm|go)$"
[debug] test location: ~ "^/(amazons3)/(?<file>.*)"
[debug] http regex set $file to "path/to/file_2.jpg"
[debug] using configuration "^/(amazons3)/(?<file>.*)"
What I gather from reading this debug trace is that the peer, S3, is spontaneously closing the connection to NGINX, which treats that as a 502. NGINX finalizes the sub-request for file_1.mov and then immediately moves on to the next file in the manifest, file_2.jpg.
I'm not sure why S3 is prematurely closing the connection, but it would appear that mod_zip doesn't try to resume the rest of file_1.mov because, for whatever reason, it isn't asking S3 for ranges to begin with.
So this brings me to the question: can mod_zip properly stream really large files from S3? Is there something incorrect in the NGINX location /amazons3/ directive? (The config is pieced together from a number of articles on the web.) Or should we be downloading the files to the NGINX server and then zipping them from there?
If anybody has any knowledge about mod_zip and AWS S3, I'd appreciate your insight.
Thanks.
Sheldon
No check for r->upstream validity in ngx_http_zip_generate_pieces
Hello,
When there is no upstream and content is generated with OpenResty's content_by_lua or the like, the value of r->upstream is NULL, so ngx_http_variable_unknown_header causes nginx to crash (ngx_http_zip_file.c:253).
The old ngx_http_upstream_header_variable had a check for a NULL upstream before calling ngx_http_variable_unknown_header.
Pull request - #69