Comments (67)
We plan to add static files and proxy support. The proxy code is inherited from an old internal product and is non-functional at the moment.
from unit.
I think Unit's first purpose is to serve applications, not proxying. And new applications are based on the single-page model: they use static files to bootstrap the page and then request microservices. Unit with static file support would be perfect for serving this type of application in a standalone approach, and with containers it is simpler to manage.
@hongzhidao And proxying too. Both features are planned by NGINX Conf 2019.
@mcarbonneaux It's planned in Q3.
Perfect, looking forward to it.
@VBart I think the priority of proxying is higher than that of serving static files.
Static file support is a must if we want to put Unit directly behind a load balancer.
@VBart
I'm wondering, does proxying include both HTTP proxying and TCP proxying?
At the first iteration it will be very basic HTTP proxying.
@melck Current goal is to finish initial support for serving static files by NGINX Conf 2019.
We're 4 days away from the launch, how's it coming together?
Currently the code is under review and testing. The plan is to release Unit 1.11.0 on 18 Sep 2019 with basic proxying and static file serving support.
Is 1.11 ready to test?
What is the config option to use static files along with PHP?
Unit 1.11 is now ready to test. For installation instructions see: https://unit.nginx.org/installation/
For static files configuration see: https://unit.nginx.org/configuration/#static-files
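Although the thread doesn't spell it out at this point, a minimal configuration of that kind (using the share option from the linked documentation; the listener address and path here are illustrative) looks roughly like:

```json
{
    "listeners": {
        "*:8080": {
            "pass": "routes"
        }
    },

    "routes": [
        {
            "action": {
                "share": "/www/static/"
            }
        }
    ]
}
```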
I have not tried, but it seems that there is no regex support because it is not mentioned in the documentation, so I cannot write \.(png|jpg) like in nginx.
Regular expression support is planned for the router. Once we have regexps in the match object, we will be able to introduce regexp captures and variables that can be used in different config options.
@mostafahussein thanks for your config example. I guess something like try_files will simplify your case once it becomes available in Unit.
The static files and proxying will be handled by the router process.
Yes, they will be configured in the JSON conf. Upstreams will be processed by regular worker threads (engines). Files will be processed by the same threads, with file operations offloaded to specialized thread pools.
Hi @VBart, Q3 has passed.
Any chance it'll be released soon?
Do we know if this feature will be part of Unit 1.8? If so, is there a scheduled release date for that? :)
@kawaii Unit 1.8 (expected at the end of February) will have initial request routing capabilities. Serving of static files will be implemented after that.
My aim is to use Unit as the basis for docker services; the container will only run Unit, which will serve a complete application (i.e. PHP application + static files, or Python microservice, etc). Is this one of the goals of the Unit project, or should I look elsewhere?
Unit should fit this case pretty well. Unit is a building block for web services. Depending on its position, it can do different tasks, and we're working on making it perfect for more cases.
I'm on it.
The proxy feature is also useful. Moreover, some of the code is already in Unit, such as 'source, stream, upstream'.
Does it mean that proxying will be implemented in the short term?
And when will the product be stable?
We hope the code can be updated as frequently as possible,
and become more complete, e.g. the files in 'NXT_LIB_SRC0'
and the implementations of nxt_event_conn_create and nxt_event_conn_connect.
Thanks for your great work again.
Very curious how this will be used. Now everything is done inside worker processes via language modules.
But maybe 'static, proxy' are a bit different; we are used to these in NGINX.
Can you show us in advance?
So the static file root and proxy upstream will be specified in the JSON conf, and Unit will have special threads (engines) doing these things?
When is static file support planned to be added to Unit?
@toidi
Development of this feature has already started. Most likely it will be finished by the end of the year.
Ping.
Is the feature in progress?
@VBart
@hongzhidao the internal request routing feature is in progress. Unfortunately, there is no way to finish it by the end of the year and also add support for serving static files. Anyway, that will be the next goal at the beginning of 2019.
I want to replace all my (nginx + gunicorn) setups with nginx-unit, but this feature is needed to do it. I think it will propel nginx-unit (in the container world at least).
@VBart Do you have any idea when this can be implemented? Or how we could contribute?
@melck Current goal is to finish initial support for serving static files by NGINX Conf 2019.
Current goal is to finish initial support for serving static files by NGINX Conf 2019.
And how about the proxying feature?
@VBart
I'm wondering, does proxying include both HTTP proxying and TCP proxying?
@melck Current goal is to finish initial support for serving static files by NGINX Conf 2019.
We're 4 days away from the launch, how's it coming together?
@VBart, Great job.
BTW, is it OK to preview the patch? We are willing to test it and give feedback.
@hongzhidao
I've attached a patch for initial support of serving static files. Currently we are discussing how to configure it properly.
With the patch, the document root directory can be specified in pass:
{
    "settings": {
        "http": {
            "static": {
                "mime_types": {
                    "text/plain": [
                        ".log",
                        "README",
                        "CHANGES"
                    ]
                }
            }
        }
    },

    "listeners": {
        "127.0.0.1:8080": {
            "pass": "routes"
        }
    },

    "routes": [
        {
            "action": {
                "pass": "/data/www/example.com/media"
            }
        }
    ]
}
Obviously, this method has a problem: the file hierarchy is mixed with internal request pass'ing:
"action": {
    "pass": "http://127.0.0.1:8000"
}

"action": {
    "pass": "/path/to/root"
}

"action": {
    "pass": "applications/cms"
}
For passing requests to the file system, we can use some kind of prefix/scheme:
"action": {
    "pass": "static:/path/to/root"
}

"action": {
    "pass": "local:/path/to/root"
}

"action": {
    "pass": "fs:/path/to/root"
}

"action": {
    "pass": "dir:/path/to/root"
}

"action": {
    "pass": "files:/path/to/root"
}
Or we can implement different action verbs instead:
"action": {
    "proxy": "http://127.0.0.1:8000"
}

"action": {
    "serve"/"sendfile": "/path/to/root"
}

"action": {
    "pass": "applications/cms"
}

"action": {
    "return": 301,
    "location": "https://example.com"
}
@VBart
My thoughts.
- Now, the design of routes is very clear: it only includes match and action. And the job of pass in action is to point to the next route or to the place where HTTP content is generated, via a prefix. So it's better to keep this structure and add new content-generation types by extending the prefix value. What about this way? It seems more flexible.
"action": {
    "pass": "applications/cms"
}

"action": {
    "pass": "assets/asset1"
}

"action": {
    "pass": "proxys/p1"
}

/* It's OK */
"action": {
    "return": 301,
    "location": "https://example.com"
}

"applications": {
    "cms": {}
}

"assets": {
    "static1": {
        "index": "...",
        "root": "...",
        "sendfile": "on|off"
    }
}

"proxys": {
    "p1": {
        "upstream": "http://127.0.0.1:8000"  // or upstream name
    }
}
- MIME types.
I think it's OK, since it's a global setting.
"settings": {
    "http": {
        "mime_types": {
            "text/plain": [
                ".log",
                "README",
                "CHANGES"
            ]
        }
    }
},
- Introduce nxt_http_static.c, nxt_http_proxy.c and nxt_http_application.c. These all generate HTTP content, so they are on the same level. Now that nxt_router.c is too large, what about refactoring out nxt_http_application.c?
- The sendfile feature is welcome. And can you also attach the proxying patch?
- The legacy code in nxt_buf_filter.c and nxt_buf_pool.c seems good; will you consider using it?
- I tested with wrk, compared to nginx with sendfile off.
-- UNIT
Running 1m test @ http://{ip:port}/index.html
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 3.62ms 9.40ms 419.17ms 99.38%
Req/Sec 32.69k 2.60k 69.23k 97.09%
19476448 requests in 1.00m, 14.57GB read
Requests/sec: 324199.95
Transfer/sec: 248.27MB
-- NGINX
Running 1m test @ http://{ip:port}/index.html
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 4.60ms 5.33ms 214.08ms 89.38%
Req/Sec 26.80k 2.51k 54.80k 84.13%
15995182 requests in 1.00m, 12.66GB read
Requests/sec: 266242.77
Transfer/sec: 215.81MB
CPU(s): 48
Thanks for your great work.
Why not just use an application with type "static", which would require no changes to the routing at all?
i.e.
{
    "settings": {
        "http": {
            "static": {
                /* Optional global settings */
                "mime_types": {
                    "text/plain": [
                        ".log",
                        "README",
                        "CHANGES"
                    ]
                },
                "index": "index.html"
            }
        }
    },

    "listeners": {
        "127.0.0.1:8080": {
            "pass": "routes"
        }
    },

    "applications": {
        "assets": {
            "type": "static",
            "root": "/data/www/example.com/media",
            /* Optional per-app settings */
            "mime_types": {},
            "index": "index.txt"
        },
        "cms": {
            /* Regular app */
        }
    },

    "routes": [
        {
            "match": {
                "uri": "/assets/*"
            },
            "action": {
                "pass": "applications/assets"
            }
        },
        {
            "action": {
                "pass": "applications/cms"
            }
        }
    ]
}
Additionally, for my use-case (serving a mix of dynamic and static content, a standard PHP app), it would be great if we could add a new route condition to emulate nginx's "try_files" directive.
i.e.
"routes": [
    {
        /* Serve explicit scripts */
        "match": {
            "uri": "*.php"
        },
        "action": {
            "pass": "applications/php_app_with_route"
        }
    },
    {
        /* Serve static files */
        "match": {
            "file_exists": "/path/to/root"
        },
        "action": {
            "pass": "applications/static_app"
        }
    },
    {
        /* Fall back to PHP if no static file was matched */
        "action": {
            "pass": "applications/php_app_with_script"
        }
    }
]
Alternatively, this could be something like
"match": {
    "try_static": "applications/static_site_name"
}
which would avoid repeating the site root, although that might break the encapsulation of the application object.
- Now, the design of routes is very clear: it only includes match and action. And the job of pass in action is to point to the next route or to the place where HTTP content is generated, via a prefix. So it's better to keep this structure and add new content-generation types by extending the prefix value. What about this way? It seems more flexible.

"assets": {
    "static1": {
        "index": "...",
        "root": "...",
        "sendfile": "on|off"
    }
}
The problem with this approach is that users will have to give names to locations (i.e. directories). Giving directories additional fictional names looks like overkill.
In most cases you only need to configure a document root directory (and this is the key). The index setting is rarely different from index.html or index.php.
As for sendfile on/off and similar settings, they are rarely configured per location. Moreover, I'd like to avoid dumping most of these performance-related settings on users. Usually users don't tune them with deep knowledge and just copy'n'paste them from some weird howtos on the Internet. My idea is that Unit should be smart enough to use the best method to serve each file. Depending on the situation and the size of the file, it can try to preread the file or set a task for sendfile() in a thread pool. Of course, when necessary, we may expose some tuning options. But for most cases Unit should behave the fastest way out of the box.
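A rough sketch of that "smart by default" idea; the names and the threshold are hypothetical, not Unit's actual code:

```c
#include <stddef.h>

#define SMALL_FILE_LIMIT  (32 * 1024)    /* hypothetical threshold */

typedef enum {
    SERVE_PREREAD,        /* read the file into memory buffers in place  */
    SERVE_SENDFILE_POOL   /* queue a sendfile() task for a thread pool   */
} serve_method_t;

/* Pick a serving method from the file size alone; a real server could
 * also account for load, file type, and socket state. */
static serve_method_t
choose_serve_method(size_t file_size)
{
    return (file_size <= SMALL_FILE_LIMIT) ? SERVE_PREREAD
                                           : SERVE_SENDFILE_POOL;
}
```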
"proxys": {
    "p1": {
        "upstream": "http://127.0.0.1:8000"  // or upstream name
    }
}
We have plans for something like:
{
    "listeners": { … },
    "applications": { … },
    "upstreams": {
        "tier2": { … },
        "testing": { … }
    }
}
and
{
    "pass": "upstreams/tier2"
}
but this is for multiple servers and load-balancing.
For the simple case when there's only one server (which is also quite common), it would be easier to have a short in-place option instead of forcing users to configure a separate object with another name.
- Introduce nxt_http_static.c, nxt_http_proxy.c and nxt_http_application.c. These all generate HTTP content, so they are on the same level. Now that nxt_router.c is too large, what about refactoring out nxt_http_application.c?
All in good time, I hope. There are a lot of plans for rewriting huge parts of it, including the ports system. Currently, it would be a worthless task to split code that is going to be rewritten anyway.
- The sendfile feature is welcome. And can you also attach the proxying patch?
Sure, but I'll publish it a bit later, as @igorsysoev is currently working on it, fixing some bugs.
- The legacy code in nxt_buf_filter.c and nxt_buf_pool.c seems good; will you consider using it?
We'll see later whether it's useful or not.
Why not just use an application with type "static", which would require no changes to the routing at all?
Well, pretty much the same reasons as I've mentioned above: forcing users to give names to their directories with static files seems to be overkill.
As for abusing the applications object for this, I should note that Unit hasn't been planned to be an application server only. It's a general purpose server and proxy (like nginx). In many cases there won't be any applications at all, if Unit is only used as a proxy or a file server. So we should be careful not to overuse application terminology.
My idea is that Unit should be smart enough to use the best method to serve each file.
Just a reminder: a tool or admin panel will be produced for generating conf.json.
I think there will be many developers who would like to contribute such tools,
since I believe Unit will be welcomed and popular in the future.
So consider this factor. NGINX conf is also smart, but it's not easy to generate.
Unit uses JSON, so that's not a problem, but I still think flexibility is important.
Sure, but I'll publish it a bit later, as @igorsysoev is currently working on it, fixing some bugs.
Can you share it now? Thanks again.
At the first iteration it will be very basic HTTP proxying.
but this is for multiple servers and load-balancing.
Will this version include upstreams?
I should note that Unit hasn't been planned to be an application server only. It's a general purpose server and proxy
Sounds good. Will HTTP cache be in the plan?
forcing users to give names for their directories with static files seems to be overkill
Understood - if you're aiming to be mostly zero-config regarding sendfile, gzip, etc., that does seem redundant.
I should note that Unit hasn't been planned to be an application server only.
Apologies if this is the wrong place to ask, but is that still one of the use cases?
My aim is to use Unit as the basis for docker services; the container will only run Unit, which will serve a complete application (i.e. PHP application + static files, or Python microservice, etc). Is this one of the goals of the Unit project, or should I look elsewhere?
Additionally, for my use-case (serving a mix of dynamic and static content, a standard PHP app), it would be great if we could add a new route condition to emulate nginx's "try_files" directive.
I'm thinking about something like:

"action": {
    "pass": "static:/path/to/root",
    "on_error": {
        "pass": "..."
    }
}

or you can skip to the next route step:

"action": {
    "pass": "static:/path/to/root",
    "on_error": "next"
}
The problem with the try_files directive in nginx is that it has a race condition: since it's separated from the static module, there's a window between the try_files check and opening the file for serving. We'd like to avoid this in Unit.
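The race-free ordering can be sketched in C (a hypothetical helper to illustrate the idea, not Unit's actual handler): the file is opened first and then checked via its descriptor, so there is no window between the check and the open.

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Instead of stat()-then-open() (the try_files pattern), open the file
 * first and fstat() the descriptor: the existence check and the file
 * that is later served are guaranteed to be the same object. */
static int
serve_or_next(const char *path)
{
    int          fd;
    struct stat  fi;

    fd = open(path, O_RDONLY);
    if (fd == -1) {
        return -1;    /* no such file: fall through to the next route */
    }

    if (fstat(fd, &fi) == -1 || !S_ISREG(fi.st_mode)) {
        close(fd);
        return -1;
    }

    /* ... serve from fd; nothing can swap the file underneath ... */

    close(fd);
    return 0;
}
```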
Sounds good, thanks for the information
@VBart Take a look, please. Just for style.
- With the static patch:
a) rename alloc to size;
b) unify error return handling:
if (ret != NXT_OK) {
    return ret;
}
- With http parse:
Will you put the parsing of HTTP proxy responses into nxt_http_parse.c?
If yes, what about simplifying some of the existing structures?
a) nxt_http_request_parse_t, rp => nxt_http_parser_t, parser.
b) nxt_http_parse_request_init() => nxt_http_parse_init()
c) it seems the field offset in nxt_http_request_parse_t is unused.
diff --git a/src/nxt_http_parse.c b/src/nxt_http_parse.c
--- a/src/nxt_http_parse.c
+++ b/src/nxt_http_parse.c
@@ -1080,10 +1080,6 @@ nxt_http_parse_complex_target(nxt_http_r
}
}
- if (state >= sw_quoted) {
- return NXT_HTTP_PARSE_INVALID;
- }
-
args:
for (/* void */; p < rp->target_end; p++) {
diff --git a/src/nxt_http_parse.h b/src/nxt_http_parse.h
--- a/src/nxt_http_parse.h
+++ b/src/nxt_http_parse.h
@@ -34,11 +34,11 @@ typedef union {
struct nxt_http_request_parse_s {
+ nxt_mp_t *mem_pool;
+
nxt_int_t (*handler)(nxt_http_request_parse_t *rp,
u_char **pos, u_char *end);
- size_t offset;
-
nxt_str_t method;
u_char *target_start;
@@ -53,11 +53,8 @@ struct nxt_http_request_parse_s {
nxt_http_ver_t version;
nxt_list_t *fields;
- nxt_mp_t *mem_pool;
-
nxt_str_t field_name;
nxt_str_t field_value;
-
uint32_t field_hash;
/* target with "/." */
diff --git a/src/nxt_http_static.c b/src/nxt_http_static.c
--- a/src/nxt_http_static.c
+++ b/src/nxt_http_static.c
@@ -31,7 +31,7 @@ nxt_http_pass_t *
nxt_http_static_handler(nxt_task_t *task, nxt_http_request_t *r,
nxt_http_pass_t *pass)
{
- size_t alloc, encode;
+ size_t size, encode;
u_char *p;
struct tm tm;
nxt_buf_t *fb;
@@ -72,9 +72,9 @@ nxt_http_static_handler(nxt_task_t *task
mtype = NULL;
}
- alloc = pass->name.length + r->path->length + index.length + 1;
+ size = pass->name.length + r->path->length + index.length + 1;
- f->name = nxt_mp_nget(r->mem_pool, alloc);
+ f->name = nxt_mp_nget(r->mem_pool, size);
if (nxt_slow_path(f->name == NULL)) {
goto fail;
}
@@ -148,15 +148,15 @@ nxt_http_static_handler(nxt_task_t *task
nxt_http_field_name_set(field, "ETag");
- alloc = NXT_TIME_T_HEXLEN + NXT_OFF_T_HEXLEN + 3;
+ size = NXT_TIME_T_HEXLEN + NXT_OFF_T_HEXLEN + 3;
- p = nxt_mp_nget(r->mem_pool, alloc);
+ p = nxt_mp_nget(r->mem_pool, size);
if (nxt_slow_path(p == NULL)) {
goto fail;
}
field->value = p;
- field->value_length = nxt_sprintf(p, p + alloc, "\"%xT-%xO\"",
+ field->value_length = nxt_sprintf(p, p + size, "\"%xT-%xO\"",
nxt_file_mtime(&fi),
nxt_file_size(&fi))
- p;
@@ -224,19 +224,19 @@ nxt_http_static_handler(nxt_task_t *task
nxt_http_field_name_set(field, "Location");
encode = nxt_encode_uri(NULL, r->path->start, r->path->length);
- alloc = r->path->length + encode * 2 + 1;
+ size = r->path->length + encode * 2 + 1;
if (r->args->length > 0) {
- alloc += 1 + r->args->length;
+ size += 1 + r->args->length;
}
- p = nxt_mp_nget(r->mem_pool, alloc);
+ p = nxt_mp_nget(r->mem_pool, size);
if (nxt_slow_path(p == NULL)) {
goto fail;
}
field->value = p;
- field->value_length = alloc;
+ field->value_length = size;
if (encode > 0) {
p = (u_char *) nxt_encode_uri(p, r->path->start, r->path->length);
@@ -302,7 +302,7 @@ nxt_http_static_extract_extension(nxt_st
static void
nxt_http_static_body_handler(nxt_task_t *task, void *obj, void *data)
{
- size_t alloc;
+ size_t size;
nxt_buf_t *fb, *b, **next, *out;
nxt_off_t rest;
nxt_int_t n;
@@ -317,9 +317,9 @@ nxt_http_static_body_handler(nxt_task_t
n = 0;
do {
- alloc = nxt_min(rest, NXT_HTTP_STATIC_BUF_SIZE);
+ size = nxt_min(rest, NXT_HTTP_STATIC_BUF_SIZE);
- b = nxt_buf_mem_alloc(r->mem_pool, alloc, 0);
+ b = nxt_buf_mem_alloc(r->mem_pool, size, 0);
if (nxt_slow_path(b == NULL)) {
goto fail;
}
@@ -332,7 +332,7 @@ nxt_http_static_body_handler(nxt_task_t
*next = b;
next = &b->next;
- rest -= alloc;
+ rest -= size;
} while (rest > 0 && ++n < NXT_HTTP_STATIC_BUF_COUNT);
@@ -498,7 +498,7 @@ nxt_http_static_mtypes_init(nxt_mp_t *mp
ret = nxt_http_static_mtypes_hash_add(mp, hash, &extension, type);
if (nxt_slow_path(ret != NXT_OK)) {
- return NXT_ERROR;
+ return ret;
}
}
@@ -544,6 +544,8 @@ nxt_http_static_mtypes_hash_add(nxt_mp_t
lhq.proto = &nxt_http_static_mtypes_hash_proto;
lhq.pool = mp;
+ /* never return NXT_DECLINED */
+
return nxt_lvlhsh_insert(hash, &lhq);
}
diff --git a/src/nxt_router.c b/src/nxt_router.c
--- a/src/nxt_router.c
+++ b/src/nxt_router.c
@@ -1447,7 +1447,7 @@ nxt_router_conf_create(nxt_task_t *task,
ret = nxt_router_conf_process_static(task, tmcf->router_conf, static_conf);
if (nxt_slow_path(ret != NXT_OK)) {
- return NXT_ERROR;
+ return ret;
}
router = tmcf->router_conf->router;
@@ -1815,7 +1815,7 @@ nxt_router_conf_process_static(nxt_task_
ret = nxt_http_static_mtypes_init(mp, &rtcf->mtypes_hash);
if (nxt_slow_path(ret != NXT_OK)) {
- return NXT_ERROR;
+ return ret;
}
if (conf == NULL) {
@@ -1849,7 +1849,7 @@ nxt_router_conf_process_static(nxt_task_
ret = nxt_http_static_mtypes_hash_add(mp, &rtcf->mtypes_hash,
&extension, type);
if (nxt_slow_path(ret != NXT_OK)) {
- return NXT_ERROR;
+ return ret;
}
continue;
@@ -1869,7 +1869,7 @@ nxt_router_conf_process_static(nxt_task_
ret = nxt_http_static_mtypes_hash_add(mp, &rtcf->mtypes_hash,
&extension, type);
if (nxt_slow_path(ret != NXT_OK)) {
- return NXT_ERROR;
+ return ret;
}
}
}
With the static patch: a) rename alloc to size; b) unify error return handling:
if (ret != NXT_OK) {
    return ret;
}
Why? If we don't expect any response code other than NXT_OK or NXT_ERROR, then it's better to write it explicitly. When I read code and see return ret, I don't know what else ret can contain (besides NXT_OK), so I have to check the code of the called function. As a result, it makes the code harder to read.
If we don't expect any response code other than NXT_OK or NXT_ERROR, then it's better to write it explicitly.
Makes sense. I thought/guessed it was a convention in nginx/unit/njs; see grep -r 'return ret' src/ -B 3. Both are OK for me. One more thing: if the return value can only be NXT_OK or NXT_ERROR, I'd prefer return ret.
Do you plan to include X-Accel support when serving static files and proxying?
@melck Yes, but not in the next release this week.
@VBart Take a look, please. Just for style.
Thanks for the patch. I'll incorporate some changes.
Will you put the parsing of HTTP proxy responses into nxt_http_parse.c?
Yes.
If yes, what about simplifying some of the existing structures? a) nxt_http_request_parse_t, rp => nxt_http_parser_t, parser.
But it contains many request fields that are redundant for response parsing.
b) nxt_http_parse_request_init() => nxt_http_parse_init()
It makes sense only if it works with both request and response parsing structures.
c) it seems the field `offset` in nxt_http_request_parse_t is unused.
This was committed: 56f4085. Thanks.
My idea is that Unit should be smart enough to use the best method to serve each file.
Just a reminder: a tool or admin panel will be produced for generating conf.json.
I think there will be many developers who would like to contribute such tools,
since I believe Unit will be welcomed and popular in the future.
So consider this factor. NGINX conf is also smart, but it's not easy to generate.
Unit uses JSON, so that's not a problem, but I still think flexibility is important.
Sure. But each option adds complexity, increases the amount of documentation, and takes effort from users to learn. We should be careful there and avoid adding options without valid cases where manual tuning is really necessary.
Sure, but I'll publish it a bit later, as @igorsysoev is currently working on it, fixing some bugs.
Can you share it now? Thanks again.
Unfortunately I have nothing to share right now, as @igorsysoev is still working on the code. It also means that proxying won't be a part of 1.11 and will be moved to 1.12. Sorry. 😞
At the first iteration it will be very basic HTTP proxying.
but this is for multiple servers and load-balancing.
Will this version include upstreams?
Not in this version. Only basic straight HTTP proxy to one server.
I should note that Unit hasn't been planned to be an application server only. It's a general purpose server and proxy
Sounds good. Will HTTP cache be in the plan?
Yes.
So 1.11 will support static files but not proxying?
Only basic straight HTTP proxy to one server.
Will this support the HTTPS variable in PHP as of 1.12? And when can we expect the release? I mean a possible date range, not a specific time.
So 1.11 will support static files but not proxying?
Yes.
Only basic straight HTTP proxy to one server.
Will this support the HTTPS variable in PHP as of 1.12? And when can we expect the release? I mean a possible date range, not a specific time.
1.12 is planned around the middle of October, but the HTTP proxy feature has nothing to do with PHP applications and their request environment.
As for support for overriding request variables (including the HTTPS one), there's no specific release date scheduled right now, but it can be expected by the end of the year.
Is 1.11 ready to test?
What is the config option to use static files along with PHP?
- Removed an unnecessary check.
diff --git a/src/nxt_http_parse.c b/src/nxt_http_parse.c
--- a/src/nxt_http_parse.c
+++ b/src/nxt_http_parse.c
@@ -1081,10 +1081,6 @@ nxt_http_parse_complex_target(nxt_http_r
}
}
- if (state >= sw_quoted) {
- return NXT_HTTP_PARSE_INVALID;
- }
-
args:
for (/* void */; p < rp->target_end; p++) {
- Style
diff --git a/src/nxt_http_request.c b/src/nxt_http_request.c
--- a/src/nxt_http_request.c
+++ b/src/nxt_http_request.c
@@ -41,7 +41,6 @@ nxt_http_init(nxt_task_t *task, nxt_runt
nxt_int_t ret;
ret = nxt_h1p_init(task, rt);
-
if (ret != NXT_OK) {
return ret;
}
diff --git a/src/nxt_http_response.c b/src/nxt_http_response.c
--- a/src/nxt_http_response.c
+++ b/src/nxt_http_response.c
@@ -18,7 +18,7 @@ static nxt_int_t nxt_http_response_field
nxt_lvlhsh_t nxt_response_fields_hash;
-static nxt_http_field_proc_t nxt_response_fields[] = {
+static nxt_http_field_proc_t nxt_response_fields[] = {
{ nxt_string("Status"), &nxt_http_response_status, 0 },
{ nxt_string("Server"), &nxt_http_response_skip, 0 },
{ nxt_string("Date"), &nxt_http_response_field,
@@ -41,7 +41,7 @@ nxt_http_response_hash_init(nxt_task_t *
}
-nxt_int_t
+static nxt_int_t
nxt_http_response_status(void *ctx, nxt_http_field_t *field,
uintptr_t data)
{
@@ -65,7 +65,7 @@ nxt_http_response_status(void *ctx, nxt_
}
-nxt_int_t
+static nxt_int_t
nxt_http_response_skip(void *ctx, nxt_http_field_t *field, uintptr_t data)
{
field->skip = 1;
@@ -74,7 +74,7 @@ nxt_http_response_skip(void *ctx, nxt_ht
}
-nxt_int_t
+static nxt_int_t
nxt_http_response_field(void *ctx, nxt_http_field_t *field, uintptr_t offset)
{
nxt_http_request_t *r;
- Running make tests fails before running make:
./configure --tests && make tests
src/nxt_main.h:12:25: error: nxt_version.h: No such file or directory
But I'm not sure it's an issue.
- sendfile support.
If users are concerned about the performance of sending static files, sendfile support is welcome. Is it planned for 1.12?
1. Removed an unnecessary check.
diff --git a/src/nxt_http_parse.c b/src/nxt_http_parse.c
--- a/src/nxt_http_parse.c
+++ b/src/nxt_http_parse.c
@@ -1081,10 +1081,6 @@ nxt_http_parse_complex_target(nxt_http_r
}
}
- if (state >= sw_quoted) {
- return NXT_HTTP_PARSE_INVALID;
- }
-
args:
for (/* void */; p < rp->target_end; p++) {
Why is it unnecessary? It rejects invalid URIs with cropped encoded symbols, e.g. /%2
2. Style [...] 3. Running `make tests` fails before running `make`.
Thanks.
4. `sendfile` support. If users are concerned about the performance of sending static files, `sendfile` support is welcome. Is it planned for 1.12?
I'm not sure that sendfile() will be ready by 1.12, as a proper implementation requires much more logic.
It rejects invalid URIs with cropped encoded symbols, e.g. /%2
Got it.
Added test.
diff --git a/src/test/nxt_http_parse_test.c b/src/test/nxt_http_parse_test.c
--- a/src/test/nxt_http_parse_test.c
+++ b/src/test/nxt_http_parse_test.c
@@ -180,6 +180,11 @@ static nxt_http_parse_test_case_t nxt_h
}}
},
{
+ nxt_string("GET /%2 HTTP/1.0\r\n\r\n"),
+ NXT_HTTP_PARSE_INVALID,
+ NULL, { NULL }
+ },
+ {
nxt_string("GET /%20 HTTP/1.0\r\n\r\n"),
NXT_DONE,
&nxt_http_parse_test_request_line,
BTW, I tried it in NGINX and it passed.
A bit of rework.
- Do you think it's better to add a cleanup handler to the memory pool to ensure the file is closed? Some errors may happen during read/recv/send etc., and I'm not sure it's enough to close the file only in the buf completion handler.
- Use file_pos == file_end to indicate the end of reading the file (instead of r->out).
diff --git a/src/nxt_http_static.c b/src/nxt_http_static.c
--- a/src/nxt_http_static.c
+++ b/src/nxt_http_static.c
@@ -11,6 +11,8 @@
#define NXT_HTTP_STATIC_BUF_SIZE (128 * 1024)
+static void nxt_http_static_file_cleanup(nxt_task_t *task, void *obj,
+ void *data);
static void nxt_http_static_extract_extension(nxt_str_t *path,
nxt_str_t *extension);
static void nxt_http_static_body_handler(nxt_task_t *task, void *obj,
@@ -255,6 +257,8 @@ nxt_http_static_handler(nxt_task_t *task
body_handler = NULL;
}
+ nxt_mp_cleanup(r->mem_pool, nxt_http_static_file_cleanup, task, f, NULL);
+
nxt_http_request_header_send(task, r, body_handler);
r->state = &nxt_http_static_send_state;
@@ -273,6 +277,15 @@ fail:
static void
+nxt_http_static_file_cleanup(nxt_task_t *task, void *obj, void *data)
+{
+ nxt_file_t *f = obj;
+
+ nxt_file_close(task, f);
+}
+
+
+static void
nxt_http_static_extract_extension(nxt_str_t *path, nxt_str_t *extension)
{
u_char ch, *p, *end;
@@ -372,13 +385,13 @@ nxt_http_static_buf_completion(nxt_task_
r = data;
fb = r->out;
- if (nxt_slow_path(fb == NULL || r->error)) {
+ rest = fb->file_end - fb->file_pos;
+
+ if (nxt_slow_path(r->error || rest == 0)) {
goto clean;
}
- rest = fb->file_end - fb->file_pos;
size = nxt_buf_mem_size(&b->mem);
-
size = nxt_min(rest, (nxt_off_t) size);
n = nxt_file_read(fb->file, b->mem.start, size, fb->file_pos);
@@ -393,15 +406,10 @@ nxt_http_static_buf_completion(nxt_task_
goto clean;
}
+ fb->file_pos += n;
+
if (n == rest) {
- nxt_file_close(task, fb->file);
- r->out = NULL;
-
b->next = nxt_http_buf_last(r);
-
- } else {
- fb->file_pos += n;
- b->next = NULL;
}
b->mem.pos = b->mem.start;
@@ -414,11 +422,6 @@ clean:
nxt_mp_free(r->mem_pool, b);
nxt_mp_release(r->mem_pool);
-
- if (fb != NULL) {
- nxt_file_close(task, fb->file);
- r->out = NULL;
- }
}
- I think it's enough, as buf completion is always called during request termination in case of errors (otherwise, having the request-related buffers in the output queue after the request mempool cleanup would result in a segfault).
- Currently it's the indication that the file is closed.
Note that with your patch, closing file descriptors will happen later (only when the whole request has been sent to the client). If the client stops reading, that means keeping the descriptor open for the duration of the send timeout. From a resource economy point of view, it's better to close files as soon as possible.
Basic static file support has been committed in 08a8d15 and released with Unit 1.11.0.
Thanks! I will test it!
Does it support HTTP cache headers for static files? ETag or modification time and the HTTP 304 handshake?
Does it support HTTP cache headers for static files? ETag or modification time and the HTTP 304 handshake?
Currently there's no way to add custom response headers. Here's a relevant ticket: #313
Last-Modified and ETag are already supported, but 304 responses aren't yet.
I've seen Last-Modified in the source code.
Why can't the static handler respond with Last-Modified and a 304 HTTP status (in place of 200) when the If-Modified-Since request date is not earlier than the file date?
Is it possible to configure static files in the same path as the PHP path?
In nxt_http_static.c, near nxt_file_info around line 123, you can read the If-Modified-Since HTTP header from r->fields, convert the date into a timestamp, compare it with the file's timestamp, set the status to NXT_HTTP_NOT_MODIFIED and return directly (in place of returning the file with 200).
And if If-None-Match is found in the fields, you can calculate the ETag with the same method you already use and compare it with the header value.
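The comparison suggested above could be sketched like this (a hypothetical helper, not Unit's code; a production parser must also accept the legacy RFC 850 and asctime() date formats):

```c
#include <string.h>
#include <time.h>

/* Prototypes for POSIX/BSD functions that some headers hide behind
 * feature-test macros. */
char *strptime(const char *s, const char *format, struct tm *tm);
time_t timegm(struct tm *tm);

/* Parse the If-Modified-Since value as an RFC 1123 date and check it
 * against the file's modification time; returns 1 when a 304 may be
 * sent. */
static int
not_modified(const char *if_modified_since, time_t file_mtime)
{
    struct tm  tm;

    memset(&tm, 0, sizeof(struct tm));

    if (strptime(if_modified_since, "%a, %d %b %Y %H:%M:%S GMT", &tm)
        == NULL)
    {
        return 0;    /* unparsable date: send the full 200 response */
    }

    return file_mtime <= timegm(&tm);
}
```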
I've seen Last-Modified in the source code.
Why can't the static handler respond with Last-Modified and a 304 HTTP status (in place of 200) when the If-Modified-Since request date is not earlier than the file date?
Because right now we don't have a header filter subsystem like in nginx. It has to be written for the Unit architecture. In nginx, all output is passed through a chain of header and body filters. This allows handling many situations and compressing content independently of its source. We plan to introduce something similar in Unit.
If we put this handling inside the static content handler, it would be a temporary in-place hack that works only for static files, but not for applications and proxying.
Our goal is to gradually build a flexible architecture for a full-fledged web server, and each piece of functionality is introduced when the relevant part of the architecture is ready. Having multiple content handlers (applications, static, and soon proxy), we can now start work on chain filters for output.
With chain filters, it will then be possible to add 304 Not Modified and range responses, compression, and generic content caching.
Is it possible to configure static files in the same path as the PHP path?
Sure. You can split requests in the router and pass all .php requests to the application, like this:
{
    "routes": [
        {
            "match": {
                "uri": "*.php"
            },
            "action": {
                "pass": "applications/php-app"
            }
        },
        {
            "action": {
                "share": "/www/php-app/"
            }
        }
    ],
    "applications": {
        "php-app": {
            "type": "php",
            "root": "/www/php-app/"
        }
    }
}
from unit.
Is it possible to configure static files in the same path as the PHP path?
Here is my working example for a similar case. I have replaced my nginx static settings with these settings.
Current limitations after the replacement:
- Still not getting the client IP in Unit, but I guess this should be solved when Unit supports the proxy feature.
- I have not tried it, but it seems there is no regex support, because it is not mentioned in the documentation, so I cannot write it as \.(png|jpg) like in nginx.
{
    "listeners": {
        "*:80": {
            "pass": "routes"
        }
    },
    "routes": [
        {
            "match": {
                "uri": [
                    "*.manifest",
                    "*.appcache",
                    "*.html",
                    "*.json",
                    "*.rss",
                    "*.atom",
                    "*.jpg",
                    "*.jpeg",
                    "*.gif",
                    "*.png",
                    "*.ico",
                    "*.cur",
                    "*.gz",
                    "*.svg",
                    "*.svgz",
                    "*.mp4",
                    "*.ogg",
                    "*.ogv",
                    "*.webm",
                    "*.htc",
                    "*.css",
                    "*.js",
                    "*.ttf",
                    "*.ttc",
                    "*.otf",
                    "*.eot",
                    "*.woff",
                    "*.woff2",
                    "/robot.txt"
                ]
            },
            "action": {
                "share": "/var/www/html/app/public"
            }
        },
        {
            "action": {
                "pass": "applications/laravel"
            }
        }
    ],
    "applications": {
        "laravel": {
            "type": "php",
            "user": "appuser",
            "group": "appuser",
            "root": "/var/www/html/app/public",
            "script": "index.php",
            "index": "index.php"
        }
    }
}
from unit.
@VBart you are welcome. When I wrote it yesterday, I thought about contributing this example to the docs, to provide examples other than falling back to the static path. Would this be possible, or is it not needed?
from unit.
@mostafahussein right now that's probably the way to go, but this huge list of extensions doesn't look nice, and that's not how we want Unit configuration to look. Maybe we can add some simpler version of this example to the documentation.
@artemkonev please look ^^^
from unit.
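A shorter variant of the same routing idea is possible when static assets can live under a dedicated prefix. The paths and the /assets/ prefix below are illustrative assumptions, not taken from the configuration above:

```json
{
    "routes": [
        {
            "match": {
                "uri": ["/assets/*", "/robots.txt", "/favicon.ico"]
            },
            "action": {
                "share": "/var/www/html/app/public"
            }
        },
        {
            "action": {
                "pass": "applications/laravel"
            }
        }
    ]
}
```

This trades the long extension list for a layout convention: anything under the assets prefix is served statically, and everything else reaches the application.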
Thanks for your configuration example!
As for cache header handling: the Unit static handler exists to handle static files, and static-file HTTP headers are part of static file handling. When you respond with 304, it's like responding with the entire file; it's a static response.
So I'm not shocked that the static file handler would handle this specifically, and I'm not sure the other handlers need to handle it in a generic manner at this stage.
An HTTP proxy with a cache must use asynchronous heuristics to check the freshness of the upstream static file, or else just pass the upstream headers to the client without changing them.
And I think Unit is more like an application server (and generally the endpoint generator), not an HTTP reverse proxy load balancer; if I need that, I use full nginx!
It must serve static files or dynamic content (generated by script or compiled code) at the end.
That's why serving static files is important for an application server. More and more applications are single-page applications in JavaScript plus HTML5 that then call REST APIs. With Unit supporting static files, you can unite the two (API plus static) in a single Unit instance, all in a single deployment container image.
from unit.