haproxy / haproxy

HAProxy Load Balancer's development branch (mirror of git.haproxy.org)

Home Page: https://git.haproxy.org/

License: Other

Makefile 0.57% C 95.79% Perl 0.05% Shell 1.01% Python 0.17% Lua 0.14% HTML 0.01% SmPL 0.04% Vim Script 0.07% C++ 2.16%
haproxy load-balancer reverse-proxy proxy http http2 cache fastcgi high-availability https

haproxy's Introduction

The HAProxy documentation has been split into a number of different files for
ease of use.

Please refer to the following files depending on what you're looking for :

  - INSTALL for instructions on how to build and install HAProxy
  - BRANCHES to understand the project's life cycle and what version to use
  - LICENSE for the project's license
  - CONTRIBUTING for the process to follow to submit contributions

The more detailed documentation is located in the doc/ directory :

  - doc/intro.txt for a quick introduction on HAProxy
  - doc/configuration.txt for the configuration's reference manual
  - doc/lua.txt for Lua's reference manual
  - doc/SPOE.txt for how to use the SPOE engine
  - doc/network-namespaces.txt for how to use network namespaces under Linux
  - doc/management.txt for the management guide
  - doc/regression-testing.txt for how to use the regression testing suite
  - doc/peers.txt for the peers protocol reference
  - doc/coding-style.txt for how to adopt HAProxy's coding style
  - doc/internals for developer-specific documentation (not all up to date)

haproxy's People

Contributors

a-denoyelle, aerostitch, bedis, ben51degrees, bjacquin, capflam, cbonte, chipitsine, cognet, daniel-corbett, darlelet, ddosen, devnexen, emericbr, godbach, haproxyfred, horms, jjh74, jmagnin, lukastribus, nmerdan, piba-nl, rlebreton, thierry-f-78, timwolla, vincentbernat, wdauchy, wlallemand, wtarreau, zaga00


haproxy's Issues

TLSv1.3 breaks when KeyUpdate messages are used

As reported by Adam Langley:
https://www.mail-archive.com/[email protected]/msg32495.html

HAProxy breaks KeyUpdate messages in TLSv1.3 due to the code that disables TLS renegotiation.

Chromium enabled KeyUpdate messages in Canary, here:
Send KeyUpdates after the first post-handshake write
chromium/chromium@68df3af

Which led to:
I cannot sign in to GitHub with the last Canary
https://bugs.chromium.org/p/chromium/issues/detail?id=923685

and the revert:
Disable sending KeyUpdates by default
chromium/chromium@cee722a

We fixed this in haproxy with commit 526894f which must be backported to 1.8 and 1.9.
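For illustration, a minimal sketch of the idea behind such a fix (this is not the actual code from commit 526894f): a post-handshake message on a TLSv1.3 connection must not be treated as a renegotiation attempt, since TLSv1.3 has no renegotiation at all.

#include <openssl/ssl.h>

/* Sketch only: decide whether a new handshake message should be rejected
 * as a forbidden renegotiation. On TLSv1.3 it is a legitimate
 * post-handshake message (e.g. KeyUpdate) and must be allowed. */
static int reject_as_renegotiation(SSL *ssl)
{
#ifdef TLS1_3_VERSION
	if (SSL_version(ssl) == TLS1_3_VERSION)
		return 0;   /* KeyUpdate and friends: let them through */
#endif
	return 1;           /* pre-1.3: a new handshake here is a renegotiation */
}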

At some point in time, this will be enabled in chromium again.

Also see openssl issues:
openssl/openssl#8068
openssl/openssl#8069

option httpclose causes 400 with http/2

Sander Klein reported this issue here on the mailing list: https://www.mail-archive.com/[email protected]/msg29067.html

Output of haproxy -vv and uname -a

haproxy 1.8.4

What's the configuration?

unknown

Steps to reproduce the behavior

apparently nothing special

Actual behavior

From the message :
"After enabling all requests to a certain backend started to give 400's while requests to other backend
worked as expected. I get the following in haproxy.log:

Feb 21 14:31:35 localhost haproxy[22867]:
2001:bad:coff:ee:cd97:5710:4515:7c73:52553 [21/Feb/2018:14:31:30.690] backend-name/backend-04 1/0/1/-1/4758 400 1932 - - CH-- 518/215/0/0/0 0/0 {host.name.tld|Mozilla/5.0 (Mac||https://referred.name.tld/some/string?f=%7B%22la_la_la%22:%7B%22v%22:%22thingy%22%7D%7D} {} "GET /some/path/here/filename.jpg HTTP/1.1"

The backend server is nginx which proxies to a nodejs application. When
looking at the request on nginx it gives an HTTP 499 error.
"

Do you have any idea what may have caused this?

Nothing, failed to reproduce. Since 1.8.4 we fixed many H2 issues, and some half-closed issues at the stream layer, thus it might be that this problem is now fixed. No more tests can be done without more info.

Do you have an idea how to solve the issue?

Removing httpclose fixes the issue.

HAProxy - CA/CRL chain becomes broken in time under heavy load

Output of haproxy -vv and uname -a

HA-Proxy version 1.8.4-1deb90d 2018/02/08
Copyright 2000-2018 Willy Tarreau <[email protected]>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label
  OPTIONS = USE_SLZ=1 USE_REGPARM=1 USE_OPENSSL=1 USE_SYSTEMD=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
	[SPOE] spoe
	[COMP] compression
	[TRACE] trace

Linux *redacted* 3.10.0-327.18.2.el7.x86_64 #1 SMP Thu May 12 11:03:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

What's the configuration?

#### global
   chroot                              /var/lib/haproxy
   stats socket                        *redacted*
   user                                haproxy
   group                               haproxy
   daemon
   debug
   log                                 *redacted* local2
   maxconn                             50000
   tune.ssl.default-dh-param           2048
   log-send-hostname
   ssl-default-bind-options no-sslv3
   ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
   ssl-default-server-options no-sslv3
   ssl-default-server-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS

#### defaults
   mode                                http
   log                                 global
   option                              httplog
   option                              dontlognull
   option                              forwardfor  except 127.0.0.0/8
   option                              redispatch
   retries                             3
   timeout connect                     5s
   timeout client                      60s
   timeout server                      60s
   timeout queue                       5s
   timeout http-request                30s
   timeout http-keep-alive             15s
   timeout check                       10s
   timeout tunnel                      120m
   timeout client-fin                  5s
   timeout server-fin                  5s

#### frontend  frontend_ssl_auth
   bind *redacted* ssl crt /etc/ssl/hostname_le.pem no-sslv3 ca-file /etc/ssl/clients_chain.pem verify required crl-file /etc/ssl/doxxbet/clients_chain.pem
   log-format %ci:%cp\ [%t]\ {%{+Q}[ssl_c_s_dn(cn)],%[ssl_c_verify]}\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r
   maxconn 10000
   default_backend be_backend1

#### backend be_backend1
  balance roundrobin
   fullconn 4000
   default-server maxconn 1000 inter 5s rise 2 fall 4
   option httpchk GET /healthcheck
   http-check expect status 200
   server server_1 *redacted* check weight 10
   server server_2 *redacted* check weight 10
   server server_3 *redacted* check weight 10
   server server_4 *redacted* check weight 10
   http-request set-header X-Forwarded-Port %[dst_port]
   http-response del-header Server
   http-response del-header X-AspNet-Version
   http-response del-header X-Powered-By
   http-response del-header X-Application-Context

Steps to reproduce the behavior

  1. There are 27 CAs with 27 corresponding CRLs, and everything works as expected when we start haproxy. We are using a complex CA structure with multiple sub_ca trees and so on.
  2. The server processes a lot of requests (around 20 million per day); some of them are long-polling SignalR connections and some are normal http/https. There are also around 20 frontends and around 50 backends. Some frontends use 2-way SSL, some don't.
  3. When the clients chain changes (for example a reissued CRL), we update it on the filesystem and restart haproxy. This is done by a cron script on a regular basis, but I've already checked the logs and the restarts don't correspond with our issue.
  4. Send a load of requests and just wait :( It's hard to reproduce, because it doesn't seem connected to anything special, at least nothing I could identify.

This issue was also seen in older releases of HAProxy, starting from version

Actual behavior

When HAProxy starts, everything works as expected. Sometimes it keeps working for a few weeks, sometimes more, sometimes less. Then something happens and the frontend starts rejecting clients as not trusted:

Feb  8 10:20:10 redacted haproxy[20999]: redacted:22941 [08/Feb/2019:10:20:10.345] frontend_ssl_auth/1: SSL client certificate not trusted

The funny thing is that it only affects some sub_ca's in clients_chain. Some clients work and some don't, depending on which CA signed their certificates.

We have already checked the clients_chain.pem file when this situation happens and the file is OK. It seems to me that the chain data in memory is broken. The fix is simple, just restart haproxy entirely, but we would like to have this issue fixed because it's painful for us.

Expected behavior

Do you have any idea what may have caused this?

It seems to me that, in some situation, some part of HAProxy writes to the memory area where the certificates are stored, which breaks some CAs in the chain, or maybe the CRLs.

Do you have an idea how to solve the issue?

h2: incorrect sign extension in h2_peek_frame_hdr()

This problem was already diagnosed. In 1.8 (and only this version), h2_peek_frame_hdr() computes the frame length using an incorrect shift :

h->len = *b->p << 16;

Here, b->p is a (char *), so we get a large negative value when chars are signed, resulting in a negative length. Fortunately the length is always checked later, so this bug has no impact. The required fix is trivial :

h->len = *(uint8_t *)b->p << 16;

1.9 is safe thanks to the buffer API changes.
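For illustration, a tiny standalone program (not HAProxy code) showing how the first byte of the frame header is read on a platform where char is signed:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	char byte = (char)0xff;          /* first byte of a frame header on the wire */

	int as_signed   = byte;          /* sign-extended to -1 where char is signed */
	int as_unsigned = (uint8_t)byte; /* 255, as intended */

	/* Shifting the sign-extended value left by 16 is what produced the
	 * bogus negative length in 1.8; the uint8_t cast avoids it. */
	printf("signed=%d unsigned=%d fixed_len=%d\n",
	       as_signed, as_unsigned, as_unsigned << 16);
	return 0;
}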

h2: make sure never to send a GOAWAY before SETTINGS

This issue was reported around September 2018 (no config here); it affects at least 1.8.13 and most 1.9-dev versions from that time. It was detected with an HTTP/2 client written in Go: when haproxy wants to gracefully close an unused idle connection due to a process restart, it politely emits the GOAWAY frame to indicate it's closing, but if the client hasn't even sent the H2 preface yet, it complains "goaway received before settings".

It should be checked that the settings were already sent before sending GOAWAY, and if not, then the GOAWAY frame should not be sent at all.

It's possible that the latest changes on 1.9 addressed this since GOAWAY is sent in more controlled conditions, but this needs to be double-checked. For 1.8 it's likely that an extra h2c flag would be needed to indicate the fact that SETTINGS were sent to take care of this.
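A minimal sketch of what such a flag-based guard could look like (all names here are hypothetical stand-ins, not the actual h2c structures or flags):

#include <stdio.h>

#define H2C_FL_SETTINGS_SENT 0x01   /* hypothetical flag: our SETTINGS went out */

struct h2c_stub {
	unsigned int flags;
};

/* Only emit GOAWAY once our SETTINGS frame has been sent; otherwise the
 * peer would rightfully complain about a protocol violation. */
static void maybe_send_goaway(struct h2c_stub *h2c)
{
	if (!(h2c->flags & H2C_FL_SETTINGS_SENT)) {
		printf("closing silently, no GOAWAY\n");
		return;
	}
	printf("sending GOAWAY\n");
}

int main(void)
{
	struct h2c_stub fresh = { 0 };
	struct h2c_stub established = { H2C_FL_SETTINGS_SENT };

	maybe_send_goaway(&fresh);
	maybe_send_goaway(&established);
	return 0;
}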

the last-session field in the backend is not always updated

This is an entry for a bug met a few months ago during a test, that could not be reproduced.

Output of haproxy -vv and uname -a

haproxy-1.9-dev earlier than 1.9-dev10, no exact version known.
Linux, don't remember what exact version, should be irrelevant here.

What's the configuration?

I don't have it anymore unfortunately. It was a test config so very basic, with a stats page, and did have a low "maxconn" setting on the servers (1 or 2 servers only).

Steps to reproduce the behavior

  1. inject enough HTTP traffic that the servers reach their maxconn value and that the backend always has some pending connections in the queue
  2. open the stats page.

Actual behavior

When the problem appears, the "last session" field for the servers and/or the backend doesn't evolve anymore, as if picking a connection directly from the queue didn't update this field. This behavior is misleading because it can make one think that a server failed to process traffic for a while when this is not true, and it can result in unneeded troubleshooting.

Expected behavior

The last session field should continue to reflect the fact that the servers and the backend continue to process connections.

Do you have any idea what may have caused this?

No and I couldn't reproduce it outside of this environment when tested again later on 1.9-dev10.

Do you have an idea how to solve the issue?

I think a code review should reveal a location where a last_sess update is missing when a connection is assigned to a server, and this will tell us in what exact situation the problem happens.
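As an illustration of the kind of fix suggested above, a hedged sketch (structure and function names are simplified stand-ins, not the real stream/queue code): whenever a queued connection is assigned to a server, both the server's and the backend's last-session timestamps should be refreshed.

#include <stdio.h>
#include <time.h>

/* Simplified stand-ins for the real structures. */
struct srv_stub { time_t last_sess; };
struct be_stub  { time_t last_sess; };

/* Sketch: the place that dequeues a pending connection and assigns it to a
 * server must also update both last-session counters, otherwise the stats
 * page shows a stale "last session" value under sustained queuing. */
static void assign_from_queue(struct be_stub *be, struct srv_stub *srv)
{
	time_t now = time(NULL);

	srv->last_sess = now;
	be->last_sess  = now;
}

int main(void)
{
	struct be_stub be = { 0 };
	struct srv_stub srv = { 0 };

	assign_from_queue(&be, &srv);
	printf("srv=%ld be=%ld\n", (long)srv.last_sess, (long)be.last_sess);
	return 0;
}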

ebtree: alignment issues on Sparc

It was reported here about version 1.7 on Solaris/Sparc64 :

https://www.mail-archive.com/[email protected]/msg25937.html

The problem is caused by the presence of __attribute__((packed)) on the ebtree definition. Indeed, ((packed)) automatically disables alignment. Here we only want it so that eb32_nodes are not needlessly inflated twice and that the uint32 immediately follows the eb_node. But it also causes eb_nodes to be misaligned when the struct is placed anywhere within another struct, and its pointers cannot be accessed on some 64-bit platforms which don't support unaligned accesses.

A short-term solution would possibly consist in disabling the __attribute__((packed)) on unsafe platforms (possibly only Sparc64 and MIPS64 by now).

Other long-term solutions should be studied to force alignment of the beginning of the struct even when the end is not padded with holes. Among the potential solutions, placing this struct into a transparent union containing a single pointer would probably work.
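A small standalone sketch of that last idea (illustrative only, not the real ebtree types): wrapping a packed struct in a union together with a pointer restores pointer alignment for the start of the struct, while the packed layout is preserved inside it.

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct packed_node {
	void *branches[2];
	uint32_t key;                 /* follows the pointers with no padding */
} __attribute__((packed));        /* alignment drops to 1 */

union aligned_node {
	struct packed_node node;
	void *align;                  /* forces pointer alignment for the union */
};

struct container {
	char c;
	union aligned_node n;         /* starts on a pointer boundary again */
};

int main(void)
{
	printf("packed struct alignment:  %zu\n", _Alignof(struct packed_node));
	printf("union alignment:          %zu\n", _Alignof(union aligned_node));
	printf("offset of n in container: %zu\n", offsetof(struct container, n));
	return 0;
}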

This issue is tagged minor because apparently there are almost no more users of such platforms and the option of building for 32 bits often remains a valid workaround.

Conditional jump or move depends on uninitialised value(s)

Output of haproxy -vv and uname -a

Linux *snip* 4.4.0-141-generic #167-Ubuntu SMP Wed Dec 5 10:40:15 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
HA-Proxy version 2.0-dev0-32211a-258 2019/02/01 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O0 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_ZLIB=1 USE_THREAD=1 USE_OPENSSL=1 USE_SYSTEMD=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64).
Built with OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built without PCRE or PCRE2 support (using libc's regex instead)
Encrypted password support via crypt(3): yes

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTTP       side=FE
              h2 : mode=HTX        side=FE|BE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
	[SPOE] spoe
	[COMP] compression
	[CACHE] cache
	[TRACE] trace

What's the configuration?

defaults
	option http-use-htx

listen test
	mode http
	bind *:8080

	http-request set-header Host example.com

	server example example.com:443 ssl verify none sni req.hdr(host)

Steps to reproduce the behavior

  1. valgrind --track-origins=yes ./haproxy -d -f ./haproxy.cfg
  2. curl localhost:8080

Actual behavior

==24645== Conditional jump or move depends on uninitialised value(s)
==24645==    at 0x4E3F99: si_update_both (stream_interface.c:853)
==24645==    by 0x450ABE: process_stream (stream.c:2502)
==24645==    by 0x515228: process_runnable_tasks (task.c:432)
==24645==    by 0x4900D3: run_poll_loop (haproxy.c:2621)
==24645==    by 0x4900D3: run_thread_poll_loop (haproxy.c:2686)
==24645==    by 0x40A431: main (haproxy.c:3315)
==24645==  Uninitialised value was created by a heap allocation
==24645==    at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==24645==    by 0x51F049: __pool_refill_alloc (memory.c:166)
==24645==    by 0x42B618: pool_alloc_dirty (memory.h:259)
==24645==    by 0x42B618: pool_alloc (memory.h:272)
==24645==    by 0x42B618: http_alloc_txn (proto_http.c:7176)
==24645==    by 0x517B67: frontend_accept (frontend.c:151)
==24645==    by 0x44A731: stream_new (stream.c:315)
==24645==    by 0x44AD13: stream_create_from_cs (stream.c:84)
==24645==    by 0x4B24C0: h1s_new_cs (mux_h1.c:245)
==24645==    by 0x4B24C0: h1s_create (mux_h1.c:317)
==24645==    by 0x4B3C72: h1_init (mux_h1.c:413)
==24645==    by 0x503F43: conn_install_mux (connection.h:839)
==24645==    by 0x503F43: conn_install_mux_fe (connection.h:1117)
==24645==    by 0x503F43: conn_complete_session (session.c:453)
==24645==    by 0x505277: session_accept_fd (session.c:291)
==24645==    by 0x4EBAB7: listener_accept (listener.c:635)
==24645==    by 0x510C1C: fdlist_process_cached_events (fd.c:441)
==24645==    by 0x510C1C: fd_process_cached_events (fd.c:460)

Expected behavior

valgrind is silent.

Do you have any idea what may have caused this?

The http_hdr_rewind call in int connect_server(struct stream*) accesses the sov member of struct http_msg s->txn->req:

rewind = s->txn ? http_hdr_rewind(&s->txn->req) : co_data(&s->req);

This member is undefined when running in HTX mode.

This issue was found by adding if statements accessing various members of struct http_msg in various places:

$ git diff include/proto/proto_http.h
diff --git i/include/proto/proto_http.h w/include/proto/proto_http.h
index 2e2163ee..d042fbc9 100644
--- i/include/proto/proto_http.h
+++ w/include/proto/proto_http.h
@@ -175,6 +175,7 @@ struct buffer *http_error_message(struct stream *s);
  */
 static inline int http_hdr_rewind(const struct http_msg *msg)
 {
+       if (msg->sov) printf("ACCESS OF UNDEFINED VALUE\n");
        return msg->eoh + msg->eol - msg->sov;
 }

causes this additional message to appear:

==25659== Conditional jump or move depends on uninitialised value(s)
==25659==    at 0x528DC2: http_hdr_rewind (proto_http.h:178)
==25659==    by 0x52E055: connect_server (backend.c:1517)
==25659==    by 0x466C7D: sess_update_stream_int (stream.c:928)
==25659==    by 0x46AD15: process_stream (stream.c:2305)
==25659==    by 0x588B86: process_runnable_tasks (task.c:432)
==25659==    by 0x4C4ECB: run_poll_loop (haproxy.c:2621)
==25659==    by 0x4C5243: run_thread_poll_loop (haproxy.c:2686)
==25659==    by 0x4C6C90: main (haproxy.c:3315)
==25659==  Uninitialised value was created by a heap allocation
==25659==    at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==25659==    by 0x594D2F: __pool_refill_alloc (memory.c:166)
==25659==    by 0x420A07: pool_alloc_dirty (memory.h:259)
==25659==    by 0x420A29: pool_alloc (memory.h:272)
==25659==    by 0x43813D: http_alloc_txn (proto_http.c:7176)
==25659==    by 0x58B6A7: frontend_accept (frontend.c:151)
==25659==    by 0x464D8A: stream_new (stream.c:315)
==25659==    by 0x464347: stream_create_from_cs (stream.c:84)
==25659==    by 0x4F87C2: h1s_new_cs (mux_h1.c:245)
==25659==    by 0x4F8A40: h1s_create (mux_h1.c:317)
==25659==    by 0x4F8E4C: h1_init (mux_h1.c:413)
==25659==    by 0x56F7BB: conn_install_mux (connection.h:839)

and

diff --git i/include/proto/proto_http.h w/include/proto/proto_http.h
index 2e2163ee..ef89b222 100644
--- i/include/proto/proto_http.h
+++ w/include/proto/proto_http.h
@@ -173,8 +173,9 @@ struct buffer *http_error_message(struct stream *s);
  * equals the sum of the two before forwarding and is zero after forwarding,
  * so the difference cancels the rewinding.
  */
-static inline int http_hdr_rewind(const struct http_msg *msg)
+static inline int http_hdr_rewind(struct http_msg *msg)
 {
+       msg->sov = 0;
        return msg->eoh + msg->eol - msg->sov;
 }

causes valgrind to remain silent.

Do you have an idea how to solve the issue?

Either initialize the value in HTX mode or remove the call to http_hdr_rewind for HTX mode (it does not appear to change anything).
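A hedged sketch of the second option (names are simplified stand-ins; the real code lives in backend.c and the HTX detection differs): skip the legacy header-rewind computation entirely when the transaction runs in HTX mode, where msg->sov is never initialized.

#include <stdio.h>

/* Simplified stand-ins for struct http_msg and the transaction. */
struct msg_stub { int eoh, eol, sov; };
struct txn_stub { int htx_mode; struct msg_stub req; };

static int hdr_rewind(const struct msg_stub *msg)
{
	return msg->eoh + msg->eol - msg->sov;
}

/* Sketch: only consult the legacy offsets when not in HTX mode; in HTX
 * mode fall back to a value that does not read uninitialized fields. */
static int safe_rewind(const struct txn_stub *txn, int co_data)
{
	if (txn->htx_mode)
		return co_data;
	return hdr_rewind(&txn->req);
}

int main(void)
{
	struct txn_stub legacy = { 0, { 100, 2, 0 } };
	struct txn_stub htx    = { 1, { 0, 0, 0 } };   /* req fields unused */

	printf("legacy=%d htx=%d\n", safe_rewind(&legacy, 0), safe_rewind(&htx, 0));
	return 0;
}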

h2: study creation of a CLOSING state

As discussed here, we would benefit from the creation of an intermediary state before the CLOSED state for H2 streams :

https://www.mail-archive.com/[email protected]/msg32456.html

The point is that when a connection is waiting for the last stream to disappear, it's closed as soon as the END_STREAM is sent, and the WINDOW_UPDATE frames that the client then sends meet a closed TCP connection, leading to TCP resets being sent and causing confusion for the client. It's worth noting that the client doesn't always have a choice of behavior since its TCP stack could abort the stream on an out-of-order packet.

A new CLOSING state could be introduced between the final END_STREAM and CLOSED : the stream would then switch to the next state (CLOSED) once the sum of positive increments on the WINDOW_UPDATE frames matches the remaining data sent. It's still a bit tricky, as a client may very well emit big jumps since it advertises a window and not ACKs, so very likely only the positive increments should be accounted for.
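A rough sketch of that accounting (purely illustrative, no relation to the real h2s state machine): while in the hypothetical CLOSING state, only positive WINDOW_UPDATE increments are summed, and the stream moves to CLOSED once they cover the data still in flight when END_STREAM was sent.

#include <stdio.h>
#include <stdint.h>

enum state { ST_CLOSING, ST_CLOSED };   /* hypothetical states */

struct stream_stub {
	enum state st;
	uint64_t in_flight;   /* bytes still unacknowledged at END_STREAM time */
	uint64_t acked;       /* sum of positive WINDOW_UPDATE increments */
};

/* Account a WINDOW_UPDATE; non-positive increments are ignored since the
 * peer advertises a window, not an ACK. */
static void on_window_update(struct stream_stub *s, int32_t increment)
{
	if (s->st != ST_CLOSING || increment <= 0)
		return;
	s->acked += (uint64_t)increment;
	if (s->acked >= s->in_flight)
		s->st = ST_CLOSED;   /* peer consumed everything: safe to really close */
}

int main(void)
{
	struct stream_stub s = { ST_CLOSING, 30000, 0 };

	on_window_update(&s, 16384);
	on_window_update(&s, 16384);
	printf("state=%s\n", s.st == ST_CLOSED ? "CLOSED" : "CLOSING");
	return 0;
}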

This discussion should also involve the IETF HTTP-WG to check if there's any expected risk in doing so, and if it could benefit other implementations, and/or fix some reported problems like PRIORITY dependencies on closed streams.

SRV record weight conflicting with agent-check

Output of haproxy -vv and uname -a

/ # haproxy -vv
HA-Proxy version 1.8.17 2019/01/08
Copyright 2000-2019 Willy Tarreau <[email protected]>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-null-dereference -Wno-unused-label
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2q  20 Nov 2018
Running on OpenSSL version : OpenSSL 1.0.2q  20 Nov 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.5
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.42 2018-03-20
Running on PCRE version : 8.42 2018-03-20
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
        [SPOE] spoe
        [COMP] compression
        [TRACE] trace

/ # uname -a
Linux core-haproxy-ingress-controller-5hmk6 4.15.0-1036-azure #38~16.04.1-Ubuntu SMP Fri Dec 7 03:21:52 UTC 2018 x86_64 Linux

What's the configuration?

backend platform-platform-re-worker-websocket
    mode http
    balance roundrobin
    timeout tunnel 60s
    server-template server-dns 100 _websocket._tcp.platform-re-worker.platform.svc.cluster.local resolvers kubernetes resolve-prefer ipv4 init-addr none check inter 2s weight 100 agent-check agent-port 9991

Steps to reproduce the behavior

I am running haproxy deployed using haproxy-ingress in a kubernetes environment with CoreDNS as the DNS service. Dynamic scaling is done using SRV lookup, which allows the correct pod port to be discovered. The platform-re-worker has an agent-check which is returning a weight percentage, so that we can control the number of connections each server receives. Therefore I have also specified "weight 100" on the server-template line.

Actual behavior

Using the stats port, I can get the current weights of the two platform-re-worker pods that are running:

$ curl -s 'localhost:1936?stats;csv' | grep -E 'weight|re-worker-websocket.*,UP,' | cut -d, -f1,2,18,19
# pxname,svname,status,weight
platform-platform-re-worker-websocket,server-dns1,UP,50
platform-platform-re-worker-websocket,server-dns2,UP,55
platform-platform-re-worker-websocket,BACKEND,UP,105

The problem is that the weights keep flipping back and forth between the expected values (between 0 and 100) and the value 1:

$ curl -s 'localhost:1936?stats;csv' | grep -E 'weight|re-worker-websocket.*,UP,' | cut -d, -f1,2,18,19
# pxname,svname,status,weight
platform-platform-re-worker-websocket,server-dns1,UP,1
platform-platform-re-worker-websocket,server-dns2,UP,1
platform-platform-re-worker-websocket,BACKEND,UP,2

The weights change all the time, every second.

Expected behavior

I would expect the weight to be based on the configured weight, and the percentage returned by the agent-check.
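As a concrete illustration of that expectation, a tiny sketch (not HAProxy's actual weight handling): the agent-check percentage is applied to the weight configured on the server line, which is what the first stats output above shows.

#include <stdio.h>

/* Sketch of the expected computation: the configured weight scaled by the
 * percentage returned by the agent-check. */
static int effective_weight(int configured, int agent_percent)
{
	return configured * agent_percent / 100;
}

int main(void)
{
	/* "weight 100" on the server-template line */
	printf("agent says 50%% -> %d\n", effective_weight(100, 50));  /* 50 */
	printf("agent says 55%% -> %d\n", effective_weight(100, 55));  /* 55 */
	return 0;
}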

Do you have any idea what may have caused this?

I believe this to be a conflict between setting the weights based on the weight from the SRV record lookup, and the weight set as the agent-check percentage of the configured "weight 100". If I disable SRV lookups the weights are set correctly, based only on the configured weight and agent-check.

backend platform-platform-re-worker-8001
    mode http
    balance roundrobin
    timeout tunnel 60s
    server-template server-dns 100 platform-re-worker.platform.svc.cluster.local:8001 resolvers kubernetes resolve-prefer ipv4 init-addr none check inter 2s weight 100 agent-check agent-port 9991
/ # curl -s 'localhost:1936?stats;csv' | grep -E 'weight|re-worker-8001.*,UP,' | cut -d, -f1,2,18,19
# pxname,svname,status,weight
platform-platform-re-worker-8001,server-dns1,UP,55
platform-platform-re-worker-8001,server-dns2,UP,50
platform-platform-re-worker-8001,BACKEND,UP,105

There is also some issue with how haproxy is setting the weights from the SRV records. This is what the SRV records look like:

/ # dig _websocket._tcp.platform-re-worker.platform.svc.cluster.local SRV

; <<>> DiG 9.12.3 <<>> _websocket._tcp.platform-re-worker.platform.svc.cluster.local SRV
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46191
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 2

;; QUESTION SECTION:
;_websocket._tcp.platform-re-worker.platform.svc.cluster.local. IN SRV

;; ANSWER SECTION:
_websocket._tcp.platform-re-worker.platform.svc.cluster.local. 5 IN SRV 0 50 8001 10-182-1-38.platform-re-worker.platform.svc.cluster.local.
_websocket._tcp.platform-re-worker.platform.svc.cluster.local. 5 IN SRV 0 50 8001 10-182-2-26.platform-re-worker.platform.svc.cluster.local.

;; ADDITIONAL SECTION:
10-182-1-38.platform-re-worker.platform.svc.cluster.local. 5 IN A 10.182.1.38
10-182-2-26.platform-re-worker.platform.svc.cluster.local. 5 IN A 10.182.2.26

;; Query time: 1 msec
;; SERVER: 10.128.0.10#53(10.128.0.10)
;; WHEN: Fri Feb 22 16:12:47 UTC 2019
;; MSG SIZE  rcvd: 501

As you can see, the SRV records specify a weight of "50". It's strange that haproxy was setting the weight to "1" as a result of the SRV lookups; maybe there is a bug in that area as well. But, as mentioned above, disabling SRV records makes the weights work as they should.

Do you have an idea how to solve the issue?

I think that if a user explicitly configures a weight and an agent-check, the weight returned by the SRV lookup should be ignored. Not using SRV lookups serves as a workaround, but I would still like to use them to discover ports, and use agent-checks at the same time.

Truncated responses when using abortonclose + shutr

This is a summary of several old reports indicating that haproxy would truncate some responses in 1.8 and 1.7 if option abortonclose is set and the client closes. It could be related to comparable reports of truncated compressed objects. It was reported that 1.7.0 didn't have the problem, and it looked related to the long chain of stream-processing fixes that went into 1.7 to address unkillable connections.

It is uncertain whether current versions are now safe from this, or even whether the issue was fixed in 1.7 and 1.8 as a byproduct of another fix. This entry was added so that it is not forgotten and so that other potential victims can report it with extra information. Barring any new info on it, maybe we can close it and reopen it when we get more info.

Possible memleak through spoe_appctx pool

Output of haproxy -vv and uname -a

$ haproxy -vv
HA-Proxy version 1.8.17-1ppa1~xenial 2019/01/15
Copyright 2000-2019 Willy Tarreau <[email protected]>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_SYSTEMD=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.1
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE2 version : 10.21 2016-01-12
PCRE2 library supports JIT : yes
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
	[SPOE] spoe
	[COMP] compression
	[TRACE] trace
$ uname -a
Linux openio-1 4.15.0-38-generic #41~16.04.1-Ubuntu SMP Wed Oct 10 20:16:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

What's the configuration?

$ cat /etc/haproxy/haproxy.cfg
global
  log 127.0.0.1:514 local5 info
  chroot /var/lib/haproxy

  user haproxy
  group haproxy
  daemon

  stats socket /run/haproxy/stats.sock mode 0660 level admin
  stats timeout 30s
  
  # SSL
  ca-base /etc/ssl/certs
  crt-base /etc/ssl/private
  # Default ciphers to use on SSL-enabled listening sockets.
  # For more information, see ciphers(1SSL). This list is from:
  #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
  ssl-default-bind-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
  ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
  ssl-default-server-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
  ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets

defaults
  log global
  unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
  unique-id-header X-Unique-ID
  mode http
  option  httplog clf
  option  dontlognull
  option  log-separate-errors
  option  log-health-checks
  option  http-server-close
  timeout connect 5s
  timeout client  60s
  timeout server  60s
  timeout http-request 15s
  timeout queue 1m
  timeout http-keep-alive 10s
  timeout check 10s
  timeout tunnel 1h
  errorfile 400 /etc/haproxy/errors/400.http
  errorfile 403 /etc/haproxy/errors/403.http
  errorfile 408 /etc/haproxy/errors/408.http
  errorfile 500 /etc/haproxy/errors/500.http
  errorfile 502 /etc/haproxy/errors/502.http
  errorfile 503 /etc/haproxy/errors/503.http
  errorfile 504 /etc/haproxy/errors/504.http
  
###########################
# Listen Definition
###########################
listen stats
  stats enable
  stats uri /stats
  stats realm Haproxy\ Statistics
  stats auth [REDACTED]
  stats admin if TRUE
  bind [IP]:[PORT]
  mode http
  timeout client 5000
  timeout connect 4000
  timeout server 30000
  balance
# end listen stats

###########################
# Frontend Definition
###########################

frontend frontend1
  log-format "%ci:%cp [%t] %ft %b/t=%Tt q=%Tq w=%Tw c=%Tc r=%Tr/%ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r %ID"
  default_backend backend1
  bind [IP]:[PORT]
  reqadd X-Forwarded-Proto:\ http
  mode http
  option forwardfor
# end frontend frontend1

frontend frontend2
  log-format "%ci:%cp [%t] %ft %b/t=%Tt q=%Tq w=%Tw c=%Tc r=%Tr/%ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r %ID"
  default_backend backend2
  bind [IP]:[PORT]
  reqadd X-Forwarded-Proto:\ http
  mode http
  option forwardfor
# end frontend frontend2

frontend frontend3
  log-format "%ci:%cp [%t] %ft %b/t=%Tt q=%Tq w=%Tw c=%Tc r=%Tr/%ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r %ID"
  default_backend backend3
  bind [IP]:[PORT]
  reqadd X-Forwarded-Proto:\ http
  mode http
  option forwardfor
# end frontend frontend3

frontend frontend4
  bind [IP]:[PORT] name service
  default_backend backend4
  log-format "%ci:%cp [%t] %ft %b/t=%Tt w=%Tw c=%Tc/%B %ts %ac/%fc/%bc/%sc/%rc %sq/%bq"
  mode tcp
# end frontend frontend4

###########################
# Backend Definition
###########################

backend backend1
  balance roundrobin
  mode http
  server service1 [IP]:[PORT] check inter 5s
  [...]
# end backend backend1

backend backend2
  balance roundrobin
  mode http
  server service2 [IP]:[PORT] check inter 5s
  [...]
# end backend backend2

backend backend3
  balance roundrobin
  mode http
  server service3 [IP]:[PORT] check inter 5s
# end backend backend3

backend backend4
  server service4 [IP]:[PORT] check inter 5s
  server service42 [IP]:[PORT] check inter 5s backup
  mode tcp
  option tcp-check
# end backend backend4

Steps to reproduce the behavior

  1. Ask for pool information at intervals
echo "show pools" | nc -U /var/run/haproxy/stats.sock
Dumping pools usage. Use SIGQUIT to flush them.
  - Pool cache_st (16 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users [SHARED]
  - Pool pipe (32 bytes) : 5 allocated (160 bytes), 5 used, 0 failures, 2 users [SHARED]
  - Pool email_alert (48 bytes) : 17 allocated (816 bytes), 1 used, 0 failures, 4 users [SHARED]
  - Pool tcpcheck_ru (64 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 5 users [SHARED]
  - Pool spoe_appctx (128 bytes) : 2043604 allocated (261581312 bytes), 2043604 used, 0 failures, 3 users [SHARED]
  - Pool spoe_ctx (144 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users [SHARED]
  - Pool h2s (160 bytes) : 51 allocated (8160 bytes), 33 used, 0 failures, 3 users [SHARED]
  - Pool h2c (240 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users [SHARED]
  - Pool http_txn (288 bytes) : 5 allocated (1440 bytes), 0 used, 0 failures, 1 users [SHARED]
  - Pool connection (384 bytes) : 17 allocated (6528 bytes), 2 used, 0 failures, 1 users [SHARED]
  - Pool hdr_idx (416 bytes) : 5 allocated (2080 bytes), 0 used, 0 failures, 1 users [SHARED]
  - Pool dns_resolut (480 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users [SHARED]
  - Pool dns_answer_ (576 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users [SHARED]
  - Pool stream (848 bytes) : 10 allocated (8480 bytes), 1 used, 0 failures, 1 users [SHARED]
  - Pool requri (1024 bytes) : 2 allocated (2048 bytes), 0 used, 0 failures, 1 users [SHARED]
  - Pool trash (16400 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users
  - Pool buffer (16408 bytes) : 8 allocated (131264 bytes), 2 used, 0 failures, 1 users [SHARED]
Total: 17 pools, 261742288 bytes allocated, 261621232 used.
echo "show pools" | nc -U /var/run/haproxy/stats.sock
Dumping pools usage. Use SIGQUIT to flush them.
  - Pool cache_st (16 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users [SHARED]
  - Pool pipe (32 bytes) : 5 allocated (160 bytes), 5 used, 0 failures, 2 users [SHARED]
  - Pool email_alert (48 bytes) : 17 allocated (816 bytes), 1 used, 0 failures, 4 users [SHARED]
  - Pool tcpcheck_ru (64 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 5 users [SHARED]
  - Pool spoe_appctx (128 bytes) : 2048491 allocated (262206848 bytes), 2048491 used, 0 failures, 3 users [SHARED]
  - Pool spoe_ctx (144 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users [SHARED]
  - Pool h2s (160 bytes) : 51 allocated (8160 bytes), 33 used, 0 failures, 3 users [SHARED]
  - Pool h2c (240 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users [SHARED]
  - Pool http_txn (288 bytes) : 5 allocated (1440 bytes), 0 used, 0 failures, 1 users [SHARED]
  - Pool connection (384 bytes) : 17 allocated (6528 bytes), 2 used, 0 failures, 1 users [SHARED]
  - Pool hdr_idx (416 bytes) : 5 allocated (2080 bytes), 0 used, 0 failures, 1 users [SHARED]
  - Pool dns_resolut (480 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users [SHARED]
  - Pool dns_answer_ (576 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users [SHARED]
  - Pool stream (848 bytes) : 10 allocated (8480 bytes), 1 used, 0 failures, 1 users [SHARED]
  - Pool requri (1024 bytes) : 2 allocated (2048 bytes), 0 used, 0 failures, 1 users [SHARED]
  - Pool trash (16400 bytes) : 0 allocated (0 bytes), 0 used, 0 failures, 1 users
  - Pool buffer (16408 bytes) : 8 allocated (131264 bytes), 2 used, 0 failures, 1 users [SHARED]
Total: 17 pools, 262367824 bytes allocated, 262246768 used.

Actual behavior

Pool spoe_appctx has grown in size, although spoe isn't even active. RAM usage keeps rising until OOM occurs.

Below is a graph showing the memory consumption of the haproxy process.

[attached image: haproxy_memory]

Expected behavior

RAM usage doesn't grow, and spoe_appctx pool size neither.

Do you have any idea what may have caused this?

No

Do you have an idea how to solve the issue?

No :(

haproxy with alpinelinux reg-test error

Output of haproxy -vv and uname -a

+ uname -a
Linux mrsdalloway 3.10.0-862.2.3.el7.x86_64 #1 SMP Wed May 9 18:05:47 UTC 2018 x86_64 Linux
+ /usr/local/sbin/haproxy -vv
HA-Proxy version 1.9.2 2019/01/16 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2q  20 Nov 2018
Running on OpenSSL version : OpenSSL 1.0.2q  20 Nov 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.5
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.42 2018-03-20
Running on PCRE version : 8.42 2018-03-20
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE
              h2 : mode=HTTP       side=FE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
        [SPOE] spoe
        [COMP] compression
        [CACHE] cache
        [TRACE] trace

What's the configuration?

reg test configs

Steps to reproduce the behavior

I ran the build on a CentOS image because buildah is easy to install there, so I don't need docker on the build machine.

  1. yum install -y buildah git
  2. git clone https://github.com/docker-library/haproxy.git
  3. cd haproxy/1.9/alpine/
  4. after zlib-dev (~line 23), add: curl \ python3 \ git \
  5. after cp -R /usr/src/haproxy/examples/errorfiles ... (~line 45), add:
          && git clone https://github.com/vtest/VTest.git \
          && cd VTest \
          && make vtest \
          && cd /usr/src/haproxy \
          && VTEST_PROGRAM=/usr/src/VTest/vtest HAPROXY_PROGRAM=/usr/local/sbin/haproxy \
             make reg-tests \
          ; egrep -r ^ /tmp/haregtests*/* \
  6. run buildah bud .

Actual behavior

########################## Starting vtest ##########################
Testing with haproxy version: 1.9.2
#    top  TEST ./reg-tests/http-rules/h00002.vtc FAILED (0.951) exit=2
#    top  TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED (7.796) exit=2
2 tests failed, 0 tests skipped, 31 tests passed
########################## Gathering results ##########################
###### Test case: ./reg-tests/http-rules/h00002.vtc ######
## test results in: "/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe"
---- s1    0.0 EXPECT req.http.test3maskff (2001:db8:c001:c01a::ffff:10:0) == "2001:db8:c001:c01a:0:ffff:10:0" failed
###### Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ######
## test results in: "/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495"
---- c2    7.0 EXPECT resp.http.mailsreceived (11) == "16" failed
make: *** [Makefile:1102: reg-tests] Error 1

Log outputs

+ egrep -r ^ /tmp/haregtests-2019-01-22_20-34-28.NHIDOb/failedtests.log /tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe /tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/failedtests.log:###### Test case: ./reg-tests/http-rules/h00002.vtc ######
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/failedtests.log:## test results in: "/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/failedtests.log:---- s1    0.0 EXPECT req.http.test3maskff (2001:db8:c001:c01a::ffff:10:0) == "2001:db8:c001:c01a:0:ffff:10:0" failed
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/failedtests.log:###### Test case: ./reg-tests/mailers/k_healthcheckmail.vtc ######
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/failedtests.log:## test results in: "/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/failedtests.log:---- c2    7.0 EXPECT resp.http.mailsreceived (11) == "16" failed
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/INFO:Test case: ./reg-tests/http-rules/h00002.vtc
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    global
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    stats socket "/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/stats.sock" level admin mode 600
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    stats socket "fd@${cli}" level admin
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:  defaults
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    mode http
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    # option http-use-htx
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    log global
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    option httplog
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    timeout connect         15ms
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    timeout client          20ms
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    timeout server          20ms
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:  frontend fe1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    # accept-proxy so test client can send src ip
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    bind "fd@${fe1}" accept-proxy
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    # ipmask tests w/src
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Srciphdr %[src]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Srcmask1 %[src,ipmask(24)] # 192.168.1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Srcmask2 %[src,ipmask(16)] # 192.168.0.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Srcmask3 %[src,ipmask(8)] # 192.0.0.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    # ipmask tests from headers
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Test1mask128 %[req.hdr_ip(Addr1),ipmask(24,128)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Test2mask64 %[req.hdr_ip(Addr2),ipmask(24,64)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Test2mask128 %[req.hdr_ip(Addr2),ipmask(24,128)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Test2mask120 %[req.hdr_ip(Addr2),ipmask(24,120)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Test2maskff00 %[req.hdr_ip(Addr2),ipmask(24,ffff:ffff:ffff:ffff:ffff:ffff:ffff:ff00)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Test2maskfee0 %[req.hdr_ip(Addr2),ipmask(24,ffff:ffff:ffff:ffff:ffff:ffff:ffff:fee0)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Test3mask64 %[req.hdr_ip(Addr3),ipmask(24,64)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Test3mask64v2 %[req.hdr_ip(Addr3),ipmask(24,ffff:ffff:ffff:ffff:0:0:0:0)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Test3mask64v3 %[req.hdr_ip(Addr3),ipmask(24,ffff:ffff:ffff:ffff::)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Test3maskff %[req.hdr_ip(Addr3),ipmask(24,ffff:ffff:ffff:ffff:0:ffff:ffff:0)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Test3maskv2 %[req.hdr_ip(Addr3),ipmask(24,ffff:ffff:ffff:ffff:c001:c001:0000:0000)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    # ipv4 mask applied to ipv4 mapped address
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Test4mask32 %[req.hdr_ip(Addr4),ipmask(32,64)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Test5mask24 %[req.hdr_ip(Addr5),ipmask(24)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Test6mask24 %[req.hdr_ip(Addr6),ipmask(24)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request set-header Test6mask25 %[req.hdr_ip(Addr6),ipmask(25)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    # track addr/mask in stick table
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request track-sc0 src,ipmask(24) table be1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request track-sc1 hdr_ip(Addr4),ipmask(32) table be1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    http-request track-sc2 hdr_ip(Addr3),ipmask(24,64) table be1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    default_backend be1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:  backend be1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    stick-table type ipv6 size 20 expire 360s store gpc0,conn_cnt
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg:    server s1 127.0.0.1:40284
egrep: /tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/stats.sock: No such device or address
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    global
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    stats socket "/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/stats.sock" level admin mode 600
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    stats socket "fd@${cli}" level admin
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:  defaults
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    mode http
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    # option http-use-htx
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    log global
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    option httplog
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    timeout connect         15ms
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    timeout client          20ms
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    timeout server          20ms
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:  frontend fe2
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    bind "fd@${fe2}"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    # concat f1_f2 + _ + f3__f5 tests
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-var(sess.field1) hdr(Field1)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-var(sess.field2) hdr(Field2)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-var(sess.fieldhdr) hdr(Fieldhdr)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-var(sess.fieldconcat) hdr(Field1),concat(_,sess.field2,)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Fieldconcat2 %[var(sess.field1),concat(_,sess.field2,)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Fieldconcat3 %[hdr(Field1),concat(_,sess.field2,)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Fieldconcat %[var(sess.fieldconcat)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Fieldstrcmp %[hdr(Fieldhdr),strcmp(sess.fieldconcat)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request deny unless { hdr(Fieldhdr),strcmp(sess.fieldconcat) eq 0 }
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    # field tests
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Fieldtest1 %[hdr(Fieldhdr),field(5,_)] #f5
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-var(sess.fieldtest1var) hdr(Fieldtest1)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-var(sess.okfield) path,lower,field(4,/,1) #ok
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Okfieldtest %[var(sess.okfield)] #ok
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-var(sess.qsfield) url_param(qs),upper,field(2,_,2) #IT_IS
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Qsfieldtest %[var(sess.qsfield)] #IT_IS
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Qsfieldconcat %[var(sess.qsfield),concat(_,sess.okfield,)] #IT_IS_ok
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Fieldtest2 %[var(sess.fieldhdr),field(2,_,0)] #f2_f3__f5
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Fieldtest3 %[var(sess.fieldconcat),field(2,_,2)] #f2_f3
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Fieldtest4 %[hdr(Fieldconcat2),field(-2,_,3)] #f2_f3_
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Fieldtest5 %[hdr(Fieldconcat3),field(-3,_,0)] #f1_f2_f3
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Fieldtest1strcmp %[str(f5),strcmp(sess.fieldtest1var)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request deny unless { str(f5),strcmp(sess.fieldtest1var) eq 0 }
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request deny unless { str(ok),strcmp(sess.okfield) eq 0 }
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request deny unless { str(IT_IS),strcmp(sess.qsfield) eq 0 }
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    # word tests
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Wordtest1 %[hdr(Fieldhdr),word(4,_)] #f5
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-var(sess.wordtest1var) hdr(Wordtest1)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-var(sess.okword) path,upper,word(3,/,1) #OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Okwordtest %[var(sess.okword)] #OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-var(sess.qsword) url_param(qs),word(1,_,2) #Yes_It
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Qswordtest %[var(sess.qsword)] #Yes_It
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Qswordregmtest %[var(sess.qsword),map_regm(/usr/src/haproxy/./reg-tests/http-rules/h00002.map)] #It_Yes
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Wordtest2 %[var(sess.fieldhdr),word(2,_,0)] #f2_f3__f5
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Wordtest3 %[var(sess.fieldconcat),word(3,_,2)] #f3__f5
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Wordtest4 %[hdr(Fieldconcat2),word(-2,_,3)] #f1_f2_f3
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Wordtest5 %[hdr(Fieldconcat3),word(-3,_,0)] #f1_f2
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request set-header Wordtest1strcmp %[str(f5),strcmp(sess.wordtest1var)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request deny unless { str(f5),strcmp(sess.wordtest1var) eq 0 }
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request deny unless { str(OK),strcmp(sess.okword) eq 0 }
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    http-request deny unless { str(Yes_It),strcmp(sess.qsword) eq 0 }
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    default_backend be2
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    backend be2
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg:    server s2 127.0.0.1:46568
egrep: /tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/stats.sock: No such device or address
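
The h2 configuration dumped above exercises the field() and word() converters on the Fieldhdr value f1_f2_f3__f5 (per the inline #-comments such as #f5 and #f2_f3__f5). A rough Python model of the semantics implied there, where field() counts empty fields and word() skips them; the sample value and expected results are taken from the test's own comments, not from HAProxy itself:

    # Rough model of the field()/word() behaviour implied by the cfg above:
    # field() counts empty fields, word() skips them (positive indexes only).
    def field(s, index, delim):
        parts = s.split(delim)
        return parts[index - 1] if 0 < index <= len(parts) else ""

    def word(s, index, delim):
        parts = [p for p in s.split(delim) if p]   # empty fields are skipped
        return parts[index - 1] if 0 < index <= len(parts) else ""

    hdr = "f1_f2_f3__f5"        # Fieldhdr value built by the concat tests
    print(field(hdr, 5, "_"))   # f5  -- matches field(5,_)  #f5
    print(word(hdr, 4, "_"))    # f5  -- matches word(4,_)   #f5
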
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:*    top   0.0 TEST ./reg-tests/http-rules/h00002.vtc starting
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** top   0.0 extmacro def pwd=/usr/src/haproxy
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** top   0.0 extmacro def no-htx=#
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** top   0.0 extmacro def localhost=127.0.0.1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** top   0.0 extmacro def bad_backend=127.0.0.1 38532
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** top   0.0 extmacro def bad_ip=192.0.2.255
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** top   0.0 macro def testdir=/usr/src/haproxy/./reg-tests/http-rules
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** top   0.0 macro def tmpdir=/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   top   0.0 === varnishtest "Minimal tests for 1.9 converters: ipmask,concat...
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:*    top   0.0 VTEST Minimal tests for 1.9 converters: ipmask,concat,strcmp,field,word
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   top   0.0 === feature ignore_unknown_macro
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   top   0.0 === server s1 {
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 Starting server
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 macro def s1_addr=127.0.0.1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 macro def s1_port=40284
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 macro def s1_sock=127.0.0.1 40284
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:*    s1    0.0 Listen on 127.0.0.1 40284
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   top   0.0 === server s2 {
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s2    0.0 Starting server
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 Started on 127.0.0.1 40284 (1 iterations)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s2    0.0 macro def s2_addr=127.0.0.1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s2    0.0 macro def s2_port=46568
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s2    0.0 macro def s2_sock=127.0.0.1 46568
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:*    s2    0.0 Listen on 127.0.0.1 46568
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   top   0.0 === haproxy h1 -conf {
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s2    0.0 Started on 127.0.0.1 46568 (1 iterations)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 macro def h1_cli_sock=::1 42151
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 macro def h1_cli_addr=::1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 macro def h1_cli_port=42151
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 setenv(cli, 8)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 macro def h1_fe1_sock=::1 32826
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 macro def h1_fe1_addr=::1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 macro def h1_fe1_port=32826
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 setenv(fe1, 9)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    global
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|\tstats socket "/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/stats.sock" level admin mode 600
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    stats socket "fd@${cli}" level admin
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|  defaults
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    mode http
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    # option http-use-htx
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    log global
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    option httplog
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    timeout connect         15ms
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    timeout client          20ms
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    timeout server          20ms
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|  frontend fe1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    # accept-proxy so test client can send src ip
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    bind "fd@${fe1}" accept-proxy
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    # ipmask tests w/src
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Srciphdr %[src]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Srcmask1 %[src,ipmask(24)] # 192.168.1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Srcmask2 %[src,ipmask(16)] # 192.168.0.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Srcmask3 %[src,ipmask(8)] # 192.0.0.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    # ipmask tests from headers
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Test1mask128 %[req.hdr_ip(Addr1),ipmask(24,128)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Test2mask64 %[req.hdr_ip(Addr2),ipmask(24,64)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Test2mask128 %[req.hdr_ip(Addr2),ipmask(24,128)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Test2mask120 %[req.hdr_ip(Addr2),ipmask(24,120)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Test2maskff00 %[req.hdr_ip(Addr2),ipmask(24,ffff:ffff:ffff:ffff:ffff:ffff:ffff:ff00)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Test2maskfee0 %[req.hdr_ip(Addr2),ipmask(24,ffff:ffff:ffff:ffff:ffff:ffff:ffff:fee0)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Test3mask64 %[req.hdr_ip(Addr3),ipmask(24,64)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Test3mask64v2 %[req.hdr_ip(Addr3),ipmask(24,ffff:ffff:ffff:ffff:0:0:0:0)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Test3mask64v3 %[req.hdr_ip(Addr3),ipmask(24,ffff:ffff:ffff:ffff::)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Test3maskff %[req.hdr_ip(Addr3),ipmask(24,ffff:ffff:ffff:ffff:0:ffff:ffff:0)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Test3maskv2 %[req.hdr_ip(Addr3),ipmask(24,ffff:ffff:ffff:ffff:c001:c001:0000:0000)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    # ipv4 mask applied to ipv4 mapped address
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Test4mask32 %[req.hdr_ip(Addr4),ipmask(32,64)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Test5mask24 %[req.hdr_ip(Addr5),ipmask(24)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Test6mask24 %[req.hdr_ip(Addr6),ipmask(24)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request set-header Test6mask25 %[req.hdr_ip(Addr6),ipmask(25)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    # track addr/mask in stick table
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request track-sc0 src,ipmask(24) table be1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request track-sc1 hdr_ip(Addr4),ipmask(32) table be1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    http-request track-sc2 hdr_ip(Addr3),ipmask(24,64) table be1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    default_backend be1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|  backend be1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    stick-table type ipv6 size 20 expire 360s store gpc0,conn_cnt
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 conf|    server s1 127.0.0.1:40284
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   h1    0.0 haproxy_start
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 opt_worker 0 opt_daemon 0 opt_check_mode 0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 argv|exec "/usr/local/sbin/haproxy" -d  -f "/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1/cfg"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 XXX 11 @586
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 PID: 1717
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 macro def h1_pid=1717
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 macro def h1_name=/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   top   0.0 === haproxy h2 -conf {
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 macro def h2_cli_sock=::1 39088
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 macro def h2_cli_addr=::1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 macro def h2_cli_port=39088
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 setenv(cli, 10)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 macro def h2_fe2_sock=::1 45785
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 macro def h2_fe2_addr=::1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 macro def h2_fe2_port=45785
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 setenv(fe2, 13)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    global
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|\tstats socket "/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/stats.sock" level admin mode 600
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    stats socket "fd@${cli}" level admin
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|  defaults
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    mode http
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    # option http-use-htx
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    log global
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    option httplog
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    timeout connect         15ms
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    timeout client          20ms
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    timeout server          20ms
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|  frontend fe2
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    bind "fd@${fe2}"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    # concat f1_f2 + _ + f3__f5 tests
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-var(sess.field1) hdr(Field1)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-var(sess.field2) hdr(Field2)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-var(sess.fieldhdr) hdr(Fieldhdr)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-var(sess.fieldconcat) hdr(Field1),concat(_,sess.field2,)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Fieldconcat2 %[var(sess.field1),concat(_,sess.field2,)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Fieldconcat3 %[hdr(Field1),concat(_,sess.field2,)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Fieldconcat %[var(sess.fieldconcat)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Fieldstrcmp %[hdr(Fieldhdr),strcmp(sess.fieldconcat)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request deny unless { hdr(Fieldhdr),strcmp(sess.fieldconcat) eq 0 }
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    # field tests
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Fieldtest1 %[hdr(Fieldhdr),field(5,_)] #f5
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-var(sess.fieldtest1var) hdr(Fieldtest1)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-var(sess.okfield) path,lower,field(4,/,1) #ok
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Okfieldtest %[var(sess.okfield)] #ok
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-var(sess.qsfield) url_param(qs),upper,field(2,_,2) #IT_IS
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Qsfieldtest %[var(sess.qsfield)] #IT_IS
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Qsfieldconcat %[var(sess.qsfield),concat(_,sess.okfield,)] #IT_IS_ok
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Fieldtest2 %[var(sess.fieldhdr),field(2,_,0)] #f2_f3__f5
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Fieldtest3 %[var(sess.fieldconcat),field(2,_,2)] #f2_f3
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Fieldtest4 %[hdr(Fieldconcat2),field(-2,_,3)] #f2_f3_
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Fieldtest5 %[hdr(Fieldconcat3),field(-3,_,0)] #f1_f2_f3
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Fieldtest1strcmp %[str(f5),strcmp(sess.fieldtest1var)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request deny unless { str(f5),strcmp(sess.fieldtest1var) eq 0 }
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request deny unless { str(ok),strcmp(sess.okfield) eq 0 }
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request deny unless { str(IT_IS),strcmp(sess.qsfield) eq 0 }
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    # word tests
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Wordtest1 %[hdr(Fieldhdr),word(4,_)] #f5
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-var(sess.wordtest1var) hdr(Wordtest1)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-var(sess.okword) path,upper,word(3,/,1) #OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Okwordtest %[var(sess.okword)] #OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-var(sess.qsword) url_param(qs),word(1,_,2) #Yes_It
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Qswordtest %[var(sess.qsword)] #Yes_It
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Qswordregmtest %[var(sess.qsword),map_regm(/usr/src/haproxy/./reg-tests/http-rules/h00002.map)] #It_Yes
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Wordtest2 %[var(sess.fieldhdr),word(2,_,0)] #f2_f3__f5
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Wordtest3 %[var(sess.fieldconcat),word(3,_,2)] #f3__f5
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Wordtest4 %[hdr(Fieldconcat2),word(-2,_,3)] #f1_f2_f3
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Wordtest5 %[hdr(Fieldconcat3),word(-3,_,0)] #f1_f2
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request set-header Wordtest1strcmp %[str(f5),strcmp(sess.wordtest1var)]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request deny unless { str(f5),strcmp(sess.wordtest1var) eq 0 }
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request deny unless { str(OK),strcmp(sess.okword) eq 0 }
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    http-request deny unless { str(Yes_It),strcmp(sess.qsword) eq 0 }
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    default_backend be2
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    backend be2
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 conf|    server s2 127.0.0.1:46568
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   h2    0.0 haproxy_start
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 opt_worker 0 opt_daemon 0 opt_check_mode 0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 argv|exec "/usr/local/sbin/haproxy" -d  -f "/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2/cfg"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 XXX 15 @586
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h2    0.0 PID: 1721
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 macro def h2_pid=1721
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.0 macro def h2_name=/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/h2
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   top   0.0 === client c1 -connect ${h1_fe1_sock} -proxy2 "192.168.1.101:123...
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   c1    0.0 Starting client
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   c1    0.0 Waiting for client
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  c1    0.0 Connect to ::1 32826
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  c1    0.0 connected fd 14 from ::1 55404 to ::1 32826
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   c1    0.0 === txreq -hdr "Addr1: 2001:db8::1" \
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 txreq|GET / HTTP/1.1\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 txreq|Addr1: 2001:db8::1\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 txreq|Addr2: 2001:db8::bad:c0f:ffff\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 txreq|Addr3: 2001:db8:c001:c01a:ffff:ffff:10:ffff\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 txreq|Addr4: ::FFFF:192.168.1.101\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 txreq|Addr5: 192.168.1.2\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 txreq|Addr6: 192.168.1.255\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 txreq|Host: 127.0.0.1\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 txreq|\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   c1    0.0 === rxresp
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|[WARNING] 021/203457 (1717) : config : log format ignored for frontend 'fe1' since it has no log address.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|Note: setting global.maxconn to 2000.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|Available polling systems :
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|      epoll : pref=300,  test result OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|       poll : pref=200,  test result OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|     select : pref=150,  test result FAILED
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|Total: 3 (2 usable), will use epoll.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|Available filters :
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|\t[SPOE] spoe
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|\t[COMP] compression
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|\t[CACHE] cache
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|\t[TRACE] trace
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|Using epoll() as the polling mechanism.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|00000000:fe1.accept(0009)=000c from [192.168.1.101:1234] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|00000000:fe1.clireq[000c:ffffffff]: GET / HTTP/1.1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|00000000:fe1.clihdr[000c:ffffffff]: Addr1: 2001:db8::1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|00000000:fe1.clihdr[000c:ffffffff]: Addr2: 2001:db8::bad:c0f:ffff
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|00000000:fe1.clihdr[000c:ffffffff]: Addr3: 2001:db8:c001:c01a:ffff:ffff:10:ffff
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|00000000:fe1.clihdr[000c:ffffffff]: Addr4: ::FFFF:192.168.1.101
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|00000000:fe1.clihdr[000c:ffffffff]: Addr5: 192.168.1.2
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|00000000:fe1.clihdr[000c:ffffffff]: Addr6: 192.168.1.255
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|00000000:fe1.clihdr[000c:ffffffff]: Host: 127.0.0.1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  s1    0.0 accepted fd 6 127.0.0.1 34230
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 === rxreq
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|GET / HTTP/1.1\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Addr1: 2001:db8::1\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Addr2: 2001:db8::bad:c0f:ffff\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Addr3: 2001:db8:c001:c01a:ffff:ffff:10:ffff\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Addr4: ::FFFF:192.168.1.101\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Addr5: 192.168.1.2\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Addr6: 192.168.1.255\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Host: 127.0.0.1\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Srciphdr: 192.168.1.101\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Srcmask1: 192.168.1.0\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Srcmask2: 192.168.0.0\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Srcmask3: 192.0.0.0\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Test1mask128: 2001:db8::1\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Test2mask64: 2001:db8::\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Test2mask128: 2001:db8::bad:c0f:ffff\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Test2mask120: 2001:db8::bad:c0f:ff00\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Test2maskff00: 2001:db8::bad:c0f:ff00\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Test2maskfee0: 2001:db8::bad:c0f:fee0\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Test3mask64: 2001:db8:c001:c01a::\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Test3mask64v2: 2001:db8:c001:c01a::\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Test3mask64v3: 2001:db8:c001:c01a::\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Test3maskff: 2001:db8:c001:c01a::ffff:10:0\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Test3maskv2: 2001:db8:c001:c01a:c001:c001::\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Test4mask32: 192.168.1.101\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Test5mask24: 192.168.1.0\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Test6mask24: 192.168.1.0\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|Test6mask25: 192.168.1.128\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdr|\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 rxhdrlen = 806
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[ 0] |GET
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[ 1] |/
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[ 2] |HTTP/1.1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[ 3] |Addr1: 2001:db8::1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[ 4] |Addr2: 2001:db8::bad:c0f:ffff
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[ 5] |Addr3: 2001:db8:c001:c01a:ffff:ffff:10:ffff
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[ 6] |Addr4: ::FFFF:192.168.1.101
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[ 7] |Addr5: 192.168.1.2
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[ 8] |Addr6: 192.168.1.255
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[ 9] |Host: 127.0.0.1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[10] |Srciphdr: 192.168.1.101
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[11] |Srcmask1: 192.168.1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[12] |Srcmask2: 192.168.0.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[13] |Srcmask3: 192.0.0.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[14] |Test1mask128: 2001:db8::1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[15] |Test2mask64: 2001:db8::
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[16] |Test2mask128: 2001:db8::bad:c0f:ffff
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[17] |Test2mask120: 2001:db8::bad:c0f:ff00
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[18] |Test2maskff00: 2001:db8::bad:c0f:ff00
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[19] |Test2maskfee0: 2001:db8::bad:c0f:fee0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[20] |Test3mask64: 2001:db8:c001:c01a::
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[21] |Test3mask64v2: 2001:db8:c001:c01a::
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[22] |Test3mask64v3: 2001:db8:c001:c01a::
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[23] |Test3maskff: 2001:db8:c001:c01a::ffff:10:0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[24] |Test3maskv2: 2001:db8:c001:c01a:c001:c001::
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[25] |Test4mask32: 192.168.1.101
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[26] |Test5mask24: 192.168.1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[27] |Test6mask24: 192.168.1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 http[28] |Test6mask25: 192.168.1.128
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 bodylen = 0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 === expect req.method == "GET"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 EXPECT req.method (GET) == "GET" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 === expect req.http.srciphdr == "192.168.1.101"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 EXPECT req.http.srciphdr (192.168.1.101) == "192.168.1.101" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 === expect req.http.srcmask1 == "192.168.1.0"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 EXPECT req.http.srcmask1 (192.168.1.0) == "192.168.1.0" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 === expect req.http.srcmask2 == "192.168.0.0"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 EXPECT req.http.srcmask2 (192.168.0.0) == "192.168.0.0" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 === expect req.http.srcmask3 == "192.0.0.0"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 EXPECT req.http.srcmask3 (192.0.0.0) == "192.0.0.0" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 === expect req.http.test1mask128 == "2001:db8::1"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 EXPECT req.http.test1mask128 (2001:db8::1) == "2001:db8::1" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 === expect req.http.test2mask64 == "2001:db8::"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 EXPECT req.http.test2mask64 (2001:db8::) == "2001:db8::" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 === expect req.http.test2mask128 == "2001:db8::bad:c0f:ffff"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 EXPECT req.http.test2mask128 (2001:db8::bad:c0f:ffff) == "2001:db8::bad:c0f:ffff" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 === expect req.http.test2mask120 == "2001:db8::bad:c0f:ff00"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 EXPECT req.http.test2mask120 (2001:db8::bad:c0f:ff00) == "2001:db8::bad:c0f:ff00" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 === expect req.http.test2maskff00 == "2001:db8::bad:c0f:ff00"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 EXPECT req.http.test2maskff00 (2001:db8::bad:c0f:ff00) == "2001:db8::bad:c0f:ff00" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 === expect req.http.test2maskfee0 == "2001:db8::bad:c0f:fee0"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 EXPECT req.http.test2maskfee0 (2001:db8::bad:c0f:fee0) == "2001:db8::bad:c0f:fee0" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 === expect req.http.test3mask64 == "2001:db8:c001:c01a::"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 EXPECT req.http.test3mask64 (2001:db8:c001:c01a::) == "2001:db8:c001:c01a::" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 === expect req.http.test3mask64v2 == "2001:db8:c001:c01a::"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 EXPECT req.http.test3mask64v2 (2001:db8:c001:c01a::) == "2001:db8:c001:c01a::" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 === expect req.http.test3mask64v3 == "2001:db8:c001:c01a::"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** s1    0.0 EXPECT req.http.test3mask64v3 (2001:db8:c001:c01a::) == "2001:db8:c001:c01a::" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.0 === expect req.http.test3maskff == "2001:db8:c001:c01a:0:ffff:10...
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:---- s1    0.0 EXPECT req.http.test3maskff (2001:db8:c001:c01a::ffff:10:0) == "2001:db8:c001:c01a:0:ffff:10:0" failed
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h2    0.0 debug|[WARNING] 021/203457 (1721) : config : log format ignored for frontend 'fe2' since it has no log address.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h2    0.0 debug|Note: setting global.maxconn to 2000.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h2    0.0 debug|Available polling systems :
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h2    0.0 debug|      epoll : pref=300,  test result OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h2    0.0 debug|       poll : pref=200,  test result OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h2    0.0 debug|     select : pref=150,  test result FAILED
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h2    0.0 debug|Total: 3 (2 usable), will use epoll.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h2    0.0 debug|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h2    0.0 debug|Available filters :
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h2    0.0 debug|\t[SPOE] spoe
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h2    0.0 debug|\t[COMP] compression
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h2    0.0 debug|\t[CACHE] cache
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h2    0.0 debug|\t[TRACE] trace
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h2    0.0 debug|Using epoll() as the polling mechanism.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|00000000:be1.srvcls[000c:adfd]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|00000000:be1.clicls[000c:adfd]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  h1    0.0 debug|00000000:be1.closed[000c:adfd]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 rxhdr|HTTP/1.0 504 Gateway Time-out\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 rxhdr|Cache-Control: no-cache\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 rxhdr|Connection: close\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 rxhdr|Content-Type: text/html\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 rxhdr|\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 rxhdrlen = 102
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 http[ 0] |HTTP/1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 http[ 1] |504
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 http[ 2] |Gateway Time-out
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 http[ 3] |Cache-Control: no-cache
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 http[ 4] |Connection: close
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 http[ 5] |Content-Type: text/html
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** c1    0.0 bodylen = 0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:***  c1    0.0 closing fd 14
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   c1    0.0 Ending
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:*    top   0.0 RESETTING after ./reg-tests/http-rules/h00002.vtc
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   h1    0.0 Reset and free h1 haproxy 1717
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   h1    0.0 Wait
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   h1    0.0 Stop HAproxy pid=1717
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 Kill(2)=0: No error information
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h1    0.0 STDOUT poll 0x10
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   h1    0.1 WAIT4 pid=1717 status=0x0002 (user 0.002217 sys 0.005542)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   h2    0.1 Reset and free h2 haproxy 1721
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   h2    0.1 Wait
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   h2    0.1 Stop HAproxy pid=1721
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.1 Kill(2)=0: No error information
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**** h2    0.1 STDOUT poll 0x10
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   h2    0.2 WAIT4 pid=1721 status=0x0002 (user 0.005549 sys 0.001109)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s1    0.2 Waiting for server (4/-1)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:**   s2    0.2 Waiting for server (5/-1)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:*    top   0.2 TEST ./reg-tests/http-rules/h00002.vtc FAILED
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.43f293fe/LOG:
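
The failed EXPECT for Test3maskff above compares two textual spellings of what looks like the same IPv6 address: HAProxy emitted the zero-compressed form 2001:db8:c001:c01a::ffff:10:0 while the test expected 2001:db8:c001:c01a:0:ffff:10:0, so the string comparison fails even though the masked value itself appears correct. A quick way to confirm the two strings denote the same address (a sketch using Python's standard ipaddress module):

    import ipaddress

    got      = ipaddress.ip_address("2001:db8:c001:c01a::ffff:10:0")
    expected = ipaddress.ip_address("2001:db8:c001:c01a:0:ffff:10:0")
    print(got == expected)   # True: same address, different zero-compression
    print(got.exploded)      # 2001:0db8:c001:c01a:0000:ffff:0010:0000
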
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/INFO:Test case: ./reg-tests/mailers/k_healthcheckmail.vtc
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:    global
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:    stats socket "/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/stats.sock" level admin mode 600
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:    stats socket "fd@${cli}" level admin
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:    global
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        lua-load /usr/src/haproxy/./reg-tests/mailers/k_healthcheckmail.lua
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:defaults
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:    frontend femail
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        mode tcp
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        bind "fd@${femail}"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        tcp-request content use-service lua.mailservice
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:    frontend luahttpservice
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        mode http
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        bind "fd@${luahttpservice}"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        http-request use-service lua.luahttpservice
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:    frontend fe1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        mode http
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        bind "fd@${fe1}"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        default_backend b1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        http-response lua.bug
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:    backend b1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        mode http
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        option httpchk /svr_healthcheck
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        option log-health-checks
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        email-alert mailers mymailers
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        email-alert level info
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        email-alert from [email protected]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        email-alert to [email protected]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        server broken 127.0.0.1:65535 check
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        server srv_lua ::1:35724 check inter 500
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:        server srv1 127.0.0.1:38256 check inter 500
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:    mailers mymailers
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:#      timeout mail 20s
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:#      timeout mail 200ms
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:      mailer smtp1 ::1:33546
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg:
egrep: /tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/stats.sock: No such device or address
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:*    top   0.0 TEST ./reg-tests/mailers/k_healthcheckmail.vtc starting
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** top   0.0 extmacro def pwd=/usr/src/haproxy
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** top   0.0 extmacro def no-htx=#
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** top   0.0 extmacro def localhost=127.0.0.1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** top   0.0 extmacro def bad_backend=127.0.0.1 38532
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** top   0.0 extmacro def bad_ip=192.0.2.255
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** top   0.0 macro def testdir=/usr/src/haproxy/./reg-tests/mailers
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** top   0.0 macro def tmpdir=/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   top   0.0 === varnishtest "Lua: txn:get_priv() scope"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:*    top   0.0 VTEST Lua: txn:get_priv() scope
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   top   0.0 === feature ignore_unknown_macro
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   top   0.0 === server s1 {
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   s1    0.0 Starting server
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** s1    0.0 macro def s1_addr=127.0.0.1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** s1    0.0 macro def s1_port=38256
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** s1    0.0 macro def s1_sock=127.0.0.1 38256
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:*    s1    0.0 Listen on 127.0.0.1 38256
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   top   0.0 === haproxy h1 -conf {
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   s1    0.0 Started on 127.0.0.1 38256 (1 iterations)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 macro def h1_cli_sock=::1 42846
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 macro def h1_cli_addr=::1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 macro def h1_cli_port=42846
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 setenv(cli, 6)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 macro def h1_femail_sock=::1 33546
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 macro def h1_femail_addr=::1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 macro def h1_femail_port=33546
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 setenv(femail, 7)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 macro def h1_luahttpservice_sock=::1 35724
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 macro def h1_luahttpservice_addr=::1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 macro def h1_luahttpservice_port=35724
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 setenv(luahttpservice, 8)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 macro def h1_fe1_sock=::1 35979
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 macro def h1_fe1_addr=::1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 macro def h1_fe1_port=35979
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 setenv(fe1, 9)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|    global
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|\tstats socket "/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/stats.sock" level admin mode 600
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|    stats socket "fd@${cli}" level admin
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|    global
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        lua-load /usr/src/haproxy/./reg-tests/mailers/k_healthcheckmail.lua
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|defaults
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|    frontend femail
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        mode tcp
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        bind "fd@${femail}"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        tcp-request content use-service lua.mailservice
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|    frontend luahttpservice
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        mode http
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        bind "fd@${luahttpservice}"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        http-request use-service lua.luahttpservice
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|    frontend fe1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        mode http
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        bind "fd@${fe1}"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        default_backend b1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        http-response lua.bug
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|    backend b1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        mode http
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        option httpchk /svr_healthcheck
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        option log-health-checks
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        email-alert mailers mymailers
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        email-alert level info
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        email-alert from [email protected]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        email-alert to [email protected]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        server broken 127.0.0.1:65535 check
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        server srv_lua ::1:35724 check inter 500
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|        server srv1 127.0.0.1:38256 check inter 500
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|    mailers mymailers
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|#      timeout mail 20s
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|#      timeout mail 200ms
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|      mailer smtp1 ::1:33546
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 conf|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   h1    0.0 haproxy_start
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 opt_worker 0 opt_daemon 0 opt_check_mode 0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 argv|exec "/usr/local/sbin/haproxy" -d  -f "/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1/cfg"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 XXX 11 @586
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 PID: 1811
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 macro def h1_pid=1811
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    0.0 macro def h1_name=/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/h1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   top   0.0 === client c1 -connect ${h1_luahttpservice_sock} {
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   c1    0.0 Starting client
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   c1    0.0 Waiting for client
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  c1    0.0 Connect to ::1 35724
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  c1    0.0 connected fd 10 from ::1 57534 to ::1 35724
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   c1    0.0 === timeout 2
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   c1    0.0 === txreq -url "/setport" -hdr "vtcport1: 33546"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 txreq|GET /setport HTTP/1.1\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 txreq|vtcport1: 33546\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 txreq|Host: 127.0.0.1\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 txreq|\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   c1    0.0 === rxresp
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|[WARNING] 021/203505 (1811) : config : missing timeouts for frontend 'femail'.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|   | While not properly invalid, you will certainly encounter various problems
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|   | with such a configuration. To fix this, please ensure that all following
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|[WARNING] 021/203505 (1811) : config : missing timeouts for frontend 'luahttpservice'.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|   | While not properly invalid, you will certainly encounter various problems
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|   | with such a configuration. To fix this, please ensure that all following
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|[WARNING] 021/203505 (1811) : config : missing timeouts for frontend 'fe1'.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|   | While not properly invalid, you will certainly encounter various problems
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|   | with such a configuration. To fix this, please ensure that all following
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|[WARNING] 021/203505 (1811) : config : missing timeouts for backend 'b1'.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|   | While not properly invalid, you will certainly encounter various problems
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|   | with such a configuration. To fix this, please ensure that all following
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|Note: setting global.maxconn to 2000.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|Available polling systems :
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|      epoll : pref=300,
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug| test result OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|       poll : pref=200,  test result OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|     select : pref=150,
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug| test result FAILED
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|Total: 3 (2 usable), will use epoll.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|Available filters :
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|\t[SPOE] spoe
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|\t[COMP] compression
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|\t[CACHE] cache
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|\t[TRACE] trace
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|Using epoll() as the polling mechanism.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|00000000:luahttpservice.accept(0008)=000d from [::1:57534] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|[WARNING] 021/203505 (1811) : Health check for server b1/broken failed, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms, status: 0/2 DOWN.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|00000000:luahttpservice.clireq[000d:ffffffff]: GET /setport HTTP/1.1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|00000000:luahttpservice.clihdr[000d:ffffffff]: vtcport1: 33546
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|00000000:luahttpservice.clihdr[000d:ffffffff]: Host: 127.0.0.1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|[WARNING] 021/203505 (1811) : Server b1/broken is DOWN. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|00000001:femail.accept(0007)=000e from [::1:45480] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|00000000:luahttpservice.srvrep[000d:ffffffff]: HTTP/1.1 200 OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|00000000:luahttpservice.srvhdr[000d:ffffffff]: Transfer-encoding: chunked
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|00000000:luahttpservice.srvcls[000d:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 rxhdr|HTTP/1.1 200 OK\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 rxhdr|Transfer-encoding: chunked\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 rxhdr|\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 rxhdrlen = 47
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 http[ 0] |HTTP/1.1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 http[ 1] |200
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 http[ 2] |OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 http[ 3] |Transfer-encoding: chunked
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 len|2\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 chunk|OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 len|0\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 bodylen = 2
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   c1    0.0 === expect resp.status == 200
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 EXPECT resp.status (200) == "200" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   c1    0.0 === expect resp.body == "OK"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c1    0.0 EXPECT resp.body (OK) == "OK" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  c1    0.0 closing fd 10
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   c1    0.0 Ending
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|[info] 021/203505 (1811) : ############# Mailservice Called #############
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|00000002:luahttpservice.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|00000002:luahttpservice.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|[info] 021/203505 (1811) : #### Send your mailbody
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   top   0.0 === delay 2
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  top   0.0 delaying 2 second(s)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|[info] 021/203505 (1811) : Health check for server b1/broken failed, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms, status: 0/2 DOWN..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|[info] 021/203505 (1811) : ...
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|[info] 021/203505 (1811) : #### Body recieved OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|[info] 021/203505 (1811) : Mail queued for delivery to /dev/null subject: Subject: [HAproxy Alert] Health check for server b1/broken failed, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms, status: 0/2 DOWN..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|00000001:femail.srvcls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|00000001:femail.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.0 debug|00000001:femail.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|00000003:luahttpservice.accept(0008)=000e from [::1:57540] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|00000003:luahttpservice.clireq[000e:ffffffff]: OPTIONS /svr_healthcheck HTTP/1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|00000003:luahttpservice.srvrep[000e:ffffffff]: HTTP/1.0 403 Forbidden
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|00000003:luahttpservice.srvcls[000e:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|00000003:luahttpservice.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|00000003:luahttpservice.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|[WARNING] 021/203506 (1811) : Health check for server b1/srv_lua failed, reason: Layer7 wrong status, code: 403, info: "Forbidden", check duration: 0ms, status: 0/2 DOWN.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|[WARNING] 021/203506 (1811) : Server b1/srv_lua is DOWN. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|00000004:femail.accept(0007)=000d from [::1:45484] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|[info] 021/203506 (1811) : ############# Mailservice Called #############
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|[info] 021/203506 (1811) : #### Send your mailbody
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|[info] 021/203506 (1811) : Server b1/broken is DOWN. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|[info] 021/203506 (1811) : ...
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|[info] 021/203506 (1811) : #### Body recieved OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|[info] 021/203506 (1811) : Mail queued for delivery to /dev/null subject: Subject: [HAproxy Alert] Server b1/broken is DOWN. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|00000004:femail.srvcls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|00000004:femail.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    0.7 debug|00000004:femail.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.2 debug|00000005:luahttpservice.accept(0008)=000e from [::1:57544] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.2 debug|00000005:luahttpservice.clireq[000e:ffffffff]: OPTIONS /svr_healthcheck HTTP/1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.2 debug|00000005:luahttpservice.srvrep[000e:ffffffff]: HTTP/1.0 200 OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.2 debug|00000005:luahttpservice.srvcls[000e:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.2 debug|00000005:luahttpservice.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.2 debug|00000005:luahttpservice.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.2 debug|[WARNING] 021/203506 (1811) : Health check for server b1/srv_lua succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 0ms, status: 1/2 DOWN.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.2 debug|00000006:femail.accept(0007)=000d from [::1:45488] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.2 debug|[info] 021/203506 (1811) : ############# Mailservice Called #############
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.2 debug|[info] 021/203506 (1811) : #### Send your mailbody
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.2 debug|[info] 021/203506 (1811) : Health check for server b1/srv_lua failed, reason: Layer7 wrong status, code: 403, info: "Forbidden", check duration: 0ms, status: 0/2 DOWN..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.2 debug|[info] 021/203506 (1811) : ...
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.2 debug|[info] 021/203506 (1811) : #### Body recieved OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.2 debug|[info] 021/203506 (1811) : Mail queued for delivery to /dev/null subject: Subject: [HAproxy Alert] Health check for server b1/srv_lua failed, reason: Layer7 wrong status, code: 403, info: "Forbidden", check duration: 0ms, status: 0/2 DOWN..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.2 debug|00000006:femail.srvcls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.2 debug|00000006:femail.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.2 debug|00000006:femail.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  s1    1.3 accepted fd 5 127.0.0.1 49620
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   s1    1.3 === rxreq
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** s1    1.3 rxhdr|OPTIONS /svr_healthcheck HTTP/1.0\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** s1    1.3 rxhdr|\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** s1    1.3 rxhdrlen = 37
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** s1    1.3 http[ 0] |OPTIONS
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** s1    1.3 http[ 1] |/svr_healthcheck
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** s1    1.3 http[ 2] |HTTP/1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** s1    1.3 bodylen = 0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   s1    1.3 === txresp
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** s1    1.3 txresp|HTTP/1.1 200 OK\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** s1    1.3 txresp|Content-Length: 0\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** s1    1.3 txresp|\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  s1    1.3 shutting fd 5
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   s1    1.3 Ending
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.3 debug|[WARNING] 021/203507 (1811) : Health check for server b1/srv1 succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 0ms, status: 3/3 UP.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.3 debug|00000007:femail.accept(0007)=000d from [::1:45492] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.3 debug|[info] 021/203507 (1811) : ############# Mailservice Called #############
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.3 debug|[info] 021/203507 (1811) : #### Send your mailbody
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.3 debug|[info] 021/203507 (1811) : Server b1/srv_lua is DOWN. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.3 debug|[info] 021/203507 (1811) : ...
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.3 debug|[info] 021/203507 (1811) : #### Body recieved OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.3 debug|[info] 021/203507 (1811) : Mail queued for delivery to /dev/null subject: Subject: [HAproxy Alert] Server b1/srv_lua is DOWN. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.3 debug|00000007:femail.srvcls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.3 debug|00000007:femail.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.3 debug|00000007:femail.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|00000008:luahttpservice.accept(0008)=000e from [::1:57552] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|00000008:luahttpservice.clireq[000e:ffffffff]: OPTIONS /svr_healthcheck HTTP/1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|00000008:luahttpservice.srvrep[000e:ffffffff]: HTTP/1.0 200 OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|00000008:luahttpservice.srvcls[000e:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|00000008:luahttpservice.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|00000008:luahttpservice.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|[WARNING] 021/203507 (1811) : Health check for server b1/srv_lua succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 0ms, status: 3/3 UP.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|[WARNING] 021/203507 (1811) : Server b1/srv_lua is UP. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|00000009:femail.accept(0007)=000d from [::1:45496] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|[info] 021/203507 (1811) : ############# Mailservice Called #############
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|[info] 021/203507 (1811) : #### Send your mailbody
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|[info] 021/203507 (1811) : Health check for server b1/srv_lua succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 0ms, status: 1/2 DOWN..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|[info] 021/203507 (1811) : ...
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|[info] 021/203507 (1811) : #### Body recieved OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|[info] 021/203507 (1811) : Mail queued for delivery to /dev/null subject: Subject: [HAproxy Alert] Health check for server b1/srv_lua succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 0ms, status: 1/2 DOWN..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|00000009:femail.srvcls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|00000009:femail.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    1.7 debug|00000009:femail.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   top   2.0 === server s2 -repeat 5 -start
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   s2    2.0 Starting server
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** s2    2.0 macro def s2_addr=127.0.0.1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** s2    2.0 macro def s2_port=45275
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** s2    2.0 macro def s2_sock=127.0.0.1 45275
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:*    s2    2.0 Listen on 127.0.0.1 45275
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   top   2.0 === delay 5
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  top   2.0 delaying 5 second(s)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   s2    2.0 Started on 127.0.0.1 45275 (5 iterations)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.2 debug|0000000a:luahttpservice.accept(0008)=000f from [::1:57560] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.2 debug|0000000a:luahttpservice.clireq[000f:ffffffff]: OPTIONS /svr_healthcheck HTTP/1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.2 debug|0000000a:luahttpservice.srvrep[000f:ffffffff]: HTTP/1.0 200 OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.2 debug|0000000a:luahttpservice.srvcls[000f:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.2 debug|0000000a:luahttpservice.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.2 debug|0000000a:luahttpservice.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.3 debug|[WARNING] 021/203508 (1811) : Health check for server b1/srv1 failed, reason: Layer7 timeout, check duration: 501ms, status: 2/3 UP.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.3 debug|0000000b:femail.accept(0007)=000e from [::1:45504] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.3 debug|[info] 021/203508 (1811) : ############# Mailservice Called #############
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.3 debug|[info] 021/203508 (1811) : #### Send your mailbody
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.3 debug|[info] 021/203508 (1811) : Health check for server b1/srv1 succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 0ms, status: 3/3 UP..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.3 debug|[info] 021/203508 (1811) : ...
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.3 debug|[info] 021/203508 (1811) : #### Body recieved OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.3 debug|[info] 021/203508 (1811) : Mail queued for delivery to /dev/null subject: Subject: [HAproxy Alert] Health check for server b1/srv1 succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 0ms, status: 3/3 UP..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.3 debug|0000000b:femail.srvcls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.3 debug|0000000b:femail.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.3 debug|0000000b:femail.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.7 debug|0000000c:luahttpservice.accept(0008)=000e from [::1:57564] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.7 debug|0000000c:luahttpservice.clireq[000e:ffffffff]: OPTIONS /svr_healthcheck HTTP/1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.7 debug|0000000c:luahttpservice.srvrep[000e:ffffffff]: HTTP/1.0 200 OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.7 debug|0000000c:luahttpservice.srvcls[000e:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.7 debug|0000000c:luahttpservice.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    2.7 debug|0000000c:luahttpservice.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.2 debug|0000000d:luahttpservice.accept(0008)=000f from [::1:57568] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.2 debug|0000000d:luahttpservice.clireq[000f:ffffffff]: OPTIONS /svr_healthcheck HTTP/1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.2 debug|0000000d:luahttpservice.srvrep[000f:ffffffff]: HTTP/1.0 200 OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.2 debug|0000000d:luahttpservice.srvcls[000f:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.2 debug|0000000d:luahttpservice.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.2 debug|0000000d:luahttpservice.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.3 debug|[WARNING] 021/203509 (1811) : Health check for server b1/srv1 failed, reason: Layer7 timeout, check duration: 501ms, status: 1/3 UP.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.3 debug|0000000e:femail.accept(0007)=000e from [::1:45512] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.3 debug|[info] 021/203509 (1811) : ############# Mailservice Called #############
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.3 debug|[info] 021/203509 (1811) : #### Send your mailbody
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.3 debug|[info] 021/203509 (1811) : Health check for server b1/srv_lua succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 0ms, status: 3/3 UP..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.3 debug|[info] 021/203509 (1811) : ...
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.3 debug|[info] 021/203509 (1811) : #### Body recieved OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.3 debug|[info] 021/203509 (1811) : Mail queued for delivery to /dev/null subject: Subject: [HAproxy Alert] Health check for server b1/srv_lua succeeded, reason: Layer7 check passed, code: 200, info: "OK", check duration: 0ms, status: 3/3 UP..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.3 debug|0000000e:femail.srvcls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.3 debug|0000000e:femail.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.3 debug|0000000e:femail.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.7 debug|0000000f:luahttpservice.accept(0008)=000e from [::1:57572] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.7 debug|0000000f:luahttpservice.clireq[000e:ffffffff]: OPTIONS /svr_healthcheck HTTP/1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.7 debug|0000000f:luahttpservice.srvrep[000e:ffffffff]: HTTP/1.0 403 Forbidden
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.7 debug|0000000f:luahttpservice.srvcls[000e:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.7 debug|0000000f:luahttpservice.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.7 debug|0000000f:luahttpservice.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.7 debug|[WARNING] 021/203509 (1811) : Health check for server b1/srv_lua failed, reason: Layer7 wrong status, code: 403, info: "Forbidden", check duration: 0ms, status: 2/3 UP.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.7 debug|00000010:femail.accept(0007)=000d from [::1:45516] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.7 debug|[info] 021/203509 (1811) : ############# Mailservice Called #############
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.7 debug|[info] 021/203509 (1811) : #### Send your mailbody
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.7 debug|[info] 021/203509 (1811) : Server b1/srv_lua is UP. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.7 debug|[info] 021/203509 (1811) : ...
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.7 debug|[info] 021/203509 (1811) : #### Body recieved OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.7 debug|[info] 021/203509 (1811) : Mail queued for delivery to /dev/null subject: Subject: [HAproxy Alert] Server b1/srv_lua is UP. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.7 debug|00000010:femail.srvcls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.7 debug|00000010:femail.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    3.7 debug|00000010:femail.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.2 debug|00000011:luahttpservice.accept(0008)=000f from [::1:57580] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.2 debug|00000011:luahttpservice.clireq[000f:ffffffff]: OPTIONS /svr_healthcheck HTTP/1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.2 debug|00000011:luahttpservice.srvrep[000f:ffffffff]: HTTP/1.0 403 Forbidden
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.2 debug|00000011:luahttpservice.srvcls[000f:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.2 debug|00000011:luahttpservice.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.2 debug|00000011:luahttpservice.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.2 debug|[WARNING] 021/203509 (1811) : Health check for server b1/srv_lua failed, reason: Layer7 wrong status, code: 403, info: "Forbidden", check duration: 0ms, status: 1/3 UP.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.2 debug|00000012:femail.accept(0007)=000e from [::1:45524] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.2 debug|[info] 021/203509 (1811) : ############# Mailservice Called #############
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.2 debug|[info] 021/203509 (1811) : #### Send your mailbody
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.2 debug|[info] 021/203509 (1811) : Health check for server b1/srv1 failed, reason: Layer7 timeout, check duration: 501ms, status: 2/3 UP..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.2 debug|[info] 021/203509 (1811) : ...
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.2 debug|[info] 021/203509 (1811) : #### Body recieved OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.2 debug|[info] 021/203509 (1811) : Mail queued for delivery to /dev/null subject: Subject: [HAproxy Alert] Health check for server b1/srv1 failed, reason: Layer7 timeout, check duration: 501ms, status: 2/3 UP..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.2 debug|00000012:femail.srvcls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.2 debug|00000012:femail.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.2 debug|00000012:femail.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.3 debug|[WARNING] 021/203510 (1811) : Health check for server b1/srv1 failed, reason: Layer7 timeout, check duration: 501ms, status: 0/2 DOWN.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.3 debug|[WARNING] 021/203510 (1811) : Server b1/srv1 is DOWN. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.3 debug|00000013:femail.accept(0007)=000e from [::1:45526] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.3 debug|[info] 021/203510 (1811) : ############# Mailservice Called #############
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.4 debug|[info] 021/203510 (1811) : #### Send your mailbody
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.4 debug|[info] 021/203510 (1811) : Health check for server b1/srv1 failed, reason: Layer7 timeout, check duration: 501ms, status: 1/3 UP..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.4 debug|[info] 021/203510 (1811) : ...
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.4 debug|[info] 021/203510 (1811) : #### Body recieved OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.4 debug|[info] 021/203510 (1811) : Mail queued for delivery to /dev/null subject: Subject: [HAproxy Alert] Health check for server b1/srv1 failed, reason: Layer7 timeout, check duration: 501ms, status: 1/3 UP..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.4 debug|00000013:femail.srvcls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.4 debug|00000013:femail.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.4 debug|00000013:femail.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|00000014:luahttpservice.accept(0008)=000e from [::1:57586] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|00000014:luahttpservice.clireq[000e:ffffffff]: OPTIONS /svr_healthcheck HTTP/1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|00000014:luahttpservice.srvrep[000e:ffffffff]: HTTP/1.0 403 Forbidden
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|00000014:luahttpservice.srvcls[000e:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|00000014:luahttpservice.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|00000014:luahttpservice.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|[WARNING] 021/203510 (1811) : Health check for server b1/srv_lua failed, reason: Layer7 wrong status, code: 403, info: "Forbidden", check duration: 0ms, status: 0/2 DOWN.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|[WARNING] 021/203510 (1811) : Server b1/srv_lua is DOWN. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|[ALERT] 021/203510 (1811) : backend 'b1' has no server available!
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|00000015:femail.accept(0007)=000d from [::1:45530] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|[info] 021/203510 (1811) : ############# Mailservice Called #############
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|[info] 021/203510 (1811) : #### Send your mailbody
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|[info] 021/203510 (1811) : Health check for server b1/srv_lua failed, reason: Layer7 wrong status, code: 403, info: "Forbidden", check duration: 0ms, status: 2/3 UP..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|[info] 021/203510 (1811) : ...
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|[info] 021/203510 (1811) : #### Body recieved OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|[info] 021/203510 (1811) : Mail queued for delivery to /dev/null subject: Subject: [HAproxy Alert] Health check for server b1/srv_lua failed, reason: Layer7 wrong status, code: 403, info: "Forbidden", check duration: 0ms, status: 2/3 UP..
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|00000015:femail.srvcls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|00000015:femail.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    4.7 debug|00000015:femail.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    5.2 debug|00000016:luahttpservice.accept(0008)=000f from [::1:57592] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    5.2 debug|00000016:luahttpservice.clireq[000f:ffffffff]: OPTIONS /svr_healthcheck HTTP/1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    5.2 debug|00000016:luahttpservice.srvrep[000f:ffffffff]: HTTP/1.0 403 Forbidden
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    5.2 debug|00000016:luahttpservice.srvcls[000f:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    5.2 debug|00000016:luahttpservice.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    5.2 debug|00000016:luahttpservice.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    5.7 debug|00000017:luahttpservice.accept(0008)=000e from [::1:57594] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    5.7 debug|00000017:luahttpservice.clireq[000e:ffffffff]: OPTIONS /svr_healthcheck HTTP/1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    5.7 debug|00000017:luahttpservice.srvrep[000e:ffffffff]: HTTP/1.0 403 Forbidden
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    5.7 debug|00000017:luahttpservice.srvcls[000e:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    5.7 debug|00000017:luahttpservice.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    5.7 debug|00000017:luahttpservice.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    6.2 debug|00000018:luahttpservice.accept(0008)=000f from [::1:57600] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    6.2 debug|00000018:luahttpservice.clireq[000f:ffffffff]: OPTIONS /svr_healthcheck HTTP/1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    6.2 debug|00000018:luahttpservice.srvrep[000f:ffffffff]: HTTP/1.0 403 Forbidden
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    6.2 debug|00000018:luahttpservice.srvcls[000f:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    6.2 debug|00000018:luahttpservice.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    6.2 debug|00000018:luahttpservice.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    6.7 debug|00000019:luahttpservice.accept(0008)=000e from [::1:57602] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    6.7 debug|00000019:luahttpservice.clireq[000e:ffffffff]: OPTIONS /svr_healthcheck HTTP/1.0
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    6.7 debug|00000019:luahttpservice.srvrep[000e:ffffffff]: HTTP/1.0 403 Forbidden
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    6.7 debug|00000019:luahttpservice.srvcls[000e:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    6.7 debug|00000019:luahttpservice.clicls[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    6.7 debug|00000019:luahttpservice.closed[adfd:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   top   7.0 === client c2 -connect ${h1_luahttpservice_sock} {
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   c2    7.0 Starting client
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   c2    7.0 Waiting for client
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  c2    7.0 Connect to ::1 35724
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  c2    7.0 connected fd 13 from ::1 57606 to ::1 35724
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   c2    7.0 === timeout 2
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   c2    7.0 === txreq -url "/checkMailCounters"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 txreq|GET /checkMailCounters HTTP/1.1\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 txreq|Host: 127.0.0.1\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 txreq|\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   c2    7.0 === rxresp
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    7.0 debug|0000001a:luahttpservice.accept(0008)=000e from [::1:57606] ALPN=<none>
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    7.0 debug|0000001a:luahttpservice.clireq[000e:ffffffff]: GET /checkMailCounters HTTP/1.1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    7.0 debug|0000001a:luahttpservice.clihdr[000e:ffffffff]: Host: 127.0.0.1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    7.0 debug|0000001a:luahttpservice.srvrep[000e:ffffffff]: HTTP/1.1 200 OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    7.0 debug|0000001a:luahttpservice.srvhdr[000e:ffffffff]: mailsreceived: 11
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    7.0 debug|0000001a:luahttpservice.srvhdr[000e:ffffffff]: mailconnectionsmade: 11
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    7.0 debug|0000001a:luahttpservice.srvhdr[000e:ffffffff]: Transfer-encoding: chunked
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:***  h1    7.0 debug|0000001a:luahttpservice.srvcls[000e:ffffffff]
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 rxhdr|HTTP/1.1 200 OK\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 rxhdr|mailsreceived: 11\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 rxhdr|mailconnectionsmade: 11\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 rxhdr|Transfer-encoding: chunked\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 rxhdr|\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 rxhdrlen = 91
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 http[ 0] |HTTP/1.1
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 http[ 1] |200
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 http[ 2] |OK
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 http[ 3] |mailsreceived: 11
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 http[ 4] |mailconnectionsmade: 11
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 http[ 5] |Transfer-encoding: chunked
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 len|c\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 chunk|MailCounters
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 len|0\r
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 bodylen = 12
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   c2    7.0 === expect resp.status == 200
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 EXPECT resp.status (200) == "200" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   c2    7.0 === expect resp.body == "MailCounters"
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** c2    7.0 EXPECT resp.body (MailCounters) == "MailCounters" match
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   c2    7.0 === expect resp.http.mailsreceived == 16
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:---- c2    7.0 EXPECT resp.http.mailsreceived (11) == "16" failed
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:*    top   7.0 RESETTING after ./reg-tests/mailers/k_healthcheckmail.vtc
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   h1    7.0 Reset and free h1 haproxy 1811
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   h1    7.0 Wait
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   h1    7.0 Stop HAproxy pid=1811
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    7.0 Kill(2)=0: No error information
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**** h1    7.0 STDOUT poll 0x10
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   h1    7.1 WAIT4 pid=1811 status=0x0002 (user 0.008681 sys 0.022789)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   s1    7.1 Waiting for server (4/-1)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:**   s2    7.1 Waiting for server (5/-1)
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:*    top   7.1 TEST ./reg-tests/mailers/k_healthcheckmail.vtc FAILED
/tmp/haregtests-2019-01-22_20-34-28.NHIDOb/vtc.1382.6981b495/LOG:

Expected behavior

No failed tests 😄

Do you have any idea what may have caused this?

For one test, @TimWolla suggests that the cause could be the following.

The difference here is that the test expects an IPv6 address that's not
maximally compressed, while you get an IPv6 address that is maximally
compressed. I would guess that this is the difference in behaviour
between glibc and musl (as you are using an Alpine container).

Do you have an idea how to solve the issue?

I would keep it simple and adapt the check line to accept both compressed and uncompressed IPv6 addresses.
For the other failed tests I have no idea.
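
One way to make such a comparison independent of how the libc formats addresses is to normalize both strings through inet_pton()/inet_ntop() before comparing them, since inet_ntop() always emits the compressed form. The following is only an illustrative C sketch of that equivalence (it is not part of the test suite):

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>

    /* Re-encode an IPv6 string through inet_pton()/inet_ntop() so that
     * "0:0:0:0:0:0:0:1" and "::1" end up in the same canonical form. */
    static int normalize_ipv6(const char *in, char *out, size_t outlen)
    {
        struct in6_addr addr;

        if (inet_pton(AF_INET6, in, &addr) != 1)
            return -1;
        if (!inet_ntop(AF_INET6, &addr, out, outlen))
            return -1;
        return 0;
    }

    int main(void)
    {
        char a[INET6_ADDRSTRLEN], b[INET6_ADDRSTRLEN];

        /* Illustrative inputs: one uncompressed, one compressed. */
        if (normalize_ipv6("0:0:0:0:0:0:0:1", a, sizeof(a)) == 0 &&
            normalize_ipv6("::1", b, sizeof(b)) == 0)
            printf("%s vs %s -> %s\n", a, b,
                   strcmp(a, b) == 0 ? "equal" : "different");
        return 0;
    }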

1.9.x build failure on macOS

Output of haproxy -vv and uname -a

Darwin Kento.local 18.2.0 Darwin Kernel Version 18.2.0: Thu Dec 20 20:46:53 PST 2018; root:xnu-4903.241.1~1/RELEASE_X86_64 x86_64

(Reproducible on the 3 most recent versions of macOS. Homebrew/homebrew-core#36908 Homebrew/homebrew-core#35421)

What's the configuration?

(paste your output here)

Steps to reproduce the behavior

  1. make CC=clang CFLAGS= LDFLAGS= TARGET=generic USE_KQUEUE=1 USE_POLL=1 USE_PCRE=1 USE_OPENSSL=1 USE_THREAD=1 USE_ZLIB=1 ADDLIB=-lcrypto

Actual behavior

src/ssl_sock.c:337:1: error: argument to 'section' attribute is not valid for this target: mach-o section specifier requires a segment and section separated by a comma
__decl_rwlock(ssl_ctx_lru_rwlock);
^
include/common/hathreads.h:189:2: note: expanded from macro '__decl_rwlock'
        INITCALL1(STG_LOCK, ha_rwlock_init, &(lock))
        ^
include/common/initcall.h:90:2: note: expanded from macro 'INITCALL1'
        _DECLARE_INITCALL(stage, __LINE__, function, arg1, 0, 0)
        ^
include/common/initcall.h:78:2: note: expanded from macro '_DECLARE_INITCALL'
        __DECLARE_INITCALL(__VA_ARGS__)
        ^
include/common/initcall.h:65:42: note: expanded from macro '__DECLARE_INITCALL'
            __attribute__((__used__,__section__("init_"#stg))) =   \
src/ssl_sock.c:9088:1: error: argument to 'section' attribute is not valid for this target: mach-o section specifier requires a segment whose length is between 1 and 16 characters
INITCALL1(STG_REGISTER, cli_register_kw, &cli_kws);
^
include/common/initcall.h:90:2: note: expanded from macro 'INITCALL1'
        _DECLARE_INITCALL(stage, __LINE__, function, arg1, 0, 0)
        ^
include/common/initcall.h:78:2: note: expanded from macro '_DECLARE_INITCALL'
        __DECLARE_INITCALL(__VA_ARGS__)
        ^
include/common/initcall.h:65:42: note: expanded from macro '__DECLARE_INITCALL'
            __attribute__((__used__,__section__("init_"#stg))) =   \

and more similar errors.

See full log from Homebrew package manager CI. (Backup version in case the log is removed after a while.)

Expected behavior

Build the binary successfully.

Do you have any idea what may have caused this?

Perhaps d13a928 and 5794fb0.
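
For reference, clang on Mach-O targets only accepts section attributes written as a "segment,section" pair, with each part at most 16 characters; the ELF-style single names produced by the initcall macros (e.g. "init_STG_REGISTER") have no comma and can exceed 16 characters, which is what the two errors above complain about. A minimal sketch (hypothetical names, not a proposed patch):

/* Illustration only: on Mach-O the section attribute must name a segment
 * and a section separated by a comma, each at most 16 characters long. */
struct initcall_entry { void (*fn)(void); };

static void example_init(void) { }

/* Accepted by clang on macOS: segment "__DATA", section "init_stg". */
static const struct initcall_entry example_entry
    __attribute__((__used__, __section__("__DATA,init_stg"))) =
    { example_init };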

Do you have an idea how to solve the issue?

Haproxy 1.9.2 crashes when agent-check is used

Output of haproxy -vv and uname -a

# haproxy -vv
HA-Proxy version 1.9.2 2019/01/17 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU     = native
  CC      = gcc
  CFLAGS  = -O2 -march=native -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits -DTLS_TICKETS_NO=4
  OPTIONS = USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.1a-v1  21 Nov 2018
Running on OpenSSL version : OpenSSL 1.1.1a-v1  21 Nov 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.4
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.41 2017-07-05
Running on PCRE version : 8.41 2017-07-05
PCRE library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE
              h2 : mode=HTTP       side=FE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
	[SPOE] spoe
	[COMP] compression
	[CACHE] cache
	[TRACE] trace

# uname -a
Linux test1 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

What's the configuration?

backend bk_rr
    balance roundrobin
    option http-server-close

    server test1 10.1.1.1:80 check agent-check
    server test2 10.1.1.2:80 check agent-check

Steps to reproduce the behavior

  1. As soon as haproxy is started, it crashes.
  2. If agent-check is removed, it starts normally.
(gdb) bt full
#0  0x00000000004b5e84 in process_chk_conn (state=<optimized out>, context=0x2073fb0, t=0x21b8830) at src/checks.c:2165
        cs = 0x0
        conn = <optimized out>
        rv = <optimized out>
        check = 0x2073fb0
        proxy = 0x0
#1  process_chk (t=0x21b8830, context=0x2073fb0, state=<optimized out>) at src/checks.c:2327
        check = 0x2073fb0
#2  0x000000000053a77a in process_runnable_tasks () at src/task.c:435
        t = 0x21b8830
        state = <optimized out>
        ctx = <optimized out>
        process = <optimized out>
        t = <optimized out>
        max_processed = 200
#3  0x00000000004bc81d in run_poll_loop () at src/haproxy.c:2619
        next = <optimized out>
        exp = <optimized out>
#4  run_thread_poll_loop (data=data@entry=0x1fbdb60) at src/haproxy.c:2684
        ptif = 0x7f0050 <per_thread_init_list>
        ptdf = <optimized out>
        start_lock = 0
#5  0x000000000041e94b in main (argc=<optimized out>, argv=0x7ffdeea928e8) at src/haproxy.c:3313
        tids = 0x1fbdb60
        threads = 0x1f98b80
        i = <optimized out>
        old_sig = {__val = {0, 0, 0, 0, 0, 0, 0, 140029491259232, 24, 32962176, 335544638, 140029507024752, 33038688, 140029487866124, 0, 0}}
        blocked_sig = {__val = {18446744067199990583, 18446744073709551615 <repeats 15 times>}}
        err = <optimized out>
        retry = <optimized out>
        limit = {rlim_cur = 1500228, rlim_max = 1500228}
        errmsg = '\000' <repeats 96 times>, "\002\000\000"
        pidfd = <optimized out>

Actual behavior

Expected behavior

Do you have any idea what may have caused this?

Do you have an idea how to solve the issue?

Add Fast CGI as proto for servers

Output of haproxy -vv and uname -a

Future version

Which functionality do you think we should add?

I would like the possibility to use php-fpm or any other FastCGI application server directly as a backend server.

The mailing list discussion started at the haproxy 1.9.2 announcement:

https://www.mail-archive.com/[email protected]/msg32461.html

The idea is like this:

haproxy -> *.php           => php-fpm
        -> *.static-files  => webserver, cache

What are you trying to do?

I want to reduce the hops from the app to the client.
Currently, the only possible flow is the following:

Client -> haproxy -> webserver -> fast-cgi app ( e. g. php-fpm)

My wish is like this:

Client -> haproxy -> fast-cgi app ( e. g. php-fpm)

RFC 8484 - DNS Queries over HTTPS (DoH)

Output of haproxy -vv and uname -a

Future version

What should haproxy do differently? Which functionality do you think we should add?

I would like to have an option for a DoH URL in the resolvers section, now that we have a robust HTTP/2 implementation ;-).

RFC: https://tools.ietf.org/html/rfc8484

There are now several providers for DoH, which are listed here:
https://github.com/curl/curl/wiki/DNS-over-HTTPS

Companies can run their own DoH servers, similar to classical DNS servers; a possible setup is described here.

https://www.aaflalo.me/2018/10/tutorial-setup-dns-over-https-server/

Suggestion

My suggestion is a syntax similar to the one used for SPOE:

resolvers mydoh
  type doh # new option [dns|doh|dot]
  dohservers be_MyDoH # The backend for the doh servers
  ...             # some other options

backend be_MyDoH 
  option http-use-htx # RECOMMENDED as DoH needs HTTP/2 
                                   # https://tools.ietf.org/html/rfc8484#section-5.2
  ...             # some other options
  server DoH_1 ip:port ...
  server DoH_2 ip:port ...

Haproxy 1.9.2 is incorrectly routing at random

Output of haproxy -vv and uname -a

HA-Proxy version 1.9.2 2019/01/16 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.0j  20 Nov 2018
Running on OpenSSL version : OpenSSL 1.1.0j  20 Nov 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.3
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE
              h2 : mode=HTTP       side=FE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
	[SPOE] spoe
	[COMP] compression
	[CACHE] cache
	[TRACE] trace
Linux gke-haproxy-2019-01-19-2-default-pool-40424170-2s8z 4.14.65+ #1 SMP Sun Sep 9 02:18:33 PDT 2018 x86_64 GNU/Linux

What's the configuration?

global
    log 127.0.0.1    local0
    log 127.0.0.1    local1 notice
    maxconn 4096
    pidfile /var/run/haproxy.pid
    stats socket /var/run/haproxy.stat mode 600 level admin
    daemon
    hard-stop-after 30s
    tune.ssl.default-dh-param 1024
    tune.ssl.cachesize 100000
    ssl-default-bind-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
    ssl-default-server-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
    tune.bufsize 16384
    tune.maxrewrite 1024
    ssl-engine rdrand
    ssl-mode-async
    nbthread 4

defaults
    log global
    mode http
    compression algo gzip
    compression type text/html text/plain text/css application/javascript application/octet-stream application/json
    option httplog
    option dontlognull
    option redispatch
    option tcp-smart-accept
    option tcp-smart-connect
    option forwardfor if-none
    option splice-auto
    option socket-stats
    option http-buffer-request
    timeout check 5s
    timeout client 300s
    timeout tunnel 60000s
    timeout connect 20s
    timeout http-keep-alive 300s
    timeout http-request 30s
    timeout queue 20s
    timeout server 50s
    hash-balance-factor 125
    balance hdr(Cookie)
    hash-type consistent djb2
    stats enable
    stats hide-version
    stats auth foo:foo
    stats uri /statz
    default-server inter 5s fall 3 rise 1
    errorfile 503 /etc/haproxy/errors/503.http

Steps to reproduce the behavior

This is very hard to reproduce because the issues happen at random.
We have several backends: one that provides REST handlers (the /api path) and others that serve html/css files (everything else) from Google Cloud Storage. At random (< 2% of requests) we notice haproxy sending the request to the incorrect backend, so REST requests go to cloud storage and vice versa.

1.8.12 is what we run in production and it does not have this issue. The moment we switch to haproxy 1.9.2, we see a spike in 404 errors from haproxy and 500s from our backends, which are receiving malformed requests.

Do you have any idea what may have caused this?

We have enough tests to verify that our routing logic is correct, and they all pass. Since this issue affects only a small percentage of requests, it could be any sort of bug in the new haproxy code.

Do you have an idea how to solve the issue?

None at the moment.

New compression algorithms zopfli

Output of haproxy -vv and uname -a

Future version

What should haproxy do differently? Which functionality do you think we should add?

Haproxy could add an additional compression algorithm.

Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")

https://github.com/google/zopfli

ZopfliDeflate creates a valid deflate stream in memory, see:
http://www.ietf.org/rfc/rfc1951.txt

ZopfliZlibCompress creates a valid zlib stream in memory, see:
http://www.ietf.org/rfc/rfc1950.txt

ZopfliGzipCompress creates a valid gzip stream in memory, see:
http://www.ietf.org/rfc/rfc1952.txt

crash with ssh connection

Output of haproxy -vv and uname -a

haproxy_1      | Restarting OpenBSD Secure Shell server: sshd.
haproxy_1      | [NOTICE] 041/132231 (1) : New worker #1 (22) forked




haproxy_1      | [WARNING] 041/132346 (1) : Former worker #589340752 (24) exited with code 255 (Unknown signal 127)
haproxy_1      | *** Error in `haproxy': free(): invalid pointer: 0x000055a12320a048 ***
haproxy_1      | ======= Backtrace: =========
haproxy_1      | /lib/x86_64-linux-gnu/libc.so.6(+0x70bfb)[0x7f793cb43bfb]
haproxy_1      | /lib/x86_64-linux-gnu/libc.so.6(+0x76fc6)[0x7f793cb49fc6]
haproxy_1      | /lib/x86_64-linux-gnu/libc.so.6(+0x7780e)[0x7f793cb4a80e]
haproxy_1      | haproxy(+0xd51d8)[0x55a122efb1d8]
haproxy_1      | haproxy(__signal_process_queue+0x8b)[0x55a122f86c8b]
haproxy_1      | haproxy(+0xd74a7)[0x55a122efd4a7]
haproxy_1      | haproxy(main+0x1c94)[0x55a122e595d4]
haproxy_1      | /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f793caf32e1]
haproxy_1      | haproxy(_start+0x2a)[0x55a122e59afa]
haproxy_1      | ======= Memory map: ========
haproxy_1      | 55a122e26000-55a123002000 r-xp 00000000 08:01 527449                     /usr/local/sbin/haproxy
haproxy_1      | 55a123201000-55a123206000 r--p 001db000 08:01 527449                     /usr/local/sbin/haproxy
haproxy_1      | 55a123206000-55a12321f000 rw-p 001e0000 08:01 527449                     /usr/local/sbin/haproxy
haproxy_1      | 55a12321f000-55a12323b000 rw-p 00000000 00:00 0 
haproxy_1      | 55a124fd3000-55a12509d000 rw-p 00000000 00:00 0                          [heap]
haproxy_1      | 7f7934000000-7f7934021000 rw-p 00000000 00:00 0 
haproxy_1      | 7f7934021000-7f7938000000 ---p 00000000 00:00 0 
haproxy_1      | 7f793bc61000-7f793bc77000 r-xp 00000000 08:01 407990                     /lib/x86_64-linux-gnu/libgcc_s.so.1
haproxy_1      | 7f793bc77000-7f793be76000 ---p 00016000 08:01 407990                     /lib/x86_64-linux-gnu/libgcc_s.so.1
haproxy_1      | 7f793be76000-7f793be77000 r--p 00015000 08:01 407990                     /lib/x86_64-linux-gnu/libgcc_s.so.1
haproxy_1      | 7f793be77000-7f793be78000 rw-p 00016000 08:01 407990                     /lib/x86_64-linux-gnu/libgcc_s.so.1
haproxy_1      | 7f793be78000-7f793be8c000 r-xp 00000000 08:01 408031                     /lib/x86_64-linux-gnu/libresolv-2.24.so
haproxy_1      | 7f793be8c000-7f793c08b000 ---p 00014000 08:01 408031                     /lib/x86_64-linux-gnu/libresolv-2.24.so
haproxy_1      | 7f793c08b000-7f793c08c000 r--p 00013000 08:01 408031                     /lib/x86_64-linux-gnu/libresolv-2.24.so
haproxy_1      | 7f793c08c000-7f793c08d000 rw-p 00014000 08:01 408031                     /lib/x86_64-linux-gnu/libresolv-2.24.so
haproxy_1      | 7f793c08d000-7f793c08f000 rw-p 00000000 00:00 0 
haproxy_1      | 7f793c08f000-7f793c094000 r-xp 00000000 08:01 408010                     /lib/x86_64-linux-gnu/libnss_dns-2.24.so
haproxy_1      | 7f793c094000-7f793c293000 ---p 00005000 08:01 408010                     /lib/x86_64-linux-gnu/libnss_dns-2.24.so
haproxy_1      | 7f793c293000-7f793c294000 r--p 00004000 08:01 408010                     /lib/x86_64-linux-gnu/libnss_dns-2.24.so
haproxy_1      | 7f793c294000-7f793c295000 rw-p 00005000 08:01 408010                     /lib/x86_64-linux-gnu/libnss_dns-2.24.so
haproxy_1      | 7f793c295000-7f793c29f000 r-xp 00000000 08:01 408012                     /lib/x86_64-linux-gnu/libnss_files-2.24.so
haproxy_1      | 7f793c29f000-7f793c49f000 ---p 0000a000 08:01 408012                     /lib/x86_64-linux-gnu/libnss_files-2.24.so
haproxy_1      | 7f793c49f000-7f793c4a0000 r--p 0000a000 08:01 408012                     /lib/x86_64-linux-gnu/libnss_files-2.24.so
haproxy_1      | 7f793c4a0000-7f793c4a1000 rw-p 0000b000 08:01 408012                     /lib/x86_64-linux-gnu/libnss_files-2.24.so
haproxy_1      | 7f793c4a1000-7f793c4a7000 rw-p 00000000 00:00 0 
haproxy_1      | 7f793c4a7000-7f793c4b2000 r-xp 00000000 08:01 408016                     /lib/x86_64-linux-gnu/libnss_nis-2.24.so
haproxy_1      | 7f793c4b2000-7f793c6b1000 ---p 0000b000 08:01 408016                     /lib/x86_64-linux-gnu/libnss_nis-2.24.so
haproxy_1      | 7f793c6b1000-7f793c6b2000 r--p 0000a000 08:01 408016                     /lib/x86_64-linux-gnu/libnss_nis-2.24.so
haproxy_1      | 7f793c6b2000-7f793c6b3000 rw-p 0000b000 08:01 408016                     /lib/x86_64-linux-gnu/libnss_nis-2.24.so
haproxy_1      | 7f793c6b3000-7f793c6c7000 r-xp 00000000 08:01 408006                     /lib/x86_64-linux-gnu/libnsl-2.24.so
haproxy_1      | 7f793c6c7000-7f793c8c7000 ---p 00014000 08:01 408006                     /lib/x86_64-linux-gnu/libnsl-2.24.so
haproxy_1      | 7f793c8c7000-7f793c8c8000 r--p 00014000 08:01 408006                     /lib/x86_64-linux-gnu/libnsl-2.24.so
haproxy_1      | 7f793c8c8000-7f793c8c9000 rw-p 00015000 08:01 408006                     /lib/x86_64-linux-gnu/libnsl-2.24.so
haproxy_1      | 7f793c8c9000-7f793c8cb000 rw-p 00000000 00:00 0 
haproxy_1      | 7f793c8cb000-7f793c8d2000 r-xp 00000000 08:01 408008                     /lib/x86_64-linux-gnu/libnss_compat-2.24.so
haproxy_1      | 7f793c8d2000-7f793cad1000 ---p 00007000 08:01 408008                     /lib/x86_64-linux-gnu/libnss_compat-2.24.so
haproxy_1      | 7f793cad1000-7f793cad2000 r--p 00006000 08:01 408008                     /lib/x86_64-linux-gnu/libnss_compat-2.24.so
haproxy_1      | 7f793cad2000-7f793cad3000 rw-p 00007000 08:01 408008                     /lib/x86_64-linux-gnu/libnss_compat-2.24.so
haproxy_1      | 7f793cad3000-7f793cc68000 r-xp 00000000 08:01 407972                     /lib/x86_64-linux-gnu/libc-2.24.so
haproxy_1      | 7f793cc68000-7f793ce68000 ---p 00195000 08:01 407972                     /lib/x86_64-linux-gnu/libc-2.24.so
haproxy_1      | 7f793ce68000-7f793ce6c000 r--p 00195000 08:01 407972                     /lib/x86_64-linux-gnu/libc-2.24.so
haproxy_1      | 7f793ce6c000-7f793ce6e000 rw-p 00199000 08:01 407972                     /lib/x86_64-linux-gnu/libc-2.24.so
haproxy_1      | 7f793ce6e000-7f793ce72000 rw-p 00000000 00:00 0 
haproxy_1      | 7f793ce72000-7f793cee4000 r-xp 00000000 08:01 408028                     /lib/x86_64-linux-gnu/libpcre.so.3.13.3
haproxy_1      | 7f793cee4000-7f793d0e3000 ---p 00072000 08:01 408028                     /lib/x86_64-linux-gnu/libpcre.so.3.13.3
haproxy_1      | 7f793d0e3000-7f793d0e4000 r--p 00071000 08:01 408028                     /lib/x86_64-linux-gnu/libpcre.so.3.13.3
haproxy_1      | 7f793d0e4000-7f793d0e5000 rw-p 00072000 08:01 408028                     /lib/x86_64-linux-gnu/libpcre.so.3.13.3
haproxy_1      | 7f793d0e5000-7f793d0e7000 r-xp 00000000 08:01 408724                     /usr/lib/x86_64-linux-gnu/libpcreposix.so.3.13.3
haproxy_1      | 7f793d0e7000-7f793d2e6000 ---p 00002000 08:01 408724                     /usr/lib/x86_64-linux-gnu/libpcreposix.so.3.13.3
haproxy_1      | 7f793d2e6000-7f793d2e7000 r--p 00001000 08:01 408724                     /usr/lib/x86_64-linux-gnu/libpcreposix.so.3.13.3
haproxy_1      | 7f793d2e7000-7f793d2e8000 rw-p 00002000 08:01 408724                     /usr/lib/x86_64-linux-gnu/libpcreposix.so.3.13.3
haproxy_1      | 7f793d2e8000-7f793d3eb000 r-xp 00000000 08:01 407997                     /lib/x86_64-linux-gnu/libm-2.24.so
haproxy_1      | 7f793d3eb000-7f793d5ea000 ---p 00103000 08:01 407997                     /lib/x86_64-linux-gnu/libm-2.24.so
haproxy_1      | 7f793d5ea000-7f793d5eb000 r--p 00102000 08:01 407997                     /lib/x86_64-linux-gnu/libm-2.24.so
haproxy_1      | 7f793d5eb000-7f793d5ec000 rw-p 00103000 08:01 407997                     /lib/x86_64-linux-gnu/libm-2.24.so
haproxy_1      | 7f793d5ec000-7f793d623000 r-xp 00000000 08:01 527434                     /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0
haproxy_1      | 7f793d623000-7f793d822000 ---p 00037000 08:01 527434                     /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0
haproxy_1      | 7f793d822000-7f793d824000 r--p 00036000 08:01 527434                     /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0
haproxy_1      | 7f793d824000-7f793d825000 rw-p 00038000 08:01 527434                     /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0
haproxy_1      | 7f793d825000-7f793da8f000 r-xp 00000000 08:01 527430                     /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
haproxy_1      | 7f793da8f000-7f793dc8f000 ---p 0026a000 08:01 527430                     /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
haproxy_1      | 7f793dc8f000-7f793dcad000 r--p 0026a000 08:01 527430                     /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
haproxy_1      | 7f793dcad000-7f793dcbb000 rw-p 00288000 08:01 527430                     /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
haproxy_1      | 7f793dcbb000-7f793dcbe000 rw-p 00000000 00:00 0 
haproxy_1      | 7f793dcbe000-7f793dd21000 r-xp 00000000 08:01 527435                     /usr/lib/x86_64-linux-gnu/libssl.so.1.1
haproxy_1      | 7f793dd21000-7f793df20000 ---p 00063000 08:01 527435                     /usr/lib/x86_64-linux-gnu/libssl.so.1.1
haproxy_1      | 7f793df20000-7f793df24000 r--p 00062000 08:01 527435                     /usr/lib/x86_64-linux-gnu/libssl.so.1.1
haproxy_1      | 7f793df24000-7f793df2a000 rw-p 00066000 08:01 527435                     /usr/lib/x86_64-linux-gnu/libssl.so.1.1
haproxy_1      | 7f793df2a000-7f793df31000 r-xp 00000000 08:01 408033                     /lib/x86_64-linux-gnu/librt-2.24.so
haproxy_1      | 7f793df31000-7f793e130000 ---p 00007000 08:01 408033                     /lib/x86_64-linux-gnu/librt-2.24.so
haproxy_1      | 7f793e130000-7f793e131000 r--p 00006000 08:01 408033                     /lib/x86_64-linux-gnu/librt-2.24.so
haproxy_1      | 7f793e131000-7f793e132000 rw-p 00007000 08:01 408033                     /lib/x86_64-linux-gnu/librt-2.24.so
haproxy_1      | 7f793e132000-7f793e14a000 r-xp 00000000 08:01 408029                     /lib/x86_64-linux-gnu/libpthread-2.24.so
haproxy_1      | 7f793e14a000-7f793e349000 ---p 00018000 08:01 408029                     /lib/x86_64-linux-gnu/libpthread-2.24.so
haproxy_1      | 7f793e349000-7f793e34a000 r--p 00017000 08:01 408029                     /lib/x86_64-linux-gnu/libpthread-2.24.so
haproxy_1      | 7f793e34a000-7f793e34b000 rw-p 00018000 08:01 408029                     /lib/x86_64-linux-gnu/libpthread-2.24.so
haproxy_1      | 7f793e34b000-7f793e34f000 rw-p 00000000 00:00 0 
haproxy_1      | 7f793e34f000-7f793e352000 r-xp 00000000 08:01 407982                     /lib/x86_64-linux-gnu/libdl-2.24.so
haproxy_1      | 7f793e352000-7f793e551000 ---p 00003000 08:01 407982                     /lib/x86_64-linux-gnu/libdl-2.24.so
haproxy_1      | 7f793e551000-7f793e552000 r--p 00002000 08:01 407982                     /lib/x86_64-linux-gnu/libdl-2.24.so
haproxy_1      | 7f793e552000-7f793e553000 rw-p 00003000 08:01 407982                     /lib/x86_64-linux-gnu/libdl-2.24.so
haproxy_1      | 7f793e553000-7f793e56c000 r-xp 00000000 08:01 408054                     /lib/x86_64-linux-gnu/libz.so.1.2.8
haproxy_1      | 7f793e56c000-7f793e76b000 ---p 00019000 08:01 408054                     /lib/x86_64-linux-gnu/libz.so.1.2.8
haproxy_1      | 7f793e76b000-7f793e76c000 r--p 00018000 08:01 408054                     /lib/x86_64-linux-gnu/libz.so.1.2.8
haproxy_1      | 7f793e76c000-7f793e76d000 rw-p 00019000 08:01 408054                     /lib/x86_64-linux-gnu/libz.so.1.2.8
haproxy_1      | 7f793e76d000-7f793e775000 r-xp 00000000 08:01 407980                     /lib/x86_64-linux-gnu/libcrypt-2.24.so
haproxy_1      | 7f793e775000-7f793e975000 ---p 00008000 08:01 407980                     /lib/x86_64-linux-gnu/libcrypt-2.24.so
haproxy_1      | 7f793e975000-7f793e976000 r--p 00008000 08:01 407980                     /lib/x86_64-linux-gnu/libcrypt-2.24.so
haproxy_1      | 7f793e976000-7f793e977000 rw-p 00009000 08:01 407980                     /lib/x86_64-linux-gnu/libcrypt-2.24.so
haproxy_1      | 7f793e977000-7f793e9a5000 rw-p 00000000 00:00 0 
haproxy_1      | 7f793e9a5000-7f793e9c8000 r-xp 00000000 08:01 407954                     /lib/x86_64-linux-gnu/ld-2.24.so
haproxy_1      | 7f793eb5a000-7f793ebc2000 rw-p 00000000 00:00 0 
haproxy_1      | 7f793ebc4000-7f793ebc8000 rw-p 00000000 00:00 0 
haproxy_1      | 7f793ebc8000-7f793ebc9000 r--p 00023000 08:01 407954                     /lib/x86_64-linux-gnu/ld-2.24.so
haproxy_1      | 7f793ebc9000-7f793ebca000 rw-p 00024000 08:01 407954                     /lib/x86_64-linux-gnu/ld-2.24.so
haproxy_1      | 7f793ebca000-7f793ebcb000 rw-p 00000000 00:00 0 
haproxy_1      | 7ffd89cfa000-7ffd89d1b000 rw-p 00000000 00:00 0                          [stack]
haproxy_1      | 7ffd89d2e000-7ffd89d30000 r--p 00000000 00:00 0                          [vvar]
haproxy_1      | 7ffd89d30000-7ffd89d32000 r-xp 00000000 00:00 0                          [vdso]
haproxy_1      | ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0                  [vsyscall]

uname - a:

root@a0ba2b3cb947:/# uname -a
Linux a0ba2b3cb947 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64 GNU/Linux

What's the configuration?

run from an image (name: haproxy-spec) with Dockerfile:

FROM haproxy:1.9

RUN apt-get update && apt-get install -y openssh-server socat
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile

#
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh

the modified docker-entrypoint.sh (added a line with service ssh restart):

#!/bin/sh
service ssh restart
set -e

# first arg is `-f` or `--some-option`
if [ "${1#-}" != "$1" ]; then
        set -- haproxy "$@"
fi

if [ "$1" = 'haproxy' ]; then
        shift # "haproxy"
        # if the user wants "haproxy", let's add a couple useful flags
        #   -W  -- "master-worker mode" (similar to the old "haproxy-systemd-wrapper"; allows for reload via "SIGUSR2")
        #   -db -- disables background mode
        set -- haproxy -W -db "$@"
fi

exec "$@"

config in docker-compose.yml:

  haproxy:
    image: haproxy-spec
    # image: dockercloud/haproxy
    restart: always
    ports:
      - "80:80"
      - "2222:22"
    volumes:
      #- /var/run/docker.sock:/var/run/docker.sock
      - ./haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg


haproxy configuration

global
    #log /dev/log local0
    #log /dev/log local1 notice
    #chroot /var/lib/haproxy
    #stats socket /run/haproxy/admin.sock mode 660 level admin
    #stats timeout 30s
    user root
    group root
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log global
    mode http
    #option httplog
    #option dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000

# frontend ssh-in
#     bind *:22
#     mode tcp
#     option tcplog
#     default_backend back_ssh
#
# backend back_ssh
#     mode tcp
#     option tcp-check
#     tcp-check expect string SSH-2.0-
#     server ssh_server haproxy:22 check

frontend http-in
    bind *:80
    default_backend servers

backend servers
        #server 'orion-eridanis.com' orion:1026 maxconn 32
        server 'idm-eridanis.com' keyrock:3000 maxconn 32

Steps to reproduce the behavior

  1. Try to connect via ssh to the container: ssh root@localhost:2222
  2. A password is requested, but the connection is refused each time, even with the right password (3-4 retries).
  3. In the end haproxy crashes with v1.9.4 and no connection is possible.
    Note: with 1.8 (1.8.18) there is no haproxy crash, but connecting via ssh is still not possible.

Actual behavior

Expected behavior:

1: no crash
2: to be able to connect through ssh (maybe the config is not OK?)

Do you have any idea what may have caused this?

Do you have an idea how to solve the issue?

dns resolver fails to resolve 1 of 2 servers in the same backend

Output of haproxy -vv and uname -a

HA-Proxy version 1.8.12-1~jessie 2018/06/27
Copyright 2000-2018 Willy Tarreau <[email protected]>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_SYSTEMD=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.1t  3 May 2016
Running on OpenSSL version : OpenSSL 1.0.2l  25 May 2017 (VERSIONS DIFFER!)
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.3
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.35 2014-04-04
Running on PCRE version : 8.35 2014-04-04
PCRE library supports JIT : yes
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
        [SPOE] spoe
        [COMP] compression
        [TRACE] trace
Linux haproxy33 3.16.0-6-amd64 #1 SMP Debian 3.16.56-1+deb8u1 (2018-05-08) x86_64 GNU/Linux

What's the configuration?

resolvers localdns
    nameserver local01 127.0.0.1:53
...
backend test_dns
    default-server init-addr none resolvers localdns
    server foo-bar-01-13653  foo-bar-01.domain:13653 check
    server  foo-bar-01-52393  foo-bar-01.domain:52393 check

Steps to reproduce the behavior

  1. Add 2 servers with the same fqdn to the single backend with run time dns resolver configured
  2. Restart haproxy
  3. Check whether both servers' FQDNs are resolved and available: show servers state

Actual behavior

One of the servers is in a failed state due to a resolution error (srv_op_state: 32):

$ socat haproxy.sock - <<< "show servers state" | grep foo-bar
10 test_dns 1 foo-bar-01-13653 10.10.10.10 2 0 1 1 685 15 3 4 6 0 0 0 foo-bar-01.domain 13653
10 test_dns 2 foo-bar-01-52393 - 0 32 1 1 685 1 0 0 14 0 0 0 foo-bar-01.domain 52393

Also, half of the DNS requests are marked as failed:

Resolvers section localdns
 nameserver local01:
  sent:        270
  snd_error:   0
  valid:       135
  update:      1
  cname:       0
  cname_error: 0
  any_err:     135
  nx:          0
  timeout:     0
  refused:     0
  other:       0
  invalid:     0
  too_big:     0
  truncated:   0
  outdated:    0

According to tcpdump, haproxy first makes an AAAA request, marks the response as failed (any_err), and only then falls back to an A request.

Expected behavior

Both declared servers' FQDNs are successfully resolved at runtime.

Do you have any idea what may have caused this?

Do you have an idea how to solve the issue?

keep-alive should be disabled when a frontend is full

Output of haproxy -vv and uname -a

all versions since at least 1.6

What's the configuration?

No particular configuration. HTTP with client-side keep-alive is needed.

Steps to reproduce the behavior

Establish enough HTTP connections to a frontend to reach its maxconn. Send one request and keep the connection alive.

Actual behavior

The frontend will not accept extra connections since it's saturated.

Expected behavior

The responses should be turned to close mode when coming from a saturated frontend to release slots as fast as possible.

Do you have any idea what may have caused this?

The main problem is that frontends don't reach the FULL state anymore. This was apparently caused by the per-listener maxconn limitations.

Do you have an idea how to solve the issue?

We need to make sure a frontend can reach the FULL state again when it cannot accept new connections anymore, and that this state is left once a connection is released. Then it will be easy to disable keep-alive when the frontend is full.

health checks with POST and data corrupted by extra "connection: close"

This entry was reported here https://www.mail-archive.com/[email protected]/msg28199.html

Output of haproxy -vv and uname -a

haproxy 1.7.8

What's the configuration?

option httpchk POST /myService/endpt HTTP/1.1\r\nContent-Type:\ application/json;charset=UTF-8\r\nContent-Length:\ 169\r\n\r\n{\"inputs\":[{\"id\":1,\"productType\":\"productType\",\"product Description\":\"productDescription\",\"metaDescription\":\"metaDescription\",\"metaTitle\":\"metaTitle\",\"rawxyz\":\"rawxyz\"}]}
http-check expect rstatus (2|3)[0-9][0-9]

Steps to reproduce the behavior

Actual behavior

"
After debugging with wireshark capture I came to know that, haproxy is adding \r\f"Connection: close"\r\f\r\f at the end of the post json data. From this manual, https://www.haproxy.org/download/1.7/doc/configuration.txt, I found that haproxy appends it if httpchk is combined with http-check
expect
. But it should be added to header fields and not after data.

This is causing packet parse failure, as it considering POST data as a part
of header and reporting extra CRLF in headers.
"

Expected behavior

"I would need the http-check expect block to verify error code, but then howwould I avoid adding Connection: close at the end.
"

Do you have any idea what may have caused this?

Limited design of the check subsystem and the abuses we're doing on it :-(

Do you have an idea how to solve the issue?

No simple solution. We do need this Connection: close to avoid timeouts or error reports on servers when we close. Ideally we should parse the produced request and insert the header before the resulting double CRLF. It might be the least risky change for the short term that could be backported.
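
A rough sketch of that idea (illustration only, not haproxy code): find the final CRLF CRLF that terminates the generated request's header block and splice the extra header in just before it, instead of appending it after the body.

#include <string.h>

/* Illustration only: insert a header line (e.g. "Connection: close\r\n")
 * right before the blank line that separates headers from body.
 * req must be NUL-terminated, len is its current length and size the
 * total buffer capacity. Returns 0 on success, -1 otherwise. */
static int insert_header_before_body(char *req, size_t len, size_t size,
                                     const char *hdr)
{
    char *crlf = strstr(req, "\r\n\r\n");   /* end of the header block */
    size_t hlen = strlen(hdr);

    if (!crlf || len + hlen >= size)
        return -1;

    /* shift the separator, the body and the trailing NUL to make room */
    memmove(crlf + hlen, crlf, len - (size_t)(crlf - req) + 1);
    memcpy(crlf, hdr, hlen);
    return 0;
}

Called as insert_header_before_body(req, len, size, "Connection: close\r\n"), the header then ends up inside the header block of the httpchk request rather than after the POST data.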

Add range iterator item variable for server-template and also general zero-padding converter

Please add a range iterator item variable for the server-template directive so that it can be expanded
in the <fqdn>:<port> part (see the example below). A new general zero-padding converter to pad
values with zeroes would also be splendid for server-template (therefore I aggregated both things into one
feature request)...

zeropad(<width>)
  Performs a zero-padding of preceding expression to the given <width>.

  Example:
    server-template s 3 "svc-%[rng.iteritem,zeropad(3)].domain.tld:80" check

    # would be equivalent to:
    server s1 svc-001.domain.tld:80 check
    server s2 svc-002.domain.tld:80 check
    server s3 svc-003.domain.tld:80 check

Thanks for your excellent work on HA-Proxy!

bind-process does not work with threads

With this configuration (haproxy 1.9.2 and a fresh dev branch),
bind-process with threads is accepted but does nothing: all threads are used by each frontend.

  nbthread 2

frontend one
   bind-process 1/1
   [...]
frontend two
   bind-process 1/2
   [...]

This configuration is also accepted without warning:

  nbthread 2

frontend one
   bind-process 1/45

Expected: either
. bind-process works with threads, or
. haproxy does not accept threads with bind-process (warning/error)

External check incurs high CPU usage (> 100%)

Output of haproxy -vv and uname -a

HA-Proxy version 1.7.8 2017/07/07
Copyright 2000-2017 Willy Tarreau <[email protected]>

Build options :
  TARGET  = linux26
  CPU     = x86_64
  CC      = gcc
  CFLAGS  = -fmessage-length=0 -grecord-gcc-switches -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
  OPTIONS = USE_LINUX_SPLICE=1 USE_TPROXY=1 USE_LINUX_TPROXY=1 USE_LIBCRYPT=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_CPU_AFFINITY=1 USE_ACCEPT4=1 USE_NETFILTER=1 USE_GETSOCKNAME=1 USE_OPENSSL=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_TFO=1 USE_PIE=1 USE_STACKPROTECTOR=1 USE_RELRO_NOW=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.1i-fips 6 Aug 2014
Running on OpenSSL version : OpenSSL 1.0.2j-fips  26 Sep 2016 (VERSIONS DIFFER!)
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.33 2013-05-28
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : yes
Built without Lua support
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
        [COMP] compression
        [TRACE] trace
        [SPOE] spoe

What's the configuration?

global
maxconn 2000000
daemon
nbproc 1
cpu-map 1 0
stats socket /var/run/haproxy.sock mode 777 level admin
external-check

defaults
maxconn 2000000
mode http
balance source
retries 3
timeout connect 10000
timeout client 45000
timeout server 60000
option dontlognull
option redispatch
option forwardfor 
option http-server-close

listen app-https
bind ... ssl ...
balance source
mode http
option httpclose
option forwardfor
# option tcp-check
option external-check
external-check path "/usr/bin:/bin"
external-check command /bin/true
server app1 ... backup check ssl ... fall 3 rise 5 inter 10000 weight 10
server app2 ... check ssl ... fall 3 rise 5 inter 10000 weight 10

Steps to reproduce the behavior

  1. We have 100+ servers spanning 70+ TCP/HTTP(S) groups specified with various ACLs, etc. running in production with a normal load average of 0.40.
  2. The moment we switch one of our HTTP (SSL) services from either httpchk or tcp-check to an external-check and reload, we immediately begin to see CPU load skyrocket to 2.2-3.2 with real-time peaks observed at 4.1+ at times.
  3. Reverting the check brings CPU load back down to normal averages

Actual behavior

We have a monitor script that auto-refreshes every 5 seconds, reporting uptime amongst other things.

  • At present, an external check which invokes /bin/true sits right at 2.0-2.4 load average.
  • A shell script that collects arguments and HAPROXY_ environment variables and does exit 0 was observed to be in the mid-3.0's load average range.

Expected behavior

I would expect external checks to exhibit similar performance to other checks, something with a < 1.0 cumulative load average.

Do you have any idea what may have caused this?

I'm still poring through haproxy's source code and commit history at this time, but I feel it's related to some sort of blocking. A separate visual monitor very consistently shows CPU spikes at the configured 10s check interval; CPU returns to near 0% utilization when the external check completes.

Do you have an idea how to solve the issue?

Several [unverified] possibilities:

  • Spreading health checks or increasing check frequency
  • Review how commands are being invoked and whether some other blocking is in effect
  • External checks appear to be forked to child PIDs; again, I'm still validating this. Whether they are or not, perhaps the forked PID could be taken out of the event multiplexing flow and allowed to run on its own. The command could be wrapped with some agent responsible for writing the exit code to a memory location (in-memory caching?) which haproxy could poll in a much lighter-weight manner to manage server states accordingly. Perhaps this is already implemented in the current event scheduling system. I only share it as a solution I can relate to, based on a separate JavaScript project (also single-threaded) where I needed basic "multi-threading" functionality.
  • The actual external check itself is a shell script which executes several curl commands. I may be interested in profiling the performance of a binary as an alternative to an interpreted shell script (example); a minimal sketch follows below.
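
As a minimal illustration of the last point, haproxy only looks at the external check's exit status (0 means the check passed), so even a trivial compiled binary (hypothetical sketch below) can stand in for the shell script when measuring the fork/exec overhead:

/* Hypothetical no-op external check: exit status 0 reports the server as
 * up, so this does the least possible work per check invocation. */
int main(void)
{
    return 0;
}

Compiled once (e.g. cc -O2 -o nullcheck nullcheck.c) and referenced via external-check command, it removes the shell interpreter and the curl invocations from the per-check cost.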

http-response add-headers doesn't work with the stats page

This issue was reported here : https://www.mail-archive.com/[email protected]/msg27447.html

Output of haproxy -vv and uname -a

haproxy 1.5.4
haproxy 1.5.19

What's the configuration?

frontend stats_proxy
        bind <server ip>:<port> ssl crt <certificate path> no-sslv3 no-tlsv1 ciphers <cipher>
        mode http
        default_backend stats_server
        rspadd Cache-Control:\ no-store,no-cache,private
        rspadd Pragma:\ no-cache
        rspadd Strict-Transport-Security:

backend stats_server
        mode http
        option httpclose
        option abortonclose
        stats enable
        stats refresh     60s
        stats hide-version

Steps to reproduce the behavior

access the stats page

Actual behavior

the headers are missing

Expected behavior

the headers should be present

Do you have any idea what may have caused this?

Lukas diagnosed that the behaviour changed in 1.5-dev with commit 70730dd ("MEDIUM: http: enable analysers to have keep-alive on stats"). The commit message mentions :

We ensure to skip filters because we don't want to unexpectedly
block a response nor to mangle response headers.

It's interesting to note that we do support HTTP compression on the stats response but not HTTP rulesets, precisely to avoid causing trouble, but here that choice has the opposite effect.

Do you have an idea how to solve the issue?

Maybe we'd need an option on the stats directive to indicate that there is some post-processing to be done on the response, so that we don't remove the response analysers. Or maybe we should always keep them, but we'd then face the usual problem of response rules written for a server possibly blocking a generated response.

A possibly reasonable approach would be to remove the rules for the current proxy: if the stats are in the frontend, frontend rules are not processed. If the stats are in the backend, backend rules are not processed but frontend rules are processed. This would seem fairly natural and it is what the reporter tried to do.

Http Code 100 problem with htx

Output of haproxy -vv and uname -a

$ docker exec lb-test haproxy -vv
HA-Proxy version 1.9.2 2019/01/16 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.0j  20 Nov 2018
Running on OpenSSL version : OpenSSL 1.1.0j  20 Nov 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.3
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE
              h2 : mode=HTTP       side=FE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
	[SPOE] spoe
	[COMP] compression
	[CACHE] cache
	[TRACE] trace


What's the configuration?

global
    stats timeout 30s
    daemon
    maxconn 65536

defaults
    log	global
    option http-use-htx
    option httplog
    option dontlognull
    option log-health-checks
    timeout connect 10s
    timeout client  60m
    timeout server  60m

    retries 2
    option redispatch
    maxconn 65536
    option http-keep-alive
    http-reuse always
    timeout client-fin 30s
    timeout http-keep-alive 10s


frontend test-1
    mode http
    option forwardfor

    log global
    bind *:80

    default_backend test-backend 

backend test-backend 
    mode http
    server echo_server postman-echo.com:80 check weight 100 maxconn 64 slowstart 1s inter 5000

listen  stats-1
    bind  *:7654
    mode http
    log global 
    stats enable
    stats uri /
    stats refresh 5s
    bind-process 1

Steps to reproduce the behavior

Send an HTTP request with Expect: 100-continue, e.g.:

curl -v -L --request POST "http://127.0.0.1/post" -d @file_with_more_than_1000characters

or

curl -v -L -H "Expect: 100-continue" --request POST "http://127.0.0.1/post" -d "Test"

To run haproxy in docker:

code_100_test.zip
extract zip

cd code_100_test
docker build -t haproxy .
docker run --rm --network host --name lb-test haproxy

To compare with 1.8, change the working directory to 1_8.

Actual behavior

Haproxy sends the 100 response with a transfer-encoding: chunked header.

Result from curl:

*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> POST /post HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.47.0
> Accept: */*
> Expect: 100-continue
> Content-Length: 4
> Content-Type: application/x-www-form-urlencoded
> 
< HTTP/1.1 100 Continue
< transfer-encoding: chunked
* Illegal or missing hexadecimal sequence in chunked-encoding
* Closing connection 0
curl: (56) Illegal or missing hexadecimal sequence in chunked-encoding

Tcpdump:

15:34:32.215728 IP localhost.57088 > localhost.http: Flags [S]
15:34:32.215744 IP localhost.http > localhost.57088: Flags [S.]
15:34:32.215757 IP localhost.57088 > localhost.http: Flags [.]
15:34:32.215810 IP localhost.57088 > localhost.http: Flags [P.]
POST /post HTTP/1.1
Host: 127.0.0.1
User-Agent: curl/7.47.0
Accept: */*
Expect: 100-continue
Content-Length: 4
Content-Type: application/x-www-form-urlencoded


15:34:32.417391 IP localhost.http > localhost.57088: Flags [P.]HTTP/1.1 100 Continue
transfer-encoding: chunked

0


15:34:32.417405 IP localhost.57088 > localhost.http: Flags [.]
15:34:32.417545 IP localhost.57088 > localhost.http: Flags [F.]
15:34:32.417682 IP localhost.http > localhost.57088: Flags [P.]
HTTP/1.0 400 Bad request
cache-control: no-cache
content-type: text/html

<html><body><h1>400 Bad request</h1>
Your browser sent an invalid request.
</body></html>

15:34:32.417717 IP localhost.57088 > localhost.http: Flags [R]

Expected behavior

  1. curl send req
  2. haproxy send req to server
  3. haproxy recv response from server (100)
  4. haproxy send resp to client (100)
  5. curl send req body
  6. haproxy send req body to server
  7. haproxy recv response from server (200)
  8. haproxy send response to client(200)

Result from curl (for haproxy 1.8)

*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> POST /post HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.47.0
> Accept: */*
> Expect: 100-continue
> Content-Length: 4
> Content-Type: application/x-www-form-urlencoded
> 
< HTTP/1.1 100 Continue
* We are completely uploaded and fine
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=utf-8
< Date: Wed, 30 Jan 2019 14:47:47 GMT
< ETag: W/"12a-msUD0LWLNBNDs5uc/Ir6S5Zxlcs"
< Server: nginx
< set-cookie: sails.sid=s%3Ahfts6SdufXP0gZvu4Y0PM1aGjB2UhuHg.X9sH64LDXaDy0NjdSXgjDhrYJPWmu8KlSKMRytVp54M; Path=/; HttpOnly
< Vary: Accept-Encoding
< Content-Length: 298
< 
* Connection #0 to host 127.0.0.1 left intact
{"args":{},"data":"","files":{},"form":{"Test":""},"headers":{"x-forwarded-proto":"https","host":"127.0.0.1","content-length":"4","accept":"*/*","content-type":"application/x-www-form-urlencoded","user-agent":"curl/7.47.0","x-forwarded-port":"80"},"json":{"Test":""},"url":"https://127.0.0.1/post"}

Do you have any idea what may have caused this?

Do you have an idea how to solve the issue?

backend: study how to redispatch to backends on last active death

When reading pendconn_redistribute(), it appears that if the last active server dies with its maxconn saturated and connections are still pending in the backend's queue, there is no way to serve these connections. The problem is that there's no other available server yet at the moment the issue is detected. Ideally when backup servers are inserted into the farm, an equivalent of srv_up() should be called to pick pending connections. But this will not be ideal when use-all-backups is enabled because each server would take up to its maxconn, without considering the other server. An alternative solution could involve a redispatch for all pending connections in the queue in this case (and possibly only in this case). This will be performed once all backup servers will have been assigned to the farm, indicating that all active servers are dead, and the farm will be populated. This would also properly respect the LB algorithm. Some patches were merged last year to address closely related issues and might be extended for this :
6a78e61
a869465

proto/connection.h: conn_(sock|xprt)_polling_changes: functions mismatch documentation, possible bug

In file include/proto/connection.h, functions conn_sock_polling_changes and conn_xprt_polling_changes:
the comments for these functions state:

Additionally non-zero is also returned if an error was reported on the connection.

But the code is the following:

f &= CO_FL_XPRT_WR_ENA | CO_FL_XPRT_RD_ENA | CO_FL_CURR_WR_ENA |
     CO_FL_CURR_RD_ENA | CO_FL_ERROR;

f = (f ^ (f << 1)) & (CO_FL_CURR_WR_ENA|CO_FL_CURR_RD_ENA);    /* test C ^ D */
return f & (CO_FL_CURR_WR_ENA | CO_FL_CURR_RD_ENA | CO_FL_ERROR);

In the line with the XOR (^), CO_FL_ERROR is definitely cleared (only CO_FL_CURR_WR_ENA and CO_FL_CURR_RD_ENA are kept). So the next statement (f & ...) will not be non-zero if only CO_FL_ERROR is set (without WR|RD_ENA). The functions therefore do not match their documentation. Is this OK, or is it a bug?
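
A small stand-alone model of that masking (made-up flag values, not the haproxy code) makes the point concrete: an input carrying only CO_FL_ERROR comes back as zero.

#include <stdio.h>

/* Made-up flag values for illustration; only the masking logic matters. */
#define CO_FL_CURR_RD_ENA  0x0001u
#define CO_FL_CURR_WR_ENA  0x0002u
#define CO_FL_XPRT_RD_ENA  0x0100u
#define CO_FL_XPRT_WR_ENA  0x0200u
#define CO_FL_ERROR        0x1000u

static unsigned int polling_changes(unsigned int flags)
{
    unsigned int f = flags & (CO_FL_XPRT_WR_ENA | CO_FL_XPRT_RD_ENA |
                              CO_FL_CURR_WR_ENA | CO_FL_CURR_RD_ENA |
                              CO_FL_ERROR);

    /* After this line f can only contain CURR_* bits, so the CO_FL_ERROR
     * term in the final mask below can never be returned on its own. */
    f = (f ^ (f << 1)) & (CO_FL_CURR_WR_ENA | CO_FL_CURR_RD_ENA);
    return f & (CO_FL_CURR_WR_ENA | CO_FL_CURR_RD_ENA | CO_FL_ERROR);
}

int main(void)
{
    /* Prints 0: an error alone does not yield a non-zero return value,
     * contrary to what the function's comment announces. */
    printf("%u\n", polling_changes(CO_FL_ERROR));
    return 0;
}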

Socket bind name as sample / log format option

Output of haproxy -vv and uname -a

HA-Proxy version 1.8.14-52e4d43 2018/09/20
Copyright 2000-2018 Willy Tarreau <[email protected]>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -fno-strict-overflow -Wno-null-dereference -Wno-unused-label
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.0f  25 May 2017
Running on OpenSSL version : OpenSSL 1.1.0f  25 May 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.3
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
        [SPOE] spoe
        [COMP] compression
        [TRACE] trace

Linux haproxy 3.10.102 #15284 SMP Thu Jun 21 00:42:18 CST 2018 x86_64 GNU/Linux

What should haproxy do differently? Which functionality do you think we should add?

I'd like to use the socket bind name in custom log formats.
Currently no log format options include socket information. The hard-coded error logs do however include the socket name if defined.

There is the so_id sample, which allows discriminating on the binding socket, but that's an integer and not very readable in logs.
The only workaround I've found is to conditionally set a session variable based on the so_id sample value.

E.g.:

        tcp-request content set-var(sess.source_type) str(wan) if { so_id 1:2 }
        tcp-request content set-var(sess.source_type) str(lan) if { so_id 3:4 }

What are you trying to do?

There should be a way to access the socket bind name as a log format variable and/or as a sample.

E.g. %fsn for frontend_socket_name in log formats, or a so_name sample that could then be accessed as %[so_name]:

    bind "${WAN_IP}:80" name wan
    bind "${WAN_IPv6}:80" name wan
    bind "${LAN_IP}:80" name lan
    bind "${LAN_IPv6}:80" name lan

    log-format "%ci:%cp [%t] %f/%fsn %b/%s %Tw/%Tc/%Tt %B %ts \
                %ac/%fc/%bc/%sc/%rc %sq/%bq"

    # or

    log-format "%ci:%cp [%t] %f/%[so_name] %b/%s %Tw/%Tc/%Tt %B %ts \
                %ac/%fc/%bc/%sc/%rc %sq/%bq"

Discourse topic where I suggested the feature.

cache: do not list entries with negative expiration date

This is another old report with little context information. It was noticed right after 1.8 was released that issuing "show cache" on the CLI would show entries with a negative expiration date. This is probably still true on 1.9 and above, though it was not verified.

What happens is that expired objects are not actively removed; they're just not matched anymore, and since they are the oldest, their blocks will be reused, so this is not a problem. However the output looks a bit dirty, and it would be better not to list them: at the very least by not showing them in the dump, and preferably by periodically scanning for outdated entries to evict them. This would also help report more accurate statistics on the suitable cache size.

htx_manage_server_side_cookies: is_cookie2 is uninitialised

Output of haproxy -vv and uname -a

HA-Proxy version 2.0-dev0-8861e1-296 2019/02/12 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O0 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_OPENSSL=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64).
Built with OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built without compression support (neither USE_ZLIB nor USE_SLZ are set).
Compression algorithms supported : identity("identity")
Built without PCRE or PCRE2 support (using libc's regex instead)
Encrypted password support via crypt(3): yes

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTTP       side=FE
              h2 : mode=HTX        side=FE|BE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
	[SPOE] spoe
	[COMP] compression
	[CACHE] cache
	[TRACE] trace

What's the configuration?

defaults
	option http-use-htx

listen http
	mode http
	bind *:8080

	cookie foo insert

	# Pretend example.com sends a `Set-Cookie` header
	http-request set-header Host example.com
	server example1 example.com:443 ssl verify none sni req.hdr(host) cookie bar
	server example2 example.com:443 ssl verify none sni req.hdr(host) cookie baz

Steps to reproduce the behavior

Apply the following patch:

diff --git i/src/proto_htx.c w/src/proto_htx.c
index 9e285f21..9eb7b911 100644
--- i/src/proto_htx.c
+++ w/src/proto_htx.c
@@ -4288,7 +4288,7 @@ static void htx_manage_client_side_cookies(struct stream *s, struct channel *req
                }
        } /* for each "Cookie header */
 }
-
+#include<assert.h>
 /*
  * Manage server-side cookies. It can impact performance by about 2% so it is
  * desirable to call it only when needed. This function is also used when we
@@ -4309,6 +4309,7 @@ static void htx_manage_server_side_cookies(struct stream *s, struct channel *res
 
        ctx.blk = NULL;
        while (1) {
+               assert(is_cookie2 == 1);
                if (!http_find_header(htx, ist("Set-Cookie"), &ctx, 1)) {
                        if (!http_find_header(htx, ist("Set-Cookie2"), &ctx, 1))
                                break;

Actual behavior

The assert passes.

Expected behavior

The assert does not pass.

Do you have any idea what may have caused this?

I suspect the following:

gcc detects that is_cookie2 is only ever assigned the value 1 and therefore pretends the uninitialized value is always 1, optimizing the check away. With -O0 the assert fails as expected.
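
For illustration, a self-contained program (hypothetical, not HAProxy code) showing the same effect: with optimization the compiler may propagate the single assignment into the read of the uninitialized variable.

    /* uninit.c -- standalone illustration (hypothetical, not HAProxy code).
     * Reading "flag" without initializing it is undefined behaviour, so the
     * observable result can differ between optimization levels:
     *   gcc -O0 uninit.c && ./a.out   -> the assert will usually fire
     *   gcc -O2 uninit.c && ./a.out   -> the assert may "pass", the compiler
     *                                    assuming flag can only ever be 1
     */
    #include <assert.h>
    #include <stdlib.h>

    int main(void)
    {
            int flag;                 /* never initialized, like is_cookie2 */

            if (rand() == -1)         /* never true, but not provably so */
                    flag = 1;         /* the only assignment the compiler sees */

            assert(flag == 1);
            return 0;
    }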

Do you have an idea how to solve the issue?

Initialize is_cookie2 to 0. But then there is another issue in the function (I did not verify, this is deduced from reading the code):

The Set-Cookie2 header is only ever looked at when no Set-Cookie headers can be found any more. Thus in the following list of headers HAProxy fails to check the Set-Cookie2 header:

HTTP/1.1 200 OK
Set-Cookie2: …
…
Set-Cookie: …

because the ctx has already been advanced past the Set-Cookie2 header when finding the Set-Cookie header.

(if you'd ask me I'd just remove support for Set-Cookie2 in HTX 😉)

SSL Read Failed

SSL read failed (5) - closing connection
This happens while HAProxy processes about 3000 requests per second.
I don't see any errors in the logs. I use:

config-frontend: |
option dontlog-normal

haproxy -vv
HA-Proxy version 1.8.14-52e4d43 2018/09/20
Copyright 2000-2018 Willy Tarreau <[email protected]>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -fno-strict-overflow -Wno-null-dereference -Wno-unused-label
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2o  27 Mar 2018
Running on OpenSSL version : OpenSSL 1.0.2o  27 Mar 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.4
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.42 2018-03-20
Running on PCRE version : 8.42 2018-03-20
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
	[SPOE] spoe
	[COMP] compression
	[TRACE] trace

 uname -a
Linux hb-router01 4.4.0-131-generic #157-Ubuntu SMP Thu Jul 12 15:51:36 UTC 2018 x86_64 Linux


What's the configuration?

global
    daemon
    nbthread 4
    cpu-map auto:1/1-4 0-3
    stats socket /var/run/haproxy-stats.sock level admin expose-fd listeners
    maxconn 2000
    hard-stop-after 300s
    log localhost:514 format rfc5424 local0
    log-tag ingress
    lua-load /usr/local/etc/haproxy/lua/send-response.lua
    lua-load /usr/local/etc/haproxy/lua/auth-request.lua
    tune.ssl.default-dh-param 2048
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
    ssl-default-bind-options no-sslv3 no-tls-tickets

defaults
    log global
    maxconn 2000
    option redispatch
    option dontlognull
    option http-server-close
    option http-keep-alive
    timeout http-request    300s
    timeout connect         30s
    timeout client          300s
    timeout client-fin      50s
    timeout queue           300s
    timeout server          50s
    timeout server-fin      50s
    timeout tunnel          1h
    timeout http-keep-alive 2m

######
###### Backends
######
backend default-cpronginx-svc-443
    mode tcp
    balance roundrobin
    server 10.244.4.213:443 10.244.4.213:443 weight 1 maxconn 65536 check inter 30s
    server 10.244.6.227:443 10.244.6.227:443 weight 1 maxconn 65536 check inter 30s
    server 10.244.7.217:443 10.244.7.217:443 weight 1 maxconn 65536 check inter 30s
backend upstream-default-backend
    mode http
    balance roundrobin
    server 10.244.4.214:8080 10.244.4.214:8080 weight 1 check inter 30s
    server 10.244.7.218:8080 10.244.7.218:8080 weight 1 check inter 30s

######
###### HTTPS frontend (tcp mode)
######
frontend httpsfront
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    use_backend default-cpronginx-svc-443 if { req.ssl_sni -i meteotravel.ru }
    default_backend httpsback-shared-backend

######
###### HTTP(S) frontend - shared http mode
######
backend httpsback-shared-backend
    mode tcp
    server shared-https-frontend unix@/var/run/haproxy-https.sock send-proxy-v2
backend httpback-shared-backend
    mode http
    server shared-http-frontend unix@/var/run/haproxy-http.sock send-proxy-v2
frontend httpfront-shared-frontend
    bind *:80
    bind unix@/var/run/haproxy-http.sock accept-proxy
    bind unix@/var/run/haproxy-https.sock ssl alpn h2,http/1.1 crt /ingress-controller/ssl/shared-frontend/ accept-proxy
    mode http
    option httplog
    acl from-https ssl_fc
    acl ssl-offload ssl_fc
    acl host-meteotravel.ru var(txn.hdr_host) -i meteotravel.ru meteotravel.ru:80 meteotravel.ru:443
    stick-table type ip size 200k expire 5m store conn_cur,conn_rate(1s)
    tcp-request content track-sc1 src
    option dontlog-normal
    http-request set-var(txn.hdr_host) req.hdr(host)
    http-request set-var(txn.hdr_proto) hdr(x-forwarded-proto)
    http-request set-header X-Forwarded-Proto https if from-https
    http-request del-header X-SSL-Client-Cert  if ssl-offload
    http-request del-header X-SSL-Client-SHA1  if ssl-offload
    http-request del-header X-SSL-Client-DN    if ssl-offload
    http-request del-header X-SSL-Client-CN    if ssl-offload
    http-request set-var(txn.path) path
    http-request del-header x-forwarded-for
    option forwardfor
    redirect scheme https if !from-https host-meteotravel.ru
    default_backend httpback-default-backend

######
###### HTTP frontend - default backend
######
backend httpback-default-backend
    mode http
    server shared-http-frontend unix@/var/run/haproxy-http-default-backend.sock send-proxy-v2
frontend httpfront-default-backend
    bind unix@/var/run/haproxy-http-default-backend.sock accept-proxy
    mode http
    option httplog
    acl from-https var(txn.hdr_proto) https
    option dontlog-normal
    http-request set-var(txn.hdr_host) req.hdr(host)
    http-request set-var(txn.hdr_proto) hdr(x-forwarded-proto)
    http-request set-header X-Forwarded-Proto https if from-https
    http-request set-var(txn.path) path
    http-request del-header x-forwarded-for
    option forwardfor
    use_backend error413 if  { var(txn.path) -m beg / } { req.body_size gt 104857600 }
    http-response set-header Strict-Transport-Security "max-age=15768000" if from-https 
    default_backend upstream-default-backend

######
###### Error pages
######
backend error413
    mode http
    errorfile 400 /usr/local/etc/haproxy/errors/413.http
    http-request deny deny_status 400
backend error495
    mode http
    errorfile 400 /usr/local/etc/haproxy/errors/495.http
    http-request deny deny_status 400
backend error496
    mode http
    errorfile 400 /usr/local/etc/haproxy/errors/496.http
    http-request deny deny_status 400
listen error503noendpoints
    bind 127.0.0.1:8181
    mode http
    errorfile 503 /usr/local/etc/haproxy/errors/503noendpoints.http

######
###### Stats page
######
listen stats
    bind *:1936
    mode http
    stats enable
    stats realm HAProxy\ Statistics
    stats uri /
    no log
    option forceclose
    stats show-legends

######
###### Monitor URI
######
frontend healthz
    bind *:10253
    mode http
    monitor-uri /healthz
    no log

Steps to reproduce the behavior

  1. Use in Kubernetes.
  2. I use HAProxy in TCP mode with this package as the ingress: https://github.com/jcmoraisjr/haproxy-ingress
  3. Backends are nginx servers with mutual TLS auth.
  4. Run ab -n 100000 -s 1000 -c 1000 -E client/rsa/clientB-key-crt.pem https://meteotravel.ru/ in parallel with 3 threads.

Actual behavior

I get many errors:
SSL read failed (5) - closing connection
SSL read failed (5) - closing connection
SSL read failed (5) - closing connection
SSL read failed (5) - closing connection

Also errors such as:
SSL handshake failed (5)

I found the same issues here:
https://bugzilla.redhat.com/show_bug.cgi?id=1514266
https://marc.info/?l=haproxy&m=129783620218870&w=2
I use nbthread 4 and nbproc 4, but I see no hardware overload during the test.

Expected behavior

It should work without errors, like a single backend (nginx) does under the same load.

Do you have any idea what may have caused this?

No

Do you have an idea how to solve the issue?

No

1.8 and earlier: bo_putblk() miscounts the buffer's position

This is an already diagnosed bug. It only affects 1.8 and earlier and apparently has no impact there, although a fix backported later could start using the function in a way that exposes it.

The function bo_putblk() does this at the end :

        memcpy(b->p, blk, half);
        b->p = b_ptr(b, half);
        if (len > half) {
                memcpy(b->p, blk + half, len - half);
                b->p = b_ptr(b, half);
        }

This results in half (the wrapping point) being counted twice. The second assignment should have been:

b->p = b_ptr(b, len - half)

Only the health checks use this function at the moment, and those do not wrap, so they are safe. 1.9 and later are not affected thanks to the buffer API changes. It is unknown whether versions prior to 1.8 use this function in a more sensitive context.
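
With the correction described above applied, the end of the function would read (sketch of the already-known fix, not a tested patch):

        memcpy(b->p, blk, half);
        b->p = b_ptr(b, half);
        if (len > half) {
                memcpy(b->p, blk + half, len - half);
                b->p = b_ptr(b, len - half);   /* advance by the remainder, not by half again */
        }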

1.9.2: when reloading: free(): invalid pointer

Output of haproxy -vv and uname -a

HA-Proxy version 1.9.2 2019/01/16 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.0j  20 Nov 2018
Running on OpenSSL version : OpenSSL 1.1.0j  20 Nov 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.3
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE
              h2 : mode=HTTP       side=FE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
	[SPOE] spoe
	[COMP] compression
	[CACHE] cache
	[TRACE] trace

What's the configuration?

The configuration is unknown at this point.

Steps to reproduce the behavior

Not reproducible at this point.

Details

There is a report of haproxy 1.9.2 crashing while reloading or shortly thereafter with free(): invalid pointer:

https://discourse.haproxy.org/t/haproxy-v1-9-2-crash-during-reload-or-shortly-thereafter/3454

Master-worker is used (via a custom script), but not systemd. The crash happened under load and does not seem to be reproducible in a test environment. The user will retry with 1.9.4 once released, and with ulimit -c unlimited so that we get a core next time.

Filing an issue here for tracking purposes.

last server is not being removed from backend after consul service is deleted

Output of haproxy -vv and uname -a

$ uname -a
Linux ws02 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ ./haproxy -vv
HA-Proxy version 2.0-dev0-b8e602-315 2019/02/22 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_OPENSSL=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64).
Built with OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-fips  26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built without compression support (neither USE_ZLIB nor USE_SLZ are set).
Compression algorithms supported : identity("identity")
Built without PCRE or PCRE2 support (using libc's regex instead)
Encrypted password support via crypt(3): yes

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE
              h2 : mode=HTTP       side=FE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
        [SPOE] spoe
        [COMP] compression
        [CACHE] cache
        [TRACE] trace

What's the configuration?

global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/run/haproxy.stats
    ssl-default-bind-ciphers PROFILE=SYSTEM
    ssl-default-server-ciphers PROFILE=SYSTEM
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
listen stats
    bind :9000
    mode http
    stats enable
    stats admin if TRUE
    stats refresh 1s
resolvers consul
    nameserver consul 127.0.0.1:8600
frontend web-frontend
    bind *:8000
    default_backend web-backend
backend web-backend
    mode http
    balance leastconn
    option httpchk GET / HTTP/1.1\r\nHost:\ haproxy
    server-template web-backend- 8 _web-service._tcp.service.consul check resolvers consul

Steps to reproduce the behavior

Start haproxy with no servers available:

[WARNING] 054/024235 (18393) : Server web-backend/web-backend-1 is DOWN, reason: Socket error, check duration: 0ms. 7 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 054/024236 (18393) : Server web-backend/web-backend-2 is DOWN, reason: Socket error, check duration: 0ms. 6 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 054/024236 (18393) : Server web-backend/web-backend-3 is DOWN, reason: Socket error, check duration: 0ms. 5 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 054/024236 (18393) : Server web-backend/web-backend-4 is DOWN, reason: Socket error, check duration: 0ms. 4 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 054/024236 (18393) : Server web-backend/web-backend-5 is DOWN, reason: Socket error, check duration: 0ms. 3 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 054/024237 (18393) : Server web-backend/web-backend-6 is DOWN, reason: Socket error, check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 054/024237 (18393) : Server web-backend/web-backend-7 is DOWN, reason: Socket error, check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 054/024237 (18393) : Server web-backend/web-backend-8 is DOWN, reason: Socket error, check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 054/024237 (18393) : backend 'web-backend' has no server available!

Register a service:

$ consul services register -name=web-service -port=80
Registered service: web-service
$ dig +short -t SRV -p 8600 _web-service._tcp.service.consul @127.0.0.1
1 1 80 ws02.node.lab.consul.
[WARNING] 054/024416 (18393) : web-backend/web-backend-1 changed its IP from  to 192.168.90.34 by consul/consul.
[WARNING] 054/024419 (18393) : Server web-backend/web-backend-1 is UP, reason: Layer7 check passed, code: 200, info: "OK", check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.

Deregister a service:

$ consul services deregister -id=web-service
Deregistered service: web-service
$ dig +short -t SRV -p 8600 _web-service._tcp.service.consul @127.0.0.1
$

Actual behavior

Nothing happens. Backend stays in UP state.

Expected behavior

Backend should be declared DOWN

Do you have any idea what may have caused this?

Are empty DNS responses handled poorly?

Do you have an idea how to solve the issue?

No

Github releases

First off thanks for developing haproxy!

Since the 1.9 releases, the releases on GitHub are no longer synced with the releases on the website. Could this be fixed? I use the new GitHub watch feature to be notified only of new releases so that I can update my Docker images accordingly.

Call to memcpy with overlapping buffers

Output of haproxy -vv and uname -a

Linux *snip* 4.4.0-141-generic #167-Ubuntu SMP Wed Dec 5 10:40:15 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
HA-Proxy version 2.0-dev0-afe578-159 2019/01/23 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits
  OPTIONS = 

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built without compression support (neither USE_ZLIB nor USE_SLZ are set).
Compression algorithms supported : identity("identity")
Built without PCRE or PCRE2 support (using libc's regex instead)
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE
              h2 : mode=HTTP       side=FE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
	[SPOE] spoe
	[COMP] compression
	[CACHE] cache
	[TRACE] trace

What's the configuration?

defaults
	mode http
	option http-use-htx

frontend http-frontend
	bind 0.0.0.0:8080
	default_backend default

backend default
	server default example.com:80

Steps to reproduce the behavior

  1. valgrind ./haproxy -f ./haproxy.cfg
  2. curl -I localhost:8080

Actual behavior

Valgrind complains:

==9008== Source and destination overlap in memcpy(0x58b7328, 0x58b7328, 79)
==9008==    at 0x4C32513: memcpy@@GLIBC_2.14 (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9008==    by 0x49CE2C: memcpy (string3.h:53)
==9008==    by 0x49CE2C: __b_putblk (buf.h:504)
==9008==    by 0x49CE2C: b_putblk (buf.h:520)
==9008==    by 0x49CE2C: h1_process_output (mux_h1.c:1664)
==9008==    by 0x49CE2C: h1_snd_buf (mux_h1.c:2251)
==9008==    by 0x4CD27A: si_cs_send (stream_interface.c:686)
==9008==    by 0x4CD875: si_cs_process (stream_interface.c:573)
==9008==    by 0x4EAEF8: conn_fd_handler (connection.c:190)
==9008==    by 0x4FA1FC: fdlist_process_cached_events (fd.c:441)
==9008==    by 0x4FA1FC: fd_process_cached_events (fd.c:459)
==9008==    by 0x47B92D: run_poll_loop (haproxy.c:2656)
==9008==    by 0x47B92D: run_thread_poll_loop (haproxy.c:2685)
==9008==    by 0x404FD1: main (haproxy.c:3314)
==9008== 
==9008== Source and destination overlap in memcpy(0x58b7328, 0x58b7328, 133)
==9008==    at 0x4C32513: memcpy@@GLIBC_2.14 (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9008==    by 0x49CE2C: memcpy (string3.h:53)
==9008==    by 0x49CE2C: __b_putblk (buf.h:504)
==9008==    by 0x49CE2C: b_putblk (buf.h:520)
==9008==    by 0x49CE2C: h1_process_output (mux_h1.c:1664)
==9008==    by 0x49CE2C: h1_snd_buf (mux_h1.c:2251)
==9008==    by 0x4CD27A: si_cs_send (stream_interface.c:686)
==9008==    by 0x4CDD59: si_update_both (stream_interface.c:845)
==9008==    by 0x43B541: process_stream (stream.c:2502)
==9008==    by 0x4FE578: process_runnable_tasks (task.c:432)
==9008==    by 0x47B943: run_poll_loop (haproxy.c:2620)
==9008==    by 0x47B943: run_thread_poll_loop (haproxy.c:2685)
==9008==    by 0x404FD1: main (haproxy.c:3314)
==9008== 

Expected behavior

Valgrind does not complain

Do you have any idea what may have caused this?

The caller does not check whether const char * blk is part of struct buffer * b. This only happens with HTX.

Do you have an idea how to solve the issue?

Either make all the callers check this, or use memmove, as sketched below.
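
A minimal sketch of the memmove variant (hypothetical helper, not the actual HAProxy code): memmove() is defined for overlapping regions, so the copy stays valid even when blk points inside the destination buffer.

    #include <string.h>

    /* Hypothetical helper sketching the suggestion above: same job as the
     * memcpy() performed in __b_putblk(), but memmove() is defined for
     * overlapping source and destination, so the copy stays valid when the
     * source block lives inside the output buffer. */
    static void copy_block(char *dst, const char *src, size_t len)
    {
            if (dst != src)           /* a fully overlapping copy is a no-op */
                    memmove(dst, src, len);
    }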

http-response set-header does not work if http-request redirect applies

Output of haproxy -vv and uname -a

Linux *snip* 4.4.0-141-generic #167-Ubuntu SMP Wed Dec 5 10:40:15 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
HA-Proxy version 2.0-dev0-2f167b-150 2019/01/18 - https://haproxy.org/
Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_ZLIB=1 USE_THREAD=1 USE_OPENSSL=1 USE_SYSTEMD=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built without PCRE or PCRE2 support (using libc's regex instead)
Encrypted password support via crypt(3): yes
Built with multi-threading support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE
              h2 : mode=HTTP       side=FE
       <default> : mode=HTX        side=FE|BE
       <default> : mode=TCP|HTTP   side=FE|BE

Available filters :
	[SPOE] spoe
	[COMP] compression
	[CACHE] cache
	[TRACE] trace

What should haproxy do differently? Which functionality do you think we should add?

haproxy should respect http-response set-header rules (which add custom HTTP response headers) even when the response is generated by http-request redirect.

What are you trying to do?

I am redirecting the bare domain example.com to a different subdomain, test.example.com, but I need to add the Strict-Transport-Security header to that redirect in order to add my domain to the HSTS preload list.

See also: https://www.mail-archive.com/[email protected]/msg29294.html
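
For reference, a minimal configuration sketch of the intent (the frontend name and certificate path are made up for the example; whether the header ends up on the internally generated redirect is exactly what this request is about):

    frontend fe_https
        bind *:443 ssl crt /etc/haproxy/example.pem
        mode http
        http-request redirect prefix https://test.example.com code 301 if { hdr(host) -i example.com }
        # expected to also apply to the 301 generated by the redirect above:
        http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"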

Server goes UP without tcp-check if it resolves again

Hi all, I have a problem with an haproxy instance (1.9.4) in front of a Redis cluster (3 nodes), all inside k8s.

I configured haproxy for a tcp-check like this:

backend bk_redis
  option tcp-check
  tcp-check send AUTH\ RedisTest\r\n
  tcp-check expect string +OK
  tcp-check send PING\r\n
  tcp-check expect string +PONG
  tcp-check send info\ replication\r\n
  tcp-check expect string role:master
  tcp-check send QUIT\r\n
  tcp-check expect string +OK
  default-server  check resolvers kubedns inter 1s downinter 1s fastinter 1s fall 1 rise 30 maxconn 330 no-agent-check on-error mark-down
  server redis-0 redis-ha-server-0.redis-ha.redis-ha.svc.cluster.local:6379
  server redis-1 redis-ha-server-1.redis-ha.redis-ha.svc.cluster.local:6379
  server redis-2 redis-ha-server-2.redis-ha.redis-ha.svc.cluster.local:6379

When the master node goes down everything works fine: a replica is promoted to master and haproxy redirects the traffic to it.
The problem is when the old master comes back with a new IP: haproxy doesn't check for the master role again, but instead immediately marks the old node as UP.

This is the log:

[NOTICE] 058/125637 (1) : New worker #1 (6) forked
[WARNING] 058/125637 (6) : Health check for server bk_redis/redis-0 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 0ms, status: 1/1 UP.
[WARNING] 058/125639 (6) : Health check for server bk_redis/redis-1 failed, reason: Layer7 timeout, info: " at step 6 of tcp-check (expect string 'role:master')", check duration: 1001ms, status: 0/30 DOWN.
[WARNING] 058/125639 (6) : Server bk_redis/redis-1 is DOWN. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 058/125639 (6) : Health check for server bk_redis/redis-2 failed, reason: Layer7 timeout, info: " at step 6 of tcp-check (expect string 'role:master')", check duration: 1001ms, status: 0/30 DOWN.
[WARNING] 058/125639 (6) : Server bk_redis/redis-2 is DOWN. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 058/125657 (6) : Health check for server bk_redis/redis-0 failed, reason: Layer4 timeout, info: " at step 1 of tcp-check (send)", check duration: 1001ms, status: 0/30 DOWN.
[WARNING] 058/125657 (6) : Server bk_redis/redis-0 is DOWN. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 058/125657 (6) : backend 'bk_redis' has no server available!
[WARNING] 058/125706 (6) : Health check for server bk_redis/redis-2 failed, reason: Layer7 invalid response, info: "TCPCHK did not match content 'role:master' at step 6", check duration: 532ms, status: 0/30 DOWN.
[WARNING] 058/125706 (6) : Health check for server bk_redis/redis-1 failed, reason: Layer7 invalid response, info: "TCPCHK did not match content 'role:master' at step 6", check duration: 835ms, status: 0/30 DOWN.
[WARNING] 058/125707 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 1/30 DOWN.
[WARNING] 058/125708 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 2/30 DOWN.
[WARNING] 058/125708 (6) : Health check for server bk_redis/redis-1 failed, reason: Layer7 timeout, info: " at step 6 of tcp-check (expect string 'role:master')", check duration: 1001ms, status: 0/30 DOWN.
[WARNING] 058/125709 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 4ms, status: 3/30 DOWN.
[WARNING] 058/125710 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 4ms, status: 4/30 DOWN.
[WARNING] 058/125711 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 5/30 DOWN.
[WARNING] 058/125712 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 6/30 DOWN.
[WARNING] 058/125713 (6) : Server bk_redis/redis-0 was DOWN and now enters maintenance (DNS NX status).
[WARNING] 058/125713 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 7/30 DOWN.
[WARNING] 058/125714 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 8/30 DOWN.
[WARNING] 058/125715 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 9/30 DOWN.
[WARNING] 058/125716 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 4ms, status: 10/30 DOWN.
[WARNING] 058/125717 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 11/30 DOWN.
[WARNING] 058/125718 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 4ms, status: 12/30 DOWN.
[WARNING] 058/125719 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 13/30 DOWN.
[WARNING] 058/125720 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 14/30 DOWN.
[WARNING] 058/125721 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 15/30 DOWN.
[WARNING] 058/125722 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 16/30 DOWN.
[WARNING] 058/125723 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 17/30 DOWN.
[WARNING] 058/125724 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 4ms, status: 18/30 DOWN.
[WARNING] 058/125725 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 19/30 DOWN.
[WARNING] 058/125726 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 20/30 DOWN.
[WARNING] 058/125727 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 21/30 DOWN.
[WARNING] 058/125728 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 22/30 DOWN.
[WARNING] 058/125729 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 23/30 DOWN.
[WARNING] 058/125730 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 4ms, status: 24/30 DOWN.
[WARNING] 058/125731 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 4ms, status: 25/30 DOWN.
[WARNING] 058/125732 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 26/30 DOWN.
[WARNING] 058/125733 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 3ms, status: 27/30 DOWN.
[WARNING] 058/125734 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 28/30 DOWN.
[WARNING] 058/125735 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 2ms, status: 29/30 DOWN.
[WARNING] 058/125736 (6) : Health check for server bk_redis/redis-2 succeeded, reason: Layer7 check passed, code: 0, info: "(tcp-check)", check duration: 1ms, status: 1/1 UP.
[WARNING] 058/125736 (6) : Server bk_redis/redis-2 is UP. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
[WARNING] 058/125945 (6) : bk_redis/redis-0 changed its IP from 10.42.4.85 to 10.42.4.87 by kubedns/namesrv1.
[WARNING] 058/125945 (6) : Server bk_redis/redis-0 ('redis-ha-server-0.redis-ha.redis-ha.svc.cluster.local') is UP/READY (resolves again).
[WARNING] 058/125945 (6) : Server bk_redis/redis-0 administratively READY thanks to valid DNS answer.
[WARNING] 058/125947 (6) : Health check for server bk_redis/redis-0 failed, reason: Layer7 timeout, info: " at step 6 of tcp-check (expect string 'role:master')", check duration: 1000ms, status: 0/30 DOWN.
[WARNING] 058/125947 (6) : Server bk_redis/redis-0 is DOWN. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.

As the last lines show, when bk_redis/redis-0 gets a new IP (WHILE IT WAS DOWN) it immediately goes UP without running the tcp-check (which starts a second later and of course fails).

How can I avoid this?
Is there a way to force the server, when its IP resolves again, to wait for the tcp-check before going UP?

http-send-name-header is sometimes sent in request data

This bug was sent to the mailing list on 2018-02-06, but the archives missed this e-mail and only have the follow-ups ( https://www.mail-archive.com/[email protected]/msg28933.html ), so I'm re-adding it here.

Output of haproxy -vv and uname -a

haproxy 1.8.3
haproxy 1.8.1

What's the configuration?

global             
  maxconn 4096
  stats socket /srv/slapgrid/slappart15/var/run/haproxy.sock level admin

defaults
  log global
  mode http
  option httplog
  option dontlognull
  retries 1
  option redispatch
  maxconn 2000
  cookie SERVERID rewrite
  http-send-name-header X-Balancer-Current-Server
  balance roundrobin
  stats uri /haproxy
  stats realm Global\ statistics
  timeout server 305s
  timeout queue 60s
  timeout connect 5s
  timeout client 305s
  option forceclose

listen user
  bind 10.0.15.62:2152
  http-request set-header X-Balancer-Current-Cookie SERVERID
  server user-0 10.0.5.183:2200 cookie user-0 check inter 3s rise 1 fall 2 maxqueue 5 maxconn 1
  option httpchk GET /

Steps to reproduce the behavior

Uncertain. A POST request with some short data (226 bytes) is sent. At the very least, the data accompanying the request must be sent along with the headers in the same packet.

Actual behavior

From the e-mail :
"In a scenario where 20 or more users try to login to the system, around 20% of them fail. After intense debugging and analysis of tcpdumps, as far as I see, the issue seems to come from haproxy: some of the http requests are malformed, due the headers are inserted in a bad way. Specifically, the header "http-send-name-header". If that keyword is removed from the haproxy configuration, the error is no longer reproduced. But http-send-name-header is needed by the system to manage user-backends relations through cookies.

Here an example of a request malformed by haproxy:
Image of a capture showing that "X-Balancer-Current-Server: user-0" was inserted inside data 2 bytes before the end. Data were not overwritten, the header was inserted.
"

Expected behavior

That request should have been:

X-Balancer-Current-Cookie: SERVERID
X-Balancer-Current-Server: user-0

cancel_url=http.......&__ac_password=myPassword

Do you have any idea what may have caused this?

Possible bug in offset computation for the send-name-header when data are present. Studying the code never revealed where the issue could be, and it couldn't be reproduced outside of this environment either. It could be caused by a retry+redispatch given that this was done during a scalability test, hence we can expect a few connection failures.

Do you have an idea how to solve the issue?

A workaround consisting of adding "option http-buffer-request" was confirmed to work, which somewhat defeats the guesses above. Maybe the request is read in two operations and the data arrive only after header processing and before attempting to connect.
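
For reference, the workaround as it would look in the listen section of the configuration above (sketch):

    listen user
      bind 10.0.15.62:2152
      option http-buffer-request   # wait for the request body before processing it
      http-request set-header X-Balancer-Current-Cookie SERVERID
      server user-0 10.0.5.183:2200 cookie user-0 check inter 3s rise 1 fall 2 maxqueue 5 maxconn 1
      option httpchk GET /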

Worker crashes with a segmentation fault

Output of haproxy -vv and uname -a

Linux chrono 4.9.0-8-amd64 #1 SMP Debian 4.9.130-2 (2018-10-27) x86_64 GNU/Linux
HA-Proxy version 1.8.18-1~bpo9+1 2019/02/08
Copyright 2000-2019 Willy Tarreau <[email protected]>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -O2 -fdebug-prefix-map=/build/haproxy-lpnOTV/haproxy-1.8.18=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-null-dereference -Wno-unused-label
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_SYSTEMD=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.1.0f  25 May 2017
Running on OpenSSL version : OpenSSL 1.1.0j  20 Nov 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.3
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE2 version : 10.22 2016-07-29
PCRE2 library supports JIT : yes
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
	[SPOE] spoe
	[COMP] compression
	[TRACE] trace

What's the configuration?

defaults
	unique-id-format %{+X}o\ FOO-%Ts%rt

frontend fe_http
	mode http
	bind *:8080
	unique-id-header X-Req-ID
	
	default_backend be_http

backend be_http
	mode http
	http-request set-header Host example.com

	server example example.com:80

Steps to reproduce the behavior

ab -c10 -n20  'http://localhost:8080/'

Actual behavior

==28476== Invalid read of size 8
==28476==    at 0x451792: __pool_get_first (memory.h:124)
==28476==    by 0x451792: pool_alloc_dirty (memory.h:154)
==28476==    by 0x451792: pool_alloc (memory.h:230)
==28476==    by 0x451792: http_process_request (proto_http.c:3770)
==28476==    by 0x485E65: process_stream (stream.c:1912)
==28476==    by 0x505E12: process_runnable_tasks (task.c:229)
==28476==    by 0x4B515A: run_poll_loop (haproxy.c:2416)
==28476==    by 0x4B515A: run_thread_poll_loop (haproxy.c:2482)
==28476==    by 0x41A939: main (haproxy.c:3085)
==28476==  Address 0x303643352d4f4f46 is not stack'd, malloc'd or (recently) free'd
==28476== 
==28476== 
==28476== Process terminating with default action of signal 11 (SIGSEGV)
==28476==  General Protection Fault
==28476==    at 0x451792: __pool_get_first (memory.h:124)
==28476==    by 0x451792: pool_alloc_dirty (memory.h:154)
==28476==    by 0x451792: pool_alloc (memory.h:230)
==28476==    by 0x451792: http_process_request (proto_http.c:3770)
==28476==    by 0x485E65: process_stream (stream.c:1912)
==28476==    by 0x505E12: process_runnable_tasks (task.c:229)
==28476==    by 0x4B515A: run_poll_loop (haproxy.c:2416)
==28476==    by 0x4B515A: run_thread_poll_loop (haproxy.c:2482)
==28476==    by 0x41A939: main (haproxy.c:3085)
==28476== 
==28476== HEAP SUMMARY:
==28476==     in use at exit: 943,354 bytes in 2,458 blocks
==28476==   total heap usage: 2,804 allocs, 346 frees, 1,221,346 bytes allocated
==28476== 
==28476== LEAK SUMMARY:
==28476==    definitely lost: 768 bytes in 6 blocks
==28476==    indirectly lost: 0 bytes in 0 blocks
==28476==      possibly lost: 134,998 bytes in 1,239 blocks
==28476==    still reachable: 807,588 bytes in 1,213 blocks
==28476==         suppressed: 0 bytes in 0 blocks
==28476== Rerun with --leak-check=full to see details of leaked memory
==28476== 
==28476== For counts of detected and suppressed errors, rerun with: -v
==28476== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
fish: “valgrind ./haproxy -d -f ./hapr…” terminated by signal SIGSEGV (Address boundary error)

GDB Stack from prod:

(gdb) cont
Continuing.

Program received signal SIGSEGV, Segmentation fault.
__pool_get_first (pool=0x559a84ca8bc0, pool=0x559a84ca8bc0) at include/common/memory.h:124
124	include/common/memory.h: No such file or directory.
(gdb) bt full
#0  __pool_get_first (pool=0x559a84ca8bc0, pool=0x559a84ca8bc0) at include/common/memory.h:124
        p = 0x352d4f4e4f524843
#1  pool_alloc_dirty (pool=0x559a84ca8bc0) at include/common/memory.h:154
        p = <optimized out>
#2  pool_alloc (pool=0x559a84ca8bc0) at include/common/memory.h:230
No locals.
#3  http_process_request (s=0x559a84f97ff0, req=0x559a84f98000, an_bit=2048) at src/proto_http.c:3770
        sess = 0x559a84f512c0
        txn = 0x559a84f98420
        msg = 0x559a84f98480
#4  0x0000559a84452d9f in process_stream (t=t@entry=0x559a84f98b50) at src/stream.c:1912
        max_loops = 199
        ana_list = 2048
        ana_back = 2048
        flags = <optimized out>
        s = 0x559a84f97ff0
        sess = <optimized out>
        rqf_last = <optimized out>
        rpf_last = 2147483648
        rq_prod_last = 7
        rq_cons_last = 0
        rp_cons_last = 7
        rp_prod_last = 0
        req_ana_back = <optimized out>
        req = 0x559a84f98000
        res = 0x559a84f98040
        si_f = 0x559a84f98238
        si_b = 0x559a84f98260
#5  0x0000559a844d75f4 in process_runnable_tasks () at src/task.c:229
        t = <optimized out>
        i = <optimized out>
        max_processed = 200
        local_tasks = {0x559a84f6b380, 0x559a84efb710, 0x559a84fa2cb0, 0xc6f77b1962545000, 0x1b, 0x559a84fa2cb0, 0x6c0, 0x0, 0x0, 0x1b, 0x0, 0x559a844cc7ea <conn_fd_handler+490>, 0x7ffd024f0af0, 0x500000004, 
          0x7ffd024f0af0, 0x7ffd024f0af0}
        local_tasks_count = <optimized out>
        final_tasks_count = <optimized out>
#6  0x0000559a84484d97 in run_poll_loop () at src/haproxy.c:2416
        next = <optimized out>
        exp = <optimized out>
#7  run_thread_poll_loop (data=<optimized out>) at src/haproxy.c:2482
        ptif = <optimized out>
        ptdf = <optimized out>
        start_lock = 0
#8  0x0000559a843e471a in main (argc=<optimized out>, argv=0x7ffd024f0fd8) at src/haproxy.c:3085
        tids = 0x559a84cb6fb0
        threads = 0x559a84ef75c0
        i = 1
        old_sig = {__val = {2048, 94122141193584, 140724642188592, 177, 178, 140724642188912, 140724642188688, 94122132735766, 94122133250393, 140724642188912, 206158430256, 140724642188912, 140724642188720, 
            14337063287711354880, 206158430240, 94122135496320}}
        blocked_sig = {__val = {18446744067199990583, 18446744073709551615 <repeats 15 times>}}
        err = <optimized out>
        retry = <optimized out>
        limit = {rlim_cur = 4056, rlim_max = 4056}
        errmsg = "\000\000\000\000\000\000\000\000P", '\000' <repeats 15 times>, "\003\000\000\000\060", '\000' <repeats 19 times>, "[\000\000\000n", '\000' <repeats 19 times>, "w\000\000\000|\000\000\000\340I\000\000\000\000\000\000\000+U\350O\177\000\000\bbQ\204"
        pidfd = <optimized out>
(gdb) cont
Continuing.

Program terminated with signal SIGSEGV, Segmentation fault.
The program no longer exists.
(gdb) quit

Expected behavior

Don't crash.

Do you have any idea what may have caused this?

Will add once I have more information.

Do you have an idea how to solve the issue?

Will add once I have more information.

HAProxy adds one more X-Forwarded-For header when "option forwardfor" is set

version 1.7.8
Host: 192.168.1.101
Haproxy configuration

global
    daemon
    maxconn 100000
    stats timeout 2m
    pidfile /tmp/http.pid

defaults
    timeout http-request 10s
    timeout server 120s 
    timeout client 20s 
    timeout connect 4s 
    timeout tunnel 30m
    retries 3
    balance roundrobin
    option abortonclose
    option forwardfor    # Forward X-Forwarded-For
    option redispatch

frontend http
    bind :80
    mode http
    default_backend  test_svr

backend test_svr
    mode http
    server 192.168.1.102:8080 192.168.1.102:8080

The request Header info:

GET /api HTTP/1.0
X-Forwarded-For: 192.168.1.10, 192.168.1.11  ---> from request  
Proxy-Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.109 Safari/537.36 /2.2.1.8
Accept-Encoding: gzip
X-Forwarded-For: 192.168.1.101  ---> Haproxy Added.

Why are there two X-Forwarded-For headers?

Why does HAProxy add a new X-Forwarded-For header instead of appending the host's IP address to the existing one?
