This repository has been deprecated and will no longer be maintained.
To access the last commit of the code, please switch to the master branch.
HAProxy image that autoreconfigures itself when used in Docker Cloud
Home Page: https://cloud.docker.com/
Whenever a service linked to HAProxy is redeployed, I end up with 503 errors when attempting to access the virtual host for that service, even after a successful redeploy. This is the console output I get from HAProxy during a redeploy:
proxy-1 | 2016-02-17T12:32:58.066607490Z INFO:haproxy:==========BEGIN==========
proxy-1 | 2016-02-17T12:32:58.068433250Z INFO:haproxy:Event: container e2bd1288-2f7b-46c7-8075-f3d44b03d46b is terminated
proxy-1 | 2016-02-17T12:32:59.377172101Z INFO:haproxy:Service links: APP(035cb9a5-4dd9-42fe-9280-21dc8e8be22f)
proxy-1 | 2016-02-17T12:32:59.379267548Z INFO:haproxy:Container links: APP_1(1e391941-79ed-447a-a556-30387bba8997)
proxy-1 | 2016-02-17T12:32:59.386043240Z INFO:haproxy:HAProxy configuration remains unchanged
proxy-1 | 2016-02-17T12:32:59.386848017Z INFO:haproxy:===========END===========
proxy-1 | 2016-02-17T12:32:59.390035142Z INFO:haproxy:==========BEGIN==========
proxy-1 | 2016-02-17T12:32:59.390889239Z INFO:haproxy:Event: container 1e391941-79ed-447a-a556-30387bba8997 is running
proxy-1 | 2016-02-17T12:33:00.099824069Z INFO:haproxy:Service links: APP(035cb9a5-4dd9-42fe-9280-21dc8e8be22f)
proxy-1 | 2016-02-17T12:33:00.101004643Z INFO:haproxy:Container links: APP_1(1e391941-79ed-447a-a556-30387bba8997)
proxy-1 | 2016-02-17T12:33:00.107965332Z INFO:haproxy:HAProxy configuration remains unchanged
proxy-1 | 2016-02-17T12:33:00.108892125Z INFO:haproxy:===========END===========
Since certs need to be renewed every 90 days when using Let's Encrypt, it would be very useful to be able to set the SSL cert in HAProxy from a volume instead of an environment variable.
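For what it's worth, haproxy itself can already load every certificate found in a directory when crt points at a folder, which is exactly what a volume-based setup would need; the generated configs shown elsewhere in these issues already use that form (the path below is illustrative):

```
frontend port_443
    # haproxy loads every PEM file found in the mounted directory
    bind :443 ssl crt /certs/
```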
I'm trying to set up haproxy as a load balancer for PostgreSQL read slaves, and I cannot do it.
A simple haproxy.cfg file for PostgreSQL should look similar to this:
listen pgsql 0.0.0.0:15432
    mode tcp
    balance roundrobin
    # Add PostgreSQL replica here.
    option pgsql-check user username
    server replica1 127.0.0.1:5433 check inter 10000
    server replica2 127.0.0.1:5434 check inter 10000
    server replica3 127.0.0.1:5435 check inter 10000
I couldn't find any way to configure it like the above, especially the listen pgsql 0.0.0.0:15432 line.
I tried something like this:
postgres-slave:
  restart: always
  image: sameersbn/postgresql:9.4-21
  ports:
    - "5433:5432"
  links:
    - postgres-master
  environment:
    - DEBUG=false
    - REPLICATION_MODE=slave
    - REPLICATION_SSLMODE=prefer
    - REPLICATION_HOST=postgres-master
    - REPLICATION_PORT=5432
    - REPLICATION_USER=repluser
    - REPLICATION_PASS=replpass
    - PG_TRUST_LOCALNET=true
    - DB_EXTENSION=unaccent,pg_trgm
  volumes:
    - /var/lib/postgresql
lb:
  image: 'dockercloud/haproxy:1.2.1'
  links:
    - postgres-slave
  environment:
    - "EXTRA_BIND_SETTINGS=5432:accept-proxy"
    - "TCP_PORTS=15432"
  ports:
    - '15432:5433'
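As far as I can tell from the README, TCP_PORTS and EXTRA_BIND_SETTINGS are meant to be set on the application service rather than on the lb service, so an untested sketch along these lines might be closer to what the image expects (service names kept from above):

```yaml
postgres-slave:
  image: sameersbn/postgresql:9.4-21
  environment:
    # ask haproxy to proxy this port in tcp mode
    # (set on the app service, not on lb)
    - TCP_PORTS=5432
lb:
  image: 'dockercloud/haproxy:1.2.1'
  links:
    - postgres-slave
  ports:
    # expose the tcp frontend on 15432 externally
    - '15432:5432'
```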
Any ideas if this would be possible now or in the near future?
I'm trying to set up haproxy to serve as a load balancer for MySQL.
Setting EXTRA_BIND_SETTINGS has no effect whatsoever (e.g. EXTRA_BIND_SETTINGS=3306:frontend tcp, EXTRA_BIND_SETTINGS=3306:default_frontend tcp, or any permutation thereof; not even an error). What am I misunderstanding here?
Editing frontend_helper.py at line 133 does the trick, but this is certainly not the intended way.
Hi, is it implemented/possible to have stats served via HTTPS? Maybe I just overlooked it, but I cannot find it in the readme.
If it's not implemented, would it be possible?
Thanks!
I've moved my compose v2 file to using ADDITIONAL_SERVICES so it works on Swarm. It works fine for the initial load, but when one of the additional service containers stops or starts, haproxy doesn't get an updated config. I enabled DEBUG, and the container appears to receive the events but doesn't do anything with them.
Also, running docker exec <id> /reload.sh causes the container to crash and restart with the correct config:
Traceback (most recent call last):
File "/usr/bin/dockercloud-haproxy", line 9, in <module>
load_entry_point('dockercloud-haproxy==1.4', 'console_scripts', 'dockercloud-haproxy')()
File "/usr/lib/python2.7/site-packages/haproxy/main.py", line 49, in main
listen_docker_events()
File "/usr/lib/python2.7/site-packages/haproxy/eventhandler.py", line 67, in listen_docker_events
for event in docker.events(decode=True):
File "/usr/lib/python2.7/site-packages/docker/client.py", line 217, in _stream_helper
data = reader.read(1)
File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/response.py", line 243, in read
data = self._fp.read(amt)
File "/usr/lib/python2.7/httplib.py", line 588, in read
return self._read_chunked(amt)
File "/usr/lib/python2.7/httplib.py", line 630, in _read_chunked
line = self.fp.readline(_MAXLINE + 1)
File "/usr/lib/python2.7/socket.py", line 480, in readline
data = self._sock.recv(self._rbufsize)
File "/usr/lib/python2.7/site-packages/gevent/_socket2.py", line 280, in recv
self._wait(self._read_event)
File "/usr/lib/python2.7/site-packages/gevent/_socket2.py", line 179, in _wait
self.hub.wait(watcher)
File "/usr/lib/python2.7/site-packages/gevent/hub.py", line 627, in wait
result = waiter.get()
File "/usr/lib/python2.7/site-packages/gevent/hub.py", line 875, in get
return self.hub.switch()
File "/usr/lib/python2.7/site-packages/gevent/hub.py", line 606, in switch
return greenlet.switch(self)
gevent.hub.BlockingSwitchOutError: Impossible to call blocking function in the event loop callback
I'm using this in a Stack, and suddenly, after being up for weeks, it just raises an exception and dies. It does not restart. This is a low-traffic web site. There is nothing else in the logs.
[lb-1]2016-05-13T01:34:31.926027191Z Traceback (most recent call last):
[lb-1]2016-05-13T01:34:31.927063846Z File "/usr/bin/dockercloud-haproxy", line 9, in <module>
[lb-1]2016-05-13T01:34:31.929084513Z load_entry_point('dockercloud-haproxy==1.3', 'console_scripts', 'dockercloud-haproxy')()
[lb-1]2016-05-13T01:34:31.929103843Z File "/usr/lib/python2.7/site-packages/haproxy/main.py", line 41, in main
[lb-1]2016-05-13T01:34:31.929151507Z listen_dockercloud_events()
[lb-1]2016-05-13T01:34:31.929638324Z File "/usr/lib/python2.7/site-packages/haproxy/eventhandler.py", line 60, in listen_dockercloud_events
[lb-1]2016-05-13T01:34:31.930650973Z events.run_forever()
[lb-1]2016-05-13T01:34:31.930666543Z File "/usr/lib/python2.7/site-packages/dockercloud/api/events.py", line 42, in run_forever
[lb-1]2016-05-13T01:34:31.930711053Z ws.run_forever(ping_interval=5, ping_timeout=5, *args, **kwargs)
[lb-1]2016-05-13T01:34:31.934331712Z File "/usr/lib/python2.7/site-packages/websocket/_app.py", line 206, in run_forever
[lb-1]2016-05-13T01:34:31.934398989Z thread.join()
[lb-1]2016-05-13T01:34:31.939365659Z File "/usr/lib/python2.7/threading.py", line 929, in join
[lb-1]2016-05-13T01:34:31.939567346Z raise RuntimeError("cannot join thread before it is started")
[lb-1]2016-05-13T01:34:31.951178144Z RuntimeError: cannot join thread before it is started
I am having a hard time reliably using the CLI to do linking, as every time I want to add one link, it removes all of the others. Ideally, I'd like HAProxy to interrogate the Docker Cloud API for any services with a VIRTUAL_HOST environment variable, and link against all of those. Does anybody have something like this working?
Is there any plan to implement a way to use HTTP/2 and still be able to use virtual hosts? It works in tcp mode, but then I can have only that one application listening on port 443.
I'm using this image to load-balance some microservices on the Docker Cloud. The containers have been dying once every 2-3 hours.
To debug, I did a BYON node, and made sure one of my dockercloud/haproxy instances was deployed to that node. I then used "docker stats" to examine the container. It showed 95% or more CPU utilization on an idle container (not handling any traffic).
When I run "docker logs" on the container, it continuously dumps out "INFO:haproxy:Websocket close".
It seems like the Websocket connection to the Docker Cloud API is cycling really, really quickly.
Example snippet from docker stack YAML at
https://gist.github.com/evanp/caf8ca6be500afb762b096a261424707
In the guide under "Usage within Docker Cloud" it says to set the attribute role, but (in the case of stackfiles, at least) the attribute is roles:
[...]
roles:
  - global
I just stumbled over this.
I can't modify the frontend block, so I can't block any IP addresses. Would it be possible to add an environment variable for that?
Hi,
we are using your HAProxy service to redirect all our incoming traffic to the right container thanks to the VIRTUAL_HOST environment variable.
This works well over HTTP, but I didn't manage to make it work over HTTPS.
We have one site which needs HTTPS. So on the app I set:
Then this website was accessible over HTTPS. The problem was that if we tried to reach the other websites over HTTPS, the certificate sent was the one for mysite.com.
Did I miss something? How can I do something like that?
If I spawn an HAProxy instance with FORCE_SSL=true and SSL_CERT equal to a cert string, and I try to access the proxy via HTTP, it doesn't redirect to HTTPS. I know the SSL cert is properly configured: if I access it via HTTPS, the browser indicates a secure connection. What I want is for HAProxy to redirect incoming HTTP requests to the HTTPS equivalent URL. Is FORCE_SSL the right approach?
I've been trying to get HAProxy to forward HTTPS to nginx (letting nginx terminate SSL) but somehow can't seem to get it to work.
Can someone give me an example of how to do this?
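One approach that might work, if the README's TCP_PORTS behaviour is what I think it is, is to mark 443 as a TCP port on the nginx service so haproxy proxies it in tcp mode and leaves TLS termination to nginx. A hedged, untested sketch (names illustrative):

```yaml
www:
  image: nginx:latest
  environment:
    # pass this port through in tcp mode (SSL passthrough to nginx)
    - TCP_PORTS=443
lb:
  image: dockercloud/haproxy
  links:
    - www
  ports:
    - "80:80"
    - "443:443"
```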
Before I migrated to the Docker for Mac beta, I was using the normal Docker setup on my Mac, with docker-machine etc. The below was a working configuration (a sample with just a plain nginx service).
version: '2'
services:
  web:
    image: nginx:latest
  lb:
    image: dockercloud/haproxy:latest
    links:
      - web
    ports:
      - "80:80"
    environment:
      - DOCKER_TLS_VERIFY
      - DOCKER_HOST
      - DOCKER_CERT_PATH
    volumes:
      - $DOCKER_CERT_PATH:$DOCKER_CERT_PATH
But when using the Docker for Mac beta, one shouldn't need to set the environment variables or mount the certificate path, as it should work natively. So I tried this one:
version: '2'
services:
  web:
    image: nginx:latest
  lb:
    image: dockercloud/haproxy:latest
    links:
      - web
    ports:
      - "80:80"
I cannot access localhost on port 80. If I remove the lb service and map 80:80 on the web service, then I can access it on localhost as per the Docker for Mac documentation.
This might be a bug in Docker for Mac, but I want to raise the issue here as well, as it might be caused by a combination of the two.
Hi, I'm trying to swap from tutum/haproxy to dockercloud/haproxy, mostly because of certs-from-volume and Swarm support (I haven't tested Swarm yet), but I ran into issues. When using the same YAML file that I was using for tutum/haproxy, version dockercloud/haproxy:1.4 throws an error:
INFO:haproxy:dockercloud/haproxy 1.4 is running outside Docker Cloud
INFO:haproxy:Legacy link mode, loading HAProxy definition from environment variables: unable to connect to docker daemon ('Connection aborted.', error(2, 'No such file or directory'))
INFO:haproxy:dockercloud/haproxy PID: 6
INFO:haproxy:==========BEGIN==========
Traceback (most recent call last):
File "/usr/bin/dockercloud-haproxy", line 9, in <module>
load_entry_point('dockercloud-haproxy==1.4', 'console_scripts', 'dockercloud-haproxy')()
File "/usr/lib/python2.7/site-packages/haproxy/main.py", line 53, in main
run_haproxy()
File "/usr/lib/python2.7/site-packages/haproxy/haproxycfg.py", line 24, in run_haproxy
haproxy = Haproxy(config.LINK_MODE, msg)
File "/usr/lib/python2.7/site-packages/haproxy/haproxycfg.py", line 47, in __init__
self.specs = self._initialize(self.link_mode)
File "/usr/lib/python2.7/site-packages/haproxy/haproxycfg.py", line 61, in _initialize
specs = LegacyLinkSpecs()
File "/usr/lib/python2.7/site-packages/haproxy/parser/legacy_link_parser.py", line 15, in __init__
self._merge_services_with_same_vhost()
File "/usr/lib/python2.7/site-packages/haproxy/parser/base_parser.py", line 27, in _merge_services_with_same_vhost
for route in self.routes[service_alias]:
KeyError: 'DEVELOPMENT_WEBAPP'
My YAML file looks like:
webapp:
  extends:
    file: ./webapp/docker-compose.yml
    service: webapp
  environment:
    VIRTUAL_HOST: 'https://webapp.local,webapp.local'
    APP_ENV: development
proxy:
  image: dockercloud/haproxy:1.4
  ports:
    - "127.0.0.1:80:80"
    - "127.0.0.1:443:443"
  volumes:
    - ./ssl/:/certs/
  environment:
    CA_CERT_FILE: /certs/cert.pem
  links:
    - webapp
But when I change the version to 1.3, everything seems to work correctly. When VIRTUAL_HOST is deleted, version 1.4 does not throw an error. It seems that something is wrong with the _merge_services_with_same_vhost function; unfortunately, I don't know Python well enough to tell what.
It would also be nice if there were tests with docker-compose YAML files; the same goes for examples.
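Without access to the real implementation, the traceback above suggests self.routes lacks an entry for an alias that does appear in the vhost list. A minimal, hypothetical Python reproduction of that failure mode (not the actual dockercloud-haproxy code):

```python
# Hypothetical sketch: a vhost-merge step that indexes routes by service
# alias raises KeyError when an alias carries a VIRTUAL_HOST but no parsed
# route (e.g. no usable port was detected for the linked container).
def merge_services_with_same_vhost(vhosts, routes):
    merged = {}
    for alias, vhost in vhosts.items():
        # KeyError here when 'alias' never made it into the routes dict
        merged.setdefault(vhost.lower(), []).extend(routes[alias])
    return merged

try:
    merge_services_with_same_vhost({"DEVELOPMENT_WEBAPP": "webapp.local"}, {})
except KeyError as exc:
    print("KeyError:", exc)  # same shape as the KeyError in the traceback
```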
To allow containers across multiple hosts without linking
I saw that HSTS usage in haproxy is hardcoded to includeSubDomains: https://github.com/docker/dockercloud-haproxy/blob/master/haproxy/helper/backend_helper.py#L120
Is there a way to override this?
My haproxy container just died
lb-1 | 2016-03-23T03:14:44.096670714Z INFO:haproxy:HAProxy configuration remains unchanged
lb-1 | 2016-03-23T03:14:44.097105728Z INFO:haproxy:===========END===========
lb-1 | 2016-03-23T04:17:42.473585340Z INFO:haproxy:Websocket close
lb-1 | 2016-03-23T04:17:42.881216862Z INFO:haproxy:==========BEGIN==========
lb-1 | 2016-03-23T04:17:42.882948430Z INFO:haproxy:Websocket open
lb-1 | 2016-03-23T04:17:44.314101070Z INFO:haproxy:Linked service: WEB(d678de1b-7414-4c9c-981d-e00bc4b072c9)
lb-1 | 2016-03-23T04:17:44.314739737Z INFO:haproxy:Linked container: WEB_1(243bc744-e4a2-4c18-9816-8e24fe2b45db), WEB_2(3c6e491e-c5d9-48dc-a2d8-6d1b5519eebd), WEB_3(234fc628-ef8c-4f81-94b8-847a580afbcf), WEB_4(936eda4c-9044-49f2-aa6b-8a27b4d8034e)
lb-1 | 2016-03-23T04:17:44.320007446Z INFO:haproxy:HAProxy configuration remains unchanged
lb-1 | 2016-03-23T04:17:44.320389363Z INFO:haproxy:===========END===========
lb-1 | 2016-03-23T05:21:08.069389859Z INFO:haproxy:Websocket close
lb-1 | 2016-03-23T05:21:08.465770815Z INFO:haproxy:==========BEGIN==========
lb-1 | 2016-03-23T05:21:08.466250546Z INFO:haproxy:Websocket open
lb-1 | 2016-03-23T05:21:09.152664793Z ERROR:root:can't start new thread
lb-1 | 2016-03-23T05:21:09.154532058Z Traceback (most recent call last):
lb-1 | 2016-03-23T05:21:09.154874074Z File "/usr/local/bin/dockercloud-haproxy", line 9, in <module>
lb-1 | 2016-03-23T05:21:09.155440948Z load_entry_point('dockercloud-haproxy==1.2.1', 'console_scripts', 'dockercloud-haproxy')()
lb-1 | 2016-03-23T05:21:09.155598774Z File "/usr/local/lib/python2.7/dist-packages/haproxy/main.py", line 41, in main
lb-1 | 2016-03-23T05:21:09.156408989Z listen_dockercloud_events()
lb-1 | 2016-03-23T05:21:09.156615622Z File "/usr/local/lib/python2.7/dist-packages/haproxy/eventhandler.py", line 56, in listen_dockercloud_events
lb-1 | 2016-03-23T05:21:09.157370339Z events.run_forever()
lb-1 | 2016-03-23T05:21:09.157571209Z File "/usr/local/lib/python2.7/dist-packages/dockercloud/api/events.py", line 42, in run_forever
lb-1 | 2016-03-23T05:21:09.157999832Z ws.run_forever(ping_interval=5, ping_timeout=5, *args, **kwargs)
lb-1 | 2016-03-23T05:21:09.158172596Z File "/usr/local/lib/python2.7/dist-packages/websocket/_app.py", line 206, in run_forever
lb-1 | 2016-03-23T05:21:09.159057910Z thread.join()
lb-1 | 2016-03-23T05:21:09.159268063Z File "/usr/lib/python2.7/threading.py", line 938, in join
lb-1 | 2016-03-23T05:21:09.160642061Z raise RuntimeError("cannot join thread before it is started")
lb-1 | 2016-03-23T05:21:09.161013891Z RuntimeError: cannot join thread before it is started
lb-1 | 2016-03-23T05:21:11.563614959Z Exception in thread Thread-323 (most likely raised during interpreter shutdown):
lb-1 | 2016-03-23T05:21:11.563880056Z Traceback (most recent call last):
lb-1 | 2016-03-23T05:21:11.564038516Z File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
lb-1 | 2016-03-23T05:21:11.564178413Z File "/usr/lib/python2.7/threading.py", line 763, in run
lb-1 | 2016-03-23T05:21:11.564316609Z File "/usr/lib/python2.7/multiprocessing/pool.py", line 329, in _handle_workers
lb-1 | 2016-03-23T05:21:11.564453367Z File "/usr/lib/python2.7/Queue.py", line 138, in put
lb-1 | 2016-03-23T05:21:11.564600194Z File "/usr/lib/python2.7/threading.py", line 383, in notify
lb-1 | 2016-03-23T05:21:11.564867034Z <type 'exceptions.TypeError'>: 'NoneType' object is not callable
My webdevops/php-apache container exposes 80, 443 and 9000.
First, I only set VIRTUAL_HOST, so I ended up with 3 backends, but only 80 is supposed to work in my case.
Then, as the documentation says, I added EXCLUDE_PORTS=443,9000, but after a redeploy of the web container, HAProxy kept the 3 backends.
I had to redeploy HAProxy in order to apply the EXCLUDE_PORTS configuration.
I guess that's not the expected behavior =)
The OpenSSL package in Alpine has been updated to 1.0.2h, so we'll need a rebuild of this image so we can deploy the fix.
I see that "latest" in the docs is 1.2.1, but meanwhile 1.4.2 has been released. How about some proper version management?
When I switch the service's network setting in Docker Cloud, haproxy lists all the backends correctly in stats, but they are all red, failing with a layer 4 timeout. Sometimes they go green for a while and then go red again.
Hi everyone,
we have an issue migrating our existing haproxy configuration from Tutum to Docker Cloud.
The haproxy container starts, but haproxy itself is dead. The same configuration works fine on Tutum.
The haproxy version is 1.3 (the latest as of today). The strange thing is: I have a lot of backends, and certain backends work whilst others don't.
Generated configuration as seen in the log:
router-dev-1 | 2016-04-16T05:52:46.698724799Z global
router-dev-1 | 2016-04-16T05:52:46.698737886Z log XXX local0
router-dev-1 | 2016-04-16T05:52:46.698749146Z log XXX local1 notice
router-dev-1 | 2016-04-16T05:52:46.698760061Z log-send-hostname
router-dev-1 | 2016-04-16T05:52:46.698770010Z maxconn 4096
router-dev-1 | 2016-04-16T05:52:46.698780546Z pidfile /var/run/haproxy.pid
router-dev-1 | 2016-04-16T05:52:46.698793707Z user haproxy
router-dev-1 | 2016-04-16T05:52:46.698804313Z group haproxy
router-dev-1 | 2016-04-16T05:52:46.698815108Z daemon
router-dev-1 | 2016-04-16T05:52:46.698825562Z stats socket /var/run/haproxy.stats level admin
router-dev-1 | 2016-04-16T05:52:46.698835366Z ssl-default-bind-options no-sslv3
router-dev-1 | 2016-04-16T05:52:46.698845842Z ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:DHE-DSS-AES128-SHA:DES-CBC3-SHA
router-dev-1 | 2016-04-16T05:52:46.698859144Z defaults
router-dev-1 | 2016-04-16T05:52:46.698869346Z balance roundrobin
router-dev-1 | 2016-04-16T05:52:46.698878344Z log global
router-dev-1 | 2016-04-16T05:52:46.698888321Z mode http
router-dev-1 | 2016-04-16T05:52:46.698897906Z option redispatch
router-dev-1 | 2016-04-16T05:52:46.698907266Z option httplog
router-dev-1 | 2016-04-16T05:52:46.698915560Z option dontlognull
router-dev-1 | 2016-04-16T05:52:46.698924195Z option forwardfor
router-dev-1 | 2016-04-16T05:52:46.698933645Z option http-server-close
router-dev-1 | 2016-04-16T05:52:46.698942673Z timeout connect 5000
router-dev-1 | 2016-04-16T05:52:46.698951699Z timeout client 50000
router-dev-1 | 2016-04-16T05:52:46.698959195Z timeout server 50000
router-dev-1 | 2016-04-16T05:52:46.698966630Z listen stats
router-dev-1 | 2016-04-16T05:52:46.698974385Z bind :1936
router-dev-1 | 2016-04-16T05:52:46.698981745Z mode http
router-dev-1 | 2016-04-16T05:52:46.698990231Z stats enable
router-dev-1 | 2016-04-16T05:52:46.698998046Z timeout connect 10s
router-dev-1 | 2016-04-16T05:52:46.699005833Z timeout client 1m
router-dev-1 | 2016-04-16T05:52:46.699022957Z timeout server 1m
router-dev-1 | 2016-04-16T05:52:46.699030927Z stats hide-version
router-dev-1 | 2016-04-16T05:52:46.699038424Z stats realm Haproxy\ Statistics
router-dev-1 | 2016-04-16T05:52:46.699046505Z stats uri /
router-dev-1 | 2016-04-16T05:52:46.699055239Z stats auth stats:stats
router-dev-1 | 2016-04-16T05:52:46.699064022Z frontend port_443
router-dev-1 | 2016-04-16T05:52:46.699072294Z bind :443 ssl crt /certs/
router-dev-1 | 2016-04-16T05:52:46.699080471Z reqadd X-Forwarded-Proto:\ https
router-dev-1 | 2016-04-16T05:52:46.699088222Z acl is_websocket hdr(Upgrade) -i WebSocket
router-dev-1 | 2016-04-16T05:52:46.699095686Z acl host_rule_2 hdr(host) -i overlay-dc.trunkplatform.com.au
router-dev-1 | 2016-04-16T05:52:46.699105747Z acl host_rule_2_port hdr(host) -i overlay-dc.trunkplatform.com.au:443
router-dev-1 | 2016-04-16T05:52:46.699113179Z use_backend SERVICE_OVERLAY_PROXY if host_rule_2 or host_rule_2_port
router-dev-1 | 2016-04-16T05:52:46.699120166Z frontend port_80
router-dev-1 | 2016-04-16T05:52:46.699127059Z bind :80
router-dev-1 | 2016-04-16T05:52:46.699134737Z acl is_websocket hdr(Upgrade) -i WebSocket
router-dev-1 | 2016-04-16T05:52:46.699141994Z monitor-uri /haproxy?monitor
router-dev-1 | 2016-04-16T05:52:46.699148756Z acl host_rule_1 hdr(host) -i overlay-dc.trunkplatform.com.au
router-dev-1 | 2016-04-16T05:52:46.699155801Z acl host_rule_1_port hdr(host) -i overlay-dc.trunkplatform.com.au:80
router-dev-1 | 2016-04-16T05:52:46.699166400Z use_backend SERVICE_OVERLAY_PROXY if host_rule_1 or host_rule_1_port
router-dev-1 | 2016-04-16T05:52:46.699399378Z INFO:haproxy:Launching HAProxy
router-dev-1 | 2016-04-16T05:52:46.705940056Z INFO:haproxy:HAProxy has been launched(PID: 24)
router-dev-1 | 2016-04-16T05:52:46.706191127Z INFO:haproxy:===========END===========
router-dev-1 | 2016-04-16T05:52:46.710378027Z [ALERT] 106/055246 (24) : Proxy 'port_443': unable to find required use_backend: 'SERVICE_OVERLAY_PROXY'.
router-dev-1 | 2016-04-16T05:52:46.710721532Z [ALERT] 106/055246 (24) : Proxy 'port_80': unable to find required use_backend: 'SERVICE_OVERLAY_PROXY'.
router-dev-1 | 2016-04-16T05:52:46.712622057Z [ALERT] 106/055246 (24) : Fatal errors found in configuration.
As you can see, the backend section is simply missing.
This is haproxy stackfile:
router-dev:
  image: 'dockercloud/haproxy:1.3'
  deployment_strategy: every_node
  environment:
    - 'DEFAULT_SSL_CERT=XXX'
    - EXTRA_SSL_CERTS=XX_DUMMY_CERT
    - MODE=http
    - MONITOR_PORT=80
    - 'MONITOR_URI=/haproxy?monitor'
    - 'OPTION=redispatch, httplog, dontlognull, forwardfor,http-server-close'
    - 'RSYSLOG_DESTINATION=XXXX'
    - 'XX_DUMMY_CERT=<CERT>'
  links:
    - 'overlay-proxy.ops:overlay-proxy'
  ports:
    - '80:80'
    - '443:443'
    - '1936:1936'
    - '5000:5000'
  privileged: true
  restart: always
  roles:
    - global
  tags:
    - dev
And this is a stackfile of a linked service:
overlay-proxy:
  image: 'quay.io/trunk/overlay-proxy:latest'
  deployment_strategy: high_availability
  environment:
    - FORCE_SSL=true
    - 'VIRTUAL_HOST=http://overlay-dc.trunkplatform.com.au, https://overlay-dc.trunkplatform.com.au'
  expose:
    - '80'
    - '443'
  restart: always
  tags:
    - dev
Any help would be much appreciated, as Tutum is shutting down and we need to migrate our production ASAP.
Thanks,
Evgeny
I'm trying to configure session stickiness with the COOKIE variable. I added it to my docker-compose file as follows:
lb:
  image: dockercloud/haproxy
  links:
    - web
  environment:
    - COOKIE=SRV insert indirect nocache
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  ports:
    - 80:80
This is the configuration from the docker logs:
lb_1 | global
lb_1 | log 127.0.0.1 local0
lb_1 | log 127.0.0.1 local1 notice
lb_1 | log-send-hostname
lb_1 | maxconn 4096
lb_1 | pidfile /var/run/haproxy.pid
lb_1 | user haproxy
lb_1 | group haproxy
lb_1 | daemon
lb_1 | stats socket /var/run/haproxy.stats level admin
lb_1 | ssl-default-bind-options no-sslv3
lb_1 | ssl-default-bind-ciphers --- I removed this ---
lb_1 | defaults
lb_1 | balance roundrobin
lb_1 | log global
lb_1 | mode http
lb_1 | option redispatch
lb_1 | option httplog
lb_1 | option dontlognull
lb_1 | option forwardfor
lb_1 | timeout connect 5000
lb_1 | timeout client 50000
lb_1 | timeout server 50000
lb_1 | listen stats
lb_1 | bind :1936
lb_1 | mode http
lb_1 | stats enable
lb_1 | timeout connect 10s
lb_1 | timeout client 1m
lb_1 | timeout server 1m
lb_1 | stats hide-version
lb_1 | stats realm Haproxy\ Statistics
lb_1 | stats uri /
lb_1 | stats auth stats:stats
lb_1 | frontend default_frontend
lb_1 | bind :80
lb_1 | maxconn 4096
lb_1 | default_backend default_service
lb_1 | backend default_service
lb_1 | server dockertest_web_1 dockertest_web_1:3000 check
If I understand haproxy and the config script correctly, there should be a cookie directive in the backend's server lines… Am I doing something wrong? Is it a bug?
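For comparison, the backend I would have expected the COOKIE variable to produce looks roughly like this (the per-server cookie value is my guess, not output from the image):

```
backend default_service
  cookie SRV insert indirect nocache
  server dockertest_web_1 dockertest_web_1:3000 check cookie dockertest_web_1
```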
... and X-Forwarded-Proto: http is never passed for HTTP requests.
This adds X-Forwarded-Proto: https regardless of whether the request was HTTP or HTTPS when no VIRTUAL_HOST values are defined.
if ssl:
    frontend_section.append("reqadd X-Forwarded-Proto:\ https")
From /haproxy/helper/frontend_helper.py#L98. The header value should be http or https depending on the actual request:
acl http dst_port 80
reqadd X-Forwarded-Proto:\ http if http
acl https dst_port 443
reqadd X-Forwarded-Proto:\ https if https
Case 1: no VIRTUAL_HOST defined
haproxy:
  image: dockercloud/haproxy
  restart: unless-stopped
  links:
    - www
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  ports:
    - "80:80"
    - "443:443"
  environment:
    DEFAULT_SSL_CERT: |
      -----BEGIN PRIVATE KEY-----
      ...
      -----END PRIVATE KEY-----
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
www:
  image: php:apache
www always receives X-Forwarded-Proto: https (Fail).
Case 2: VIRTUAL_HOST defined explicitly for HTTPS
haproxy:
  image: dockercloud/haproxy
  restart: unless-stopped
  links:
    - www
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  ports:
    - "80:80"
    - "443:443"
  environment:
    DEFAULT_SSL_CERT: |
      -----BEGIN PRIVATE KEY-----
      ...
      -----END PRIVATE KEY-----
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
www:
  image: php:apache
  environment:
    VIRTUAL_HOST: "*,https://*"
This almost works, as there are two frontends, but the HTTP frontend still does not pass X-Forwarded-Proto correctly (Fail).
Case 3: hacked with EXTRA_SETTINGS
haproxy:
  image: dockercloud/haproxy
  restart: unless-stopped
  links:
    - www
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  ports:
    - "80:80"
    - "443:443"
  environment:
    DEFAULT_SSL_CERT: |
      -----BEGIN PRIVATE KEY-----
      ...
      -----END PRIVATE KEY-----
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
www:
  image: php:apache
  environment:
    EXTRA_SETTINGS: >
      acl https dst_port 443,
      reqdel X-Forwarded-Proto,
      reqadd X-Forwarded-Proto:\ http unless https,
      reqadd X-Forwarded-Proto:\ https if https,
The default backend is configured to pass X-Forwarded-Proto with a value that matches the request. (Pass)
Hello all,
on my new deployment, while HTTP requests are fast, HTTPS requests take up to 10 seconds to load. Then, once loaded, it is fast for each subsequent request for a while.
It was working fine on the previous deployment on Tutum, using an older version (tutum/haproxy:latest).
Any clue?
I would like to add ssl verify none to haproxy.cfg on the haproxy server for the backend, such as the following:

backend SERVICE_NGINX
  server NGINX_1 10.7.0.4:443 check ssl verify none

But it is not apparent how I could do that. Is this possible through an environment variable on the linked service?
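If the EXTRA_ROUTE_SETTINGS variable mentioned in the README does what its name suggests (appending a string to each generated server line of the linked service), then an untested sketch like this might do it:

```yaml
nginx:
  image: nginx:latest
  environment:
    # appended to the generated 'server ...' lines for this service,
    # assuming EXTRA_ROUTE_SETTINGS is supported by your image version
    - EXTRA_ROUTE_SETTINGS=ssl verify none
```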
Hi,
it would be nice if I could add an overrideable default for the EXTRA_SETTINGS defined in linked nodes.
Because I essentially use the same EXTRA_SETTINGS on all the services I link, this would make management much easier.
Example: DEFAULT_EXTRA_SETTINGS=rspidel ^x-powered-by: configured on dockercloud-haproxy, EXTRA_SETTINGS=rspidel ^server: configured on app1, and nothing additional on app2 and app3.
The config would yield:
[...]
backend SERVICE_APP1
  [...]
  rspidel ^server:
backend SERVICE_APP2
  [...]
  rspidel ^x-powered-by:
backend SERVICE_APP3
  [...]
  rspidel ^x-powered-by:
[...]
Would that be possible?
Thanks in advance!
Looks like the proxy doesn't pick up containers with non-standard names.
Let's take a sample from the Readme and add one line:
version: '2'
services:
  web:
    image: dockercloud/hello-world
    container_name: web
  lb:
    image: dockercloud/haproxy
    links:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
It results in
lb_1 | INFO:haproxy:Initial start
lb_1 | INFO:haproxy:Linked service:
lb_1 | INFO:haproxy:Linked container:
Expected
lb_1 | INFO:haproxy:Initial start
lb_1 | INFO:haproxy:Linked service: <dirname>_web
lb_1 | INFO:haproxy:Linked container: web
It doesn't necessarily need to be supported; a note in the Readme saying not to use named containers should suffice.
This is my docker-compose.yml
version: '2'
services:
  app:
    image: pedramrn/webapp:0.1.4
    expose:
      - 8080
    networks:
      - main
    working_dir: /src
    restart: never
    depends_on:
      - db-postgresql
  lb:
    image: dockercloud/haproxy:1.5
    depends_on:
      - app
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
    networks:
      - main
    links:
      - app
  db-postgresql:
    image: postgres:9.5
    ports:
      - "5432:5432"
    volumes:
      - data-postgresql:/var/lib/postgresql/data
    networks:
      - main
    restart: unless-stopped
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=postgres
volumes:
  data-postgresql:
networks:
  main:
With this configuration, Docker Swarm schedules all the app containers on the same node as the haproxy container. If I remove the links: - app entry, then haproxy can't recognize the service and does not route any traffic to the containers.
Am I missing something?
Hi!
I'm not able to enable WebSockets in the same HTTP host.
I was trying -e VIRTUAL_HOST="ws://example.com, http://example.com"
Thanks.
It would be useful to be able to set extra settings on individual backend server lines, notably ssl for those services that expose only a TLS interface but still need host-based routing. Maybe EXTRA_SERVER_SETTINGS?
When running haproxy in the terminal, I only see the haproxy config logs. Where can I find the other logs?
Any chance we can get HAProxy reloads that do not drop traffic?
When services or service backends are added or permanently removed, HAProxy has to reload across our entire infrastructure. These reloads can cause reliability problems because while HAProxy is top notch at not dropping traffic while it is running, it can (and does) drop traffic during reloads.
http://engineeringblog.yelp.com/2015/04/true-zero-downtime-haproxy-reloads.html
I am trying to set up automatic failover where I have a container that can take over only if health checks on all other containers fail.
The backup option seems perfect for this : http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-backup. However, I am unable to get it to work.
I have tried the following:
Any suggestions? Here is the stackfile I am using (it uses two different nodes so I can test failover by powering the primary one off):
superproxytest-live:
  image: 'dockercloud/hello-world:latest'
  target_num_containers: 1
  environment:
    - NAME=Live
    - HEALTH_CHECK=check inter 2000 fall 3 rise 2
    - VIRTUAL_HOST=http://superproxytest.superproxytest.6eb2f65c.svc.dockerapp.io
  expose:
    - '80'
  restart: always
  tags:
    - superproxytest_live
superproxytest-replica:
  image: 'dockercloud/hello-world:latest'
  target_num_containers: 1
  environment:
    - NAME=Replica
    # Causes this service AND the live service to be backup
    - HEALTH_CHECK=check inter 2000 fall 3 rise 2 backup
    # Causes HA proxy to fail on re-deploy
    # - EXTRA_SETTINGS=backup
    # Does nothing?
    # - OPTION=backup
    - VIRTUAL_HOST=http://superproxytest.superproxytest.6eb2f65c.svc.dockerapp.io
  expose:
    - '80'
  restart: always
  tags:
    - superproxytest_replica
superproxytest:
  image: dockercloud/haproxy:latest
  deployment_strategy: every_node
  tags:
    - superproxytest
  restart: always
  roles:
    - global
  ports:
    - "80:80"
    - "1936:1936"
  environment:
    - STATS_AUTH=stats:stats
  links:
    - "superproxytest-live:superproxytest-live"
    - "superproxytest-replica:superproxytest-replica"
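For reference, here is what I'm hoping the generated backend would look like: the replica marked backup so it only receives traffic once the live server's health checks fail. This is hand-written HAProxy syntax with illustrative container names, not actual output from the image:

```
backend default_service
  server superproxytest-live_1    superproxytest-live_1:80    check inter 2000 fall 3 rise 2
  # backup server is only used when all non-backup servers are down
  server superproxytest-replica_1 superproxytest-replica_1:80 check inter 2000 fall 3 rise 2 backup
```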
I'm trying to set up a maintenance page or fallback for when HAProxy generates a 503. This is the error that appears when the service is unreachable, and it is seen all too often during service restarts and the like.
I thought using VIRTUAL_HOST_WEIGHT
would provide a natural failover option, consider:
App1:
  VIRTUAL_HOST_WEIGHT=1
  VIRTUAL_HOST=app1.example.com
App2:
  VIRTUAL_HOST_WEIGHT=1
  VIRTUAL_HOST=app2.example.com
Common:
  VIRTUAL_HOST_WEIGHT=-1
  VIRTUAL_HOST=*.example.com
So if either App1 or App2 fails and generates a 503, you would hope that Common would still serve its maintenance/fallback page, but it doesn't. If I STOP the App1 service, I just get a 503 served on app1.example.com until it's restarted. Common never kicks in.
Is this entirely expected behaviour, and if so, what can we do to set a failover page that shows something other than the standard 503?
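One workaround I'm aware of in plain HAProxy is a custom errorfile, which at least replaces the stock 503 page even if it doesn't reroute traffic. The path is an example, and the file must contain a complete raw HTTP response, headers included:

```
defaults
  # served instead of the built-in 503 page when no backend server is available
  errorfile 503 /etc/haproxy/errors/503-maintenance.http
```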
I'm trying to deploy following YAML to a UCP Swarm Cluster
version: '2'
services:
  web:
    image: dockercloud/hello-world
  lb:
    image: dockercloud/haproxy
    links:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
When I do a docker-compose scale web=2, nothing happens inside HAProxy. It's still just pointing at the original container, and container 2 never gets added.
Here is more info
frontend default_frontend
  bind :80
  maxconn 4096
  default_backend default_service
backend default_service
  server dockercloudhaproxy_web_1 dockercloudhaproxy_web_1:80 check
When the 2nd container is launched with docker-compose scale web=2, I would expect a dockercloudhaproxy_web_2 to be added to default_service.
Am I missing anything? What else can I do to troubleshoot?
Hello.
I can't find whether you support the PROXY protocol introduced in haproxy-1.5 (http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt).
For example, I would like to add the send-proxy option:
server servername localhost:25 check inter 2000 rise 2 fall 3 send-proxy
So I'll be able to use this, for example: https://github.com/haraka/Haraka/blob/master/docs/HAProxy.md
Thanks.
Hi,
What am I missing or not understanding?
When using the VIRTUAL_HOST option with a single nodejs app everything works as expected. I add the path, expose port on container and all requests get routed to the correct nodejs container.
However, the moment I try with several nodejs apps with different ports, only the first gets routed, the rest fail with a 404 error. Routing to a simple nginx site + nodejs app does seem to work.
I've tried everything from setting BACKEND_PORTS, FRONTEND_PORTS, TCP_PORTS, and VIRTUAL_HOST_WEIGHT, but without success.
Example config:
api-browser:
  image: 'trigdev/api-browser:0.1'
  deployment_strategy: high_availability
  environment:
    - 'APP_PORT= 3013'
    - 'VIRTUAL_HOST=DOMAIN.COMt:80/api/v0/trigg/browse/'
    - VIRTUAL_HOST_WEIGHT=1
  expose:
    - '3013'
  links:
    - amq
    - db
  restart: on-failure
  tags:
    - backend
    - medium
api-crud:
  image: 'trigdev/api-crud:0.1'
  deployment_strategy: high_availability
  environment:
    - 'APP_PORT= 3005'
    - 'VIRTUAL_HOST=DOMAIN.COM:80/api/v0/trigg/'
    - VIRTUAL_HOST_WEIGHT=1
  expose:
    - '3005'
  links:
    - amq
    - db
  restart: on-failure
  tags:
    - backend
    - medium
lb:
  image: 'dockercloud/haproxy:1.0.1'
  deployment_strategy: high_availability
  environment:
    - 'BACKEND_PORTS=3005,3009,3010'
    - BALANCE=roundrobin
  links:
    - api-browser
    - api-crud
    - confirm-email
    - password-reset
  ports:
    - '80:80'
    - '1936:1936'
  restart: on-failure
  roles:
    - global
  tags:
    - frontend
I'm loving this haproxy implementation so far. Solves all my load balancing needs save one.
I'm trying to install Gitlab on my private server and would love to be able to forward ssh connections. HAProxy has this capability, it seems it would require very little additional templating to get this to work with what you guys have already created.
https://confluence.atlassian.com/bitbucketserver/setting-up-ssh-port-forwarding-776640364.html
For now I'm content exposing Gitlab SSH on another port, which works as long as my services aren't scaling beyond one host machine. At that point I may come back and try to implement this myself.
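For what it's worth, the SSH forwarding itself is just a TCP-mode frontend/backend pair in plain HAProxy, so the extra templating would only need to emit something like the following (names and ports are examples I made up):

```
frontend ssh_in
  bind :2222
  mode tcp
  default_backend gitlab_ssh

backend gitlab_ssh
  mode tcp
  # raw TCP passthrough to the container's SSH daemon
  server gitlab-1 gitlab-1:22 check
```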
We run haproxy in Docker Cloud using a stackfile. The log always shows the following problem which leads to a locked Docker account sooner or later:
2016-04-13T13:16:00.784169614Z INFO:haproxy:Websocket open
2016-04-13T13:16:01.060468731Z ERROR:haproxy:Status 429 (GET https://cloud.docker.com/api/app/v1/container/84995beb-f414-4411-a418-55ed286772f4/). Response: {"detail": "Too many login attempts."}
2016-04-13T13:16:01.060514990Z
2016-04-13T13:16:11.294212262Z ERROR:haproxy:Status 429 (GET https://cloud.docker.com/api/app/v1/container/84995beb-f414-4411-a418-55ed286772f4/). Response: {"detail": "Too many login attempts."}
2016-04-13T13:16:11.294255913Z
2016-04-13T13:16:21.514026149Z ERROR:haproxy:Status 429 (GET https://cloud.docker.com/api/app/v1/container/84995beb-f414-4411-a418-55ed286772f4/). Response: {"detail": "Too many login attempts."}
2016-04-13T13:16:21.514074063Z
What could be the reason for this error? Configuration issue?
Thx,
Sven
I have many docker-compose.yml files for different apps, and none of them are linked to haproxy.
I want to run one haproxy for all of these apps, but since they live in separate docker-compose files, the samples don't cover this situation.
I'd like to set the VIRTUAL_HOST env var in each app's docker-compose file, restart haproxy with the new links (?), and have everything work.
Can you help me do this?
I can't wrap my head around this: how do I get haproxy to listen on a port other than 80?
The web container runs a webserver on port 8080. I would like haproxy to listen on port 8080 as well and redirect all requests on this port to the node server.
With the configuration below the result is
$ curl http://localhost:8080/
curl: (56) Recv failure: Connection reset by peer
However, it works if I change the port mapping in the docker-compose.yml like this
haproxy:
  ports:
    - "8080:80"
The request returns fine
$ curl -s -D - http://localhost:8080/ -o /dev/null
HTTP/1.1 200 OK
I would later add more services on different ports, so I would like to be able to configure the ports haproxy listens on. What am I missing here?
Not working file and output
docker-compose.yml
version: "2"
services:
  web:
    build: 'web/'
    volumes:
      - './web/public:/usr/src/app/public'
    ports:
      - '8080'
  haproxy:
    image: 'dockercloud/haproxy:1.2.1'
    links:
      - 'web'
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
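If I'm reading the image docs correctly, the proxy decides which ports to bind from a TCP_PORTS variable set on the linked service, not just from the proxy container's port mapping. A sketch of the change I think is needed (untested):

```yaml
web:
  build: 'web/'
  ports:
    - '8080'
  environment:
    # ask the proxy to listen on 8080 (tcp mode) for this service
    - TCP_PORTS=8080
```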
output of docker-compose up
Starting p_web_1
Starting p_haproxy_1
Attaching to p_web_1, p_haproxy_1
haproxy_1 | INFO:haproxy:dockercloud/haproxy 1.2.1 is running outside Docker Cloud
haproxy_1 | INFO:haproxy:New link mode, loading HAProxy definition through docker api
haproxy_1 | INFO:haproxy:dockercloud/haproxy PID: 1
haproxy_1 | INFO:haproxy:==========BEGIN==========
haproxy_1 | INFO:haproxy:Initial start
haproxy_1 | INFO:haproxy:Linked service: p_web
haproxy_1 | INFO:haproxy:Linked container: p_web_1
haproxy_1 | INFO:haproxy:HAProxy configuration:
haproxy_1 | global
haproxy_1 |   log 127.0.0.1 local0
haproxy_1 |   log 127.0.0.1 local1 notice
haproxy_1 |   log-send-hostname
haproxy_1 |   maxconn 4096
haproxy_1 |   pidfile /var/run/haproxy.pid
haproxy_1 |   user haproxy
haproxy_1 |   group haproxy
haproxy_1 |   daemon
haproxy_1 |   stats socket /var/run/haproxy.stats level admin
haproxy_1 |   ssl-default-bind-options no-sslv3
haproxy_1 |   ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:DHE-DSS-AES128-SHA:DES-CBC3-SHA
haproxy_1 | defaults
haproxy_1 |   balance roundrobin
haproxy_1 |   log global
haproxy_1 |   mode http
haproxy_1 |   option redispatch
haproxy_1 |   option httplog
haproxy_1 |   option dontlognull
haproxy_1 |   option forwardfor
haproxy_1 |   timeout connect 5000
haproxy_1 |   timeout client 50000
haproxy_1 |   timeout server 50000
haproxy_1 | listen stats
haproxy_1 |   bind :1936
haproxy_1 |   mode http
haproxy_1 |   stats enable
haproxy_1 |   timeout connect 10s
haproxy_1 |   timeout client 1m
haproxy_1 |   timeout server 1m
haproxy_1 |   stats hide-version
haproxy_1 |   stats realm Haproxy\ Statistics
haproxy_1 |   stats uri /
haproxy_1 |   stats auth stats:stats
haproxy_1 | frontend default_frontend
haproxy_1 |   bind :80
haproxy_1 |   maxconn 4096
haproxy_1 |   default_backend default_service
haproxy_1 | backend default_service
haproxy_1 |   server p_web_1 p_web_1:8080 check
haproxy_1 | INFO:haproxy:Launching HAProxy
haproxy_1 | INFO:haproxy:HAProxy has been launched(PID: 10)
haproxy_1 | INFO:haproxy:===========END===========
It seems like the FORCE_SSL param doesn't work. I can still access the http endpoints and don't get redirected to https.
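For reference, my understanding from the docs is that FORCE_SSL is set on the application service and only takes effect when the proxy also terminates SSL, i.e. a cert is provided and 443 is published. A sketch of what I believe the intended setup looks like (cert value is a placeholder):

```yaml
web:
  image: dockercloud/hello-world
  environment:
    - FORCE_SSL=true
    # placeholder: certificate and key concatenated into one PEM string
    - SSL_CERT=<pem-contents>
lb:
  image: dockercloud/haproxy:1.2.1
  links:
    - web
  ports:
    - '80:80'
    - '443:443'
```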
On Mac OS X, this is my docker-compose (v2) file:
version: "2"
services:
  lb:
    image: dockercloud/haproxy:1.2.1
    depends_on:
      - web
    environment:
      - DOCKER_TLS_VERIFY
      - DOCKER_HOST
      - DOCKER_CERT_PATH
    volumes:
      - $DOCKER_CERT_PATH:$DOCKER_CERT_PATH
    ports:
      - 80:80
    networks:
      - lb-tier
  web:
    image: dockercloud/hello-world
    networks:
      - lb-tier
networks:
  lb-tier:
When I try to start the containers, the lb container goes directly into the exited state. docker logs gives me the following:
INFO:haproxy:dockercloud/haproxy 1.2.1 is running outside Docker Cloud
INFO:haproxy:New link mode, loading HAProxy definition through docker api
INFO:haproxy:dockercloud/haproxy PID: 1
INFO:haproxy:==========BEGIN==========
INFO:haproxy:Initial start
Traceback (most recent call last):
  File "/usr/local/bin/dockercloud-haproxy", line 9, in <module>
    load_entry_point('dockercloud-haproxy==1.2.1', 'console_scripts', 'dockercloud-haproxy')()
  File "/usr/local/lib/python2.7/dist-packages/haproxy/main.py", line 43, in main
    run_haproxy("Initial start")
  File "/usr/local/lib/python2.7/dist-packages/haproxy/haproxycfg.py", line 24, in run_haproxy
    haproxy = Haproxy(config.LINK_MODE, msg)
  File "/usr/local/lib/python2.7/dist-packages/haproxy/haproxycfg.py", line 46, in __init__
    self.specs = self._initialize(self.link_mode)
  File "/usr/local/lib/python2.7/dist-packages/haproxy/haproxycfg.py", line 54, in _initialize
    links = Haproxy._init_new_links()
  File "/usr/local/lib/python2.7/dist-packages/haproxy/haproxycfg.py", line 82, in _init_new_links
    links, Haproxy.cls_linked_services = NewLinkHelper.get_new_links(docker, haproxy_container)
  File "/usr/local/lib/python2.7/dist-packages/haproxy/helper/new_link_helper.py", line 13, in get_new_links
    linked_compose_services = _get_linked_compose_services(networks, project)
  File "/usr/local/lib/python2.7/dist-packages/haproxy/helper/new_link_helper.py", line 79, in _get_linked_compose_services
    haproxy_links.extend(network.get("Links", []))
TypeError: 'NoneType' object is not iterable
However, if I use links: - web instead, the containers are linked.
Hi,
I have a request.
Let's say I have a container app1 with ports 443 and 8443, and I would like to forward a different VIRTUAL_HOST to a specific port on that same container, e.g.:
https://some.example.com goes to port 443
and
https://another.example.com goes to port 8443
Is this at all possible?
If so, is it implemented and am I missing something, or is this a feature not yet implemented?
Thanks in advance!
This does not work at all. I deploy an HAProxy container from this image just like I used to with tutumcloud/haproxy, but I cannot connect to the service, not even the monitoring service on port 1936.
Maybe I'm missing something, but it seems like it is not possible to bind exposed ports to virtual host ports.
web:
  image: myimage
  autoredeploy: true
  expose:
    - '4000'
    - '5000'
  environment:
    - VIRTUAL_HOST='example.com:4444, example.com:5555'
How can I expose container port 4000 at example.com:4444 and container port 5000 at example.com:5555?
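In plain HAProxy this mapping is straightforward: one frontend per public port, each with its own backend pointing at the right container port. A hand-written sketch of the config I'd want the image to generate (server names are illustrative):

```
frontend port_4444
  bind :4444
  default_backend web_4000

backend web_4000
  server web-1 web-1:4000 check

frontend port_5555
  bind :5555
  default_backend web_5000

backend web_5000
  server web-1 web-1:5000 check
```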