vernemq / docker-vernemq

VerneMQ Docker image - Starts the VerneMQ MQTT broker and listens on 1883 and 8080 (for websockets).

Home Page: https://vernemq.com

License: Apache License 2.0


docker-vernemq's Introduction

docker-vernemq

What is VerneMQ?

VerneMQ is a high-performance, distributed MQTT message broker. It scales horizontally and vertically on commodity hardware to support a high number of concurrent publishers and consumers while maintaining low latency and fault tolerance. VerneMQ is the reliable message hub for your IoT platform or smart products.

VerneMQ is an Apache2 licensed distributed MQTT broker, developed in Erlang.

How to use this image

1. Accepting the VerneMQ EULA

NOTE: To use the official Docker images you have to accept the VerneMQ End User License Agreement. You can read how to accept the VerneMQ EULA here.

NOTE 2 (TL;DR): To use the binary Docker packages (that is, the official packages from Docker Hub) or the VerneMQ binary Linux packages commercially and legally, you need a paid subscription. Accepting the EULA is your promise to do so. To avoid a subscription, clone this repository and build and host your own images from the Dockerfiles.

2. Using Helm to deploy on Kubernetes

First install and configure Helm according to the documentation. Then add the VerneMQ Helm chart repository:

helm repo add vernemq https://vernemq.github.io/docker-vernemq

You can now deploy VerneMQ on your Kubernetes cluster:

helm install vernemq/vernemq

For more information, check out the Helm chart README.
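For example, a minimal sketch of a Helm 3 install that names the release and overrides the replica count (replicaCount is an assumed value name here; check the chart's values.yaml for the parameters your chart version actually supports):

helm install my-vernemq vernemq/vernemq --set replicaCount=3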

3. Using pure Docker commands

docker run -e "DOCKER_VERNEMQ_ACCEPT_EULA=yes" --name vernemq1 -d vernemq/vernemq

Sometimes you need to publish ports to the host (on a Mac, for example):

docker run -p 1883:1883 -e "DOCKER_VERNEMQ_ACCEPT_EULA=yes" --name vernemq1 -d vernemq/vernemq

This starts a new node that listens on 1883 for MQTT connections and on 8080 for MQTT over websocket connections. However, at this point the broker won't be able to authenticate the connecting clients. To allow anonymous clients, use the DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on environment variable.

docker run -e "DOCKER_VERNEMQ_ACCEPT_EULA=yes" -e "DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on" --name vernemq1 -d vernemq/vernemq
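To verify that the broker accepts connections you can use any MQTT client; here is a quick sketch with the Mosquitto command line clients, assuming port 1883 was published to the host as shown earlier (topic and payload are just placeholders):

mosquitto_sub -h localhost -p 1883 -t 'test/#' -v &
mosquitto_pub -h localhost -p 1883 -t 'test/hello' -m 'hello from docker-vernemq'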

Autojoining a VerneMQ cluster

This allows a newly started container to automatically join a VerneMQ cluster. Assuming you started your first node as in the example above, you can autojoin the cluster (which currently consists of the single container 'vernemq1') as follows:

docker run -e "DOCKER_VERNEMQ_ACCEPT_EULA=yes" -e "DOCKER_VERNEMQ_DISCOVERY_NODE=<IP-OF-VERNEMQ1>" --name vernemq2 -d vernemq/vernemq

(Note: you can find the IP of a Docker container using docker inspect <containername/cid> | grep \"IPAddress\".)
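As a small sketch, the two steps can be combined by looking up the first node's address with docker inspect and passing it as the discovery node (container names as in the examples above):

DISCOVERY_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' vernemq1)
docker run -e "DOCKER_VERNEMQ_ACCEPT_EULA=yes" \
           -e "DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on" \
           -e "DOCKER_VERNEMQ_DISCOVERY_NODE=${DISCOVERY_IP}" \
           --name vernemq2 -d vernemq/vernemq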

4. Automated clustering on Kubernetes without helm

When running VerneMQ inside Kubernetes, it is possible to make pods matching a specific label automatically cluster together. This feature uses the Kubernetes API to discover peers and relies on the default pod service account, which has to be enabled.

Simply set DOCKER_VERNEMQ_DISCOVERY_KUBERNETES=1 in your pod's environment, and expose your own pod name through MY_POD_NAME. By default, this setting will cause all pods in the same namespace with the app=vernemq label to join the same cluster. The cluster name (defaults to cluster.local), namespace and label settings can be overridden with DOCKER_VERNEMQ_KUBERNETES_CLUSTER_NAME, DOCKER_VERNEMQ_KUBERNETES_NAMESPACE and DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR respectively.

An example configuration of your pod's environment looks like this:

env:
  - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
    value: "1"
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR
    value: "app=vernemq,release=myinstance"

When enabling Kubernetes autoclustering, don't set DOCKER_VERNEMQ_DISCOVERY_NODE.
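Because discovery queries the Kubernetes API for the pod list, the service account used by the VerneMQ pods needs permission to read pods. Below is a minimal RBAC sketch granting that to the default service account; names and namespace are placeholders to adapt to your deployment:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vernemq-discovery
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: vernemq-discovery
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: vernemq-discovery
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default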

If you encounter an "SSL certificate error (subject name does not match the host name)" like the one below, you may try setting DOCKER_VERNEMQ_KUBERNETES_INSECURE to "1".

kubectl logs vernemq-0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (51) SSL: certificate subject name 'client' does not match target host name 'kubernetes.default.svc.cluster.local'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (51) SSL: certificate subject name 'client' does not match target host name 'kubernetes.default.svc.cluster.local'
vernemq failed to start within 15 seconds,
see the output of 'vernemq console' for more information.
If you want to wait longer, set the environment variable
WAIT_FOR_ERLANG to the number of seconds to wait.
...

If using a vernemq.conf.local file, you can insert a placeholder (###IPADDRESS###) in your config that will be replaced (at pod creation time) with the actual IP address of the pod VerneMQ is running on, making VerneMQ clustering possible.
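A minimal sketch of what such a file could contain (typically mounted into the pod, e.g. from a ConfigMap); the path the start script checks for, /etc/vernemq/vernemq.conf.local, is an assumption to verify against vernemq.sh in your image version:

allow_anonymous = off
listener.vmq.clustering = ###IPADDRESS###:44053
listener.http.metrics = ###IPADDRESS###:8888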

If Istio is enabled, set DOCKER_VERNEMQ_KUBERNETES_ISTIO_ENABLED=1 so that the init script checks whether Istio is ready.

5. Using Docker Swarm

Please follow the official Docker guide to properly set up a Swarm cluster with one or more nodes.

Once Swarm is set up you can deploy a VerneMQ stack. The following snippet describes the stack using a docker-compose.yml file:

version: "3.7"
services:
  vmq0:
    image: vernemq/vernemq
    environment:
      DOCKER_VERNEMQ_SWARM: 1
  vmq:
    image: vernemq/vernemq
    depends_on:
      - vmq0
    environment:
      DOCKER_VERNEMQ_SWARM: 1
      DOCKER_VERNEMQ_DISCOVERY_NODE: vmq0
    deploy:
      replicas: 2

Run docker stack deploy -c docker-compose.yml my-vernemq-stack to deploy a 3 node VerneMQ cluster.

Note: Docker Swarm currently lacks functionality comparable to a Kubernetes StatefulSet. As a consequence, VerneMQ must rely on a dedicated discovery service (the vmq0 service above) that is started before the other replicas.
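In a Swarm deployment the container names are generated by Docker, so a sketch of running vmq-admin against the discovery service's container could look like this (assuming the stack was deployed as my-vernemq-stack as above, and run on the Swarm node that hosts the vmq0 task):

docker exec $(docker ps -q -f name=my-vernemq-stack_vmq0) vmq-admin cluster show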

Checking cluster status

To check if the above containers have successfully clustered you can issue the vmq-admin command:

docker exec vernemq1 vmq-admin cluster show
+--------------------+-------+
|        Node        |Running|
+--------------------+-------+
|VerneMQ@172.17.0.2  | true  |
|VerneMQ@172.17.0.3  | true  |
+--------------------+-------+

If you started the VerneMQ cluster inside Kubernetes using DOCKER_VERNEMQ_DISCOVERY_KUBERNETES=1, you can execute vmq-admin through kubectl:

kubectl exec vernemq-0 -- vmq-admin cluster show
+---------------------------------------------------+-------+
|                       Node                        |Running|
+---------------------------------------------------+-------+
|VerneMQ@vernemq-0.vernemq.default.svc.cluster.local| true  |
|VerneMQ@vernemq-1.vernemq.default.svc.cluster.local| true  |
+---------------------------------------------------+-------+

All vmq-admin commands are available. See https://vernemq.com/docs/administration/ for more information.

VerneMQ Configuration

All configuration parameters that are available in vernemq.conf can be defined using the DOCKER_VERNEMQ prefix followed by the configuration parameter name, e.g. allow_anonymous=on becomes -e "DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on" and allow_register_during_netsplit=on becomes -e "DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT=on". All available configuration parameters can be found at https://vernemq.com/docs/configuration/.
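As a sketch, several parameters can be combined in one docker run; max_client_id_size and allow_register_during_netsplit below are just examples of regular vernemq.conf parameters, and the values are illustrative:

docker run -e "DOCKER_VERNEMQ_ACCEPT_EULA=yes" \
           -e "DOCKER_VERNEMQ_ALLOW_ANONYMOUS=off" \
           -e "DOCKER_VERNEMQ_MAX_CLIENT_ID_SIZE=100" \
           -e "DOCKER_VERNEMQ_ALLOW_REGISTER_DURING_NETSPLIT=on" \
           --name vernemq1 -d vernemq/vernemq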

Erlang VM args

Erlang VM args can be updated using the following environment variables (see the example after the list):

DOCKER_VERNEMQ_ERLANG__MAX_PORTS: Erlang max ports; the value provided is set for +Q in the vm.args file.
DOCKER_VERNEMQ_ERLANG__PROCESS_LIMIT: Erlang process limit; the value provided is set for +P in the vm.args file.
DOCKER_VERNEMQ_ERLANG__MAX_ETS_TABLES: Erlang max ETS tables; the value provided is set for +e in the vm.args file.
DOCKER_VERNEMQ_ERLANG__DISTRIBUTION_BUFFER_SIZE: Erlang distribution buffer size; the value provided is set for +zdbbl in the vm.args file.
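For example, a sketch that raises the distribution buffer and the process limit for a single node (the values are illustrative only):

docker run -e "DOCKER_VERNEMQ_ACCEPT_EULA=yes" \
           -e "DOCKER_VERNEMQ_ERLANG__DISTRIBUTION_BUFFER_SIZE=32768" \
           -e "DOCKER_VERNEMQ_ERLANG__PROCESS_LIMIT=512000" \
           --name vernemq1 -d vernemq/vernemq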

Logging

VerneMQ stores crash and error log files in /var/log/vernemq/ and, by default, doesn't write the console log to disk to avoid filling the container's disk space. This behaviour can be changed by setting the environment variable DOCKER_VERNEMQ_LOG__CONSOLE to both, which tells VerneMQ to send logs to stdout and to /var/log/vernemq/console.log. For more information, see the VerneMQ logging documentation: https://docs.vernemq.com/configuring-vernemq/logging

Remarks

Some of our configuration variables contain dots (.). For example, if you want to adjust the log level of VerneMQ you'd use -e "DOCKER_VERNEMQ_LOG.CONSOLE.LEVEL=debug". However, some container platforms such as Kubernetes don't support dots and other special characters in environment variable names. If you are on such a platform you can substitute the dots with two underscores (__). The example above would then look like -e "DOCKER_VERNEMQ_LOG__CONSOLE__LEVEL=debug".

There are some exceptions for configuration names containing dots, as the following examples show:

format in vernemq.conf                  format in environment variable name
vmq_webhooks.pool_timeout = 60000       DOCKER_VERNEMQ_VMQ_WEBHOOKS__POOL_timeout=60000
vmq_webhooks.pool_timeout = 60000       DOCKER_VERNEMQ_VMQ_WEBHOOKS.pool_timeout=60000

File Based Authentication

You can set up File Based Authentication by adding users and passwords as environment variables as follows:

DOCKER_VERNEMQ_USER_<USERNAME>='password'

where <USERNAME> is the username you want to use. This can be done as many times as necessary to create the users you want. The usernames will always be created in lowercase.

CAVEAT - You cannot have a = character in your password.
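A sketch that creates two users at startup (usernames and passwords are placeholders; keep in mind they end up in the container's environment, so treat them accordingly):

docker run -e "DOCKER_VERNEMQ_ACCEPT_EULA=yes" \
           -e "DOCKER_VERNEMQ_USER_alice=alicepassword" \
           -e "DOCKER_VERNEMQ_USER_bob=bobpassword" \
           --name vernemq1 -d vernemq/vernemq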

Thank you to all our contributors!


docker-vernemq's Issues

vmq-admin breaks on Docker IP change

Environment

  • VerneMQ Version: 1.2.0
  • OS: Docker

Steps to reproduce

First, let's run a fresh container with VerneMQ in it and call vmq-admin:

$ docker run -d erlio/docker-vernemq:1.2.0
582b6594662b0fdeb18549a2ab37269c45966ef31e400b7061989018c8950fff

$ docker exec -it 582b6594662b vmq-admin cluster show
+------------------+-------+
|       Node       |Running|
+------------------+-------+
|VerneMQ@172.17.0.2| true  |
+------------------+-------+

So far so good. Now let's stop that container, start any other (I'm using redis as an example) and then restart the original VerneMQ container:

$ docker stop 582b6594662b
582b6594662b

$ docker run -d redis
5189487103ab60afbe2ef71561879fb723f883f58e95e3689418c49de8a09d27

$ docker start 582b6594662b
582b6594662b

Now let's try vmq-admin again:

$ docker exec -it 582b6594662b vmq-admin cluster show
Node 'VerneMQ@172.17.0.2' not responding to pings.

What happened? The redis container took 172.17.0.2, which previously belonged to VerneMQ. Now VerneMQ has a different IP:

$ docker exec -it 582b6594662b tail -n 1 /etc/hosts
172.17.0.3	582b6594662b

I guess the problem is that the IP is persisted in /etc/vernemq/vm.args across container restarts:

$ docker exec -it 582b6594662b cat /etc/vernemq/vm.args
+P 256000
-env ERL_MAX_ETS_TABLES 256000
-env ERL_CRASH_DUMP /erl_crash.dump
-env ERL_FULLSWEEP_AFTER 0
-env ERL_MAX_PORTS 65536
+A 64
-setcookie vmq
-name VerneMQ@172.17.0.2
+K true
+W w
-smp enable
+zdbbl 32768

In practice, this bit me when calling docker-compose up repeatedly, which can change container startup order (and thus IP assignments) if some containers do not need to be restarted because no changes were made to their configuration.
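One way to sidestep the problem, as a sketch rather than an official recommendation, is to run the broker on a user-defined network with a fixed address, so that the name written into vm.args stays valid across restarts (subnet and IP are placeholders):

docker network create --subnet 172.30.0.0/16 vmq-net
docker run --network vmq-net --ip 172.30.0.10 --name vernemq1 -d erlio/docker-vernemq:1.2.0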

Use stdout logging instead of tailing the console.log file

The console.log file gets rotated and can grow significantly to the point of making the service deployed using Docker vulnerable to interruptions due to full hard disk. I have experienced this issue in a production environment in AWS and will submit a fix for it once I sort it out and test it.

Kubernetes labelSelector should be configurable

When activating kubernetes autodiscovery, VerneMQ will query kubernetes in order to get the pod subdomain and hostname. To do so the query to kubernetes contains a labelSelector parameter as such:
?labelSelector=app=$DOCKER_VERNEMQ_KUBERNETES_APP_LABEL

We can see here that VerneMQ assumes that all VerneMQ pods have the same app label, which is quite common when using Helm for example. Even if Helm does add an app label by default to the pod, it isn't enough to identify the VerneMQ nodes of a cluster. Indeed, the Helm convention is to use both the app and release labels to make sure one is selecting pods belonging to the same cluster. In short, if we assume the user follows the Helm convention, the labelSelector value should look more like this: ?labelSelector=app=<pod_app_label>,release=<pod_release_label>.

Moreover, Helm conventions changed a bit recently: nowadays the app label is app.kubernetes.io/name and the release label is app.kubernetes.io/instance.

But unfortunately, we cannot configure the label(s) to be used, just the value of the app label. I suggest replacing $DOCKER_VERNEMQ_KUBERNETES_APP_LABEL with $DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR, which would contain the entire labelSelector, not just the value of the app label.

Authentication

This is kinda cool to help people whet their whistle, but having authorization is pretty important. It would probably be as simple as letting users mount a volume containing an ACL file as described here.
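As a sketch of that idea (the mount path and the ACL syntax should be checked against the vmq_acl plugin documentation; this is not a confirmed interface of the image):

# simple.acl, mounted into the container: a user line followed by the topics that user may use
user alice
topic devices/alice/#

docker run -v $(pwd)/simple.acl:/etc/vernemq/vmq.acl --name vernemq1 -d erlio/docker-vernemq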

Can not auth using Redis

Hello everybody,

I'm trying to auth using Redis, but it keeps failing. Could you please verify if I'm doing anything wrong?

Here is my docker-compose env vars:

      - DOCKER_VERNEMQ_VMQ_DIVERSITY__AUTH_REDIS__ENABLED=on
      - DOCKER_VERNEMQ_VMQ_DIVERSITY__REDIS__HOST=redis
      - DOCKER_VERNEMQ_VMQ_DIVERSITY__REDIS__PORT=6379
      - DOCKER_VERNEMQ_VMQ_DIVERSITY__REDIS__PASSWORD=hidden
      - DOCKER_VERNEMQ_VMQ_DIVERSITY__REDIS__DATABASE=0

and here is my Redis key:

get "[\"\", \"pedro2\", \"lwm2m_adapter\"]"
"{\"passhash\": \"$2a$12$Y7hZuD921ScEQCO/dgwDEOSo9WV.bFCub/CU6J5rUK8RM9vGdVBQe\", \"subscribe_acl\": [{\"pattern\": \"devices/deviceA/#\"}, {\"pattern\": \"devices/deviceB/#\"}, {\"pattern\": \"devices/deviceC/#\"}]}"

So why is it failing when trying to connect with the same username and client id (pedro2 and lwm2m_adapter respectively)?

vernemq_1         | 2017-07-17 11:14:35.531 [error] <0.352.0>@vmq_diversity_script_state:handle_info:175 can't call function auth_on_register with args [{addr,<<"172.18.0.1">>},{port,33366},{mountpoint,<<>>},{client_id,<<"lwm2m_adapter">>},{username,<<"pedro2">>},{password,<<"olhaaquiotoken">>},{clean_session,true}] in "/usr/share/vernemq/lua/auth/redis.lua" due to {exit,{noproc,{gen_server,call,[auth_redis,{checkout,#Ref<0.0.2.872>,true},5000]}}}
vernemq_1         | 2017-07-17 11:14:35.532 [warning] <0.484.0>@vmq_mqtt_fsm:check_user:546 can't authenticate client {[],<<"lwm2m_adapter">>} due to error
vernemq_1         | 2017-07-17 11:14:35.540 [error] <0.375.0>@vmq_diversity_script_state:handle_info:175 can't call function auth_on_register with args [{addr,<<"172.18.0.1">>},{port,33368},{mountpoint,<<>>},{client_id,<<"lwm2m_adapter">>},{username,<<"pedro2">>},{password,<<"olhaaquiotoken">>},{clean_session,true}] in "/usr/share/vernemq/lua/auth/redis.lua" due to {exit,{noproc,{gen_server,call,[auth_redis,{checkout,#Ref<0.0.1.1484>,true},5000]}}}
vernemq_1         | 2017-07-17 11:14:35.541 [warning] <0.485.0>@vmq_mqtt_fsm:check_user:546 can't authenticate client {[],<<"lwm2m_adapter">>} due to error 

Am I doing something wrong ?!

Custom Plugins

Can someone suggest the best way to include custom plugins with docker-vernemq?
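Not an official answer, but one approach the configuration interface suggests is to mount the compiled plugin into the container and enable it through the plugins.* settings of vernemq.conf via environment variables; a sketch, with the plugin name and path as placeholders:

docker run -e "DOCKER_VERNEMQ_ACCEPT_EULA=yes" \
           -e "DOCKER_VERNEMQ_PLUGINS__MYPLUGIN=on" \
           -e "DOCKER_VERNEMQ_PLUGINS__MYPLUGIN__PATH=/opt/myplugin/default" \
           -v $(pwd)/myplugin:/opt/myplugin \
           --name vernemq1 -d vernemq/vernemq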

vernemq fails to cluster for unknown reason

Here is my vernemq manifest: https://gist.github.com/cbluth/f77b658e09781ef36f1ba317b41e1a31#file-vernemq-statefulset-dev-yaml

I have started this VerneMQ cluster with three replicas, and the nodes fail to join a cluster. The logs don't show anything helpful:

$ kubectl --context=dev -n backend exec -it vernemq-1 -- vmq-admin cluster show
+---------------------------------------------------+-------+
|                       Node                        |Running|
+---------------------------------------------------+-------+
|[email protected]| true  |
+---------------------------------------------------+-------+

$ kubectl --context=dev -n backend exec -it vernemq-0 -- vmq-admin cluster show
+---------------------------------------------------+-------+
|                       Node                        |Running|
+---------------------------------------------------+-------+
|[email protected]| true  |
+---------------------------------------------------+-------+

$ kubectl --context=dev -n backend logs vernemq-1
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 15176    0 15176    0     0   2727      0 --:--:--  0:00:05 --:--:--  3749
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 15176    0 15176    0     0   199k      0 --:--:-- --:--:-- --:--:--  200k
Will join an existing Kubernetes cluster with discovery node at vernemq-0.vernemq.backend.svc.cluster.local
14:29:41.006 [warning] message_size_limit is deprecated and has no effect, use max_message_size instead
config is OK
-config /var/lib/vernemq/generated.configs/app.2018.10.24.14.29.40.config -args_file /etc/vernemq/vm.args -vm_args /etc/vernemq/vm.args
Exec:  /usr/lib/vernemq/erts-8.3.5.3/bin/erlexec -boot /usr/lib/vernemq/releases/1.4.1/vernemq               -config /var/lib/vernemq/generated.configs/app.2018.10.24.14.29.40.config -args_file /etc/vernemq/vm.args -vm_args /etc/vernemq/vm.args              -pa /usr/lib/vernemq/lib/erlio-patches -- console -noshell -noinput
Root: /usr/lib/vernemq
14:29:43.994 [info] Application lager started on node '[email protected]'
14:29:44.030 [info] Application vmq_plugin started on node '[email protected]'
14:29:44.031 [info] Application ssl_verify_fun started on node '[email protected]'
14:29:44.031 [info] Application epgsql started on node '[email protected]'
14:29:44.130 [info] writing state {[{[{actor,<<121,199,1,165,134,170,135,41,56,224,130,232,132,133,186,129,152,175,166,216>>}],1}],{dict,1,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[['[email protected]',{[{actor,<<121,199,1,165,134,170,135,41,56,224,130,232,132,133,186,129,152,175,166,216>>}],1}]],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}}} to disk <<75,2,131,80,0,0,1,25,120,1,203,96,206,97,96,96,96,204,96,130,82,41,12,172,137,201,37,249,69,185,64,81,145,202,227,140,75,219,86,181,107,90,60,104,122,209,210,186,171,113,198,250,101,55,178,18,25,179,50,56,83,24,88,82,50,147,75,18,25,19,5,128,144,35,49,32,209,32,67,32,11,13,100,48,162,138,129,173,0,17,76,41,12,198,97,169,69,121,169,190,129,14,101,32,58,183,80,215,80,15,202,210,75,74,76,206,78,205,75,209,43,46,75,214,75,206,41,45,46,73,45,210,203,201,79,78,204,33,205,145,32,199,32,28,202,64,138,67,65,90,1,68,28,95,41>>
14:29:44.236 [info] Datadir /var/lib/vernemq/meta/meta/0 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,51044508}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:29:44.441 [info] Datadir /var/lib/vernemq/meta/meta/1 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,57169680}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:29:44.599 [info] Datadir /var/lib/vernemq/meta/meta/2 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,55847932}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:29:44.734 [info] Datadir /var/lib/vernemq/meta/meta/3 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,44813888}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:29:44.890 [info] Datadir /var/lib/vernemq/meta/meta/4 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,57032550}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:29:45.078 [info] Datadir /var/lib/vernemq/meta/meta/5 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,55244403}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:29:45.335 [info] Datadir /var/lib/vernemq/meta/meta/6 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,47058945}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:29:45.604 [info] Datadir /var/lib/vernemq/meta/meta/7 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,47592825}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:29:45.814 [info] Datadir /var/lib/vernemq/meta/meta/8 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,56342560}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:29:46.068 [info] Datadir /var/lib/vernemq/meta/meta/9 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,32052242}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:29:46.269 [info] Datadir /var/lib/vernemq/meta/meta/10 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,46843934}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:29:46.550 [info] Datadir /var/lib/vernemq/meta/meta/11 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,37840095}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:29:47.022 [info] Application plumtree started on node '[email protected]'
14:29:47.136 [info] Application hackney started on node '[email protected]'
14:29:47.398 [info] Application inets started on node '[email protected]'
14:29:47.398 [info] Application xmerl started on node '[email protected]'
14:29:47.587 [info] Application vmq_plumtree started on node '[email protected]'
14:29:47.646 [info] Try to start vmq_plumtree: ok
14:29:51.704 [info] loaded 0 subscriptions into vmq_reg_trie
14:29:51.716 [info] cluster event handler 'vmq_cluster' registered
14:29:52.579 [info] Application vmq_acl started on node '[email protected]'
14:29:52.759 [info] Application vmq_passwd started on node '[email protected]'
14:29:53.249 [info] Application vmq_server started on node '[email protected]'
14:29:53.331 [info] Sent join request to: '[email protected]'
14:29:53.347 [info] Unable to connect to '[email protected]'
$ kubectl --context=dev -n backend logs vernemq-0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7645    0  7645    0     0  80284      0 --:--:-- --:--:-- --:--:-- 80473
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7645    0  7645    0     0  29202      0 --:--:-- --:--:-- --:--:-- 29291
14:28:55.800 [warning] message_size_limit is deprecated and has no effect, use max_message_size instead
config is OK
-config /var/lib/vernemq/generated.configs/app.2018.10.24.14.28.55.config -args_file /etc/vernemq/vm.args -vm_args /etc/vernemq/vm.args
Exec:  /usr/lib/vernemq/erts-8.3.5.3/bin/erlexec -boot /usr/lib/vernemq/releases/1.4.1/vernemq               -config /var/lib/vernemq/generated.configs/app.2018.10.24.14.28.55.config -args_file /etc/vernemq/vm.args -vm_args /etc/vernemq/vm.args              -pa /usr/lib/vernemq/lib/erlio-patches -- console -noshell -noinput
Root: /usr/lib/vernemq
14:28:56.442 [info] Application lager started on node '[email protected]'
14:28:56.444 [info] Application vmq_plugin started on node '[email protected]'
14:28:56.444 [info] Application ssl_verify_fun started on node '[email protected]'
14:28:56.444 [info] Application epgsql started on node '[email protected]'
14:28:56.455 [info] writing state {[{[{actor,<<215,106,159,154,59,250,131,122,167,138,152,88,108,187,13,21,29,49,52,114>>}],1}],{dict,1,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[['[email protected]',{[{actor,<<215,106,159,154,59,250,131,122,167,138,152,88,108,187,13,21,29,49,52,114>>}],1}]],[],[],[]}}},{dict,0,16,16,8,80,48,{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},{{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}}} to disk <<75,2,131,80,0,0,1,25,120,1,203,96,206,97,96,96,96,204,96,130,82,41,12,172,137,201,37,249,69,185,64,81,145,235,89,243,103,89,255,106,174,90,222,53,35,34,103,55,175,168,172,161,73,81,86,34,99,86,6,103,10,3,75,74,102,114,73,34,99,162,0,16,114,36,6,36,26,100,8,100,161,129,12,70,84,49,176,21,32,130,41,133,193,56,44,181,40,47,213,55,208,161,12,68,231,22,234,26,232,65,89,122,73,137,201,217,169,121,41,122,197,101,201,122,201,57,165,197,37,169,69,122,57,249,201,137,57,164,57,18,228,24,132,67,25,72,113,40,72,43,0,158,60,90,154>>
14:28:56.467 [info] Datadir /var/lib/vernemq/meta/meta/0 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,49458728}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:28:56.500 [info] Datadir /var/lib/vernemq/meta/meta/1 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,55167825}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:28:56.512 [info] Datadir /var/lib/vernemq/meta/meta/2 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,46843104}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:28:56.531 [info] Datadir /var/lib/vernemq/meta/meta/3 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,57927891}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:28:56.543 [info] Datadir /var/lib/vernemq/meta/meta/4 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,46165787}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:28:56.565 [info] Datadir /var/lib/vernemq/meta/meta/5 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,52945183}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:28:56.579 [info] Datadir /var/lib/vernemq/meta/meta/6 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,44756691}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:28:56.597 [info] Datadir /var/lib/vernemq/meta/meta/7 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,35152485}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:28:56.707 [info] Datadir /var/lib/vernemq/meta/meta/8 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,35533159}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:28:56.736 [info] Datadir /var/lib/vernemq/meta/meta/9 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,32617045}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:28:56.754 [info] Datadir /var/lib/vernemq/meta/meta/10 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,57146622}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:28:56.792 [info] Datadir /var/lib/vernemq/meta/meta/11 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,41242328}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
14:28:56.874 [info] Application plumtree started on node '[email protected]'
14:28:56.887 [info] Application hackney started on node '[email protected]'
14:28:56.910 [info] Application inets started on node '[email protected]'
14:28:56.910 [info] Application xmerl started on node '[email protected]'
14:28:56.947 [info] Application vmq_plumtree started on node '[email protected]'
14:28:56.959 [info] Try to start vmq_plumtree: ok
14:28:57.475 [info] loaded 0 subscriptions into vmq_reg_trie
14:28:57.481 [info] cluster event handler 'vmq_cluster' registered
14:28:58.024 [info] Application vmq_acl started on node '[email protected]'
14:28:58.066 [info] Application vmq_passwd started on node '[email protected]'
14:28:58.115 [info] Application vmq_server started on node '[email protected]'

An important part I see is this: 14:29:53.347 [info] Unable to connect to '[email protected]'

Any help would be appreciated.

Binding metrics listener on 0.0.0.0

Hello,
whenever I bind

listener.http.default = 127.0.0.1:8888

on 0.0.0.0:8888 I can not do a port-forward with kubernetes as Kubernetes indicates there is no socket listening on this port:

E0918 15:28:31.681107 94370 portforward.go:331] an error occurred forwarding 8888 -> 8888: error forwarding port 8888 to pod eeb4904d4f112b65b78170d1c65f2251fbf4acc587ce7428de5239d2512bf041, uid : exit status 1: 2018/09/18 13:28:31 socat[1675256] E connect(5, AF=2 127.0.0.1:8888, 16): Connection refused
E0918 15:28:31.691713 94370 portforward.go:331] an error occurred forwarding 8888 -> 8888: error forwarding port 8888 to pod eeb4904d4f112b65b78170d1c65f2251fbf4acc587ce7428de5239d2512bf041, uid : exit status 1: 2018/09/18 13:28:31 socat[1675257] E connect(5, AF=2 127.0.0.1:8888, 16): Connection refused

What's the reason for this behaviour? When I bind it on localhost instead it works as intended. I bound all my TCP sockets on 0.0.0.0 which is working as expected.

Docker tag "erlio/docker-vernemq:1.4.1" issue

I am not sure what is going on here, but when I use the Docker tag erlio/docker-vernemq:1.4.1 in Kubernetes 1.10, I get this behaviour:


user@laptop:~/x$ kubectl logs vernemq-0 
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:19 --:--:--     0curl: (6) Could not resolve host: kubernetes.default.svc.cluster.local
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   302  100   302    0     0     28      0  0:00:10  0:00:10 --:--:--    74
jq: error: Cannot iterate over null
14:20:33.207 [warning] message_size_limit is deprecated and has no effect, use max_message_size instead
config is OK
-config /var/lib/vernemq/generated.configs/app.2018.07.17.14.20.33.config -args_file /etc/vernemq/vm.args -vm_args /etc/vernemq/vm.args
Exec:  /usr/lib/vernemq/erts-8.3.5.3/bin/erlexec -boot /usr/lib/vernemq/releases/1.4.1/vernemq               -config /var/lib/vernemq/generated.configs/app.2018.07.17.14.20.33.config -args_file /etc/vernemq/vm.args -vm_args /etc/vernemq/vm.args              -pa /usr/lib/vernemq/lib/erlio-patches -- console -noshell -noinput
Root: /usr/lib/vernemq
14:20:33.854 [info] Application lager started on node '[email protected]'
14:20:33.857 [info] Application vmq_plugin started on node '[email protected]'
14:20:33.857 [info] Application ssl_verify_fun started on node '[email protected]'
14:20:33.857 [info] Application epgsql started on node '[email protected]'

Please note the initial curl error Could not resolve host: kubernetes.default.svc.cluster.local,
which I think also causes this (note the double point ..) : [email protected]

I changed nothing but the Docker image tag (to 1.3.1) and it seems to work fine.

How to setup docker-vernemq on Docker Swarm

Environment

VerneMQ Version: None
OS: Docker Swarm (Docker images based on Alpine or Ubuntu)
Erlang/OTP version (if building from source):

Expected behavior

3 node VerneMQ cluster running in Docker Swarm with Docker services/stacks

Actual behaviour

Unknown. I can't find this use case in the documentation and it would be very, very nice to have it!
There is some documentation about starting a Docker container, but Docker Swarm works with services/stacks that are available across the nodes.
Is it even possible to run it as a service/stack, or do you have to start it manually on different servers?

We are comparing different MQTT brokers and those who appeal to us the most are VerneMQ and EMQ. We would like to run it in our production system but want to have native Docker Swarm support.

Thanks in advance!

Supporting multiple networks

I am trying to connect vernemq to 2 different networks

services:
  verne:
    image: "erlio/docker-vernemq:1.1.0"
    networks:
       - "external"
       - "default"

Due to the fact that the interface is hard-coded to eth0 in vernemq.sh, the VerneMQ node gets bound to a potentially wrong IP address.

I tried to override the IP_ADDRESS variable with "0.0.0.0" in the startup script, but this results in a working mqtt but a failing ranch

Is there any other variable to set to get this to work?

vernemq.conf overwritten each restart

When I create a Docker container using docker run -p 1883:1883 -p 8080:8080 --name vernemq1 --restart=always -v /data/vernemq/:/etc/vernemq/ -d erlio/docker-vernemq,
the vernemq.conf seems to be overwritten on each restart.
Can this behaviour be changed to keep the modifications?

docker pause/restart doesn't work

The main reason for this is that the start script adds the node_join command to vm.args every time the container is started. Moreover, it actually messes with a previous --eval inside the vm.args.
To fix this issue one should only add the node_join command the first time the node is started, and actually only evaluate it if the node hasn't joined a cluster before. So this needs some work.

Workaround for now, don't use pause/restart. ;)

VerneMQ Helm chart

It would be really nice to have an "official" Helm chart to deploy VerneMQ on a Kubernetes cluster. It would avoid issues like #79. I'm pretty sure the VerneMQ dev team already has templates of Kubernetes configurations.

Best would be to have it into the official Helm chart repository.

Issues with automatic discovery on GKE

After setting up permissions on GKE to enable the default service account and creating the necessary roles (I volunteer to add these steps to the readme file) I cannot get the automatic discovery to work.

The issue seems to lie in the fact that when the vernemq.sh script requests the PodList, the spec.hostname and spec.subdomain properties are missing from the Pod descriptions, and so kube_pod_names is always empty. Is there something I'm missing? Could we use spec.nodeName instead of spec.hostname?

Env var format to override settings incompatible with Kubernetes

Kubernetes doesn't allow dots in env var names:

  • spec.template.spec.containers[0].env[7].name: Invalid value: "DOCKER_VERNEMQ_LOG.CONSOLE.LEVEL": must match the regex [A-Za-z_][A-Za-z0-9_]* (e.g. 'my_name' or 'MY_NAME' or 'MyName')

Also, the README.md file specifies all underscores for the DOCKER_VERNEMQ names, but that form doesn't work either.

Questions about webhook specs

Environment
VerneMQ 1.6.2
OS: CentOS Linux release 7.6.1810 3.10.0-957.1.3.el7.x86_64

I have several questions about how webhooks for auth_on_subscribe and auth_on_publish should work.
First, some response messages in the documentation are not valid JSON:

"result": { "error": "some error message" }

,

"result": "next"

Is that a typo?

The major question is:
How should the client be notified when either the auth_on_subscribe or auth_on_publish hook sends an error? I would assume the connection should be dropped. However, with the error message {"result": { "error": "some error message" }} this does not happen. In fact, the publisher will receive a PUBACK if qos is set. I believe this is wrong. I can get the desired behavior by sending {"result":"error"}. The drawback of this approach is that the crash logs are now full of

2019-01-11 12:56:21 =CRASH REPORT====
  crasher:
    initial call: vmq_ranch:init/5
    pid: <0.14194.0>
    registered_name: []
    exception error: {{case_clause,<<"error">>},[{vmq_webhooks_plugin,handle_response,3,[{file,"/opt/vernemq/distdir/BUILD/1.6.2/_build/default/lib/vmq_webhooks/src/vmq_webhooks_plugin.erl"},{line,657}]},{vmq_webhooks_plugin,call_endpoint,4,[{file,"/opt/vernemq/distdir/BUILD/1.6.2/_build/default/lib/vmq_webhooks/src/vmq_webhooks_plugin.erl"},{line,582}]},{vmq_webhooks_plugin,maybe_call_endpoint,4,[{file,"/opt/vernemq/distdir/BUILD/1.6.2/_build/default/lib/vmq_webhooks/src/vmq_webhooks_plugin.erl"},{line,557}]},{vmq_webhooks_plugin,all_till_ok,3,[{file,"/opt/vernemq/distdir/BUILD/1.6.2/_build/default/lib/vmq_webhooks/src/vmq_webhooks_plugin.erl"},{line,481}]},{vmq_plugin_helper,all_till_ok,2,[{file,"/opt/vernemq/distdir/BUILD/1.6.2/_build/default/lib/vmq_plugin/src/vmq_plugin_helper.erl"},{line,31}]},{vmq_mqtt_fsm,auth_on_register,3,[{file,"/opt/vernemq/distdir/BUILD/1.6.2/_build/default/lib/vmq_server/src/vmq_mqtt_fsm.erl"},{line,596}]},{vmq_mqtt_fsm,check_user,2,[{file,"/opt/vernemq/distdir/BUILD/1.6.2/_build/default/lib/vmq_server/src/vmq_mqtt_fsm.erl"},{line,523}]},{vmq_mqtt_fsm,init,3,[{file,"/opt/vernemq/distdir/BUILD/1.6.2/_build/default/lib/vmq_server/src/vmq_mqtt_fsm.erl"},{line,144}]}]}
    ancestors: [<0.390.0>,<0.389.0>,ranch_sup,<0.110.0>]
    message_queue_len: 0
    messages: []
    links: [<0.366.0>,<0.390.0>,#Port<0.25414>]
    dictionary: [{max_msg_size,0},{rand_seed,{#{jump => #Fun<rand.16.15449617>,max => 288230376151711743,next => #Fun<rand.15.15449617>,type => exsplus},[215205180552756476|4120718457347142]}},{socket_open,#Ref<0.4065418721.1822818306.218644>},{bytes_received,#Ref<0.4065418721.1822818306.218641>},{mqtt_connect_received,#Ref<0.4065418721.1822818306.218639>}]
    trap_exit: true
    status: running
    heap_size: 2586
    stack_size: 27
    reductions: 5565

Is this crash benign? Please advise on how to correctly disconnect a client without seeing the crash (if possible).

Question: ARM support for Docker on Raspberry Pi

Hi, is there (or will there be) an official option to run VerneMQ on Docker on an ARM platform like the Raspberry Pi?

I have tried to modify the original Dockerfile to the following example: https://gist.github.com/SeppPenner/08ef2a0593e30e0091351a7cb7932df4

However, I get the following error:

Err http://security.debian.org jessie/updates InRelease

Err http://security.debian.org jessie/updates Release.gpg
Unable to connect to security.debian.org:http: [IP: 212.211.132.250 80]

I have tried this on my private Raspberry Pi as well as on a company one (which is behind a proxy) and got the same result.

Automated clustering in Kubernetes does not work with VERNEMQ

I followed the instructions in the readme to create two pods with VerneMQ and expected that the pods would be part of the same cluster. Unfortunately, I don't see that happening.

Here is my yaml file.

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: vernemq
spec:
  serviceName: vernemq
  replicas: 2
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      containers:
      - image: erlio/docker-vernemq:1.3.1
        name: vernemq
        env:
          - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
            value: "1"
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
            value: "on"
        ports:
        - containerPort: 1883
          name: mqtt-port
        - containerPort: 8080
          name: websocket-port
        - containerPort: 8888
          name: http-port
        resources: {}
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: vernemq
  name: vernemq
spec:
  ports:
  - name: "mqtt"
    port: 1883
    nodePort: 30883
  - name: "wsocket"
    port: 8080
    nodePort: 30080
  - name: "http"
    port: 8888
    nodePort: 30888
  type: NodePort  
  selector:
    app: vernemq

Here are my pods:

$ kubectl get pods -l app=vernemq
NAME        READY     STATUS    RESTARTS   AGE
vernemq-0   1/1       Running   0          11m
vernemq-1   1/1       Running   0          11m

Here is my cluster data:

$ kubectl exec vernemq-0 -- vmq-admin cluster show
+---------------------------------------------------+-------+
|                       Node                        |Running|
+---------------------------------------------------+-------+
|[email protected]| true  |
+---------------------------------------------------+-------+

I am trying this on one node only for now and hope that is ok.

Here are the logs from one of the pods. It says it is unable to join the cluster.

Will join an existing Kubernetes cluster with discovery node at vernemq-0.vernemq.default.svc.cluster.local
2018-06-05 15:23:37.867 [info] <0.213.0>@plumtree_metadata_leveldb_instance:init_state:322 Datadir /var/lib/vernemq/meta/meta/9 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,41625000}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
2018-06-05 15:23:37.913 [info] <0.214.0>@plumtree_metadata_leveldb_instance:init_state:322 Datadir /var/lib/vernemq/meta/meta/10 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,32660142}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
2018-06-05 15:23:37.921 [info] <0.215.0>@plumtree_metadata_leveldb_instance:init_state:322 Datadir /var/lib/vernemq/meta/meta/11 options for LevelDB: [{open,[{block_cache_threshold,33554432},{block_restart_interval,16},{block_size_steps,16},{compression,true},{create_if_missing,true},{delete_threshold,1000},{eleveldb_threads,71},{fadvise_willneed,false},{limited_developer_mem,false},{sst_block_size,4096},{tiered_slow_level,0},{total_leveldb_mem_percent,6},{use_bloomfilter,true},{write_buffer_size,44623398}]},{read,[{verify_checksums,true}]},{write,[{sync,false}]},{fold,[{verify_checksums,true},{fill_cache,false}]}]
2018-06-05 15:23:37.981 [info] <0.31.0> Application plumtree started on node '[email protected]'
2018-06-05 15:23:38.005 [info] <0.31.0> Application vmq_plugin started on node '[email protected]'
2018-06-05 15:23:38.005 [info] <0.31.0> Application ssl_verify_fun started on node '[email protected]'
2018-06-05 15:23:38.005 [info] <0.31.0> Application epgsql started on node '[email protected]'
2018-06-05 15:23:38.053 [info] <0.31.0> Application hackney started on node '[email protected]'
2018-06-05 15:23:38.502 [info] <0.322.0>@vmq_reg_trie:handle_info:183 loaded 0 subscriptions into vmq_reg_trie
2018-06-05 15:23:38.519 [info] <0.220.0>@vmq_cluster:init:114 plumtree peer service event handler 'vmq_cluster' registered
2018-06-05 15:23:39.080 [info] <0.31.0> Application vmq_acl started on node '[email protected]'
2018-06-05 15:23:39.117 [info] <0.31.0> Application vmq_passwd started on node '[email protected]'
2018-06-05 15:23:39.232 [info] <0.31.0> Application vmq_server started on node '[email protected]'
2018-06-05 15:23:39.255 [info] <0.3.0>@plumtree_peer_service:attempt_join:50 Sent join request to: '[email protected]'
2018-06-05 15:23:39.260 [info] <0.3.0>@plumtree_peer_service:attempt_join:53 Unable to connect to '[email protected]'

I looked at #52 and I am not using any certificates. Apart from that, I think we are doing almost the same. Not sure why it can't join the cluster.

Unable to set bind address for listeners

Hello,

Currently the VerneMQ docker container determines the IP address of the container using this method:

IP_ADDRESS=$(ip -4 addr show eth0 | grep -oP "(?<=inet).*(?=/)"| sed -e "s/^[[:space:]]*//" | tail -n 1)

In my case, I want to run VerneMQ on multiple docker networks:

root@mqtt:/# ip -4 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5365: eth0@if5366: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    inet 172.17.0.26/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
5367: eth1@if5368: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    inet 172.18.0.15/16 brd 172.18.255.255 scope global eth1
       valid_lft forever preferred_lft forever
5369: eth2@if5370: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    inet 172.28.0.2/16 brd 172.28.255.255 scope global eth2
       valid_lft forever preferred_lft forever

I want it to answer on all of them, so I would like to use 0.0.0.0 for the following, but there seems to be no way to accomplish this. Setting the environment variables results in the generated lines being added to the file before the auto-generated lines.

echo "listener.tcp.default = ${IP_ADDRESS}:1883" >> /etc/vernemq/vernemq.conf
echo "listener.ws.default = ${IP_ADDRESS}:8080" >> /etc/vernemq/vernemq.conf
echo "listener.vmq.clustering = ${IP_ADDRESS}:44053" >> /etc/vernemq/vernemq.conf
echo "listener.http.metrics = ${IP_ADDRESS}:8888" >> /etc/vernemq/vernemq.conf

Mounting the config file still results in the start script modifying it. Perhaps the start script could look for vernemq.conf.local and use that if it is present? This seems like a solution for #42 as well. I could do a PR for this if you like.

Questions when subscribing and expecting to receive many messages

Environment

VerneMQ Version: 1.1.1 ( ImageID 4317379db50c)
OS:Linux Docker container
Client:
Version: 17.06.1-ce
API version: 1.30
Go version: go1.8.3
Git commit: 874a737
Built: Thu Aug 17 22:48:20 2017
OS/Arch: windows/amd64

Server:
Version: 17.06.1-ce
API version: 1.30 (minimum version 1.12)
Go version: go1.8.3
Git commit: 874a737
Built: Thu Aug 17 22:54:55 2017
OS/Arch: linux/amd64
Experimental: true

Expected behaviour

I have my own HTTP server which is connected to the VerneMQ broker (client A) and
uses webhooks to authorize registering, publishing and subscribing to messages.
Client A publishes one message when a client connects and another one when it disconnects from the broker.
I publish messages using my own test program which involves 2000 clients connecting to the broker
and publishing messages. I'm using a third party application (client B) to subscribe to some of the messages published.

I expect client B to receive the 4000 messages sent by client A in a decent amount of time (5 minutes maximum).

Actual behaviour
I see all 4000 messages received by client B in approximately 50 minutes, which is far too long for my purposes.

Details
Client A has the keep alive set to 150 and max inflight to 10000.

On the broker I edited etc/vernemq/vernemq.conf and set (besides everything needed to use the webhooks)
the following:

max_inflight_messages = 10000
max_online_messages = 10000
max_offline_messages = 10000
listener.nr_of_acceptors = 10000
erlang.async_threads = 128
leveldb.maximum_memory.percent = 80

Sometimes I see the following in the HTTP server logs:
org.eclipse.paho.client.mqttv3.internal.ClientState checkForActivity
SEVERE: 8c894b2efbe942a7b132116497cd68cc: Timed out as no activity, keepAlive=150,000 lastOutboundActivity=1,503,497,209,568 lastInboundActivity=1,503,497,062,154 time=1,503,497,359,569 lastPing=1,503,497,209,568

Also, from testing this multiple times, the broker seems to hang and only send the messages from time to time. I can see that the HTTP server receives the messages and authorizes the publications, with periods of complete inactivity in between. I also encountered the following warnings in the broker logs:

[warning] <0.20438.0>@vmq_mqtt_fsm:connected:420 client {"newmountpoint",<<"8c894b2efbe942a7b132116497cd68cc">>} with username <<"test_auth_00001">> stopped due to keepalive expired
[warning] <0.217.0>@plumtree_broadcast:schedule_lazy_tick:599 Broadcast overloaded, 1177ms mailbox traversal, schedule next lazy broadcast in 58585ms, the min interval is 10000ms

Using fewer clients (e.g. 200) in my test program, client B receives all messages in 1 minute.

I noticed that both when I'm using a small number of clients and when I'm using a big number of clients I encounter the following error:
[error] <0.20443.0>@vmq_webhooks_plugin:call_endpoint:476 calling endpoint failed due to received_payload_not_json

I am sending everything as JSON, so I don't understand why I see this error.

Today I updated my Docker for Windows to the latest version, and also VerneMQ to the latest version, and I don't see any difference for my problem.

At this moment I don't know what causes this hanging. Is there something I can change in the broker settings or in my connection settings to make this faster?

I appreciate any help here.
Thank You!

Unable to set nodename

Unable to set nodename for vernemq node in docker container.

After looking at bin/vernemq.sh, I believe the issue is in bin/vernemq.sh line 10. Below is a possible fix.
if env | grep -q "DOCKER_VERNEMQ_NODENAME"; then
    sed -i.bak "s/[email protected]/VerneMQ@${DOCKER_VERNEMQ_NODENAME}/" /etc/vernemq/vm.args
else
    sed -i.bak "s/[email protected]/VerneMQ@${IP_ADDRESS}/" /etc/vernemq/vm.args
fi

Or I might have misunderstood.

Can someone help me with this issue?

VerneMQ cluster node restart failed

We created a VMQ cluster in Docker. There are two nodes. The first node is OK, but once the second node is stopped, it will not start again.
Our steps are as follows:
1. docker run --name vernemq1 -p 0.0.0.0:1183:1883 -p 0.0.0.0:1888:8888 -v /app/vernemq/config/vernemq.conf:/etc/vernemq/vernemq.conf -d erlio/docker-vernemq (we want all nodes to use the same config file, and port 8888 is for the status page)

2ใ€docker inspect bd71a898d734 | grep "IPAddress" ๏ผˆwe find the ip and the ip is "172.17.0.2"๏ผ‰

3ใ€docker run -e "DOCKER_VERNEMQ_DISCOVERY_NODE=172.17.0.2" --name vernemq2 -p 0.0.0.0:1283:1883 -p 0.0.0.0:2888:8888 -v /app/vernemq/config/vernemq.conf:/etc/vernemq/vernemq.conf -v /app/vernemq/static:/usr/lib/vernemq/lib/vmq_server-1.6.1/priv/static -d erlio/docker-vernemq

The node vernemq2 is working properly, but the logs show some errors.

09:40:41.723 [error] Failed to start Ranch listener {{0,0,0,0},8888} in ranch_tcp:listen([{ip,{0,0,0,0}},{port,8888},{nodelay,true},{linger,{true,0}},{send_timeout,30000},{send_timeout_close,true}]) for reason eaddrinuse (address already in use)
09:40:41.723 [error] CRASH REPORT Process <0.432.0> with 0 neighbours exited with reason: {listen_error,{{0,0,0,0},8888},eaddrinuse} in ranch_acceptors_sup:listen_error/4 line 56

If we stop the vernemq2 node, we can't start it anymore. The error is the same as at the beginning. The only way to repair the cluster is to delete the vernemq2 container node and build a new one. Is there any way to fix this problem?

MySQL auth problem on Kubernetes

Image version: docker-vernemq:1.3.0

I have a custom auth plugin doing MySQL-based authentication (based on the original mysql plugin from here: https://github.com/erlio/vernemq/blob/master/apps/vmq_diversity/priv/auth/mysql.lua ), with some minor alterations. The plugin looks like this:


function auth_on_register(reg)
    local results
    if reg.username == "token" and reg.password ~= nil and reg.client_id ~= nil then
        log.debug("username based auth")
        results = mysql.execute(pool,
                [[SELECT publish_acl, subscribe_acl FROM vmq_auth_acl WHERE client_id=? AND password=?;]],
                reg.client_id, reg.password)

    elseif reg.username ~= nil and reg.client_id ~= nil and reg.username == reg.client_id then
        log.debug("certificate based auth")
        results = mysql.execute(pool,
                [[SELECT publish_acl, subscribe_acl FROM vmq_auth_acl WHERE client_id=?;]],
                reg.client_id)
    else
        log.warn("bad connection setup")
        return false
    end

    return check_result_add_to_cache(results, reg)
end

function check_result_add_to_cache(results, reg)
    if #results == 1 then
        row = results[1]
        publish_acl = json.decode(row.publish_acl)
        subscribe_acl = json.decode(row.subscribe_acl)
        cache_insert(
                reg.mountpoint,
                reg.client_id,
                reg.username,
                publish_acl,
                subscribe_acl
        )
        return true
    else
        log.debug("auth failure, client not found in ACL database, or wrong password")
        return false
    end
end

pool = "auth_mysql"
config = {
    pool_id = pool
}

mysql.ensure_pool(config)
hooks = {
    auth_on_register = auth_on_register,
    auth_on_publish = auth_on_publish,
    auth_on_subscribe = auth_on_subscribe,
    on_unsubscribe = on_unsubscribe,
    on_client_gone = on_client_gone,
    on_client_offline = on_client_offline
}

I have tried this with MySQL using Docker on my local dev PC and it worked well. After that I tried to deploy it to a Kubernetes cluster and it stopped working. The YAML descriptor is the following:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mqtt-gateway
  namespace: sensorhub
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mqtt-gateway
        version: "2.0.0"
    spec:
      containers:
      - name: mqtt-gateway
        image: erlio/docker-vernemq:1.3.0
        ports:
         - containerPort: 8883
         - containerPort: 8884
        volumeMounts:
         - name: ca
           mountPath: '/etc/ca'
           readOnly: true
         - name: mqtt-cert
           mountPath: '/etc/ssl'
           readOnly: true
         - name: mqtt-authconf
           mountPath: '/etc/scripts/'
           readOnly: true
        env:
         - name: DOCKER_VERNEMQ_LOG__CONSOLE__LEVEL
           value: "info"
         - name: DOCKER_VERNEMQ_PLUGINS__VMQ_PASSWD
           value: "off"
         - name: DOCKER_VERNEMQ_PLUGINS__VMQ_ACL
           value: "off"
         - name: DOCKER_VERNEMQ_PLUGINS__VMQ_DIVERSITY
           value: "on"
         - name: DOCKER_VERNEMQ_VMQ_DIVERSITY__MYSQL__HOST
           valueFrom:
            configMapKeyRef:
              name: mysql-config
              key: host
         - name: DOCKER_VERNEMQ_VMQ_DIVERSITY__MYSQL__PORT
           valueFrom:
             configMapKeyRef:
               name: mysql-config
               key: port
         - name: DOCKER_VERNEMQ_VMQ_DIVERSITY__MYSQL__USER
           valueFrom:
             configMapKeyRef:
               name: mysql-config
               key: username
         - name: DOCKER_VERNEMQ_VMQ_DIVERSITY__MYSQL__PASSWORD
           valueFrom:
             configMapKeyRef:
               name: mysql-config
               key: password
         - name: DOCKER_VERNEMQ_VMQ_DIVERSITY__MYSQL__DATABASE
           valueFrom:
             configMapKeyRef:
               name: mysql-config
               key: db
         - name: DOCKER_VERNEMQ_LISTENER__SSL__CAFILE
           value: "/etc/ca/crt"
         - name: DOCKER_VERNEMQ_LISTENER__SSL__CERTFILE
           value: "/etc/ssl/crt"
         - name: DOCKER_VERNEMQ_LISTENER__SSL__KEYFILE
           value: "/etc/ssl/key"
         - name: DOCKER_VERNEMQ_LISTENER__SSL__CERTBASED__REQUIRE_CERTIFICATE
           value: "on"
         - name: DOCKER_VERNEMQ_LISTENER__SSL__CERTBASED__USE_IDENTITY_AS_USERNAME
           value: "on"
         - name: DOCKER_VERNEMQ_LISTENER__SSL__CERTBASED
           value: "0.0.0.0:8883"
         - name: DOCKER_VERNEMQ_LISTENER__SSL__PWBASED__REQUIRE_CERTIFICATE
           value: "off"
         - name: DOCKER_VERNEMQ_LISTENER__SSL__PWBASED__USE_IDENTITY_AS_USERNAME
           value: "off"
         - name: DOCKER_VERNEMQ_LISTENER__SSL__PWBASED
           value: "0.0.0.0:8884"
         - name: DOCKER_VERNEMQ_LISTENER__MAX_CONNECTIONS
           value: "100000"
         - name: DOCKER_VERNEMQ_LISTENER__NR_OF_ACCEPTORS
           value: "100"
         - name: DOCKER_VERNEMQ_VMQ_DIVERSITY__MYSQLACL__FILE
           value: "/etc/scripts/mysqlacl.lua"
         - name: DOCKER_VERNEMQ_MAX_CLIENT_ID_SIZE
           value: "36"
         - name: DOCKER_VERNEMQ_MESSAGE_SIZE_LIMIT
           value: "16384"
      volumes:
      - name: ca
        secret:
          secretName: deviceregistry-ca
      - name: mqtt-cert
        secret:
          secretName: mqtt-gateway-cert
      - name: mqtt-authconf
        secret:
          secretName: mqtt-gateway-authconf
          items:
          - key: authplugin
            path: "mysqlacl.lua"

From the logs it looks like it cannot connect to the database because of a wrong password:

2018-02-07 15:08:04.574 [info] <0.31.0> Application plumtree started on node '[email protected]'
2018-02-07 15:08:04.581 [info] <0.31.0> Application vmq_plugin started on node '[email protected]'
2018-02-07 15:08:04.582 [info] <0.31.0> Application ssl_verify_fun started on node '[email protected]'
2018-02-07 15:08:04.582 [info] <0.31.0> Application epgsql started on node '[email protected]'
2018-02-07 15:08:04.594 [info] <0.31.0> Application hackney started on node '[email protected]'
2018-02-07 15:08:08.587 [info] <0.352.0>@vmq_reg_trie:handle_info:183 loaded 0 subscriptions into vmq_reg_trie
2018-02-07 15:08:08.603 [info] <0.235.0>@vmq_cluster:init:114 plumtree peer service event handler 'vmq_cluster' registered
2018-02-07 15:08:09.156 [info] <0.31.0> Application vmq_diversity started on node '[email protected]'
2018-02-07 15:08:09.210 [info] <0.378.0>@vmq_diversity_app:start:64 enable script for "/etc/scripts/mysqlacl.lua"
2018-02-07 15:08:09.244 [error] <0.384.0>@vmq_diversity_script_state:init:96 can't load script "/etc/scripts/mysqlacl.lua" due to {throw,{auth_fail,{error_packet,2,1045,<<"28000">>,"Access denied for user 'root'@'10.244.1.64' (using password: YES)"}}}
2018-02-07 15:08:09.245 [error] <0.378.0>@vmq_diversity_app:load_script:96 could not load script "/etc/scripts/mysqlacl.lua" due to {{'EXIT',{{badmatch,{error,{normal,{child,undefined,{vmq_diversity_script_state,1},{vmq_diversity_script_state,start_link,[1,"/etc/scripts/mysqlacl.lua"]},permanent,5000,worker,[vmq_diversity_script_state]}}}},[{vmq_diversity_script_sup_sup,setup_lua_states,2,[{file,"/opt/vernemq/distdir/1.3.0/_build/default/lib/vmq_diversity/src/vmq_diversity_script_sup_sup.erl"},{line,97}]},{vmq_diversity_script_sup_sup,start_link,1,[{file,"/opt/vernemq/distdir/1.3.0/_build/default/lib/vmq_diversity/src/vmq_diversity_script_sup_sup.erl"},{line,45}]},{supervisor,do_start_child,2,[{file,"supervisor.erl"},{line,365}]},{supervisor,handle_start_child,2,[{file,"supervisor.erl"},{line,724}]},{supervisor,handle_call,3,[{file,"supervisor.erl"},{line,422}]},{gen_server,try_handle_call,4,[{file,"gen_server.erl"},{line,615}]},{gen_server,handle_msg,5,[{file,"gen_server.erl"},{line,647}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}},{child,undefined,{vmq_diversity_script_sup_sup,"/etc/scripts/mysqlacl.lua"},{vmq_diversity_script_sup_sup,start_link,["/etc/scripts/mysqlacl.lua"]},permanent,5000,supervisor,[vmq_diversity_script_sup_sup]}}

The MySQL settings come from a config map and are the same ones that other applications use in this cluster, and those can connect without a problem. The password does not contain any special characters.
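One thing worth checking (a suggestion, not a confirmed cause): the error above reports user 'root', while the config map provides a separate username key. To verify the exact values and the route VerneMQ sees, something along these lines could be run in the same namespace (the mysql:5.7 image and the placeholders are assumptions):

# show the values handed to the deployment
kubectl -n sensorhub get configmap mysql-config -o yaml

# try the same connection from a throwaway pod; replace the placeholders
kubectl -n sensorhub run mysql-check -it --rm --restart=Never --image=mysql:5.7 -- \
  mysql -h <host> -P <port> -u <username> -p<password> <db> -e 'SELECT 1;'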

Do you have any idea what could cause this issue?

How to do file persistence with a volume?

docker-compose file

    volumes:
      - mqtt/data:/vernemq/data

error log

Creating network "docker_cherry-network" with driver "bridge"
Creating mqttinit ... done
Creating mqtt-1   ... done
Creating mqtt-0   ... done
Creating mqtt-2   ... done
Attaching to mqttinit, mqtt-0, mqtt-2, mqtt-1
mqttinit    | 11:31:04.534 [error] Error creating /vernemq/data/generated.configs: permission denied
mqttinit    | configuration error, exit
mqttinit    | 11:31:04.534 [error] Error creating /vernemq/data/generated.configs: permission denied
mqtt-0      | 11:31:05.805 [error] Error creating /vernemq/data/generated.configs: permission denied
mqtt-0      | configuration error, exit
mqtt-0      | 11:31:05.805 [error] Error creating /vernemq/data/generated.configs: permission denied

This error seems to be caused by file/directory permissions.
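A possible workaround, assuming the image runs VerneMQ as a dedicated non-root user (check its UID first; it may differ between image versions, and adjust the image name to the one you use):

# find out which UID the vernemq user has inside the image
docker run --rm --entrypoint id vernemq/vernemq vernemq

# give the bind-mounted host directory to that UID (10000 is only an example)
mkdir -p mqtt/data
sudo chown -R 10000:10000 mqtt/data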

Passing config arguments returns an error

Hi,

When I run the container with the port mapping argument and the anonymous argument I get the following error:

โฏ docker run --name vernemq1 -d erlio/docker-vernemq -p 1883:1883 -e "DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on"
7adb4137d8258032d1e9e6d35cb25b6682723896bd6bac3c8627fbdc2e2cd42e
docker: Error response from daemon: oci runtime error: container_linux.go:262: starting container process caused "exec: \"-p\": executable file not found in $PATH".

When I remove the -e "DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on" argument it starts successfully, but it isn't reachable because I'm running it on a Mac.
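For reference, Docker passes everything after the image name to the container as its command, so the -p and -e above are being interpreted as a command rather than as docker run flags; reordering along these lines (untested) should avoid the error:

docker run -p 1883:1883 -e "DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on" --name vernemq1 -d erlio/docker-vernemq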

Cheers!

Discussion: How to change the official VerneMQ Docker images

Goals: We want to support multiple image types, like a larger Debian-based image and a small stripped-down Alpine-based image. Moreover, we want to provide Docker images that use the new SWC replication algorithm out of the box, but keep Docker images with the current/stable replication algorithm. We want to differentiate between those versions with Docker tags.

The easiest way to make this work is to have a Docker image that doesn't use a VerneMQ release installed via one of our official packages (today we take the official Debian Jessie package), but instead uses a pre-built VerneMQ release directly. The main downside of such an approach is that existing Docker volume mounts have to be adjusted to reflect the new directory structure.

The following adjustments would be necessary:

/etc/vernemq --> /vernemq/etc
/var/log/vernemq --> /vernemq/log
/var/lib/vernemq --> /vernemq/data

The changes can be handled in a (sort of) backward-compatible way using OS symlinks.

Also, the binaries vernemq, vmq-admin and vmq-passwd are inside /vernemq/bin and cannot easily be copied into /usr/sbin.
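A sketch of the backward-compatibility shims mentioned above, as Dockerfile RUN steps (paths as listed; the exact layout of the pre-built release is an assumption):

# keep the old paths working by pointing them at the new release layout
RUN ln -s /vernemq/etc /etc/vernemq \
 && ln -s /vernemq/log /var/log/vernemq \
 && ln -s /vernemq/data /var/lib/vernemq \
 && ln -s /vernemq/bin/vernemq /usr/sbin/vernemq \
 && ln -s /vernemq/bin/vmq-admin /usr/sbin/vmq-admin \
 && ln -s /vernemq/bin/vmq-passwd /usr/sbin/vmq-passwd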

What do you think about this? Is this a change that could be done right away, or does it need special care for current Docker based deployments?

Excessive memory consumption, too many OS processes?

Hi,

I am seeing some strange behaviour from VerneMQ while running it under docker-compose.

[screenshot from 2016-10-27 14-27-25: htop output]

There are A LOT, like three or maybe four htop pages full of vernemq processes (OS processes, I guess, not Erlang processes).

Is this expected?
I stop and restart docker-compose pretty often; could they be left over from the old runs?

Cheers and congrats for the work,

Simone

Vernemq running on Kubernetes but sometimes not forwarding received messages

Hello,
our environment is made up of at least 3 nodes (max 5 nodes) running with Kubernetes on Google Cloud Platform.
On top of that we were running a version of VerneMQ we compiled ourselves, in particular a cluster made up of 3 pods; this compiled version was based on VerneMQ 1.3.0 with the webhooks pool size modified.
Pod resource limits were 1 CPU and 3 GB RAM.
Since July we have had this problem: sometimes a pod, despite running, correctly accepting subscriptions and receiving messages, didn't forward the received messages to subscribers. We used to unblock this situation by restarting the stuck pod.
To try to solve the problem we decided to upgrade the VerneMQ version, so now we are running the official Docker image "erlio/docker-vernemq:1.5.0".
The new pods have the same resource limits, but we added an HAProxy in front of the VerneMQ container.
This is the current vernemq.conf:

# defines the default nr of allowed concurrent
# connections per listener
listener.max_connections = 50000

# defines the nr. of acceptor processes waiting
# to concurrently accept new connections
listener.nr_of_acceptors = 30

# used when clients of a particular listener should
# be isolated from clients connected to another
# listener.
listener.mountpoint = off

# log setting 
log.console = console
log.console.file = /vernemq/log/console.log
log.console.level = info
log.error.file = /dev/null

log.crash = on
log.crash.file = /etc/vernemeq/crash.log
log.crash.maximum_message_size = 64KB
log.crash.size = 10MB
log.crash.rotation = $D0
log.crash.rotation.keep = 5

# cluster cookies
distributed_cookie = ***

allow_register_during_netsplit = on
allow_publish_during_netsplit = on
allow_subscribe_during_netsplit = on
allow_unsubscribe_during_netsplit = on

shared_subscription_policy = prefer_local

erlang.distribution.port_range.minimum = 9100
erlang.distribution.port_range.maximum = 9199

listener.ssl.cafile = ***
listener.ssl.certfile = ***
listener.ssl.keyfile = ***
listener.ssl.require_certificate = off
listener.ssl.use_identity_as_username = off
listener.ssl.default = 0.0.0.0:9883

listener.wss.cafile = ***
listener.wss.certfile = ***
listener.wss.keyfile = ***
listener.wss.require_certificate = off
listener.wss.use_identity_as_username = off
listener.wss.default = 0.0.0.0:8083

listener.tcp.default = 0.0.0.0:2883
listener.tcp.alternative1 = 0.0.0.0:2884
listener.tcp.alternative2 = 0.0.0.0:2885
listener.tcp.alternative3 = 0.0.0.0:2886
listener.tcp.alternative4 = 0.0.0.0:2887
listener.ws.default = 0.0.0.0:8080
listener.vmq.default = 0.0.0.0:44053
listener.http.default = 0.0.0.0:8888

plugins.vmq_diversity = off

plugins.vmq_passwd = on
vmq_passwd.password_file = /etc/vernemq/vmq.passwd
vmq_passwd.password_reload_interval = 10

plugins.vmq_acl = on
vmq_acl.acl_file = /etc/vernemq/vmq.acl
vmq_acl.acl_reload_interval = 10

plugins.vmq_webhooks = on
vmq_webhooks.ccs-webauth.hook = auth_on_register
vmq_webhooks.ccs-webauth.endpoint = ***
vmq_webhooks.ccs-webauth1.hook = auth_on_publish
vmq_webhooks.ccs-webauth1.endpoint = ***
vmq_webhooks.ccs-webauth2.hook = auth_on_subscribe
vmq_webhooks.ccs-webauth2.endpoint = ***

max_message_size = 32768
max_client_id_size = 50
persistent_client_expiration = 1d

retry_interval = 10
max_inflight_messages = 100
max_online_messages = 1000
max_offline_messages = 5

#leveldb.maximum_memory = 10485760
#leveldb.write_buffer_size_min = 1048576
#leveldb.write_buffer_size_max = 2097152

leveldb.maximum_memory = 209715200
leveldb.write_buffer_size_min = 10485760
leveldb.write_buffer_size_max = 104857600

vmq_webhooks.pool_max_connections = 1000
vmq_webhooks.pool_timeout = 60000

We added a check script to every VerneMQ pod: it subscribes on the specific pod, sends a message and waits for the reply. This check script is used as a Kubernetes liveness probe, so when it fails 5 consecutive times the pod is automatically restarted.
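A minimal sketch of such a round-trip probe (assuming recent mosquitto clients in the pod, the TCP listener on port 2883 from the config above, and whatever credentials the auth setup requires):

#!/bin/sh
# publish/subscribe round trip against the local listener; non-zero exit on failure
TOPIC="liveness/$(hostname)"
mosquitto_sub -h 127.0.0.1 -p 2883 -t "$TOPIC" -C 1 -W 10 > /tmp/probe &   # add -u/-P as required
SUB_PID=$!
sleep 1
mosquitto_pub -h 127.0.0.1 -p 2883 -t "$TOPIC" -m ping                     # add -u/-P as required
wait $SUB_PID
grep -q ping /tmp/probe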
Also with the new version this check sometimes fails. In the logs we found this stack trace:

12:05:31.644 [error] gen_server plumtree_metadata_manager terminated with reason: no case clause matching [] in plumtree_metadata_manager:store/2 line 472
12:05:31.645 [error] CRASH REPORT Process plumtree_metadata_manager with 0 neighbours exited with reason: no case clause matching [] in plumtree_metadata_manager:store/2 line 472 in gen_server:terminate/7 line 812
12:05:31.646 [error] gen_server plumtree_broadcast terminated with reason: {{{case_clause,[]},[{plumtree_metadata_manager,store,2,[{file,"/opt/vernemq/distdir/1.5.0/_build/default/lib/plumtree/src/plumtree_metadata_manager.erl"},{line,472}]},{plumtree_metadata_manager,read_merge_write,2,[{file,"/opt/vernemq/distdir/1.5.0/_build/default/lib/plumtree/src/plumtree_metadata_manager.erl"},{line,457}]},{plumtree_metadata_manager,handle_call,3,[{file,"/opt/vernemq/distdir/1.5.0/_build/default/lib/plumtree/src/plumtree_metadata_manager.erl"},{line,316}]},{gen_server,try_handle_call,...},...]},...}
12:05:31.646 [error] CRASH REPORT Process plumtree_broadcast with 0 neighbours exited with reason: {{{case_clause,[]},[{plumtree_metadata_manager,store,2,[{file,"/opt/vernemq/distdir/1.5.0/_build/default/lib/plumtree/src/plumtree_metadata_manager.erl"},{line,472}]},{plumtree_metadata_manager,read_merge_write,2,[{file,"/opt/vernemq/distdir/1.5.0/_build/default/lib/plumtree/src/plumtree_metadata_manager.erl"},{line,457}]},{plumtree_metadata_manager,handle_call,3,[{file,"/opt/vernemq/distdir/1.5.0/_build/default/lib/plumtree/src/plumtree_metadata_manager.erl"},{line,316}]},{gen_server,try_handle_call,...},...]},...} in gen_server2:terminate/3 line 995
12:05:31.646 [error] Supervisor plumtree_sup had child plumtree_metadata_manager started with plumtree_metadata_manager:start_link() at <0.225.0> exit with reason no case clause matching [] in plumtree_metadata_manager:store/2 line 472 in context child_terminated
12:05:31.647 [error] Supervisor plumtree_sup had child plumtree_broadcast started with plumtree_broadcast:start_link() at <0.224.0> exit with reason {{{case_clause,[]},[{plumtree_metadata_manager,store,2,[{file,"/opt/vernemq/distdir/1.5.0/_build/default/lib/plumtree/src/plumtree_metadata_manager.erl"},{line,472}]},{plumtree_metadata_manager,read_merge_write,2,[{file,"/opt/vernemq/distdir/1.5.0/_build/default/lib/plumtree/src/plumtree_metadata_manager.erl"},{line,457}]},{plumtree_metadata_manager,handle_call,3,[{file,"/opt/vernemq/distdir/1.5.0/_build/default/lib/plumtree/src/plumtree_metadata_manager.erl"},{line,316}]},{gen_server,try_handle_call,...},...]},...} in context child_terminated

Can you help us?
Thank you very much,
Mauro

[screenshot: 2018-09-04 17-04-24]

Options from Docker env are converted to lowercase

When launching the Docker image, the credentials from the environment are mangled to all lowercase (see: https://github.com/erlio/docker-vernemq/blob/master/bin/vernemq.sh#L49).

$ echo DOCKER_VERNEMQ_VMQ_DIVERSITY__POSTGRES__PASSWORD=lowercaseUPPERCASE | cut -c 16- | tr '[:upper:]' '[:lower:]' | sed 's/__/./g'
vmq_diversity.postgres.password=lowercaseuppercase

This is probably intended to get the keys in lowercase, but it also converts the values to lowercase, which causes issues that are not really obvious until you start digging further.
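A sketch of a pipeline that lowercases only the key while leaving the value untouched (same cut offset as in the current script; untested):

env | grep '^DOCKER_VERNEMQ_' | cut -c 16- | awk -F= '{
    key = tolower($1); gsub(/__/, ".", key);          # lowercase and dot-ify the key only
    value = substr($0, length($1) + 2);               # everything after the first "=", unchanged
    print key "=" value
}'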

Clustering shows pods as running but logs are throwing warnings

I am trying to deploy VerneMQ onto K8s. When I deploy the StatefulSet, the command
kubectl exec vernemq-0 -- vmq-admin cluster show gives me the following output:
+-----------------------------------------------+-------+
| Node |Running|
+-----------------------------------------------+-------+
|[email protected]| true |
|[email protected]| true |
+-----------------------------------------------+-------+

When I log into one of the pods and look at the logs, here is what I see:
2018-11-05 20:21:49.607 [info] <0.420.0>@vmq_cluster_node:connect:244 successfully connected to cluster node '[email protected]'
2018-11-05 20:21:49.607 [warning] <0.2525.0>@vmq_cluster_com:loop:74 terminate due to remote_node_not_available

The output seems contradictory. I can't connect to the nodes, though.

Here is my config.

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: vernemq
  namespace: test
spec:
  serviceName: vernemq
  replicas: 2
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      containers:
      - name: vernemq
        image: erlio/docker-vernemq:1.3.1
        ports:
        - containerPort: 1883
          name: mqtt
        - containerPort: 8883
          name: mqtts
        - containerPort: 4369
          name: epmd
        - containerPort: 44053
          name: vmq
        - containerPort: 9100
        - containerPort: 9101
        - containerPort: 9102
        - containerPort: 9103
        - containerPort: 9104
        - containerPort: 9105
        - containerPort: 9106
        - containerPort: 9107
        - containerPort: 9108
        - containerPort: 9109
        env:
        - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
          value: "1"
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
          value: "vernemq"
        - name: DOCKER_VERNEMQ_KUBERNETES_CLUSTER_NAME
          value: "vernemq"  
        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          value: "iot"
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
          value: "9100"
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
          value: "9109"
        volumeMounts:
          - name: vernemq-config-volume
            mountPath: /etc/vernemq/vernemq.conf
            subPath: vernemq.conf
          - name: vernemq-acl-volume
            mountPath: /etc/vernemq/vmq.acl
            subPath: vmq.acl
        resources:
            limits:
              cpu: 100m
              memory: 512Mi
            requests:
              cpu: 10m
              memory: 256Mi
      volumes:
      - name: vernemq-config-volume
        configMap:
          name: vernemq-config
      - name: vernemq-acl-volume
        configMap:
          name: vernemq-acl

I am also noticing that when the namespace is "default", the broker starts up:
2018-11-05 21:31:42.820 [info] <0.31.0> Application plumtree started on node '[email protected]'
2018-11-05 21:31:44.509 [info] <0.31.0> Application vmq_plugin started on node '[email protected]'
2018-11-05 21:31:44.509 [info] <0.31.0> Application ssl_verify_fun started on node '[email protected]'
2018-11-05 21:31:44.509 [info] <0.31.0> Application epgsql started on node '[email protected]'
2018-11-05 21:31:50.110 [info] <0.31.0> Application hackney started on node '[email protected]'
2018-11-05 21:32:17.113 [info] <0.327.0>@vmq_reg_trie:handle_info:183 loaded 0 subscriptions into vmq_reg_trie
2018-11-05 21:32:19.819 [info] <0.222.0>@vmq_cluster:init:114 plumtree peer service event handler 'vmq_cluster' registered
2018-11-05 21:32:41.311 [info] <0.31.0> Application vmq_acl started on node '[email protected]'
2018-11-05 21:32:43.853 [info] <0.31.0> Application vmq_passwd started on node '[email protected]'
2018-11-05 21:32:52.907 [info] <0.31.0> Application vmq_server started on node '[email protected]'

but when the namespace is injected through an environment variable
- name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE value: "test"
the logs alternate between a successful connection and a termination due to the remote node not being available.

Error joining a node on a different host in a cluster

Hello,

I have two hosts, A and B, and both are connected to the local LAN.

I started a container on host A with the following command:

docker run -e "DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on" --name vernemq1 -p 1883:1883 -p 8883:8883 -d erlio/docker-vernemq

Now I start a container on host B with the command below:

docker run -e "DOCKER_VERNEMQ_DISCOVERY_NODE=192.168.1.33" --name vernemq2 -d erlio/docker-vernemq

where 192.168.1.33 is A's local IP address.

When I run sudo docker exec vernemq1 vmq-admin cluster show it shows just a single node in the cluster.

I tailed the logs of the container running on B and it shows an error connecting to the cluster, as shown below:
12:36:19.159 [info] cluster event handler 'vmq_cluster' registered
12:36:19.801 [info] Application vmq_acl started on node '[email protected]'
12:36:19.865 [info] Application vmq_passwd started on node '[email protected]'
12:36:19.922 [info] Application vmq_server started on node '[email protected]'
12:36:19.946 [info] Sent join request to: '[email protected]'
12:36:19.950 [info] Unable to connect to '[email protected]'
Ironically, when running the above two containers locally they are able to join and form a cluster.

It looks like a Docker networking issue, but I cannot figure out exactly what's blocking the two nodes from communicating. Any help?
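Not an authoritative fix, but clustering across hosts needs the EPMD port (4369), the VerneMQ clustering listener (44053) and the Erlang distribution port range reachable from the other host, and the node name must resolve to an address the peer can reach. A sketch for host A, assuming a start script that honours DOCKER_VERNEMQ_NODENAME (see the "Unable to set nodename" issue above):

docker run -e "DOCKER_VERNEMQ_ALLOW_ANONYMOUS=on" \
           -e "DOCKER_VERNEMQ_NODENAME=192.168.1.33" \
           -e "DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM=9100" \
           -e "DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM=9109" \
           -p 1883:1883 -p 4369:4369 -p 44053:44053 -p 9100-9109:9100-9109 \
           --name vernemq1 -d erlio/docker-vernemq

Host B would publish the same ports, set its own LAN IP as the node name, and keep DOCKER_VERNEMQ_DISCOVERY_NODE=192.168.1.33.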

Multiple IPs break the startup script

I'm trying to launch a VerneMQ instance in a Rancher environment, and I get:

8/30/2016 12:05:53 PM /usr/sbin/start_vernemq: line 49: [: -ne: unary operator expected
8/30/2016 12:06:19 PM vernemq failed to start within 15 seconds,
8/30/2016 12:06:19 PM see the output of 'vernemq console' for more information.
8/30/2016 12:06:19 PM If you want to wait longer, set the environment variable
8/30/2016 12:06:19 PM WAIT_FOR_ERLANG to the number of seconds to wait.

vernemq console throws an exception saying there's a syntax error at these lines:

erlang.distribution.port_range.minimum = 9100
erlang.distribution.port_range.maximum = 9109
listener.tcp.default = 172.17.0.17
10.42.0.193:1883
listener.ws.default = 172.17.0.17
10.42.0.193:8080
listener.vmq.clustering = 172.17.0.17
10.42.0.193:44053
listener.http.metrics = 172.17.0.17

Could it be that, with multiple IPs, they end up on a new line?
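If that is the case, a sketch of a more defensive variant of the address-detection line (picks a single IPv4 address even when eth0 carries several; untested):

IP_ADDRESS=$(ip -4 addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}' | head -n 1)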

Communication problems on Kubernetes

Image version: docker-vernemq:1.3.0

I have an issue communicating with VerneMQ inside a Kubernetes cluster. I have set up an SSL-secured MQTT endpoint using ENV variables given in the Kubernetes Deployment description. I have made my listener bind to 0.0.0.0:8883. The Docker container starts up fine, and if I check the active listeners using vmq-admin I see my custom one on 0.0.0.0:8883 and also the default one, started by the vernemq.sh init script, listening directly on the container IP (10.244.0.49 or something similar) on port 1883.

The problem is that I can't connect to my custom endpoint, but I can to the default one. The ports are forwarded to the outside world using a Kubernetes NodePort service. I have checked the logs and traced the client IDs with vmq-admin, but there is absolutely no sign of activity when I try to connect to my custom listener. The listener itself should be okay; it works on a local Docker setup without Kubernetes.

I assume the problem is with networking. In the local Docker version I also bind my listener to 0.0.0.0, but in that case I use the -p switch of docker run to access the port and it works fine. In the case of Kubernetes, networking is handled in some other way; there is no need to expose ports the same way as in the standalone Docker version. It is still strange that binding to the container IP directly works while binding to 0.0.0.0 (all interfaces?) does not. I'm going to try to make my listener bind to the container IP as well, but that is not trivial since I have to write a static listener description and the container IPs are assigned dynamically.

I wonder if any of you know how Kubernetes / Docker networking works or have any idea what can cause this problem. Thanks in advance!

VerneMQ can't create a cluster automatically on Amazon Elastic Kubernetes Service (EKS)

PLEASE DO NOT MARK THIS AS DUPLICATE AND CLOSE IT, UNLESS THERE IS A COMPLETE SOLUTION TO THIS PROBLEM

I followed the write-up from https://github.com/nmatsui/kubernetes-vernemq and the issue resolution from #52. In this case, it's not an SSL issue. I am trying to do the exact same thing on Amazon EKS.

Here is the YAML file

--- 
apiVersion: apps/v1
kind: StatefulSet
metadata: 
  name: vernemq
spec: 
  replicas: 3
  selector: 
    matchLabels: 
      app: vernemq
  serviceName: vernemq
  template: 
    metadata: 
      labels: 
        app: vernemq
    spec:
      containers:
      - name: vernemq
        image: erlio/docker-vernemq
        # image: nmatsui/docker-vernemq:debug_insecure_kubernetes_restapi
        imagePullPolicy: Always
        # Just spin & wait forever
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        ports:
        - containerPort: 1883
          name: mqtt
        - containerPort: 8883
          name: mqtts
        - containerPort: 4369
          name: epmd
        - containerPort: 44053
          name: vmq
        - containerPort: 9100
        - containerPort: 9101
        - containerPort: 9102
        - containerPort: 9103
        - containerPort: 9104
        - containerPort: 9105
        - containerPort: 9106
        - containerPort: 9107
        - containerPort: 9108
        - containerPort: 9109
        env:
        - name: MY_POD_NAME
          valueFrom:
           fieldRef:
             fieldPath: metadata.name
        - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
          value: "1"
        - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
          value: "vernemq"
        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          value: "default"
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
          value: "9100"
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
          value: "9109"
        - name: DOCKER_VERNEMQ_LISTENER__VMQ__CLUSTERING
          value: "0.0.0.0:44053"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT
          value: "0.0.0.0:8883"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__CAFILE
          value: "/etc/ssl/ca.crt"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__CERTFILE
          value: "/etc/ssl/server.crt"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__KEYFILE
          value: "/etc/ssl/server.key"
        - name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE
          value: "/etc/vernemq-passwd/vmq.passwd"
        volumeMounts:
        - mountPath: /etc/ssl
          name: vernemq-certifications
          readOnly: true
        - mountPath: /etc/vernemq-passwd
          name: vernemq-passwd
          readOnly: true
      volumes:
      - name: vernemq-certifications
        secret:
          secretName: vernemq-certifications
      - name: vernemq-passwd
        secret:
          secretName: vernemq-passwd
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
  - port: 4369
    name: empd
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
  - port: 1883
    name: mqtt
---
apiVersion: v1
kind: Service
metadata:
  name: mqtts
  labels:
    app: mqtts
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
  - port: 8883
    name: mqtts

Kubectl version

>kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}

Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-28T20:13:43Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
>kubectl get nodes

NAME                                           STATUS    ROLES     AGE       VERSION
ip-192-168-138-81.us-west-2.compute.internal   Ready     <none>    6d        v1.10.3
ip-192-168-233-48.us-west-2.compute.internal   Ready     <none>    6d        v1.10.3
ip-192-168-96-199.us-west-2.compute.internal   Ready     <none>    6d        v1.10.3

The Pods started & ran successfully.

>kubectl get pods -l app=vernemq

NAME        READY     STATUS    RESTARTS   AGE
vernemq-0   1/1       Running   0          5h
vernemq-1   1/1       Running   0          5h
vernemq-2   1/1       Running   0          5h

I could not get any cluster.

>kubectl exec vernemq-0 -- vmq-admin cluster show

Node '[email protected]' not responding to pings.
command terminated with exit code 1

The Pod's logs say nothing.

>kubectl logs vernemq-0

Running it in container scheduler

Hi,

I'm trying to run it in a scheduler (nomad), but... it's not running.

First, I'm not sure about the ranges for port allocation. Is it possible to run it with a specific port mapping, which looks like the case from the README?

Then I wonder if it can retry the discovery? I'd like to have this work:
DOCKER_VERNEMQ_DISCOVERY_NODE="mqtt-${env}.service.consul"

I've also tried to run it with net=host, but it looks like that requires rewriting the whole startup script. Not sure, maybe it's the scheduler.

2017-04-25 23:51:21.167 [info] <0.31.0> Application vmq_server started on node '[email protected]'
2017-04-25 23:51:21.200 [info] <0.3.0>@plumtree_peer_service:attempt_join:50 Sent join request to: '[email protected]'
2017-04-25 23:51:21.200 [debug] <0.424.0> Supervisor inet_gethost_native_sup started undefined at pid <0.425.0>
2017-04-25 23:51:21.200 [debug] <0.59.0> Supervisor kernel_safe_sup started inet_gethost_native:start_link() at pid <0.424.0>
2017-04-25 23:51:21.223 [info] <0.3.0>@plumtree_peer_service:attempt_join:53 Unable to connect to '[email protected]'
2017-04-25 23:51:24.280 [debug] <0.434.0>@vmq_ranch:teardown:123 session normally stopped

Kubernetes namespace could be detected automatically

When using the kubernetes clustering mechanism, one has to set the environment variable DOCKER_VERNEMQ_KUBERNETES_NAMESPACE.

While this method works fine, there is a way to find the namespace in which the pod runs automatically: it's stored in a file named namespace, alongside ca.crt and token, in /var/run/secrets/kubernetes.io/serviceaccount/.
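A sketch of how the start script could fall back to that file while keeping the environment variable as an override (the variable name is illustrative):

# prefer the explicit env var; otherwise read the namespace from the service account
if [ -n "$DOCKER_VERNEMQ_KUBERNETES_NAMESPACE" ]; then
    K8S_NAMESPACE="$DOCKER_VERNEMQ_KUBERNETES_NAMESPACE"
elif [ -f /var/run/secrets/kubernetes.io/serviceaccount/namespace ]; then
    K8S_NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
else
    K8S_NAMESPACE="default"
fi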

Got 403 Forbidden error while downloading the "deb" file

We are building VerneMQ from source and it throws an error while downloading the ".deb" file.

Here is the terminal log:

Building vernemq
Step 1/20 : FROM debian:jessie
 ---> ce40fb3adcc6
Step 2/20 : MAINTAINER Erlio GmbH [email protected]
 ---> Using cache
 ---> cfa0d7b2f00e
Step 3/20 : RUN apt-get update && apt-get install -y     libssl-dev     logrotate     sudo && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> 8008f95469fd
Step 4/20 : ENV VERNEMQ_VERSION 1.2.2
 ---> Using cache
 ---> 901e474b901e
Step 5/20 : ADD https://bintray.com/artifact/download/erlio/vernemq/deb/jessie/vernemq_$VERNEMQ_VERSION-1_amd64.deb /tmp/vernemq.deb
ERROR: Service 'vernemq' failed to build: Got HTTP status code >= 400: 403 Forbidden

VerneMQ can't create a cluster automatically on Microsoft Azure Kubernetes Service (AKS)

I tried to create a VerneMQ cluster using the YAML below on Microsoft Azure Kubernetes Service (AKS). The pods ran successfully, but I could not get a cluster.

The YAML file is as follows:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: vernemq
spec:
  serviceName: vernemq
  replicas: 3
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      containers:
      - name: vernemq
        image: erlio/docker-vernemq:1.3.1
        ports:
        - containerPort: 1883
          name: mqtt
        - containerPort: 8883
          name: mqtts
        - containerPort: 4369
          name: epmd
        - containerPort: 44053
          name: vmq
        - containerPort: 9100
        - containerPort: 9101
        - containerPort: 9102
        - containerPort: 9103
        - containerPort: 9104
        - containerPort: 9105
        - containerPort: 9106
        - containerPort: 9107
        - containerPort: 9108
        - containerPort: 9109
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
          value: "1"
        - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
          value: "vernemq"
        - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
          value: "default"
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
          value: "9100"
        - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
          value: "9109"
        - name: DOCKER_VERNEMQ_LISTENER__VMQ__CLUSTERING
          value: "0.0.0.0:44053"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT
          value: "0.0.0.0:8883"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__CAFILE
          value: "/etc/ssl/ca.crt"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__CERTFILE
          value: "/etc/ssl/server.crt"
        - name: DOCKER_VERNEMQ_LISTENER__SSL__KEYFILE
          value: "/etc/ssl/server.key"
        # if mqtt client can't use TLSv1.2
        # - name: DOCKER_VERNEMQ_LISTENER__SSL__TLS_VERSION
        #   value: "tlsv1"
        - name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE
          value: "/etc/vernemq-passwd/vmq.passwd"
        volumeMounts:
        - mountPath: /etc/ssl
          name: vernemq-certifications
          readOnly: true
        - mountPath: /etc/vernemq-passwd
          name: vernemq-passwd
          readOnly: true
      volumes:
      - name: vernemq-certifications
        secret:
          secretName: vernemq-certifications
      - name: vernemq-passwd
        secret:
          secretName: vernemq-passwd
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
  - port: 4369
    name: empd
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: ClusterIP
  selector:
    app: vernemq
  ports:
  - port: 1883
    name: mqtt
---
apiVersion: v1
kind: Service
metadata:
  name: mqtts
  labels:
    app: mqtts
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
  - port: 8883
    name: mqtts

Kubernetes v1.9.6 is running on AKS.

$ kubectl get nodes
NAME                       STATUS    ROLES     AGE       VERSION
aks-nodepool1-83898320-0   Ready     agent     2h        v1.9.6
aks-nodepool1-83898320-1   Ready     agent     2h        v1.9.6
aks-nodepool1-83898320-2   Ready     agent     2h        v1.9.6
aks-nodepool1-83898320-3   Ready     agent     2h        v1.9.6

The Pods started & ran successfully.

$ kubectl get pods -l app=vernemq
NAME        READY     STATUS    RESTARTS   AGE
vernemq-0   1/1       Running   0          1m
vernemq-1   1/1       Running   0          1m
vernemq-2   1/1       Running   0          1m

I could not get any cluster.

$ kubectl exec vernemq-0 -- vmq-admin cluster show
Node '[email protected]' not responding to pings.
command terminated with exit code 1

The Pod's log says that an SSL certificate error occurred.

$ kubectl logs vernemq-0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (51) SSL: certificate subject name 'client' does not match target host name 'kubernetes.default.svc.cluster.local'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (51) SSL: certificate subject name 'client' does not match target host name 'kubernetes.default.svc.cluster.local'
vernemq failed to start within 15 seconds,
see the output of 'vernemq console' for more information.
If you want to wait longer, set the environment variable
WAIT_FOR_ERLANG to the number of seconds to wait.
...
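The curl error above suggests the discovery call to the Kubernetes API is failing TLS verification. As a sanity check (not the exact call the start script makes), the API can be queried manually from inside a pod with the service account credentials:

SA=/var/run/secrets/kubernetes.io/serviceaccount
curl -sS --cacert "$SA/ca.crt" \
     -H "Authorization: Bearer $(cat $SA/token)" \
     "https://kubernetes.default.svc.cluster.local/api/v1/namespaces/$(cat $SA/namespace)/pods?labelSelector=app%3Dvernemq"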

Cannot auth using MongoDB

Hello everybody!

I've been trying to use MongoDB auth, but I keep getting this message:

vernemq_1         | 2017-07-05 10:29:26.660 [error] <0.428.0>@vmq_mqtt_fsm:check_user:536 can't authenticate client {[],<<"test-client">>} due to
vernemq_1         |                                 no_matching_hook_found

Here is my service configuration (in docker-compose.yml):

 vernemq:
    image: erlio/docker-vernemq:1.1.0
    ports:
     - "1883:1883"
    environment:
      - DOCKER_VERNEMQ_PLUGINS__VMQ_DIVERSITY=on
      - DOCKER_VERNEMQ_PLUGINS__VMQ_PASSWD=off
      - DOCKER_VERNEMQ_PLUGINS__VMQ_ACL=off
      - DOCKER_VERNEMQ_VMQ_DIVERSITY__MONGODB__HOST=mongodb
      - DOCKER_VERNEMQ_VMQ_DIVERSITY__MONGODB__PORT=27017
      - DOCKER_VERNEMQ_VMQ_DIVERSITY__MONGODB__DATABASE=xxx
      - DOCKER_VERNEMQ_VMQ_DIVERSITY__MONGODB__LOGIN=yyy
      - DOCKER_VERNEMQ_VMQ_DIVERSITY__MONGODB__PASSWORD=zzz
    depends_on:
      mongodb:
        condition: service_started
    restart: on-failure

and I'm pretty sure my users were added to MongoDB's vmq_acl_auth collection (on the xxx DB) with the passhash stored hashed using bcrypt ($2a$):

> db.vmq_acl_auth.find().pretty()
{
        "_id" : ObjectId("595cbb0e4dda7a46ea9ea46f"),
        "mountpoint" : "/",
        "client_id" : "test-client",
        "username" : "test-user2",
        "passhash" : "$2a$12$uig5H./AO6fP1Qs1IYiLR.mWmZkS57xoZGyxStuh4/6Q1zTZ5Gkim",
        "subscribe_acl" : [
                {
                        "pattern" : "a/#"
                }
        ]
}

Also, should MongoDB store passhashes as a string, i.e. "passhash" : "$2a$12$uig5H./AO6fP1Qs1IYiLR.mWmZkS57xoZGyxStuh4/6Q1zTZ5Gkim", or as binary data, such as "passhash" : BinData(0,"JDJhJDEyJHVpZzVILi9BTzZmUDFRczFJWWlMUi5tV21aa1M1N3hvWkd5eFN0dWg0LzZRMXpUWjVHa2lt")? Regardless, I've tried both and both failed.
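Not a definitive answer, but a quick way to rule out the plugin not being loaded at all (no_matching_hook_found typically points in that direction) is to list the registered plugins and hooks from inside the running container:

docker-compose exec vernemq vmq-admin plugin show

If vmq_diversity and its auth_on_register hook are not listed there, the problem is with enabling the plugin rather than with the stored passhash format.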

Any help is much appreciated!

Testing the dockerfile

Hi all,

As the Dockerfile grows more complex, I was thinking about a way to test that we don't break anything. A quick Google search found goss: https://github.com/aelsabbahy/goss and this tutorial: https://medium.com/@aelsabbahy/tutorial-how-to-test-your-docker-image-in-half-a-second-bbd13e06a4a9

As I'm no Docker expert I'd find it very nice if we had an easily extensible test-suite to make sure that old features don't break and new features/bugfixes will actually work.

Do you think something like this would be worthwhile? What other ways are there of testing Dockerfiles?

/cc @drf @dergraf

Docker container errors (at least for me) on nightly 1.5.0 (1.5.0-127-ge130553)

Hello,
The container fails to start with an error that "start_vernemq" is not found in $PATH.
The actual fix is that
https://github.com/erlio/docker-vernemq/blob/0501ac840ff068d29d0bf460e07b445f2980636f/Dockerfile#L53
needs to be changed to
CMD ["bash", "/usr/sbin/start_vernemq"], which composes the command bash /usr/sbin/start_vernemq,
as start_vernemq is moved to the mentioned directory at https://github.com/erlio/docker-vernemq/blob/0501ac840ff068d29d0bf460e07b445f2980636f/Dockerfile#L26

Actually, the bash script is there, but the shell cannot find it in $PATH, so running it explicitly with bash solves it.
Using bash path_to_sh_script works around the issue for those encountering this error.
