
hassio-addons / addon-emqx


EMQX - Home Assistant Community Add-ons

License: MIT License

Jinja 61.72% Dockerfile 38.28%
addon home-assistant homeassistant homeassistant-addons mqtt mqtt-broker

addon-emqx's Introduction

Home Assistant Community Add-on: Log Viewer


Supports the aarch64, amd64, armhf, armv7 and i386 architectures.


Discord Community Forum

Buy me a coffee

Browser-based log utility for Home Assistant.

Log Viewer Screenshot

About

Easily monitor your Home Assistant logs in your web browser. Custom filtering is available to make it simple to distinguish between the different types of logs.

📚 Read the full add-on documentation

Support

Got questions?

You have several options to get them answered, for example the Discord chat server or the Community Forum mentioned above.

You could also open an issue here on GitHub.

Contributing

This is an active open-source project. We are always open to people who want to use the code or contribute to it.

We have set up a separate document containing our contribution guidelines.

Thank you for being involved! 😍

Authors & contributors

The original setup of this repository is by Dale Higgs.

For a full list of all authors and contributors, check the contributor's page.

We have got some Home Assistant add-ons for you

Want some more functionality to your Home Assistant instance?

We have created multiple add-ons for Home Assistant. For a full list, check out our GitHub Repository.

License

MIT License

Copyright (c) 2018-2024 Dale Higgs <@dale3h>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

addon-emqx's People

Contributors

agners, frenck, miguelangel-nubla, remcom, renovate[bot], tjorim, wrt54g


addon-emqx's Issues

EMQX won't start

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service base-addon-banner: starting


Add-on: EMQX
The most scalable open-source MQTT broker for IoT. An alternative for the Mosquitto add-on

Add-on version: 0.3.2
You are running the latest version of this add-on.
System: null (aarch64 / qemuarm-64)
Home Assistant Core: 2024.1.3
Home Assistant Supervisor: 2024.01.0.dev0201

Please, share the above information when looking for help
or support in, e.g., GitHub, forums or the Discord chat.

s6-rc: info: service base-addon-banner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service base-addon-timezone: starting
s6-rc: info: service base-addon-log-level: starting
s6-rc: info: service fix-attrs successfully started
[11:19:25] INFO: Configuring timezone (Asia/Shanghai)...
s6-rc: info: service base-addon-log-level successfully started
s6-rc: info: service base-addon-timezone successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service init-emqx: starting
s6-rc: info: service init-emqx successfully started
s6-rc: info: service emqx: starting
s6-rc: info: service emqx successfully started
s6-rc: info: service legacy-services: starting
[11:19:26] INFO: Starting EMQX...
s6-rc: info: service legacy-services successfully started
escript: exception throw: {emqx_conf_schema,[#{kind => validation_error,
path => "node.cookie",
reason => required_field}]}
in function hocon_tconf:assert/2 (hocon_tconf.erl, line 1226)
in call from hocon_tconf:map/4 (hocon_tconf.erl, line 303)
in call from hocon_cli:get_values/3 (hocon_cli.erl, line 194)
in call from hocon_cli:multi_get/2 (hocon_cli.erl, line 182)
in call from escript:run/2 (escript.erl, line 750)
in call from escript:start/1 (escript.erl, line 277)
in call from init:start_em/1
in call from init:do_boot/3
ERROR: call_hocon_failed: -s emqx_conf_schema -c /opt/emqx/etc/emqx.conf multi_get node.data_dir node.name node.cookie node.db_backend cluster.proto_dist node.dist_net_ticktime
[11:19:29] INFO: Service EMQX exited with code 127 (by signal 0)
s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service emqx: stopping
s6-rc: info: service emqx successfully stopped
s6-rc: info: service init-emqx: stopping
s6-rc: info: service init-emqx successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service base-addon-timezone: stopping
s6-rc: info: service base-addon-log-level: stopping
s6-rc: info: service base-addon-timezone successfully stopped
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service base-addon-log-level successfully stopped
s6-rc: info: service base-addon-banner: stopping
s6-rc: info: service base-addon-banner successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped
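
The validation error above says node.cookie is a required field that was never set, i.e. the Erlang node cookie did not reach EMQX. As a minimal workaround sketch, assuming the add-on's env_vars option accepts name/value pairs (as shown in a later cluster issue), the cookie could be supplied manually in the add-on configuration until the underlying cause is fixed:

  env_vars:
    - name: EMQX_NODE__COOKIE       # maps to node.cookie, normally generated by the add-on itself
      value: "some-random-secret"   # hypothetical value; pick any random string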

Config doesn't appear to work

Problem/Motivation

Environment variables in the config don't appear to work

Expected behavior

EMQ_NAME=foo should work


Actual behavior

(screenshot)

Steps to reproduce

I've tried
EMQX_LISTENER__TCP__EXTERNAL="2883"
EMQX_NAME="foo"

Proposed changes

I think an example config would help, or perhaps allow mapping a config from /config/emqx or something similar?

Thanks!
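
EMQX_NAME and EMQX_LISTENER__TCP__EXTERNAL appear to be variables from the EMQX 4.x Docker image; the EMQX 5 bundled here expects the EMQX_<SECTION>__<KEY> form shown in the add-on's startup logs. A minimal sketch of the add-on configuration, assuming the env_vars option takes name/value pairs and that listeners.tcp.default.bind is the EMQX 5 counterpart of the old external TCP listener setting:

  env_vars:
    - name: EMQX_NODE__NAME                      # maps to node.name
      value: "[email protected]"
    - name: EMQX_LISTENERS__TCP__DEFAULT__BIND   # assumed EMQX 5 key, replacing listener.tcp.external
      value: "0.0.0.0:2883"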

High CPU load with only little traffic

Problem/Motivation

With only one ESPresense node connected and ~2-4 messages / second the add-on shows very high CPU load on a HA Blue:

(screenshots: CPU load graphs)
(I think the dropped messages are ESPresense sending information that HA room-mqtt does not listen to)

Mosquitto in comparison was around 1-5% CPU load for exactly the same workload.

Steps to reproduce

Just installed it as a drop-in replacement: no individual config, just users created.

Proposed changes

Just wanted to know whether this is to be expected or due to the experimental state. I use the most recent version of the add-on.

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

This repository currently has no open or pending branches.

Detected dependencies

dockerfile
emqx/Dockerfile
  • ghcr.io/hassio-addons/debian-base 7.4.0
github-actions
.github/workflows/ci.yaml
.github/workflows/deploy.yaml
.github/workflows/labels.yaml
.github/workflows/lock.yaml
.github/workflows/pr-labels.yaml
.github/workflows/release-drafter.yaml
.github/workflows/stale.yaml
regex
emqx/Dockerfile
  • ghcr.io/hassio-addons/debian-base 7.4.0
emqx/build.yaml
  • ghcr.io/hassio-addons/debian-base 7.4.0
  • ghcr.io/hassio-addons/debian-base 7.4.0
emqx/Dockerfile
  • debian_12/procps 2:4.0.2-3
emqx/Dockerfile
  • emqx/emqx v5.7.2

  • Check this box to trigger a request for Renovate to run again on this repository

crash after update from 0.6.1 -> 0.6.2

Problem/Motivation

crash after update from 0.6.1 -> 0.6.2

Actual behavior

Crash and stop after a few seconds

Steps to reproduce

Add-on: EMQX
The most scalable open-source MQTT broker for IoT. An alternative for the Mosquitto add-on

Add-on version: 0.6.2
You are running the latest version of this add-on.
System: null (amd64 / qemux86-64)
Home Assistant Core: 2024.8.0
Home Assistant Supervisor: 2024.08.0

s6-rc: info: service base-addon-banner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service base-addon-timezone: starting
s6-rc: info: service base-addon-log-level: starting
s6-rc: info: service fix-attrs successfully started
[12:03:43] INFO: Configuring timezone (Europe/Moscow)...
s6-rc: info: service base-addon-log-level successfully started
s6-rc: info: service base-addon-timezone successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service init-emqx: starting
s6-rc: info: service init-emqx successfully started
s6-rc: info: service emqx: starting
s6-rc: info: service emqx successfully started
s6-rc: info: service legacy-services: starting
[12:03:43] INFO: Starting EMQX...
s6-rc: info: service legacy-services successfully started
escript: exception throw: {emqx_conf_schema,[#{reason => required_field,
                                      path => "node.cookie",
                                      kind => validation_error}]}
  in function  hocon_tconf:assert/2 (hocon_tconf.erl, line 1227)
  in call from hocon_tconf:map/4 (hocon_tconf.erl, line 303)
  in call from hocon_cli:get_values/3 (hocon_cli.erl, line 194)
  in call from hocon_cli:multi_get/2 (hocon_cli.erl, line 182)
  in call from escript:run/2 (escript.erl, line 750)
  in call from escript:start/1 (escript.erl, line 277)
  in call from init:start_em/1 
  in call from init:do_boot/3 
ERROR: call_hocon_failed: -s emqx_conf_schema -c /opt/emqx/etc/emqx.conf multi_get node.data_dir node.name node.cookie node.db_backend cluster.proto_dist node.dist_net_ticktime
[12:03:44] INFO: Service EMQX exited with code 127 (by signal 0)
s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service emqx: stopping
s6-rc: info: service emqx successfully stopped
s6-rc: info: service init-emqx: stopping
s6-rc: info: service init-emqx successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service base-addon-timezone: stopping
s6-rc: info: service base-addon-log-level: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service base-addon-timezone successfully stopped
s6-rc: info: service base-addon-log-level successfully stopped
s6-rc: info: service base-addon-banner: stopping
s6-rc: info: service base-addon-banner successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped

Cpu usage

Problem/Motivation

CPU usage is at 15% on a Pi 4 with v0.2.1 of addon-emqx.

Is this expected?

Zigbee2MQTT Connection Setup Issue

Hello -

I would love some help, as I have spent a few hours trying to figure this out. I've also read most of what I could find online, with no luck.

I successfully installed EMQX, created the user, and authenticated it.

In MQTT, I have successfully "Re-configured MQTT" with the correct broker info, etc. When I listen to a topic, I can see it in the EMQX dashboard in Topics.

My trouble lies with Zigbee2MQTT. I cannot get it to talk to the EMQX broker. This is the error I'm getting:

[11:54:55] INFO: Preparing to start...
[11:54:55] INFO: Socat not enabled
[11:54:56] INFO: Starting Zigbee2MQTT...
Zigbee2MQTT:info 2023-01-06 11:54:56: Logging to console and directory: '/config/zigbee2mqtt/log/2023-01-06.11-54-56' filename: log.txt
Zigbee2MQTT:info 2023-01-06 11:54:56: Starting Zigbee2MQTT version 1.29.1 (commit #unknown)
Zigbee2MQTT:info 2023-01-06 11:54:56: Starting zigbee-herdsman (0.14.83)
Zigbee2MQTT:info 2023-01-06 11:54:57: zigbee-herdsman started (resumed)
Zigbee2MQTT:info 2023-01-06 11:54:57: Coordinator firmware version: '{"meta":{"maintrel":1,"majorrel":2,"minorrel":7,"product":1,"revision":20210708,"transportrev":2},"type":"zStack3x0"}'
Zigbee2MQTT:info 2023-01-06 11:54:57: Currently 0 devices are joined:
Zigbee2MQTT:warn 2023-01-06 11:54:57: permit_join set to true in configuration.yaml.
Zigbee2MQTT:warn 2023-01-06 11:54:57: Allowing new devices to join.
Zigbee2MQTT:warn 2023-01-06 11:54:57: Set permit_join to false once you joined all devices.
Zigbee2MQTT:info 2023-01-06 11:54:57: Zigbee: allowing new devices to join.
Zigbee2MQTT:info 2023-01-06 11:54:57: Connecting to MQTT server at mqtt//a0d7b954-emqx:1883
Zigbee2MQTT:error 2023-01-06 11:54:57: MQTT failed to connect, exiting...
Zigbee2MQTT:info 2023-01-06 11:54:57: Stopping zigbee-herdsman...

What should the username and password be in Zigbee2MQTT? I've tried the original credentials that worked with Mosquitto, and also the ones I set up in EMQX, but nothing seems to work.

I'm sure I'm overlooking something stupid, but I would really appreciate any help you can provide.

Many thanks,
n.
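
For reference, a hedged sketch of the relevant part of the Zigbee2MQTT configuration.yaml, assuming a user/password created under Access Control -> Authentication in the EMQX dashboard (the user and password below are placeholders). Note that the server URL needs the full mqtt:// scheme; the log above shows mqtt// without the colon, which may just be a paste artifact or may be the actual typo:

  mqtt:
    base_topic: zigbee2mqtt
    server: mqtt://a0d7b954-emqx:1883
    user: my-emqx-user        # placeholder: a user created in EMQX
    password: my-emqx-pass    # placeholder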

Topic metrics return 500 network error

I am using version 0.2.12

When I open Diagnose -> Topic Metrics, I get a 500 network error.

Also, when I open Management -> MQTT Settings, the page does not load.

EMQX Crashes on start: Error Code 1 (Signal 0)

Problem/Motivation

EMQX doesn't start in Home Assistant

Expected behavior

I expect the addon to start

Actual behavior

Crash on start with error code 1 (Signal 0)

Steps to reproduce

Click start addon.

Proposed changes

Perhaps this is a bug?

Log Output:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service base-addon-banner: starting


Add-on: EMQX
The most scalable open-source MQTT broker for IoT. An alternative for the Mosquitto add-on

Add-on version: 0.2.11
You are running the latest version of this add-on.
System: Home Assistant OS 11.0 (amd64 / generic-x86-64)
Home Assistant Core: 2023.10.3
Home Assistant Supervisor: 2023.10.0

Please, share the above information when looking for help
or support in, e.g., GitHub, forums or the Discord chat.

s6-rc: info: service base-addon-banner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service base-addon-timezone: starting
s6-rc: info: service base-addon-log-level: starting
s6-rc: info: service fix-attrs successfully started
[13:18:26] INFO: Configuring timezone (America/Chicago)...
s6-rc: info: service base-addon-log-level successfully started
s6-rc: info: service base-addon-timezone successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service init-emqx: starting
s6-rc: info: service init-emqx successfully started
s6-rc: info: service emqx: starting
s6-rc: info: service emqx successfully started
s6-rc: info: service legacy-services: starting
[13:18:26] INFO: Starting EMQX...
s6-rc: info: service legacy-services successfully started
EMQX_PLUGINS__INSTALL_DIR [plugins.install_dir]: /data/emqx/plugins
EMQX_RPC__PORT_DISCOVERY [rpc.port_discovery]: manual
EMQX_NODE__DATA_DIR [node.data_dir]: /data/emqx/data
EMQX_NODE__COOKIE [node.cookie]: ******
EMQX_NODE__NAME [node.name]: [email protected]
2023-10-16T13:18:30.935259-05:00 [error] ssl:default failed to listen on 8883 - eaddrinuse (address already in use)
2023-10-16T13:18:30.935588-05:00 [error] crasher: initial call: esockd_listener:init/1, pid: <0.2772.0>, registered_name: [], exit: {eaddrinuse,[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,835}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,240}]}]}, ancestors: [<0.2769.0>,esockd_sup,<0.2616.0>], message_queue_len: 0, messages: [], links: [<0.2769.0>], dictionary: [], trap_exit: true, status: running, heap_size: 1598, stack_size: 28, reductions: 2809; neighbours:
Failed to start listener ssl:default on 0.0.0.0:8883: {eaddrinuse,{child,undefined,{listener_sup,{'ssl:default',{{0,0,0,0},8883}}},{esockd_listener_sup,start_link,[tcp,'ssl:default',{{0,0,0,0},8883},[{tcp_options,[binary,{packet,raw},{reuseaddr,true},{backlog,1024},{nodelay,true},{buffer,4096},{high_watermark,1048576},{send_timeout,15000},{send_timeout_close,true}]},{acceptors,16},{access_rules,[{allow,all}]},{max_connections,infinity},{proxy_protocol,false},{proxy_protocol_timeout,3000},{ssl_options,[{cacertfile,"/opt/emqx/etc/certs/cacert.pem"},{certfile,"/opt/emqx/etc/certs/cert.pem"},{ciphers,["TLS_AES_256_GCM_SHA384","TLS_AES_128_GCM_SHA256","TLS_CHACHA20_POLY1305_SHA256","TLS_AES_128_CCM_SHA256","TLS_AES_128_CCM_8_SHA256","ECDHE-ECDSA-AES256-GCM-SHA384","ECDHE-RSA-AES256-GCM-SHA384","ECDHE-ECDSA-AES256-SHA384","ECDHE-RSA-AES256-SHA384","ECDH-ECDSA-AES256-GCM-SHA384","ECDH-RSA-AES256-GCM-SHA384","ECDH-ECDSA-AES256-SHA384","ECDH-RSA-AES256-SHA384","DHE-DSS-AES256-GCM-SHA384","DHE-DSS-AES256-SHA256","AES256-GCM-SHA384","AES256-SHA256","ECDHE-ECDSA-AES128-GCM-SHA256","ECDHE-RSA-AES128-GCM-SHA256","ECDHE-ECDSA-AES128-SHA256","ECDHE-RSA-AES128-SHA256","ECDH-ECDSA-AES128-GCM-SHA256","ECDH-RSA-AES128-GCM-SHA256","ECDH-ECDSA-AES128-SHA256","ECDH-RSA-AES128-SHA256","DHE-DSS-AES128-GCM-SHA256","DHE-DSS-AES128-SHA256","AES128-GCM-SHA256","AES128-SHA256","ECDHE-ECDSA-AES256-SHA","ECDHE-RSA-AES256-SHA","DHE-DSS-AES256-SHA","ECDH-ECDSA-AES256-SHA","ECDH-RSA-AES256-SHA","ECDHE-ECDSA-AES128-SHA","ECDHE-RSA-AES128-SHA","DHE-DSS-AES128-SHA","ECDH-ECDSA-AES128-SHA","ECDH-RSA-AES128-SHA","RSA-PSK-AES256-GCM-SHA384","RSA-PSK-AES256-CBC-SHA384","RSA-PSK-AES128-GCM-SHA256","RSA-PSK-AES128-CBC-SHA256","RSA-PSK-AES256-CBC-SHA","RSA-PSK-AES128-CBC-SHA"]},{client_renegotiation,true},{depth,10},{enable_crl_check,false},{fail_if_no_peer_cert,false},{gc_after_handshake,false},{handshake_timeout,15000},{hibernate_after,5000},{honor_cipher_order,true},{keyfile,"/opt/emqx/etc/certs/key.pem"},{log_level,notice},{ocsp,#{enable_ocsp_stapling => false,refresh_http_timeout => 15000,refresh_interval => 300000}},{reuse_sessions,true},{secure_renegotiate,true},{user_lookup_fun,{fun emqx_tls_psk:lookup/3,undefined}},{verify,verify_none},{versions,['tlsv1.3','tlsv1.2']}]},{tune_fun,{emqx_olp,backoff_new_conn,[default]}}],{emqx_connection,start_link,[#{enable_authn => true,limiter => undefined,listener => {ssl,default},zone => default}]}]},transient,false,infinity,supervisor,[esockd_listener_sup]}}.
2023-10-16T13:18:30.936743-05:00 [error] crasher: initial call: application_master:init/4, pid: <0.2660.0>, registered_name: [], exit: {{bad_return,{{emqx_app,start,[normal,[]]},{'EXIT',{{failed_to_start,"ssl:default(0.0.0.0:8883) : eaddrinuse"},[{emqx_listeners,'-foreach_listeners/1-fun-0-',2,[{file,"emqx_listeners.erl"},{line,781}]},{lists,foreach_1,2,[{file,"lists.erl"},{line,1442}]},{emqx_app,maybe_start_listeners,0,[{file,"emqx_app.erl"},{line,93}]},{emqx_app,start,2,[{file,"emqx_app.erl"},{line,46}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,293}]}]}}}},[{application_master,init,4,[{file,"application_master.erl"},{line,142}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,240}]}]}, ancestors: [<0.2659.0>], message_queue_len: 1, messages: [{'EXIT',<0.2661.0>,normal}], links: [<0.2659.0>,<0.1989.0>], dictionary: [], trap_exit: true, status: running, heap_size: 610, stack_size: 28, reductions: 179; neighbours:
{"Kernel pid terminated",application_controller,"{application_start_failure,emqx,{bad_return,{{emqx_app,start,[normal,[]]},{'EXIT',{{failed_to_start,"ssl:default(0.0.0.0:8883) : eaddrinuse"},[{emqx_listeners,'-foreach_listeners/1-fun-0-',2,[{file,"emqx_listeners.erl"},{line,781}]},{lists,foreach_1,2,[{file,"lists.erl"},{line,1442}]},{emqx_app,maybe_start_listeners,0,[{file,"emqx_app.erl"},{line,93}]},{emqx_app,start,2,[{file,"emqx_app.erl"},{line,46}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,293}]}]}}}}}"}

Kernel pid terminated (application_controller) ({application_start_failure,emqx,{bad_return,{{emqx_app,start,[normal,[]]},{'EXIT',{{failed_to_start,"ssl:default(0.0.0.0:8883) : eaddrinuse"},[{emqx_listeners,'-foreach_listeners/1-fun-0-',2,[{file,"emqx_listeners.erl"},{line,781}]},{lists,foreach_1,2,[{file,"lists.erl"},{line,1442}]},{emqx_app,maybe_start_listeners,0,[{file,"emqx_app.erl"},{line,93}]},{emqx_app,start,2,[{file,"emqx_app.erl"},{line,46}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,293}]}]}}}}})

Crash dump is being written to: /opt/emqx/log/erl_crash.dump...done

[13:18:32] INFO: Service EMQX exited with code 1 (by signal 0)
s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service emqx: stopping
s6-rc: info: service emqx successfully stopped
s6-rc: info: service init-emqx: stopping
s6-rc: info: service init-emqx successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service base-addon-timezone: stopping
s6-rc: info: service base-addon-log-level: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service base-addon-timezone successfully stopped
s6-rc: info: service base-addon-log-level successfully stopped
s6-rc: info: service base-addon-banner: stopping
s6-rc: info: service base-addon-banner successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped
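
eaddrinuse means something on the host is already bound to port 8883, most likely another MQTT broker (for example the Mosquitto add-on) still running; stopping it is the simplest fix. If the conflict is only on the TLS port, a hedged alternative sketch, assuming the env_vars option accepts name/value pairs and that EMQX 5 exposes listeners.ssl.default.enabled, is to disable the conflicting listener:

  env_vars:
    - name: EMQX_LISTENERS__SSL__DEFAULT__ENABLED   # assumed key; verify against the EMQX docs for the bundled version
      value: "false"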

Cannot delete topic in metrics

Problem/Motivation

I added some topics to Topic Metrics in Diagnose. But now I cannot delete them when I no longer need them.

Expected behavior

I expect to delete them.

Actual behavior

404 API_NOT_EXIST:Request Path Not Found

Steps to reproduce

Add some topics to Diagnose metrics and try to delete them.

Stopped working after updating to HassOS 12.4

Problem/Motivation

Addon crashes since HassOS 12.4

Actual behavior

Hit Start. Crash and stop after a few seconds

Steps to reproduce

Hit Start Button

s6-rc: info: service base-addon-banner: starting

-----------------------------------------------------------
 Add-on: EMQX
 The most scalable open-source MQTT broker for IoT. An alternative for the Mosquitto add-on
-----------------------------------------------------------
 Add-on version: 0.6.0
 You are running the latest version of this add-on.
 System: Home Assistant OS 12.4  (amd64 / generic-x86-64)
 Home Assistant Core: 2024.6.3
 Home Assistant Supervisor: 2024.06.0
-----------------------------------------------------------
 Please, share the above information when looking for help
 or support in, e.g., GitHub, forums or the Discord chat.
-----------------------------------------------------------
s6-rc: info: service base-addon-banner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service base-addon-timezone: starting
s6-rc: info: service base-addon-log-level: starting
s6-rc: info: service fix-attrs successfully started
[21:12:39] INFO: Configuring timezone (Europe/Berlin)...
s6-rc: info: service base-addon-log-level successfully started
s6-rc: info: service base-addon-timezone successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service init-emqx: starting
s6-rc: info: service init-emqx successfully started
s6-rc: info: service emqx: starting
s6-rc: info: service emqx successfully started
s6-rc: info: service legacy-services: starting
[21:12:39] INFO: Starting EMQX...
s6-rc: info: service legacy-services successfully started
EMQX_PLUGINS__INSTALL_DIR [plugins.install_dir]: /data/emqx/plugins
EMQX_RPC__PORT_DISCOVERY [rpc.port_discovery]: manual
EMQX_NODE__DATA_DIR [node.data_dir]: /data/emqx/data
EMQX_NODE__COOKIE [node.cookie]: ******
EMQX_NODE__NAME [node.name]: [email protected]
2024-06-19T21:12:42.983694+02:00 [error] tcp:default failed to listen on 1883 - eaddrinuse (address already in use)
2024-06-19T21:12:42.984225+02:00 [error] crasher: initial call: esockd_listener:init/1, pid: <0.2725.0>, registered_name: [], exit: {eaddrinuse,[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,961}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,241}]}]}, ancestors: [<0.2723.0>,esockd_sup,<0.2440.0>], message_queue_len: 0, messages: [], links: [<0.2723.0>], dictionary: [], trap_exit: true, status: running, heap_size: 1598, stack_size: 28, reductions: 2944; neighbours:
2024-06-19T21:12:42.986000+02:00 [error] Supervisor: {<0.2723.0>,esockd_listener_sup}. Context: start_error. Reason: eaddrinuse. Offender: id=listener,pid=undefined.
Failed to start listener tcp:default on 0.0.0.0:1883: {{shutdown,{failed_to_start_child,listener,eaddrinuse}},{child,undefined,{listener_sup,{'tcp:default',{{0,0,0,0},1883}}},{esockd_listener_sup,start_link,['tcp:default',{{0,0,0,0},1883}]},transient,false,infinity,supervisor,[esockd_listener_sup]}}.
2024-06-19T21:12:42.986866+02:00 [error] crasher: initial call: application_master:init/4, pid: <0.2515.0>, registered_name: [], exit: {{bad_return,{{emqx_app,start,[normal,[]]},{'EXIT',{{failed_to_start,"tcp:default(0.0.0.0:1883) : {shutdown,\n                                {failed_to_start_child,listener,eaddrinuse}}"},[{emqx_listeners,'-foreach_listeners/1-fun-0-',2,[{file,"emqx_listeners.erl"},{line,813}]},{lists,foreach_1,2,[{file,"lists.erl"},{line,1686}]},{emqx_app,maybe_start_listeners,0,[{file,"emqx_app.erl"},{line,93}]},{emqx_app,start,2,[{file,"emqx_app.erl"},{line,46}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,293}]}]}}}},[{application_master,init,4,[{file,"application_master.erl"},{line,142}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,241}]}]}, ancestors: [<0.2514.0>], message_queue_len: 1, messages: [{'EXIT',<0.2516.0>,normal}], links: [<0.2514.0>,<0.2139.0>], dictionary: [], trap_exit: true, status: running, heap_size: 987, stack_size: 28, reductions: 201; neighbours:
Kernel pid terminated (application_controller) ("{application_start_failure,emqx,{bad_return,{{emqx_app,start,[normal,[]]},{'EXIT',{{failed_to_start,\"tcp:default(0.0.0.0:1883) : {shutdown,\n                                {failed_to_start_child,listener,eaddrinuse}}\"},[{emqx_listeners,'-foreach_listeners/1-fun-0-',2,[{file,\"emqx_listeners.erl\"},{line,813}]},{lists,foreach_1,2,[{file,\"lists.erl\"},{line,1686}]},{emqx_app,maybe_start_listeners,0,[{file,\"emqx_app.erl\"},{line,93}]},{emqx_app,start,2,[{file,\"emqx_app.erl\"},{line,46}]},{application_master,start_it_old,4,[{file,\"application_master.erl\"},{line,293}]}]}}}}}")

Crash dump is being written to: /opt/emqx/log/erl_crash.dump...done
[21:12:44] INFO: Service EMQX exited with code 1 (by signal 0)
s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service emqx: stopping
s6-rc: info: service emqx successfully stopped
s6-rc: info: service init-emqx: stopping
s6-rc: info: service init-emqx successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service base-addon-timezone: stopping
s6-rc: info: service base-addon-log-level: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service base-addon-timezone successfully stopped
s6-rc: info: service base-addon-log-level successfully stopped
s6-rc: info: service base-addon-banner: stopping
s6-rc: info: service base-addon-banner successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped

EMQX add-on crashes and won't restart; it only comes back after a full system reboot

Here's the log

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service base-addon-banner: starting

Add-on: EMQX
The most scalable open-source MQTT broker for IoT. An alternative for the Mosquitto add-on

Add-on version: 0.2.7
You are running the latest version of this add-on.
System: Home Assistant OS 10.0 (amd64 / qemux86-64)
Home Assistant Core: 2023.4.5
Home Assistant Supervisor: 2023.04.0

Please, share the above information when looking for help
or support in, e.g., GitHub, forums or the Discord chat.

s6-rc: info: service base-addon-banner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service base-addon-timezone: starting
s6-rc: info: service base-addon-log-level: starting
s6-rc: info: service fix-attrs successfully started
[06:55:03] INFO: Configuring timezone (America/New_York)...
s6-rc: info: service base-addon-log-level successfully started
s6-rc: info: service base-addon-timezone successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service init-emqx: starting
s6-rc: info: service init-emqx successfully started
s6-rc: info: service emqx: starting
s6-rc: info: service emqx successfully started
s6-rc: info: service legacy-services: starting
[06:55:04] INFO: Starting EMQX...
s6-rc: info: service legacy-services successfully started
EMQX_PLUGINS__INSTALL_DIR [plugins.install_dir]: /data/emqx/plugins
EMQX_RPC__PORT_DISCOVERY [rpc.port_discovery]: manual
EMQX_LOG__FILE_HANDLERS__DEFAULT__ENABLE [log.file_handlers.default.enable]: false
EMQX_LOG__CONSOLE_HANDLER__ENABLE [log.console_handler.enable]: true
EMQX_NODE__DATA_DIR [node.data_dir]: /data/emqx/data
EMQX_NODE__COOKIE [node.cookie]: ******
EMQX_NODE__NAME [node.name]: [email protected]
Config override: log.file_handlers.default is enabled
Listener ssl:default on 0.0.0.0:8883 started.
Listener tcp:default on 0.0.0.0:1883 started.
Listener ws:default on 0.0.0.0:8083 started.
Listener wss:default on 0.0.0.0:8084 started.
Listener http:dashboard on :18083 started.
EMQX 5.0.22 is running now!
2023-04-21T06:56:16.827653-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"84.64% mem usage">>, name: high_system_memory_usage
2023-04-21T06:58:16.836781-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T07:00:16.844816-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"87.36% mem usage">>, name: high_system_memory_usage
2023-04-21T07:01:16.847618-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T07:05:16.854308-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"84.42% mem usage">>, name: high_system_memory_usage
2023-04-21T07:06:16.855956-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T07:10:16.860941-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"86.31% mem usage">>, name: high_system_memory_usage
2023-04-21T07:11:16.864137-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T07:15:16.869366-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"70.40% mem usage">>, name: high_system_memory_usage
2023-04-21T07:17:16.873022-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T07:19:16.876116-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"82.24% mem usage">>, name: high_system_memory_usage
2023-04-21T07:21:16.883599-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T07:23:16.886075-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"71.74% mem usage">>, name: high_system_memory_usage
2023-04-21T07:24:16.887942-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T07:29:16.893911-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"83.01% mem usage">>, name: high_system_memory_usage
2023-04-21T07:30:16.896256-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T07:35:16.907216-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"78.80% mem usage">>, name: high_system_memory_usage
2023-04-21T07:37:16.910024-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T07:39:16.914260-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"88.52% mem usage">>, name: high_system_memory_usage
2023-04-21T07:41:16.918134-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T07:46:16.925063-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"76.57% mem usage">>, name: high_system_memory_usage
2023-04-21T07:48:16.927833-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T07:49:16.929025-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"78.30% mem usage">>, name: high_system_memory_usage
2023-04-21T07:51:16.931938-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T07:52:16.934111-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"84.69% mem usage">>, name: high_system_memory_usage
2023-04-21T07:53:16.935810-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T07:55:16.939987-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"74.36% mem usage">>, name: high_system_memory_usage
2023-04-21T07:57:16.944121-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T07:58:16.947054-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"81.84% mem usage">>, name: high_system_memory_usage
2023-04-21T08:00:16.950437-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T08:01:16.952524-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"77.54% mem usage">>, name: high_system_memory_usage
2023-04-21T08:03:16.955874-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T08:05:16.963590-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"85.78% mem usage">>, name: high_system_memory_usage
2023-04-21T08:06:16.964877-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T08:07:16.967169-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"84.06% mem usage">>, name: high_system_memory_usage
2023-04-21T08:08:16.968997-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T08:10:16.972236-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"79.19% mem usage">>, name: high_system_memory_usage
2023-04-21T08:11:16.973963-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T08:12:16.976232-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"81.48% mem usage">>, name: high_system_memory_usage
2023-04-21T08:14:16.982009-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T08:19:16.995231-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"89.44% mem usage">>, name: high_system_memory_usage
2023-04-21T08:20:16.996918-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T08:22:17.000976-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"89.61% mem usage">>, name: high_system_memory_usage
2023-04-21T08:24:17.005893-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T08:38:17.023134-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"80.42% mem usage">>, name: high_system_memory_usage
2023-04-21T08:39:17.024991-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T08:40:17.026939-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"89.87% mem usage">>, name: high_system_memory_usage
2023-04-21T08:41:17.032294-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage
2023-04-21T08:43:17.035344-04:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3, line: 416, message: <<"85.44% mem usage">>, name: high_system_memory_usage
2023-04-21T08:44:17.036830-04:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3, line: 422, name: high_system_memory_usage

Lost Config on Hostname Change

Problem/Motivation

Changing the HAOS hostname via the GUI makes the EMQX add-on lose its configuration (username, password, authentication, nodes, etc.).

Expected behavior

The add-on should retain its configuration on hostname change, similar to other add-ons.

Actual behavior

The add-on lost all configuration, requiring setup from scratch

Steps to reproduce

Change HAOS hostname via GUI

Proposed changes

Not sure!

404 error when viewing monitor or deleting monitor

Problem/Motivation

When creating a monitor for a topic I get a 404 error at the top.
I am also unable to delete any topic monitor views.
Sometimes the monitor views collect the messages, sometimes not.

Expected behavior

That I don't get the 404 error and that I am able to delete views.

Steps to reproduce

I have checked that there is no other add-on running that could cause the error.

(screenshot: 240316-120921_EMQX_–Home_Assistant-_Google_Chrome)

publish the addon as a service?

First of all, thank you for the great add-on.
I have already used it for many months without any problems.

Today I noticed that it doesn't work with the birdnet-pi add-on, which recently got MQTT broker autodetection.

I opened an issue here and the author suggested that you should publish your addon as a service:

quote:

the owner should do that in the config.json of the addon

EDIT : through the "discovery" tag as seen in https://developers.home-assistant.io/docs/add-ons/configuration/

He's working on a manual override, so it would not be required on birdnet-pi side, but it might be interesting for you to add that feature anyway.
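
A minimal sketch of what that could look like in the add-on's config.yaml; the keys below follow the Home Assistant add-on configuration docs linked above, but the exact values for this add-on would need to be verified:

  # hypothetical excerpt from emqx/config.yaml
  discovery:
    - mqtt
  services:
    - mqtt:provide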

Latest update to 0.6.0 cripples EMQX

Problem/Motivation

When I apply the latest update, EMQX stops working. The dashboard isn't accessible anymore and the broker stops working.

Expected behavior

I want to access the dashboard and MQTT should continue to work

Actual behavior

Dashboard can't be opened. MQTT broker stops working

Steps to reproduce

Just update to 0.6.0 (from 0.5.1)

Proposed changes

(If you have a proposed change, workaround or fix,
describe the rationale behind it)

EMQX addon fails to connect with cluster

Problem/Motivation

env_vars:

Expected behavior

Cluster joined successfully

Actual behavior

reason="econnrefused"
2023-09-28T01:26:44.888258-05:00 [error] crasher: initial call: gen_rpc_client:init/1, pid: <0.3297.0>, registered_name: [], exit: {{badrpc,econnrefused},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,407}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,226}]}]}, ancestors: [gen_rpc_client_sup,gen_rpc_sup,<0.1883.0>], message_queue_len: 0, messages: [], links: [<0.1889.0>], dictionary: [], trap_exit: true, status: running, heap_size: 2586, stack_size: 29, reductions: 7419; neighbours:
2023-09-28T01:26:49.981254-05:00 [warning] Ekka(AutoCluster): discovered nodes outside cluster: ['[email protected]']
2023-09-28T01:26:55.631845-05:00 [warning] Ekka(AutoCluster): discovered nodes outside cluster: ['[email protected]']

Steps to reproduce

(How can someone else make/see it happen)

Proposed changes

Provide documentation on cluster setup.

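Until such documentation exists, a hedged sketch of static cluster discovery through the add-on's env_vars option, assuming EMQX 5's cluster.discovery_strategy and cluster.static.seeds keys (the first seed is the node name seen in the log above; the second is a hypothetical peer):

  env_vars:
    - name: EMQX_CLUSTER__DISCOVERY_STRATEGY
      value: "static"
    - name: EMQX_CLUSTER__STATIC__SEEDS
      value: '["[email protected]", "emqx@peer-host.local"]'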

No ACK from MQTT server in 10 seconds

Home Assistant logs randomly (I'm not able to correlate them with any particular event) show a few errors saying
"No ACK from MQTT server in 10 seconds (mid ####)" when using the EMQX add-on and the default MQTT integration.
It works as expected for a few hours, then those errors appear in HA's log and SOME MQTT entities become unavailable.
Restarting the add-on or HA brings those entities back, until the next error.
No errors when using the Mosquitto add-on.

Home Assistant 2022.9.6
Supervisor 2022.09.1
Operating System 9.0

EMQX version: 0.1.1 ( default settings ).

Update to latest release stops EMQX completely.

Problem/Motivation

(Why the issue was filed)

Expected behavior

(What you expected to happen)

Actual behavior

(What actually happened)

Steps to reproduce

(How can someone else make/see it happen)

Proposed changes

(If you have a proposed change, workaround or fix,
describe the rationale behind it)

High Ram and CPU Spikes causing OOM

I run a fresh installation of Home Assistant OS on a brand new ODROID-M1 (4 GB version), and because of the better hardware I decided to go with a better MQTT broker. But after installing the EMQX add-on, the RAM and CPU consumption started spiking.

Detailed description (not from me) can be found here: emqx/emqx#10509

I have exactly the same symptoms as the creator of the issue in the EMQX repository.

(screenshots)

Local Database (with Credentials) is lost after HAOS update to 12.4

Problem/Motivation

I've updated my HAOS to version 12.4 - all Docker containers were restarted.
After the update, all connections to EMQX were canceled with the message "invalid login".
I could not log into the dashboard using my admin password.
But the default password works.
In EMQX I could see a configured Built-in Database, but without any users.

Expected behavior

The Built-In Database should survive OS updates.

Actual behavior

The admin password and the Built-In Database are gone

Error rendering multiple views: "hotconf not found"

Problem

I can't see several views inside EMQX, and am instead left on a loading screen. Inspecting the connection reports a failure to load "hotconf"

Expected behavior

I expect to be able to see e.g. management MQTT settings

Actual behavior

I see a loading page which never populates. There is a 404 in developer tools

Steps to reproduce

  • Install HAOS on an Ubuntu server (not sure this is required, but that's my system)
  • Install EMQX
  • open Web UI
  • click Management -> MQTT settings

Proposed changes

I've no idea what this file does, but it looks to me like the path hasn't been properly set, potentially due to an errant / at the start.

In Chrome devTools Network tab, for example, I can see a successful (200) fetch request for
https://<my-homeassistant-ip>/api/hassio_ingress/<plugin-id>/api/v5/stats
...followed by an unsuccessful (404) one to
https://<my-homeassistant-ip>/api/v5/schemas/hotconf

Note the lack of the hassio_ingress part of the path.

I've had a quick look through the files in this repo, but couldn't see anything obvious, although I'm not familiar with how hassio plugins work. It may be that this needs something changing upstream?

Local Database Storage and backup

Problem/Motivation

I'm trying to figure out where the local database is stored on the system. The only mount inside the container appears to be the /ssl directory. I also don't see any docker volumes that were created.

I'd also like to make sure that the data is being kept as part of the Home Assistant backup.

Expected behavior

Some other addons can store data in /config

Actual behavior

Can't find where the storage is

Proposed changes

Documentation should list where the local storage is kept or put it in /config

Log usage unclear

Problem/Motivation

high memory usage

Expected behavior

normal memory usage

Actual behavior


2024-02-18T20:32:15.628659+01:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3(424), name: high_system_memory_usage
2024-02-18T21:30:15.690699+01:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3(418), message: <<"78.39% mem usage">>, name: high_system_memory_usage
2024-02-18T21:31:15.692591+01:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3(424), name: high_system_memory_usage
2024-02-18T23:31:15.819597+01:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3(418), message: <<"77.36% mem usage">>, name: high_system_memory_usage
2024-02-18T23:32:15.820445+01:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3(424), name: high_system_memory_usage
2024-02-19T03:31:16.070281+01:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3(418), message: <<"76.40% mem usage">>, name: high_system_memory_usage
2024-02-19T03:32:16.072468+01:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3(424), name: high_system_memory_usage
2024-02-19T07:50:16.341557+01:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3(418), message: <<"80.33% mem usage">>, name: high_system_memory_usage
2024-02-19T07:51:16.343575+01:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3(424), name: high_system_memory_usage
2024-02-19T13:31:16.710660+01:00 [warning] msg: alarm_is_activated, mfa: emqx_alarm:do_actions/3(418), message: <<"80.00% mem usage">>, name: high_system_memory_usage
2024-02-19T13:32:16.715605+01:00 [warning] msg: alarm_is_deactivated, mfa: emqx_alarm:do_actions/3(424), name: high_system_memory_usage

Steps to reproduce

just run

Proposed changes

no

Is there any chance to give it more memory or to find out what is needed?

Logging and Monitoring page not loading

Problem/Motivation

The Logging page does not load and shows an empty page.
The same happens with the Monitoring menu when "System" is selected. If you click on "Integration", a page is loaded.

Expected behavior

Logging page loads like described in:
https://www.emqx.io/docs/en/v5.0/observability/log.html#configure-logging-via-dashboard

same with the Monitoring page.

Actual behavior

Both pages do not load and just show an empty page.

Steps to reproduce

  1. Install the EMQX add-on.
  2. Log in.
  3. Go in the menu on the left side to the Logging or Monitoring link and open it.

Proposed changes

Well, I don't know the reason why these pages are not loading. The standard documentation assumes that after the install the pages are available and can be used to configure the logging itself. So it seems to be something with the installation.

404 error when trying delete one retained topic

Problem/Motivation

When deleting a retained message I get a 404 error at the top.

Expected behavior

That I don't get the 404 error and that I am able to delete the message from the list.

Steps to reproduce

  1. Emit a retained message.
  2. Open the tab with retained messages.
  3. Delete one exact topic ("Clear all" works correctly).

Any way to store the user id and password in config?

Problem/Motivation

I have set it up and everything works well. Just wondering if there is a way for me to store the username and password somewhere in the configuration file?

The current behavior is that when I select EMQX, I need to keep entering the user ID and password.

Expected behavior

Anonymous

Problem/Motivation

Can't figure out how to get anonymous clients to work.
Tried EMQX_ALLOW_ANONYMOUS=true as an env var, but it doesn't work.

Expected behavior

(What you expected to happen)

Actual behavior

(What actually happened)

Steps to reproduce

(How can someone else make/see it happen)

Proposed changes

(If you have a proposed change, workaround or fix,
describe the rationale behind it)
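
EMQX_ALLOW_ANONYMOUS is an EMQX 4.x setting. A hedged sketch for the EMQX 5 used by this add-on, assuming the env_vars option accepts name/value pairs and that listeners.tcp.default.enable_authn controls per-listener authentication:

  env_vars:
    - name: EMQX_LISTENERS__TCP__DEFAULT__ENABLE_AUTHN   # assumed EMQX 5 key
      value: "false"   # skip authentication on the default TCP listener, i.e. allow anonymous clients

Alternatively, leaving no authenticator configured under Access Control should also let clients connect without credentials; verify against the EMQX 5 documentation for the bundled version.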

Add a bridge

Problem/Motivation

I want to add a bridge via the EMQX interface, but the bridge doesn't seem to be subscribing.


Expected behavior

subscribe to the bridge broker


Actual behavior

I don't see it subscribing.


Steps to reproduce

(How can someone else make/see it happen)

Proposed changes

In the standard Mosquitto broker you added a config file. I don't know where this config file should be created or what format it requires.


After upgrade to 0.5.0 EMQX no longer starts.

My current version is 0.4.1 and it runs fine.
After upgrading to 0.5.0, EMQX no longer starts; the log I get is below.


s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service base-addon-banner: starting

Add-on: EMQX
The most scalable open-source MQTT broker for IoT. An alternative for the Mosquitto add-on
Add-on version: 0.5.0
You are running the latest version of this add-on.
System: Home Assistant OS 12.1 (amd64 / generic-x86-64)
Home Assistant Core: 2024.3.3
Home Assistant Supervisor: 2024.03.1

s6-rc: info: service base-addon-banner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service base-addon-timezone: starting
s6-rc: info: service base-addon-log-level: starting
s6-rc: info: service fix-attrs successfully started
[16:52:26] INFO: Configuring timezone (Europe/Amsterdam)...
s6-rc: info: service base-addon-log-level successfully started
s6-rc: info: service base-addon-timezone successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service init-emqx: starting
s6-rc: info: service init-emqx successfully started
s6-rc: info: service emqx: starting
s6-rc: info: service emqx successfully started
s6-rc: info: service legacy-services: starting
[16:52:26] INFO: Starting EMQX...
s6-rc: info: service legacy-services successfully started
[16:52:26] INFO: Setting EMQX_NODE__NAME to core-emdx
EMQX_PLUGINS__INSTALL_DIR [plugins.install_dir]: /data/emqx/plugins
EMQX_RPC__PORT_DISCOVERY [rpc.port_discovery]: manual
EMQX_NODE__DATA_DIR [node.data_dir]: /data/emqx/data
EMQX_NODE__COOKIE [node.cookie]: ******
EMQX_NODE__NAME [node.name]: core-emdx
2024-04-03T16:52:29.769146+02:00 [error] failed_to_check_schema: emqx_conf_schema
2024-04-03T16:52:29.776520+02:00 [error] #{reason => integrity_validation_crash,stacktrace => [{emqx_conf_schema,validate_cluster_strategy,1,[{file,"emqx_conf_schema.erl"},{line,1494}]},{hocon_tconf,assert_integrity,4,[{file,"hocon_tconf.erl"},{line,182}]},{hocon_tconf,assert_integrity,3,[{file,"hocon_tconf.erl"},{line,176}]},{hocon_tconf,map,4,[{file,"hocon_tconf.erl"},{line,304}]},{hocon_tconf,map_translate,3,[{file,"hocon_tconf.erl"},{line,99}]},{hocon_tconf,generate,3,[{file,"hocon_tconf.erl"},{line,93}]},{hocon_cli,generate,1,[{file,"hocon_cli.erl"},{line,317}]},{escript,run,2,[{file,"escript.erl"},{line,750}]}],exception => {error,{badmatch,["core-emdx"]}},kind => validation_error,validation_name => check_node_name_and_discovery_strategy}
ERROR: call_hocon_failed: -v -t 2024.04.03.16.52.28 -s emqx_conf_schema -c /data/emqx/data/configs/cluster.hocon -c /opt/emqx/etc/emqx.conf -d /data/emqx/data/configs generate
[16:52:31] INFO: Service EMQX exited with code 1 (by signal 0)
s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service emqx: stopping
s6-rc: info: service emqx successfully stopped
s6-rc: info: service init-emqx: stopping
s6-rc: info: service init-emqx successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service base-addon-timezone: stopping
s6-rc: info: service base-addon-log-level: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service base-addon-timezone successfully stopped
s6-rc: info: service base-addon-log-level successfully stopped
s6-rc: info: service base-addon-banner: stopping
s6-rc: info: service base-addon-banner successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped


When I revert to the backup of version 0.4.1, EMQX works fine again.

Update documentation

Problem/Motivation

Was about to try out EMQX, but got stuck at point 6; it doesn't make sense:
6. Be sure to first set up authentication in for your MQTT client, but setting up an authentication method in the EMQX web UI under "Access Control" -> "Authentication".
7. Read the step above again and make sure you have set up authetication.

Did point 7 several times, but kept on reading gibberish.
Tried to create an MQTT user in several ways, but it failed to connect to HA
(trying the same credentials as I used to use in MQTTBroker, so I wouldn't need to change much ;)

Proposed changes

A clearer explanation, maybe even an example for people currently using MQTT Broker


New issue since latest HAOS RC update.

Problem/Motivation

Since the latest HAOS update (beta version), the add-on will no longer start.

Actual behavior

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service base-addon-banner: starting
-----------------------------------------------------------
 Add-on: EMQX
 The most scalable open-source MQTT broker for IoT. An alternative for the Mosquitto add-on
-----------------------------------------------------------
 Add-on version: 0.2.4
 You are running the latest version of this add-on.
 System: Home Assistant OS 10.0.rc1  (aarch64 / odroid-n2)
 Home Assistant Core: 2023.3.6
 Home Assistant Supervisor: 2023.03.2
-----------------------------------------------------------
 Please, share the above information when looking for help
 or support in, e.g., GitHub, forums or the Discord chat.
-----------------------------------------------------------
s6-rc: info: service base-addon-banner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service base-addon-timezone: starting
s6-rc: info: service base-addon-log-level: starting
s6-rc: info: service fix-attrs successfully started
[07:47:36] INFO: Configuring timezone (Europe/Brussels)...
s6-rc: info: service base-addon-timezone successfully started
s6-rc: info: service base-addon-log-level successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service init-emqx: starting
s6-rc: info: service init-emqx successfully started
s6-rc: info: service emqx: starting
s6-rc: info: service emqx successfully started
s6-rc: info: service legacy-services: starting
[07:47:36] INFO: Starting EMQX...
s6-rc: info: service legacy-services successfully started
ERROR: call_hocon_failed: -s emqx_conf_schema -c /opt/emqx/etc/emqx.conf multi_get node.data_dir node.name node.cookie node.db_backend cluster.proto_dist node.dist_net_ticktime
[07:47:42] INFO: Service EMQX exited with code 137 (by signal 0)
s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service emqx: stopping
s6-rc: info: service emqx successfully stopped
s6-rc: info: service init-emqx: stopping
s6-rc: info: service init-emqx successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service base-addon-timezone: stopping
s6-rc: info: service base-addon-log-level: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service base-addon-log-level successfully stopped
s6-rc: info: service base-addon-timezone successfully stopped
s6-rc: info: service base-addon-banner: stopping
s6-rc: info: service base-addon-banner successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped

EMQX won't start

Problem/Motivation

I wanted to change from Mosquitto to EMQX because my MQTT-connected devices become unavailable several times per hour.

(Why the issue was filed)

Installed EMQX from the HA add-on repo, stopped Mosquitto, started EMQX; it failed to start.
Rebooted HA with Mosquitto's start-on-boot turned off (to make sure the stop was a full stop), then started EMQX and got the same error.

Expected behavior

(What you expected to happen)

Start EMQX

Actual behavior

(What actually happened)

EMQX not starting

Steps to reproduce

(How can someone else make/see it happen)

IDK; I think it should work fine, though for me it does not start at all.

Proposed changes

(If you have a proposed change, workaround or fix,
describe the rationale behind it)

EMQX
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
cont-init: info: running /etc/cont-init.d/00-banner.sh

Add-on: EMQX
The most scalable open-source MQTT broker for IoT. An alternative for the Mosquitto add-on

Add-on version: 0.1.3
You are running the latest version of this add-on.
System: Home Assistant OS 9.3 (amd64 / qemux86-64)
Home Assistant Core: 2022.11.4
Home Assistant Supervisor: 2022.11.2

Please, share the above information when looking for help
or support in, e.g., GitHub, forums or the Discord chat.

cont-init: info: /etc/cont-init.d/00-banner.sh exited 0
cont-init: info: running /etc/cont-init.d/01-log-level.sh
cont-init: info: /etc/cont-init.d/01-log-level.sh exited 0
cont-init: info: running /etc/cont-init.d/02-set-timezone.sh
[13:36:19] INFO: Configuring timezone
cont-init: info: /etc/cont-init.d/02-set-timezone.sh exited 0
cont-init: info: running /etc/cont-init.d/emqx.sh
cont-init: info: /etc/cont-init.d/emqx.sh exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun emqx (no readiness notification)
s6-rc: info: service legacy-services successfully started
[13:36:19] INFO: Starting EMQX...
plugins.install_dir = EMQX_PLUGINS__INSTALL_DIR = "/data/emqx/plugins"
rpc.port_discovery = EMQX_RPC__PORT_DISCOVERY = manual
log.file_handlers.default.enable = EMQX_LOG__FILE_HANDLERS__DEFAULT__ENABLE = false
log.console_handler.enable = EMQX_LOG__CONSOLE_HANDLER__ENABLE = true
node.data_dir = EMQX_NODE__DATA_DIR = "/data/emqx/data"
node.cookie = EMQX_NODE__COOKIE = ******
node.name = EMQX_NODE__NAME = [email protected]
Listener ssl:default on 0.0.0.0:8883 started.
Listener tcp:default on 0.0.0.0:1883 started.
2022-11-26T13:36:32.784352+01:00 [error] Failed to start Ranch listener 'ws:default' in ranch_tcp:listen([{cacerts,'...'},{key,'...'},{cert,'...'},{ip,{0,0,0,0}},{port,8083},{backlog,1024},{buffer,4096},{high_watermark,1048576},{nodelay,true},{send_timeout,15000},{send_timeout_close,true}]) for reason eaddrinuse (address already in use)
2022-11-26T13:36:32.784622+01:00 [error] Supervisor: {<0.2171.0>,ranch_listener_sup}. Context: start_error. Reason: {listen_error,'ws:default',eaddrinuse}. Offender: id=ranch_acceptors_sup,pid=undefined.
Failed to start listener ws:default on 0.0.0.0:8083: eaddrinuse.
2022-11-26T13:36:32.784805+01:00 [error] crasher: initial call: supervisor:ranch_acceptors_sup/1, pid: <0.2173.0>, registered_name: [], exit: {{listen_error,'ws:default',eaddrinuse},[{ranch_acceptors_sup,listen_error,5,[{file,"ranch_acceptors_sup.erl"},{line,66}]},{supervisor,init,1,[{file,"supervisor.erl"},{line,330}]},{gen_server,init_it,2,[{file,"gen_server.erl"},{line,423}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,390}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,226}]}]}, ancestors: [<0.2171.0>,ranch_sup,<0.1909.0>], message_queue_len: 0, messages: [], links: [<0.2171.0>], dictionary: [{logger,error_logger}], trap_exit: true, status: running, heap_size: 2586, stack_size: 29, reductions: 9194; neighbours:
2022-11-26T13:36:32.785760+01:00 [error] crasher: initial call: application_master:init/4, pid: <0.2031.0>, registered_name: [], exit: {{bad_return,{{emqx_app,start,[normal,[]]},{'EXIT',{{failed_to_start,"ws:default(0.0.0.0:8083) : eaddrinuse"},[{emqx_listeners,'-foreach_listeners/1-fun-0-',2,[{file,"emqx_listeners.erl"},{line,625}]},{lists,foreach,2,[{file,"lists.erl"},{line,1342}]},{emqx_app,maybe_start_listeners,0,[{file,"emqx_app.erl"},{line,90}]},{emqx_app,start,2,[{file,"emqx_app.erl"},{line,50}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,293}]}]}}}},[{application_master,init,4,[{file,"application_master.erl"},{line,142}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,226}]}]}, ancestors: [<0.2030.0>], message_queue_len: 1, messages: [{'EXIT',<0.2032.0>,normal}], links: [<0.2030.0>,<0.1688.0>], dictionary: [], trap_exit: true, status: running, heap_size: 987, stack_size: 29, reductions: 160; neighbours:
[os_mon] memory supervisor port (memsup): Erlang has closed
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed
{"Kernel pid terminated",application_controller,"{application_start_failure,emqx,{bad_return,{{emqx_app,start,[normal,[]]},{'EXIT',{{failed_to_start,"ws:default(0.0.0.0:8083) : eaddrinuse"},[{emqx_listeners,'-foreach_listeners/1-fun-0-',2,[{file,"emqx_listeners.erl"},{line,625}]},{lists,foreach,2,[{file,"lists.erl"},{line,1342}]},{emqx_app,maybe_start_listeners,0,[{file,"emqx_app.erl"},{line,90}]},{emqx_app,start,2,[{file,"emqx_app.erl"},{line,50}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,293}]}]}}}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,emqx,{bad_return,{{emqx_app,start,[normal,[]]},{'EXIT',{{failed_to_start,"ws:default(0.0.0.0:8083) : eaddrinuse"},[{emqx_listeners,'-foreach_listeners/1-fun-0-',2,[{file,"emqx_listeners.erl"},{line,625}]},{lists,foreach,2,[{file,"lists.erl"},{line,1342}]},{emqx_app,maybe_start_listeners,0,[{file,"emqx_app.erl"},{line,90}]},{emqx_app,start,2,[{file,"emqx_app.erl"},{line,50}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,293}]}]}}}}})

Crash dump is being written to: log/erl_crash.dump...done
[13:36:42] WARNING: EMQX crashed, halting add-on
s6-rc: info: service legacy-services: stopping
[13:36:42] INFO: EMQX stopped, restarting...
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped
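
The "eaddrinuse" on 0.0.0.0:8083 in the log above means something else was still bound to the WebSocket listener port when EMQX started - most likely the Mosquitto add-on or another service that was not fully stopped. A quick way to check from the host shell (a hedged sketch; tool availability depends on your setup):

# Show which process is already listening on the EMQX WebSocket port 8083.
# Run on the Home Assistant host (e.g. via the SSH add-on); either tool may
# be missing depending on the image, so both variants are shown.
ss -lntp | grep ':8083' || netstat -lntp | grep ':8083'

# List running containers to spot another broker still holding the port.
docker ps --format '{{.Names}}\t{{.Ports}}' | grep -i -e mosquitto -e mqtt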

The latest image (0.2.9) cannot access certificates from NginxProxyManager anymore

Problem/Motivation

EMQX no longer starts because it is unable to access the certificate files.

Expected behavior

It starts and reads the certificate files the same as the previous version (0.2.8).

Actual behavior

It crashes and EMQX shuts down.

Steps to reproduce

Configure SSL by putting this in the add-on configuration. This works fine in 0.2.8:

- name: EMQX_LISTENERS__WSS__DEFAULT__SSL_OPTIONS__CERTFILE
value: ssl/nginxproxymanager/live/npm-3/fullchain.pem
- name: EMQX_LISTENERS__WSS__DEFAULT__SSL_OPTIONS__KEYFILE
value: ssl/nginxproxymanager/live/npm-3/privkey.pem

Upgrade to latest version (0.2.9)
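
A quick way to narrow this down (a hedged sketch; the container name is taken from other reports on this page, and it is assumed that the relative "ssl/..." values resolve under the add-on's /ssl mapping - adjust both to your install) is to check whether the add-on container can still see and read the NginxProxyManager certificate files after the upgrade:

# Check that the EMQX add-on container can still read the certificate files.
docker exec addon_a0d7b954_emqx ls -l \
  /ssl/nginxproxymanager/live/npm-3/fullchain.pem \
  /ssl/nginxproxymanager/live/npm-3/privkey.pem

# Show which user the add-on runs as, to compare with the file
# ownership/permissions printed above.
docker exec addon_a0d7b954_emqx id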

Proposed changes

MQTT Settings Tab Not Loading

Problem/Motivation

I have installed the EMQX Home Assistant add-on running on a Raspberry Pi with 128GB storage and 8GB RAM. MQTT is working in the sense that it is receiving/brokering messages. So, that part is fine. However, when I click on the MQTT Settings tab in the EMQX UI (within Home Assistant) it does not load the General or Session tabs (see below image). The other tabs load just fine.

[Screenshot: the EMQX MQTT Settings page with the General and Sessions tabs failing to render]

Expected behavior

I expected the General and Sessions tabs to load/render so I can view/modify the settings for the EMQX add-on.

Actual behavior

The two tabs mentioned above do not render.

Steps to reproduce

Click on the MQTT Settings tab.

What I've tried

I have:

  1. Rebooted the Raspberry Pi
  2. Restarted the Home Assistant service(s)
  3. Let it "sit" on the General tab for several minutes wondering if it might eventually load/render
  4. Refreshed the Settings page multiple times thinking it might load/render at some point
  5. Checked the running processes and the R-Pi is very "bored" (i.e., CPU % running in single digits, nearing zero)
  6. Checked available RAM... 5.4GB of the total 8GB is free

Container Health Check fails

Problem/Motivation

The container health check fails - the HASS supervisor tries to restart the container even though it is working properly.
Container health is marked as "unhealthy" when checked with docker ps:

CONTAINER ID   IMAGE                                                          COMMAND           CREATED          STATUS                      PORTS                                       NAMES
64f06a0896dd   ghcr.io/hassio-addons/emqx/amd64:0.2.5                         "/init"           11 minutes ago   Up 11 minutes (unhealthy)                                               addon_a0d7b954_emqx

Expected behavior

Container health check to work properly.

Actual behavior

Every time HASS is restarted, the HASS supervisor restarts the EMQX container multiple times because it is marked as unhealthy. This happens about 10 times, then the supervisor gives up because of its restart rate limit of 10 restarts in 30 minutes.

23-03-27 21:11:22 WARNING (MainThread) [supervisor.addons.addon] Watchdog found addon EMQX is unhealthy, restarting...
23-03-27 21:11:22 INFO (SyncWorker_6) [supervisor.docker.interface] Stopping addon_a0d7b954_emqx application
23-03-27 21:11:27 INFO (SyncWorker_6) [supervisor.docker.interface] Cleaning addon_a0d7b954_emqx application
23-03-27 21:11:27 INFO (SyncWorker_10) [supervisor.docker.addon] Starting Docker add-on ghcr.io/hassio-addons/emqx/amd64 with version 0.2.5

...

<Task finished name='Task-3096043' coro=<Addon.watchdog_container() done, defined at /usr/src/supervisor/supervisor/addons/addon.py:989> exception=AddonsJobError('Rate limit exceeded, more then 10 calls in 0:30:00')>

Steps to reproduce

If the health check command from the Dockerfile is run manually inside the container, the error message is:

root@a0d7b954-emqx:/# /opt/emqx/bin/emqx_ctl
Node '[email protected]' not responding to pings.

Proposed changes

(If you have a proposed change, workaround or fix,
describe the rationale behind it)

EMQX not starting

Add-on version: 0.3.2
System: Home Assistant OS 11.4
Core: 2024.1.5
Supervisor: 2023.12.1

EMQX_PLUGINS__INSTALL_DIR [plugins.install_dir]: /data/emqx/plugins
EMQX_RPC__PORT_DISCOVERY [rpc.port_discovery]: manual
EMQX_NODE__DATA_DIR [node.data_dir]: /data/emqx/data
EMQX_NODE__COOKIE [node.cookie]: ******
EMQX_NODE__NAME [node.name]: [email protected]
Mnesia('[email protected]'): {load_dcl,emqx_dashboard_monitor} repaired, lost 55225 bad bytes
2024-01-23T08:01:36.893942+01:00 [error] Error in process <0.2248.0> on node '[email protected]' with exit value:, {function_clause,[{mnesia_log,add_recs,[[{log_header,decision_tab,"1.0","4.21.4",'[email protected]',{1698,528627,649767}},{decision_list,[{trans_tid,serial,2488679}]},{{emqx_dashboard_monitor,1705736380000},{emqx_monit,1705736380000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736390000},{emqx_monit,1705736390000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736400000},{emqx_monit,1705736400000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736410000},{emqx_monit,1705736410000,#{connections => 11,dropped => 1,live_connections => 11,received => 3,sent => 2,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736420000},{emqx_monit,1705736420000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736430000},{emqx_monit,1705736430000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736440000},{emqx_monit,1705736440000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736450000},{emqx_monit,1705736450000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736460000},{emqx_monit,1705736460000,#{connections => 11,dropped => 1,live_connections => 11,received => 2,sent => 1,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736470000},{emqx_monit,1705736470000,#{connections => 11,dropped => 1,live_connections => 11,received => 3,sent => 2,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736480000},{emqx_monit,1705736480000,#{connections => 11,dropped => 6,live_connections => 11,received => 19,sent => 13,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736490000},{emqx_monit,1705736490000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736500000},{emqx_monit,1705736500000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736510000},{emqx_monit,1705736510000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736520000},{emqx_monit,1705736520000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736530000},{emqx_monit,1705736530000,#{connections => 11,dropped => 1,live_connections => 11,received => 4,sent => 3,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736540000},{emqx_monit,1705736540000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 
66}},write},{{emqx_dashboard_monitor,1705736550000},{emqx_monit,1705736550000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736560000},{emqx_monit,1705736560000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736570000},{emqx_monit,1705736570000,#{connections => 11,dropped => 6,live_connections => 11,received => 24,sent => 24,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736580000},{emqx_monit,1705736580000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736590000},{emqx_monit,1705736590000,#{connections => 11,dropped => 1,live_connections => 11,received => 3,sent => 2,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736600000},{emqx_monit,1705736600000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736610000},{emqx_monit,1705736610000,#{connections => 11,dropped => 1,live_connections => 11,received => 2,sent => 1,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736620000},{emqx_monit,1705736620000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736630000},{emqx_monit,1705736630000,#{connections => 11,dropped => 7,live_connections => 11,received => 20,sent => 13,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736640000},{emqx_monit,1705736640000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736650000},{emqx_monit,1705736650000,#{connections => 11,dropped => 1,live_connections => 11,received => 3,sent => 2,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736660000},{emqx_monit,1705736660000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736670000},{emqx_monit,1705736670000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736680000},{emqx_monit,1705736680000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736690000},{emqx_monit,1705736690000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736700000},{emqx_monit,1705736700000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736710000},{emqx_monit,1705736710000,#{connections => 11,dropped => 1,live_connections => 11,received => 3,sent => 2,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736720000},{emqx_monit,1705736720000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736730000},{emqx_monit,1705736730000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 
0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736740000},{emqx_monit,1705736740000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736750000},{emqx_monit,1705736750000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write}],763430436525],[{file,"mnesia_log.erl"},{line,999}]},{mnesia_log,insert_logchunk,3,[{file,"mnesia_log.erl"},{line,994}]},{mnesia_log,load_dcl,2,[{file,"mnesia_log.erl"},{line,986}]},{mnesia_loader,do_get_disc_copy2,4,[{file,"mnesia_loader.erl"},{line,74}]},{mnesia_controller,disc_load_table,4,[{file,"mnesia_controller.erl"},{line,2271}]},{mnesia_controller,'-load_and_reply/2-fun-0-',2,[{file,"mnesia_controller.erl"},{line,2166}]}]}, mfa: undefined
2024-01-23T08:01:37.988218+01:00 [warning] msg: still_waiting_for_emqx_app_to_be_ready, mfa: emqx_cluster_rpc:do_wait_for_emqx_ready/1(691)
2024-01-23T08:01:38.998185+01:00 [warning] msg: still_waiting_for_emqx_app_to_be_ready, mfa: emqx_cluster_rpc:do_wait_for_emqx_ready/1(691)
2024-01-23T08:01:40.008196+01:00 [warning] msg: still_waiting_for_emqx_app_to_be_ready, mfa: emqx_cluster_rpc:do_wait_for_emqx_ready/1(691)
2024-01-23T08:01:40.930785+01:00 [error] Supervisor: {local,mnesia_kernel_sup}. Context: child_terminated. Reason: killed. Offender: id=mnesia_locker,pid=<0.2173.0>.
2024-01-23T08:01:40.930953+01:00 [error] Supervisor: {local,mnesia_kernel_sup}. Context: shutdown. Reason: reached_max_restart_intensity. Offender: id=mnesia_locker,pid=<0.2173.0>.
2024-01-23T08:01:40.932164+01:00 [error] Mnesia('[email protected]'): ** ERROR ** (core dumped to file: "/opt/emqx/[email protected]_1705_993300_930708"), ** FATAL ** Loader crashed: {function_clause,[{mnesia_log,add_recs,[[{log_header,decision_tab,"1.0","4.21.4",'[email protected]',{1698,528627,649767}},{decision_list,[{trans_tid,serial,2488679}]},{{emqx_dashboard_monitor,1705736380000},{emqx_monit,1705736380000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736390000},{emqx_monit,1705736390000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736400000},{emqx_monit,1705736400000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736410000},{emqx_monit,1705736410000,#{connections => 11,dropped => 1,live_connections => 11,received => 3,sent => 2,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736420000},{emqx_monit,1705736420000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736430000},{emqx_monit,1705736430000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736440000},{emqx_monit,1705736440000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736450000},{emqx_monit,1705736450000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736460000},{emqx_monit,1705736460000,#{connections => 11,dropped => 1,live_connections => 11,received => 2,sent => 1,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736470000},{emqx_monit,1705736470000,#{connections => 11,dropped => 1,live_connections => 11,received => 3,sent => 2,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736480000},{emqx_monit,1705736480000,#{connections => 11,dropped => 6,live_connections => 11,received => 19,sent => 13,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736490000},{emqx_monit,1705736490000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736500000},{emqx_monit,1705736500000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736510000},{emqx_monit,1705736510000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736520000},{emqx_monit,1705736520000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736530000},{emqx_monit,1705736530000,#{connections => 11,dropped => 1,live_connections => 11,received => 4,sent => 3,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736540000},{emqx_monit,1705736540000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 
66}},write},{{emqx_dashboard_monitor,1705736550000},{emqx_monit,1705736550000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736560000},{emqx_monit,1705736560000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736570000},{emqx_monit,1705736570000,#{connections => 11,dropped => 6,live_connections => 11,received => 24,sent => 24,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736580000},{emqx_monit,1705736580000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736590000},{emqx_monit,1705736590000,#{connections => 11,dropped => 1,live_connections => 11,received => 3,sent => 2,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736600000},{emqx_monit,1705736600000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736610000},{emqx_monit,1705736610000,#{connections => 11,dropped => 1,live_connections => 11,received => 2,sent => 1,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736620000},{emqx_monit,1705736620000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736630000},{emqx_monit,1705736630000,#{connections => 11,dropped => 7,live_connections => 11,received => 20,sent => 13,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736640000},{emqx_monit,1705736640000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736650000},{emqx_monit,1705736650000,#{connections => 11,dropped => 1,live_connections => 11,received => 3,sent => 2,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736660000},{emqx_monit,1705736660000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736670000},{emqx_monit,1705736670000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736680000},{emqx_monit,1705736680000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736690000},{emqx_monit,1705736690000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736700000},{emqx_monit,1705736700000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736710000},{emqx_monit,1705736710000,#{connections => 11,dropped => 1,live_connections => 11,received => 3,sent => 2,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736720000},{emqx_monit,1705736720000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736730000},{emqx_monit,1705736730000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 
0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736740000},{emqx_monit,1705736740000,#{connections => 11,dropped => 0,live_connections => 11,received => 0,sent => 0,subscriptions => 72,topics => 66}},write},{{emqx_dashboard_monitor,1705736750000},{emqx_monit,1705736750000,#{connections => 11,dropped => 6,live_connections => 11,received => 18,sent => 12,subscriptions => 72,topics => 66}},write}],763430436525],[{file,"mnesia_log.erl"},{line,999}]},{mnesia_log,insert_logchunk,3,[{file,"mnesia_log.erl"},{line,994}]},{mnesia_log,load_dcl,2,[{file,"mnesia_log.erl"},{line,986}]},{mnesia_loader,do_get_disc_copy2,4,[{file,"mnesia_loader.erl"},{line,74}]},{mnesia_controller,disc_load_table,4,[{file,"mnesia_controller.erl"},{line,2271}]},{mnesia_controller,'-load_and_reply/2-fun-0-',2,[{file,"mnesia_controller.erl"},{line,2166}]}]}, state: {state,<0.2170.0>,true,[],[{<0.2248.0>,{disc_load,emqx_dashboard_monitor,local_only,undefined}}],{0,nil},[],[],{1,{emqx_dashboard_monitor,{late_load,emqx_dashboard_monitor,local_only,undefined,[]},nil,nil}},undefined,[],[],#Ref<0.4190645402.3434086401.141586>,false}
2024-01-23T08:01:41.018193+01:00 [warning] msg: still_waiting_for_emqx_app_to_be_ready, mfa: emqx_cluster_rpc:do_wait_for_emqx_ready/1(691)
2024-01-23T08:01:42.028215+01:00 [warning] msg: still_waiting_for_emqx_app_to_be_ready, mfa: emqx_cluster_rpc:do_wait_for_emqx_ready/1(691)
2024-01-23T08:01:43.038191+01:00 [warning] msg: still_waiting_for_emqx_app_to_be_ready, mfa: emqx_cluster_rpc:do_wait_for_emqx_ready/1(691)
2024-01-23T08:01:43.932315+01:00 [error] Supervisor: {local,mnesia_kernel_sup}. Context: shutdown_error. Reason: killed. Offender: id=mnesia_controller,pid=<0.2201.0>.
2024-01-23T08:01:43.932870+01:00 [error] Generic server emqx_common_shard terminating. Reason: {{badmatch,{error,{node_not_running,'[email protected]'}}},[{mria_rlog_server,process_schema,1,[{file,"mria_rlog_server.erl"},{line,233}]},{mria_rlog_server,handle_continue,2,[{file,"mria_rlog_server.erl"},{line,127}]},{gen_server,try_dispatch,4,[{file,"gen_server.erl"},{line,1123}]},{gen_server,loop,7,[{file,"gen_server.erl"},{line,865}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,240}]}]}. Last message: {continue,post_init}. State: {<0.2463.0>,emqx_common_shard}.
2024-01-23T08:01:43.933775+01:00 [error] crasher: initial call: mria_rlog_server:init/1, pid: <0.2464.0>, registered_name: emqx_common_shard, error: {{badmatch,{error,{node_not_running,'[email protected]'}}},[{mria_rlog_server,process_schema,1,[{file,"mria_rlog_server.erl"},{line,233}]},{mria_rlog_server,handle_continue,2,[{file,"mria_rlog_server.erl"},{line,127}]},{gen_server,try_dispatch,4,[{file,"gen_server.erl"},{line,1123}]},{gen_server,loop,7,[{file,"gen_server.erl"},{line,865}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,240}]}]}, ancestors: [<0.2463.0>,mria_shards_sup,mria_rlog_sup,mria_sup,<0.2158.0>], message_queue_len: 0, messages: [], links: [<0.2463.0>], dictionary: [{'$logger_metadata$',#{domain => [mria,rlog,server],shard => emqx_common_shard}}], trap_exit: true, status: running, heap_size: 6772, stack_size: 28, reductions: 6338; neighbours:

Does anyone have the same problem, or can anyone help based on the information in the log?
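
The "repaired, lost 55225 bad bytes" and "Loader crashed" lines point at a corrupted Mnesia table (the dashboard monitor history), most likely from an unclean shutdown. One possible workaround - hedged, and destructive, since it discards state EMQX keeps in Mnesia such as dashboard history and users/rules created through the UI - is to move the Mnesia directory aside so EMQX rebuilds it on the next start; the path follows the node.data_dir shown in the log, and the default layout (a "mnesia" directory under it) is assumed:

# Destructive workaround sketch: discard the corrupted Mnesia database so
# EMQX rebuilds it on the next start. Back up /data/emqx first if possible.
# Container name as reported elsewhere on this page; adjust if needed.
docker exec addon_a0d7b954_emqx \
  mv /data/emqx/data/mnesia /data/emqx/data/mnesia.corrupt

# Then restart the add-on from the Home Assistant UI and watch the log.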
