DevOps scripts for Mainflux IoT platform.
Follow the instructions in the charts directory.
Detailed documentation can be found here.
License: Apache License 2.0
Not sure about mTLS, but the MQTTS port is not exposed at all.
There are endpoints in the Things and Users services to create groups, but currently ingress is configured only for Users groups.
We can probably use:
/groups/users
/groups/things
or
/groups-users
/groups-things
Follow the example of Vault.
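If the /groups/things style is chosen, the ingress rule might look roughly like this. This is only a sketch: the ingress name, service name, and port number are assumptions, not taken from the chart.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mainflux-nginx-ingress-things-groups   # hypothetical name
spec:
  rules:
    - http:
        paths:
          - path: /groups/things
            pathType: Prefix
            backend:
              service:
                name: mainflux-things   # assumed Things service name
                port:
                  number: 8182          # assumed Things HTTP port
```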
I manually modified the following files to adapt to Kubernetes v1.23+, but after helm install, mainflux-mqtt-0 is not running. How can I solve it?
Env: Rancher RKE version v1.3.11
charts/jaeger-operator/crds/crd.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: jaegers.jaegertracing.io
spec:
  conversion:
    strategy: None
  group: jaegertracing.io
  names:
    kind: Jaeger
    listKind: JaegerList
    plural: jaegers
    singular: jaeger
  scope: Namespaced
  versions:
    - name: v1
      schema:
        openAPIV3Schema:
          description: cluster
          type: object
      served: true
      storage: true
Modify the postgresql and redis versions in Chart.yaml:
- name: postgresql
  version: "10.14.3"
  repository: "@bitnami"
- name: redis
  version: "15.7.0"
  repository: "@bitnami"
$ kubectl get po -n mf
NAME READY STATUS RESTARTS AGE
mainflux-adapter-coap-7477d8945-k72rf 0/1 CrashLoopBackOff 10 (3m56s ago) 30m
mainflux-adapter-http-677d4b8c9d-4zgft 0/1 CrashLoopBackOff 10 (3m27s ago) 30m
mainflux-auth-6968cb6649-5pnmz 1/1 Running 3 (29m ago) 30m
mainflux-auth-6968cb6649-k9g4l 1/1 Running 3 (29m ago) 30m
mainflux-auth-6968cb6649-zpxft 1/1 Running 3 (29m ago) 30m
mainflux-envoy-c4fcdb564-cmzfz 1/1 Running 0 30m
mainflux-envoy-c4fcdb564-ln7c8 1/1 Running 0 30m
mainflux-envoy-c4fcdb564-z7tgm 1/1 Running 0 30m
mainflux-grafana-5ddbcbb6fb-grb9m 1/1 Running 0 30m
mainflux-jaeger-operator-74ff955db6-b8zbw 1/1 Running 0 30m
mainflux-jaeger-operator-jaeger-684df956cd-bpq2r 1/1 Running 0 30m
mainflux-keto-84997fd4f-lpmgh 1/1 Running 0 30m
mainflux-mqtt-0 1/2 CrashLoopBackOff 10 (3m32s ago) 30m
mainflux-nats-0 1/1 Running 0 30m
mainflux-nats-1 1/1 Running 0 29m
mainflux-nats-2 1/1 Running 0 29m
mainflux-postgresqlauth-0 1/1 Running 0 30m
mainflux-postgresqlketo-0 1/1 Running 0 30m
mainflux-postgresqlthings-0 1/1 Running 0 30m
mainflux-postgresqlusers-0 1/1 Running 0 30m
mainflux-redis-auth-master-0 1/1 Running 0 30m
mainflux-redis-auth-replicas-0 1/1 Running 0 30m
mainflux-redis-auth-replicas-1 1/1 Running 0 29m
mainflux-redis-auth-replicas-2 1/1 Running 0 28m
mainflux-redis-mqtt-master-0 1/1 Running 0 30m
mainflux-redis-mqtt-replicas-0 1/1 Running 0 30m
mainflux-redis-mqtt-replicas-1 1/1 Running 0 29m
mainflux-redis-mqtt-replicas-2 1/1 Running 0 28m
mainflux-redis-streams-master-0 1/1 Running 0 30m
mainflux-redis-streams-replicas-0 1/1 Running 0 30m
mainflux-redis-streams-replicas-1 1/1 Running 0 29m
mainflux-redis-streams-replicas-2 1/1 Running 0 28m
mainflux-things-7df86f8fd5-j4j9x 1/1 Running 4 (29m ago) 30m
mainflux-things-7df86f8fd5-lrt9m 1/1 Running 3 (29m ago) 30m
mainflux-things-7df86f8fd5-qk85h 1/1 Running 3 (29m ago) 30m
mainflux-ui-6568c7d95-hzsmd 1/1 Running 0 30m
mainflux-users-5764fdcfd5-qx5bq 1/1 Running 3 (29m ago) 30m
$ kubectl logs -n mf mainflux-mqtt-0 -c mainflux-adapter-mqtt
2022/07/27 07:53:24 The binary was build using Nats as the message broker
{"level":"info","message":"gRPC communication is not encrypted","ts":"2022-07-27T07:53:24.550574065Z"}
{"level":"error","message":"Failed to connect to message broker: nats: no servers available for connection","ts":"2022-07-27T07:53:24.552383451Z"}
Attempting to pull or upgrade using helm dependency update resulted in the following dependency errors:
Downloading influxdb from repo https://kubernetes-charts.storage.googleapis.com/
Save error occurred: could not download https://charts.helm.sh/stable/influxdb-4.3.2.tgz: failed to fetch https://charts.helm.sh/stable/influxdb-4.3.2.tgz : 404 Not Found
and
Downloading mongodb from repo https://kubernetes-charts.storage.googleapis.com/
Save error occurred: could not download https://charts.helm.sh/stable/mongodb-7.8.10.tgz: failed to fetch https://charts.helm.sh/stable/mongodb-7.8.10.tgz : 404 Not Found
Are there any safe and/or tested alternative charts?
Replacing @stable with @bitnami in both cases seems to work.
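Concretely, the dependency entries in Chart.yaml can be pointed at the Bitnami repo, after registering the alias with helm repo add bitnami https://charts.bitnami.com/bitnami. The versions below are illustrative placeholders; pick ones compatible with your cluster:

```yaml
dependencies:
  - name: influxdb
    version: "2.6.1"      # illustrative version from the Bitnami repo
    repository: "@bitnami"
  - name: mongodb
    version: "10.26.3"    # illustrative
    repository: "@bitnami"
```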
Regards
Tracking issue until I phrase it well enough for an issue there. There are a couple of problems: first, some variables are hard-coded and cannot be overridden by environment variables; second, the k8s API doesn't respond quickly enough to discover the instances that will form a cluster, so an "artificial" sleep is included in our copy of the init script (kept in a ConfigMap), which is both ugly and ultimately unnecessary.
https://github.com/vernemq/docker-vernemq
CC @drasko
Kubernetes does not currently allow multiprotocol LoadBalancer services.
The nginx-ingress controller should listen on the UDP port and route to the CoAP adapter on the backend.
Research possible solution:
influxdata/helm-charts#194
I'm testing out Mainflux, and when I deployed using the Helm chart I noticed that I could not reach the Mainflux UI. Instead I was presented with the Jaeger query UI when browsing to http://localhost (I'm running this locally using Docker Desktop with K8s).
When inspecting the deployment, I noticed that in addition to the three Mainflux ingresses, there's a fourth one created by the Jaeger-operator.
The Mainflux UI uses / as its path, and the Jaeger-operator ingress doesn't specify one (which means it defaults to /), which is what causes the problem.
I used the following command line when installing the chart:
helm install --kube-context docker-desktop --namespace mf --create-namespace mainflux . --set defaults.replicaCount=1 --set nats.replicaCount=1 --set twins.enabled=true --set influxdb.enabled=true
Add cert-manager for easier deployment and automatic configuration of Let's Encrypt certificates:
https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.yaml
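Once cert-manager is installed, a Let's Encrypt issuer could look roughly like this. The issuer name, e-mail address, and ingress class below are placeholders to adapt:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod        # placeholder name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com    # placeholder contact e-mail
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx        # assumes the nginx ingress controller
```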
Document what needs to be done to get working TLS. With custom certs or Let's Encrypt.
Envoy should no longer be exposed to the internet or use an external IP address. That was a workaround to send MQTT directly to Envoy before it was set to go through the nginx ingress.
The Users service throws an error on startup; it can't fetch the e-mail template.
{"level":"error","message":"Failed to configure e-mailing util: Parse e-mail template failed: open email.tmpl: no such file or directory","ts":"2020-02-24T18:35:09.376024096Z"}
I suppose that password recovery won't work.
Is it possible to support autoscaling for messaging? By messaging I mean: what needs to be scaled to get autoscaling for the MQTT messaging functionality?
Probably:
Make it possible to run an external PostgreSQL database for users and things. Every new service in the helm charts (e.g. writers) should also be configurable to use either an external DB or the internal database in Kubernetes.
The PostgreSQL DB user named mainflux is by default an admin user. That is not enough privilege for the latest users service, because it creates an extension in its init script, and for that we need the superuser role. The Bitnami PostgreSQL chart that is used as a dependency allows only the postgres user to be a superuser.
No
Rename "Mainflux" to "Magistrala" in HELM charts. This will make the naming more consistent.
Must-have.
In .Values.foo.bar expressions, each of the parts needs to be a valid Go name, which can't include dashes. But by Helm convention, chart names include dashes. This causes errors when accessing values in templates like {{ .Values.postgresql-users.postgresqlUsername }}.
Solution: helm/helm#2192 (comment)
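The workaround referenced there is Helm's built-in index function, which accepts string keys and therefore keys containing dashes:

```yaml
# instead of the invalid {{ .Values.postgresql-users.postgresqlUsername }}
username: {{ index .Values "postgresql-users" "postgresqlUsername" }}
```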
Following the guide here:
We are running into issues pub/subbing with MQTT. We did a trial run locally with minikube and had zero issues.
Attempting to install and configure mainflux on a
We updated the ingress.yaml to include the different namespace.
We updated the args section of our ingress controller to include:
- '--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services'
- '--udp-services-configmap=$(POD_NAMESPACE)/udp-services'
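For reference, the tcp-services ConfigMap maps external ports to namespaced services. A sketch for the MQTT port; the controller namespace, target namespace, and service name here are assumptions about this particular deployment:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx           # wherever the ingress controller runs
data:
  "1883": "mf/mainflux-envoy:1883"   # assumed: MQTT routed to envoy in namespace mf
```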
Any attempt to connect to the broker is met with immediate disconnection.
mosquitto tools:
C:\Users\~>mosquitto_pub -d -L mqtt://DAVEEEEEEEEE:123@ingress-ip:1883/channels/1/messages -m "test-message"
Client (null) sending CONNECT
Error: The connection was lost.
When attempting to connect the mqtt adapter pod logs show:
{"level":"info","message":"Accepted new client","ts":"2021-07-06T14:18:45.895227937Z"}
{"level":"info","message":"Disconnect - Client with ID: and username DAVEEEEEEEEE disconnected","ts":"2021-07-06T14:18:45.910741149Z"}
{"level":"warn","message":"Broken connection for client: with error: failed proxying from MQTT broker to MQTT client : rpc error: code = Internal desc = internal server error","ts":"2021-07-06T14:18:45.911334353Z"}
This is as far as we can make it.
When we restart the MQTT pods, the adapter comes up first with the following logs, followed by what they show as soon as the broker is in the ready state.
(mqtt pod adapter logs) Adapter up, broker starting:
{"level":"info","message":"Broker not ready: Get \"http://localhost:8888/health\": dial tcp 127.0.0.1:8888: connect: connection refused, next try in 321.984272ms","ts":"2021-07-06T13:39:06.768509455Z"}
{"level":"info","message":"Broker not ready: Get \"http://localhost:8888/health\": dial tcp 127.0.0.1:8888: connect: connection refused, next try in 496.769422ms","ts":"2021-07-06T13:39:07.095886553Z"}
{"level":"info","message":"Broker not ready: Get \"http://localhost:8888/health\": dial tcp 127.0.0.1:8888: connect: connection refused, next try in 1.549617092s","ts":"2021-07-06T13:39:07.595524909Z"}
{"level":"info","message":"Broker not ready: Get \"http://localhost:8888/health\": dial tcp 127.0.0.1:8888: connect: connection refused, next try in 1.56916555s","ts":"2021-07-06T13:39:09.152845067Z"}
{"level":"info","message":"Broker not ready: Get \"http://localhost:8888/health\": dial tcp 127.0.0.1:8888: connect: connection refused, next try in 3.129604625s","ts":"2021-07-06T13:39:10.725029325Z"}
{"level":"info","message":"gRPC communication is not encrypted","ts":"2021-07-06T13:39:13.914909814Z"}
{"level":"info","message":"Starting MQTT proxy on port 1884","ts":"2021-07-06T13:39:13.947094859Z"}
{"level":"info","message":"Starting MQTT over WS proxy on port 8081","ts":"2021-07-06T13:39:13.94715066Z"}
(mqtt pod adapter logs) As soon as the broker container is running:
{"level":"info","message":"Accepted new client","ts":"2021-07-06T13:40:39.237124584Z"}
{"level":"info","message":"Accepted new client","ts":"2021-07-06T13:40:42.406125652Z"}
{"level":"info","message":"Accepted new client","ts":"2021-07-06T13:40:42.666002353Z"}
{"level":"info","message":"Disconnect - Client with ID: and username disconnected","ts":"2021-07-06T13:40:44.309415374Z"}
{"level":"warn","message":"Broken connection for client: with error: failed proxying from MQTT client to MQTT broker : read tcp 127.0.0.1:60052->127.0.0.1:1883: read: connection reset by peer","ts":"2021-07-06T13:40:44.312490196Z"}
{"level":"info","message":"Accepted new client","ts":"2021-07-06T13:40:45.254606088Z"}
{"level":"info","message":"Disconnect - Client with ID: and username disconnected","ts":"2021-07-06T13:40:47.415504157Z"}
{"level":"warn","message":"Broken connection for client: with error: failed proxying from MQTT client to MQTT broker : read tcp 127.0.0.1:60116->127.0.0.1:1883: read: connection reset by peer","ts":"2021-07-06T13:40:47.416312263Z"}
{"level":"info","message":"Disconnect - Client with ID: and username disconnected","ts":"2021-07-06T13:40:47.671586657Z"}
{"level":"warn","message":"Broken connection for client: with error: failed proxying from MQTT client to MQTT broker : read tcp 127.0.0.1:60118->127.0.0.1:1883: read: connection reset by peer","ts":"2021-07-06T13:40:47.672246661Z"}
We haven't made any configuration changes other than what is stated above. Any help or direction is greatly appreciated at this point. From what we can tell, the ingress controller is recognizing the tcp-services ConfigMap and sending TCP traffic to mqtt-envoy, which I assume interacts with the broker via the adapter, but the traffic dies there.
@blokovi I am also thinking about chart dependencies and umbrella charts. Something to think about for #later or the next stage of refactoring.
We can scope the Core services and make a mainflux/core chart the main chart. Create micro charts for each add-on we have, mark them with a condition: optional, then import them into the core chart. Each micro chart will have mainflux/core marked as a dependency (you can't run the Grafana add-on without core).
This way we avoid hardcoded if/else conditions in the chart, which grow all the time and will become hard to maintain.
We also make tagging and version management simpler.
People can install only core, or upgrade only add-ons without touching the core release. It's much more flexible.
It will be much easier to maintain and develop if we decouple it.
If you agree, maybe we can open an issue, mark it as an enhancement, and continue the conversation there?
Originally posted by @nmarcetic in #20 (comment)
2023-10-15 10:38:47.697 GMT [509] FATAL: password authentication failed for user "postgres"
2023-10-15 10:38:47.697 GMT [509] DETAIL: Connection matched file "/opt/bitnami/postgresql/conf/pg_hba.conf" line 1: "host all all 0.0.0.0/0 md5"
2023-10-15 10:38:48.331 GMT [510] FATAL: password authentication failed for user "postgres"
2023-10-15 10:38:48.331 GMT [510] DETAIL: Connection matched file "/opt/bitnami/postgresql/conf/pg_hba.conf" line 1: "host all all 0.0.0.0/0 md5"
2023-10-15 10:38:49.114 GMT [511] FATAL: password authentication failed for user "postgres"
2023-10-15 10:38:49.114 GMT [511] DETAIL: Connection matched file "/opt/bitnami/postgresql/conf/pg_hba.conf" line 1: "host all all 0.0.0.0/0 md5"
2023-10-15 10:38:50.381 GMT [512] FATAL: password authentication failed for user "postgres"
2023-10-15 10:38:50.381 GMT [512] DETAIL: Connection matched file "/opt/bitnami/postgresql/conf/pg_hba.conf" line 1: "host all all 0.0.0.0/0 md5"
Currently we use Fluentd and ELK for log aggregation and presentation.
It would be more aligned to Mainflux lightweight philosophy to use Loki.
Additionally - I think that operational metrics and logs can be presented on the same Grafana dashboard - which is a clear benefit.
Is there a way to do an optional installation of some services? For example, I don't want the WS adapter or CoAP, so they could be optional during helm install.
Would it be possible to get deployment instructions for minikube? I believe I set up everything correctly (including the ingress addon), but half my pods end up in an error state. Thanks!
CoAP adapter should have both UDP and TCP port opened (on the same port number) same as in docker-compose:
https://github.com/mainflux/mainflux/blob/master/docker/docker-compose.yml#L287-L288
The CoAP adapter has an HTTP listener for the /version endpoint, so opening the TCP port is needed. That endpoint could then probably be used for the liveness probe.
Related issue on core: https://github.com/mainflux/mainflux/issues/1459
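If the TCP side is opened, a liveness probe against /version could look like this. The port number is an assumption (the issue says the HTTP listener shares the CoAP port); use whatever the adapter actually listens on:

```yaml
livenessProbe:
  httpGet:
    path: /version
    port: 5683        # assumed CoAP adapter port carrying the HTTP listener
  initialDelaySeconds: 10
  periodSeconds: 10
```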
FEATURE REQUEST
Is there an open issue addressing this request? If there is, please add a "+1" reaction to the existing issue; otherwise proceed to step 2.
Describe the feature you are requesting, as well as the possible use case(s) for it.
The feature being requested is to make the configuration of charts customizable and remove any fixed values from the charts. The suggestion is to replace the hardcoded values with configurable variables. For example, instead of using {{ .Release.Name }} in the chart, the proposal is to use {{Values.defaults.msgBrokerProtocol}}://{{ .Release.Name }}-{{msgBrokerUrl}}:{{ .Values.defaults.msgBrokerPort }}. However, this alternative syntax is considered confusing and less readable compared to using {{messageBrokerUrl}}. The suggestion also includes adding a comment with a list of available URLs that can be used.
The possible use case for this feature is to provide flexibility in configuring charts and allowing users to customize values based on their specific requirements. By removing hardcoded values and introducing configurable variables, users can easily adapt the charts to their desired environment without modifying the chart's source code. This enhances reusability and simplifies the process of deploying and managing applications using these charts.
Reference : #120 (comment)
Improve getting started documentation, step by step guide how to install charts on running clusters.
It's a simple and easy flow, let's document it well.
FYI, there is no consistent way to install the Nginx ingress controller: some providers have it already, elsewhere you need to install it manually. Let's document this for AWS, GCE, and DigitalOcean at least.
The ports of Mainflux services which can be configured via environment variables should each have a corresponding, configurable Helm variable. That variable would need to be propagated into Deployments, Services, and Ingress.
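A sketch of how that could flow through the chart; the value path and service name below are hypothetical, not existing chart keys:

```yaml
# values.yaml (hypothetical key)
adapter_http:
  httpPort: 8185

# templates/adapter-http-service.yaml (fragment reading that key)
ports:
  - name: http
    port: {{ .Values.adapter_http.httpPort }}
    targetPort: {{ .Values.adapter_http.httpPort }}
```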
On Mainflux deployment we are facing certain problems with the APIs, e.g.:
- Not able to remove things; on checking the thing pod logs, it displays the error NOAUTH authentication required.
- On debugging the Redis DB, we tried to perform Redis commands, e.g. KEYS *, but they give the error NOAUTH authentication required.
- On checking the environment variables for the Redis pod, there is an env variable REDIS_PASSWORD=<REDIS_PASSWORD>, and after logging in to the Redis DB and authenticating with auth <username> <REDIS_PASSWORD>, all commands work without error.
Could this be an issue with the Redis connection?
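If the services are connecting to Redis without credentials, wiring the Redis password into the Things service environment might fix it. A sketch only: the env var name (following the Mainflux .env convention) and the secret name created by the Bitnami Redis chart are assumptions to verify against your deployment:

```yaml
env:
  - name: MF_THINGS_CACHE_PASS      # assumed Mainflux env var name
    valueFrom:
      secretKeyRef:
        name: mainflux-redis-auth   # assumed secret from the Bitnami Redis chart
        key: redis-password
```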
In order to have persistence, we need to map a volume like this. We don't need persistent logs, but data for sure :)
As the nginx ingress setup varies from one cluster to another, we need to document the importance of the tcp-services ConfigMap and the steps a user must take to configure it. Unfortunately, due to the variances in this ConfigMap's location and name, I don't see how it can be automated via this chart.
OPC-UA adapter & Twins are currently missing from the helm chart, maybe something else also.
Let's check what is missing and align the chart with the current version of Core (preparing for v1.0.0).
When trying to create an additional deployment in the same cluster alongside the existing one:
One instance is installed as
helm install mainflux -n mf
When the second instance is installed with
helm install mainflux-prod -n mf-prod
the installation is missing the mainflux-jaeger-operator-jaeger Deployment (Kubernetes resource), so pods are crashing with the error
{"level":"error","message":"Failed to init Jaeger client: lookup mainflux-prod-jaeger-operator-jaeger-agent on 10.245.0.10:53: no such host","ts":"2021-07-28T09:10:36.719536418Z"}
Maybe we should consider updating the helm charts for Jaeger.
Additionally (I don't know if this is the same issue), when I tried to install with
helm install mainflux -n mf-prod
(the same release name, just a different namespace) I got the following error:
W0727 16:54:35.902843 252425 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Error: rendered manifests contain a resource that already exists. Unable to continue with install: PodSecurityPolicy "mainflux-jaeger-operator-operator-psp" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "mf-prod": current value is "mf"
https://www.jaegertracing.io/docs/1.24/operator/
Installing the Operator
The Jaeger Operator can be installed in Kubernetes-based clusters and is able to watch for new Jaeger custom resources (CR) in specific namespaces, or across the entire cluster. There is typically only one Jaeger Operator per cluster, but there might be at most one Jaeger Operator per namespace in multi-tenant scenarios. When a new Jaeger CR is detected, an operator will attempt to set itself as the owner of the resource, setting a label jaegertracing.io/operated-by to the new CR, with the operator's namespace and name as the label's value.
The WS adapter is removed from Core with the Mainflux/1120 PR. Remove it from the helm chart; we use MQTT over WS from now on.
Currently Jaeger is deployed using old-version charts in the devops repo. This should be updated to a newer version, preferably by adding it to the dependencies with a reference to the official repo. With the current installation there are also problems when trying to install Mainflux in multiple namespaces (trying to achieve multitenancy).
Jaeger should be deployed using official latest helm charts.
Hello,
we are trying to set up Mainflux with a managed database service and we are wondering if there is SSL support for the database connection. If yes, how do we enable it and pass the SSL certificate for the database connection?
## If you want to use an external database, set this to false and change postgresqlHost
enabled: false
name: postgresql-users
postgresqlHost: localhost
postgresqlUsername: postgres
postgresqlPassword: mainflux
postgresqlDatabase: users
resources:
  requests:
    cpu: 25m
persistence:
  size: 1Gi
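For a managed database, pointing the chart at the external host would mean something like the sketch below. Mainflux services take an SSL-mode env var (e.g. MF_USERS_DB_SSL_MODE), so the chart would need to expose it; the sslMode/sslCert keys here are hypothetical chart values, not existing ones:

```yaml
postgresqlusers:
  enabled: false                              # skip the bundled Bitnami chart
  postgresqlHost: my-managed-db.example.com   # placeholder managed DB host
  postgresqlUsername: users_svc
  postgresqlDatabase: users
  # hypothetical keys, valid only if the chart templates them into
  # MF_USERS_DB_SSL_MODE / MF_USERS_DB_SSL_CERT for the Users service
  sslMode: verify-full
  sslCert: /certs/ca.pem
```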
Naming of several resources is misleading or confusing within the MQTT StatefulSet:
- adapter_mqtt-deployment.yaml: the file should be renamed from deployment to statefulset
- mainflux/vernemq is named mainflux-adapter-mqtt; a better name for it would be broker rather than adapter
- mainflux/mqtt is named mainflux-mqtt-proxy. This is partially correct, but a more accurate term would be adapter instead of proxy
- mainflux-adapter-mqtt: that is maybe OK, but if we rename proxy to adapter, that could be confusing
- the /metrics endpoint is not properly annotated for Prometheus to scrape metrics
Add support for MQTT over WS with the latest mProxy integration. The latest mainflux/mqtt image is tagged as 0.10.1 and pushed to Docker Hub.
When deploying Mainflux via helm with a hostname set via
--set "ingress.hostname=<hostname>"
Kubernetes logs the following:
All hosts are taken by other resources
Event details:
involvedObject:
  kind: Ingress
  namespace: mainflux
  name: mainflux-nginx-rewrite-ingress-http-adapter
  uid: 34c83c09-460f-48fc-ab17-387ddee4b3d4
  apiVersion: networking.k8s.io/v1beta1
  resourceVersion: '28301636'
reason: Rejected
message: All hosts are taken by other resources
source:
  component: nginx-ingress-controller
Only one ingress will ever get assigned to nginx.
There is a discussion of this on docs.nginx.com:
https://docs.nginx.com/nginx-ingress-controller/configuration/handling-host-collisions/