My helm charts
Home Page: https://pmint93.github.io/helm-charts/
License: Apache License 2.0
helm repo add pmint93 https://pmint93.github.io/helm-charts
helm repo update
You can then run helm search repo pmint93 to see available charts.
See Artifact Hub for a complete list.
See CONTRIBUTING.md for contribution guidelines.
We are using chart version 2.7.4, and I'm seeing the chart installation fail when spinning up a test cluster; the same version is in use on an existing cluster that was last deployed 90 days ago.
The module we use is the same for both and points to the same Helm chart, but we see the error below when trying to connect to a Postgres DB.
Metabase cannot initialize plugin Metabase Oracle Driver due to required dependencies. Metabase requires the Oracle JDBC driver in order to connect to Oracle databases, but we can't ship it as part of Metabase due to licensing restrictions. See https://metabase.com/docs/latest/administration-guide/databases/oracle.html for more details
Is there a workaround for this, or a fix in a later version? Any guidance would be appreciated.
When using a volume backed by block storage, it is not possible to use the same volume on different nodes at the same time: Multi-Attach error for volume ... Volume is already used by pod(s) ...
(cf. e.g. https://stackoverflow.com/q/46887118)
It would therefore be nice if the Helm chart allowed setting spec.strategy.type=Recreate, to prevent multiple pods from running at the same time (on different nodes): https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/#DeploymentSpec
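As a sketch, assuming the chart exposed a `strategy` block in values.yaml (the key name is hypothetical, not an existing chart option), the request could look like this:

```yaml
# Hypothetical values.yaml key (not currently in the chart):
strategy:
  type: Recreate

# ...which the Deployment template would render as:
# spec:
#   strategy:
#     type: Recreate
```

With Recreate, the old pod is terminated before the new one starts, so the block-storage volume is only ever attached to one node at a time.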
Do you have any idea how to deploy this image on an ARM cluster?
Hello,
Metabase Helm chart doesn't support dynamic Persistent Volume (PV) provisioning using custom storage classes. This feature would greatly enhance flexibility and scalability in Kubernetes environments.
Could we please consider adding support for dynamic PV provisioning with custom storage classes to the Helm chart for Metabase? This addition would simplify storage management and improve user experience.
Thank you for considering this request.
pgbouncer-other.ini is malformed when additional settings are provided.
File custom-values.yaml:
extraSettings:
  max_prepared_statements: '100'
Generated configmap:
helm template . -f values.yaml -f custom-values.yaml -s templates/configmap.yaml | tail -4
;; Read additional config from other file
%include /etc/pgbouncer/pgbouncer-other.ini
pgbouncer-other.ini: |-
ignore_startup_parameters = extra_float_digitsmax_prepared_statements = 100
Expected output:
helm template . -f values.yaml -f custom-values.yaml -s templates/configmap.yaml | tail -5
;; Read additional config from other file
%include /etc/pgbouncer/pgbouncer-other.ini
pgbouncer-other.ini: |-
ignore_startup_parameters = extra_float_digits
max_prepared_statements = 100
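The symptom suggests the configmap template concatenates the extra settings without emitting a newline between entries. A hedged sketch of a possible fix (the template path and value names are assumptions, not the chart's actual code):

```yaml
# templates/configmap.yaml (sketch): render each extra setting on its own line
pgbouncer-other.ini: |-
  ignore_startup_parameters = extra_float_digits
  {{- range $key, $value := .Values.extraSettings }}
  {{ $key }} = {{ $value }}
  {{- end }}
```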
Hi, thanks for hosting the chart. Did you ever try to upgrade an existing 0.36.X metabase helm chart installation to 0.37? Do you think it could be just a matter of bumping the image version in the templates, or could it involve something else?
By default the metabase pod runs as root in Kubernetes, even though the image defines the metabase user (UID 2000).
For example, the Bitnami postgresql helm chart provides secrets that only contain the needed passwords, but not the usernames.
Thus I want to be able to specify the username directly in helm values, while fetching the password from an existing secret.
Currently the helm chart does not support this, as the self-generated secret is neither generated nor used when an existingSecret is provided.
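What the requested configuration could look like, following the chart's existing existingSecret* naming convention (this is the feature being requested, not current behavior; host and secret names are examples):

```yaml
database:
  type: postgres
  host: db.example.com
  port: 5432
  dbname: metabase
  username: metabase                            # set directly in values
  existingSecret: postgres-credentials          # e.g. the Bitnami-generated secret
  existingSecretPasswordKey: postgres-password  # only the password is fetched
```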
Since Metabase 0.44, we can enable the Prometheus monitoring endpoint, but it requires exposing the port.
Documentation available here: https://www.metabase.com/docs/latest/installation-and-operation/observability-with-prometheus
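Per the linked docs, the endpoint is enabled with the MB_PROMETHEUS_SERVER_PORT environment variable; how the chart would then wire it up is sketched below (the port number and exact key layout are assumptions):

```yaml
# Container env (sketch):
env:
  - name: MB_PROMETHEUS_SERVER_PORT
    value: "9191"
# ...plus an extra container/service port for Prometheus to scrape:
ports:
  - name: metrics
    containerPort: 9191
```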
Metabase just released an upgrade to patch a severe security vulnerability: https://github.com/metabase/metabase/releases
Can we have updated charts that point to the new version? Sorry, I don't know how to work with helm, so I'm unable to help with a PR myself.
An increasingly common cloud provider configuration for databases is to require SSL for the DB connection; this requires using the MB_DB_CONNECTION_URI env var to set extra parameters.
Additionally, some cloud providers format their usernames as user@instance, which causes the upstream URI env var parser to break, as it does not expect @'s in the username field.
Per metabase/metabase#22862, when you need to set advanced MB_DB_CONNECTION_URI parameters and your username contains an @, you should pass MB_DB_CONNECTION_URI, MB_DB_USER and MB_DB_PASS together.
This helm chart currently uses an if/else workflow for mapping those three fields.
I've prepared and locally tested a pull request which has more granular control of rendering those variables to support this use case.
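Per that issue, the safe combination looks roughly like this (host, secret name, and credentials are examples):

```yaml
env:
  - name: MB_DB_CONNECTION_URI   # the URI carries the extra SSL parameters, no credentials
    value: "postgresql://db.example.com:5432/metabase?ssl=true&sslmode=require"
  - name: MB_DB_USER             # the '@' in the username never touches the URI parser
    value: "metabase@my-instance"
  - name: MB_DB_PASS
    valueFrom:
      secretKeyRef:
        name: metabase-db
        key: password
```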
networking.k8s.io/v1 Ingress has been available for long enough that I think the chart should be updated to use it.
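For reference, a minimal networking.k8s.io/v1 Ingress for the chart could look like this (host, class, and service names are examples):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: metabase
spec:
  ingressClassName: nginx
  rules:
    - host: metabase.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: metabase
                port:
                  number: 80
```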
In values.yaml it is mentioned that a JKS file should be added in the "keyStore" field. I created a JKS, but how should I import the JKS binary file into the chart?
Helm chart 2.16.5
If the migration has not finished before the end of the liveness initialDelaySeconds, the pod restarts and the next run fails with the error:
database has migration lock; cannot run migrations.
A workaround is to manually increase the initialDelaySeconds.
A solution could be to use the k8s startup probe.
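A sketch of such a startup probe, reusing the /api/health endpoint and port 3000 from the chart's existing probes (the thresholds are examples); the liveness probe only starts once the startup probe has succeeded, so long migrations no longer trigger restarts:

```yaml
startupProbe:
  httpGet:
    path: /api/health
    port: 3000
  periodSeconds: 10
  failureThreshold: 30   # tolerate up to ~5 minutes of migrations
livenessProbe:
  httpGet:
    path: /api/health
    port: 3000
  periodSeconds: 10
  timeoutSeconds: 30
```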
Hi @pmint93 - thank you for your work with this chart. It has been helpful. I need your help.
I want to switch to a postgres database.
I added a secret to my cluster with the connectionURI and referenced it in the helm chart, but it seems the deployment is not able to access the secret. I am not sure why that is.
My values:
# Backend database
database:
  # Database type (h2 / mysql / postgres), default: h2
  type: postgres
  # encryptionKey: << YOUR ENCRYPTION KEY >>
  ## Only need when you use mysql / postgres
  # host:
  # port:
  # dbname:
  # username:
  # password:
  ## Alternatively, use a connection URI for full configurability. Example for SSL enabled Postgres.
  # connectionURI: postgres://user:password@host:port/database?ssl=true&sslmode=require&sslfactory=org.postgresql.ssl.NonValidatingFactory"
  ## If a secret with the database credentials already exists, use the following values:
  existingSecret: metabase-secrets
  # existingSecretUsernameKey:
  # existingSecretPasswordKey:
  existingSecretConnectionURIKey: connectionURI
The error I am getting:
Verifying postgres Database Connection ...
2022-02-14 12:49:37,780 ERROR metabase.core :: Metabase Initialization FAILED
clojure.lang.ExceptionInfo: Unable to connect to Metabase postgres DB. {}
at metabase.db.setup$fn__33730$verify_db_connection__33735$fn__33736$fn__33737.invoke(setup.clj:102)
at metabase.db.setup$fn__33730$verify_db_connection__33735$fn__33736.invoke(setup.clj:100)
at metabase.db.setup$fn__33730$verify_db_connection__33735.invoke(setup.clj:94)
at metabase.db.setup$setup_db_BANG_$fn__33765$fn__33766.invoke(setup.clj:142)
at metabase.util$do_with_us_locale.invokeStatic(util.clj:693)
at metabase.util$do_with_us_locale.invoke(util.clj:679)
at metabase.db.setup$setup_db_BANG_$fn__33765.invoke(setup.clj:141)
at metabase.db.setup$setup_db_BANG_.invokeStatic(setup.clj:140)
at metabase.db.setup$setup_db_BANG_.invoke(setup.clj:136)
at metabase.db$setup_db_BANG_$fn__33873.invoke(db.clj:61)
at metabase.db$setup_db_BANG_.invokeStatic(db.clj:56)
at metabase.db$setup_db_BANG_.invoke(db.clj:51)
at metabase.core$init_BANG_.invokeStatic(core.clj:91)
at metabase.core$init_BANG_.invoke(core.clj:74)
at metabase.core$start_normally.invokeStatic(core.clj:135)
at metabase.core$start_normally.invoke(core.clj:129)
at metabase.core$_main.invokeStatic(core.clj:168)
at metabase.core$_main.doInvoke(core.clj:162)
at clojure.lang.RestFn.invoke(RestFn.java:397)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at metabase.core.main(Unknown Source)
Caused by: org.postgresql.util.PSQLException: Connection to :5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:303)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:223)
at org.postgresql.Driver.makeConnection(Driver.java:465)
at org.postgresql.Driver.connect(Driver.java:264)
at java.sql/java.sql.DriverManager.getConnection(Unknown Source)
at java.sql/java.sql.DriverManager.getConnection(Unknown Source)
at clojure.java.jdbc$get_driver_connection.invokeStatic(jdbc.clj:271)
at clojure.java.jdbc$get_driver_connection.invoke(jdbc.clj:250)
at clojure.java.jdbc$get_connection.invokeStatic(jdbc.clj:411)
at clojure.java.jdbc$get_connection.invoke(jdbc.clj:274)
at clojure.java.jdbc$db_query_with_resultset_STAR_.invokeStatic(jdbc.clj:1111)
at clojure.java.jdbc$db_query_with_resultset_STAR_.invoke(jdbc.clj:1093)
at clojure.java.jdbc$query.invokeStatic(jdbc.clj:1182)
at clojure.java.jdbc$query.invoke(jdbc.clj:1144)
at clojure.java.jdbc$query.invokeStatic(jdbc.clj:1160)
at clojure.java.jdbc$query.invoke(jdbc.clj:1144)
at metabase.driver.sql_jdbc.connection$can_connect_with_spec_QMARK_.invokeStatic(connection.clj:245)
at metabase.driver.sql_jdbc.connection$can_connect_with_spec_QMARK_.invoke(connection.clj:242)
at metabase.db.setup$fn__33730$verify_db_connection__33735$fn__33736$fn__33737.invoke(setup.clj:100)
... 21 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(Unknown Source)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source)
at java.base/java.net.AbstractPlainSocketImpl.connect(Unknown Source)
at java.base/java.net.SocksSocketImpl.connect(Unknown Source)
at java.base/java.net.Socket.connect(Unknown Source)
at org.postgresql.core.PGStream.createSocket(PGStream.java:231)
at org.postgresql.core.PGStream.<init>(PGStream.java:95)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:98)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:213)
... 40 more
2022-02-14 12:49:37,794 INFO metabase.core :: Metabase Shutting Down ...
2022-02-14 12:49:37,796 INFO metabase.server :: Shutting Down Embedded Jetty Webserver
2022-02-14 12:49:37,806 INFO metabase.core :: Metabase Shutdown COMPLETE
Please, how do I fix this?
When mounting extra volumes for plugins or the h2 database volume, in order to make sure Metabase has permission to access these volumes, we need to set a security context with fsGroup matching the group of the metabase user, which we can set using environment variables like MUID and MGID.
Sample Deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: metabase
    app.kubernetes.io/instance: metabase
    chart: metabase-2.14.4
    heritage: Helm
    release: metabase
  name: metabase
  namespace: metabase
spec:
  replicas: 1
  selector:
    matchLabels:
      app: metabase
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: metabase
        release: metabase
    spec:
      containers:
        - env:
            - name: MB_JETTY_HOST
              value: 0.0.0.0
            - name: MB_JETTY_PORT
              value: '3000'
            - name: MB_DB_TYPE
              value: h2
            - name: MB_DB_FILE
              value: /db/metabase.db
            - name: MB_ENCRYPTION_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  key: ENCRYPTION_KEY
                  name: metabase-db
            - name: MB_PASSWORD_COMPLEXITY
              value: normal
            - name: MB_PASSWORD_LENGTH
              value: '6'
            - name: JAVA_TIMEZONE
              value: UTC
            - name: MB_PLUGINS_DIR
              value: /plugins
            - name: MB_EMOJI_IN_LOGS
              value: 'true'
            - name: MB_COLORIZE_LOGS
              value: 'true'
            - name: MUID
              value: '1099'
            - name: MGID
              value: '10999'
          image: 'metabase/metabase:v0.49.8'
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 6
            httpGet:
              path: /api/health
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 120
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 30
          name: metabase
          ports:
            - containerPort: 3000
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /api/health
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 3
          resources: {}
          securityContext:
            runAsGroup: 1099
            runAsUser: 1099
          volumeMounts:
            - mountPath: /db
              name: db
            - mountPath: /plugins
              name: plugins
      restartPolicy: Always
      securityContext:
        fsGroup: 1099
      serviceAccount: metabase
      serviceAccountName: metabase
      volumes:
        - name: db
          persistentVolumeClaim:
            claimName: metabase-db
        - name: plugins
          persistentVolumeClaim:
            claimName: metabase-plugins
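In values.yaml terms, this could be exposed as something like the following (the key names are an assumption about how the chart might surface it, mirroring the rendered Deployment above):

```yaml
podSecurityContext:
  fsGroup: 1099        # matches MGID so mounted volumes are group-writable
securityContext:
  runAsUser: 1099      # matches MUID
  runAsGroup: 1099
```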
Metabase now uses log4j2 to configure logging.
Would it be possible to have an update for Metabase 0.44.6? It seems that Google made an unannounced deprecation that has affected the "Sign in with Google" capability in Metabase. Version 0.44.6 fixes this (relevant issue here: metabase/metabase#26184).
I do not have a lot of experience maintaining helm charts, so I don't quite know how to contribute myself. If you'd prefer that I contribute this change, any guidance on how to go about doing this would be welcome.
Horizontal scaling has been fully supported since 0.30 (August 2018).
Read more here: https://www.metabase.com/learn/administration/metabase-at-scale.html.
Please update values.yaml.
Hi!
Currently, the chart provides a monitoring.enabled value.
However, a ServiceMonitor resource for Prometheus is not provisioned.
I think it would be a great option. What do you think?
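A minimal ServiceMonitor the chart could template when monitoring.enabled is set (the label selectors and port name are examples and must match your Prometheus operator setup):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: metabase
  labels:
    release: prometheus   # must match the operator's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: metabase
  endpoints:
    - port: metrics
      interval: 30s
```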
Thank you very much for this chart.
Sincerely,
Hi,
I'd love to use this helm chart however I'm getting the following error when I deploy to my Kubernetes 1.23 cluster.
Error: unable to recognize "": no matches for kind "Ingress" in version "extensions/v1beta1"
Is it possible that Capabilities.KubeVersion.GitVersion in ingress.yaml should be replaced with Capabilities.KubeVersion.Version, as shown here? https://helm.sh/docs/chart_template_guide/builtin_objects/
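A sketch of how ingress.yaml could pick the apiVersion using the suggested field (the version cutoffs follow the Kubernetes Ingress API deprecation schedule):

```yaml
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.Version }}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.Version }}
apiVersion: networking.k8s.io/v1beta1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
```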
Thanks
Hello 😄
I'd like to deploy this metabase chart so that I can access Metabase from a domain subpath, e.g. my.domain/metabase.
I am already able to access it from the root of my.domain using the ingress-nginx ingress controller:
siteUrl: http://my.domain
ingress:
  enabled: true
  className: nginx
  hosts:
    - my.domain
  path: /
  pathType: Prefix
readinessProbe:
  initialDelaySeconds: 100
This works fine and I can access Metabase from my.domain with no problems.
But things don't work if I try to configure a subpath:
siteUrl: http://my.domain/metabase
ingress:
  enabled: true
  className: nginx
  hosts:
    - my.domain
  path: /metabase/
  pathType: Prefix
readinessProbe:
  initialDelaySeconds: 100
I have noticed some discussions where people have managed to get sub-path access working using a reverse proxy:
I'm wondering if there is some way we can bootstrap this chart to support access from a domain sub-path?
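One avenue, assuming the chart passes ingress annotations through to the Ingress resource (an assumption), is an ingress-nginx rewrite combined with the siteUrl above. This is an untested sketch; Metabase also needs its site URL to include the subpath:

```yaml
siteUrl: http://my.domain/metabase
ingress:
  enabled: true
  className: nginx
  annotations:
    # strip the /metabase prefix before forwarding to the pod
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  hosts:
    - my.domain
  path: /metabase(/|$)(.*)
  pathType: ImplementationSpecific
```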
First, thanks for providing a helm chart for Metabase!
We are currently deploying Metabase on OpenShift using OpenShift Templates and are considering a move to a deployment via (this) Helm chart. We are using OpenShift's Route (route.openshift.io/v1) instead of Ingress, therefore we would require some additional configuration options in the chart.
If you're open to such an addition, I'm more than happy to provide a pull request. Thanks!
I have set up the following values for the chart:
metabase:
  extraEnv:
    MB_REDIRECT_ALL_REQUESTS_TO_HTTPS: false
However, that environment variable is not getting passed to the pod. I only get the defaults:
JAVA_TIMEZONE : UTC
MB_COLORIZE_LOGS : true
MB_DB_TYPE : h2
MB_EMOJI_IN_LOGS : true
MB_JETTY_HOST : 0.0.0.0
MB_JETTY_PORT : 3000
MB_PASSWORD_COMPLEXITY : normal
MB_PASSWORD_LENGTH : 6
I have tried all sorts of other formats for extraEnv, but haven't gotten any to work. For example:
metabase:
  extraEnv:
    - MB_REDIRECT_ALL_REQUESTS_TO_HTTPS: false
metabase:
  extraEnv:
    - name: MB_REDIRECT_ALL_REQUESTS_TO_HTTPS
      value: false
Both result in this error warning when running helm template: cannot overwrite table with non table for extraEnv (map[])
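One thing worth checking (a guess, not verified against the chart's templates): other values quoted in this thread, such as database: and ingress:, sit at the top level rather than under a metabase: key, which suggests the metabase: nesting only applies when the chart is consumed as a subchart. Deployed directly, the value would be top-level, with the boolean quoted so it renders as a string:

```yaml
extraEnv:
  MB_REDIRECT_ALL_REQUESTS_TO_HTTPS: "false"   # quoted so YAML keeps it a string
```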
Any guidance would be appreciated. Thanks!
After installing the helm chart, I am not able to access the application. How can we expose the service?
This issue is related to metabase/metabase#27497
New Metabase release 0.45.2
I'm trying to use this Helm chart to deploy on GKE, but I can't seem to connect Metabase with the CloudSQL DB.
Hi,
I have also set up Workload Identity and connected the Google Service Account to the Kubernetes Service Account. However, the Metabase Helm install fails and can't find the CloudSQL DB.
Where do I specify the Workload Identity?
Here's the section for the backend DB configuration from my values.yml:
# Backend database
database:
  # Database type (h2 / mysql / postgres), default: h2
  type: postgres
  ## Only need when you use mysql / postgres
  host: <IP Address masked>
  port: 5432
  dbname: metabase
  username: metabase
  password: <Password masked>
  ## One or more Google Cloud SQL database instances can be made available to Metabase via the *Cloud SQL Auth proxy*.
  ## These can be used for Metabase's internal database (by specifying `host: localhost` and the port above), or as
  ## additional databases (configured at Admin → Databases). Workload Identity should be used for authentication, so
  ## that when `serviceAccount.create=true`, `serviceAccount.annotations` should contain:
  ##   iam.gke.io/gcp-service-account: your-gsa@email
  ## Ref: https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine
  googleCloudSQL:
    ## Found in Cloud Console "Cloud SQL Instance details" or using `gcloud sql instances describe INSTANCE_ID`
    ## example format: $project:$region:$instance=tcp:$port
    ## Each connection must have a unique TCP port.
    instanceConnectionNames: [<my-project:my-region:my-instance=tcp:port Masked>]
    ## Option to use a specific version of the *Cloud SQL Auth proxy* sidecar image.
    ## ref: https://console.cloud.google.com/gcr/images/cloudsql-docker/GLOBAL/gce-proxy
    sidecarImageTag: latest
    ## ref: https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine#running_the_as_a_sidecar
    resources: {}
Newer versions of Kubernetes allow specifying the ingressClassName directly on the Ingress spec, as opposed to an annotation, which is now deprecated.
Here's what I mean.
Shall I make a PR for this?
Hi Team,
We have deployed Metabase in an OpenShift environment and it is connecting to the postgres db public schema. Now we are trying to connect to a different schema, but the Metabase deployment is failing with the error below. Can someone please help?
2024-08-06 15:06:43,868 INFO db.setup :: Verifying postgres Database Connection ...
2024-08-06 15:06:44,745 INFO db.setup :: Successfully verified PostgreSQL 13.11 application database connection. ✅
2024-08-06 15:06:44,745 INFO db.setup :: Checking if a database downgrade is required...
2024-08-06 15:06:44,819 ERROR metabase.core :: Metabase Initialization FAILED
org.postgresql.util.PSQLException: ERROR: relation "databasechangelog" does not exist
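One way to point Metabase's application database at a non-default schema is the PostgreSQL JDBC currentSchema parameter on the connection URI (host and schema names below are examples; whether this clears the Liquibase databasechangelog error above would need testing in your environment):

```yaml
database:
  type: postgres
  # currentSchema tells the JDBC driver which schema to resolve unqualified names in
  connectionURI: "postgresql://db.example.com:5432/metabase?currentSchema=myschema"
```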