
helm-charts's Introduction

Grafana Community Kubernetes Helm Charts


The code is provided as-is with no warranties.

Usage

Helm must be installed to use the charts. Please refer to Helm's documentation to get started.

Once Helm is set up properly, add the repo as follows:

helm repo add grafana https://grafana.github.io/helm-charts

You can then run helm search repo grafana to see the charts.

Chart documentation is available in the grafana directory.

Contributing

We'd love to have you contribute! Please refer to our contribution guidelines for details.

License

Apache 2.0 License.

helm-charts's People

Contributors

bitprocessor, chaudum, chrisduong, daixiang0, davkal, faustodavid, hjet, jdbaldry, jkroepke, joe-elliott, krajorama, mapno, patrickabrennan, piontec, rlex, sergeyshaykhullin, sheikh-abubaker, sherifkayad, slim-bean, torstenwalter, trevorwhitney, unguiculus, verejoel, vlad-diachenko, whyeasy, xtigyro, yulez, zalegrala, zanac1986, zanhsieh



helm-charts's Issues

Allow setting of envFromSecret for datasources sidecar

Per Grafana's docs, environment variables can be used in the config YAML for datasources. This means that something like a database password could be kept as its own Secret in Kubernetes, allowing the datasource definition to live in a ConfigMap and thus be checked into source control.

Here's an example:

values.yaml for Grafana Helm chart

sidecar:
  dashboards:
    enabled: true
    envFromSecret: grafana-dashboard-secrets

Secrets

apiVersion: v1
kind: Secret
metadata:
  name: grafana-datasource-influxdb
  labels:
     grafana_datasource: "1"
type: Opaque
stringData:
  INFLUXDB_USERNAME: an-awesome-user
  INFLUXDB_PASSWORD: super-secure
  INFLUXDB_DATABASE: so-database-much-wow

Datasource ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasource-influxdb
  labels:
     grafana_datasource: "1"
data:
  datasource.yaml: |-
    # config file version
    apiVersion: 1

    # list of datasources that should be deleted from the database
    deleteDatasources: []
      # - name: Graphite
      #   orgId: 1

    # list of datasources to insert/update depending
    # what's available in the database
    datasources:
      # <string, required> name of the datasource. Required
      - name: InfluxDB
        # <string, required> datasource type. Required
        type: influxdb
        # <string, required> access mode. proxy or direct (Server or Browser in the UI). Required
        access: proxy
        # <int> org id. will default to orgId 1 if not specified
        orgId: 1
        # <string> custom UID which can be used to reference this datasource in other parts of the configuration, if not specified will be generated automatically
        uid: influxdb
        # <string> url
        url: http://influxdb:8086
        # <string> database user, if used
        user: $INFLUXDB_USERNAME
        # <string> database name, if used
        database: $INFLUXDB_DATABASE
        # <bool> mark as default datasource. Max one per org
        isDefault: true
        # <string> json object of data that will be encrypted.
        secureJsonData:
          password: $INFLUXDB_PASSWORD
        version: 1
        # <bool> allow users to edit datasources from the UI.
        editable: true

Wrong service target port on image renderer

The target port defined in image renderer service is not the one used on deployment.

Service: https://github.com/grafana/helm-charts/blob/main/charts/grafana/templates/image-renderer-service.yaml#L25
Deployment: https://github.com/grafana/helm-charts/blob/main/charts/grafana/templates/image-renderer-deployment.yaml#L76

The value imageRenderer.service.targetPort isn't specified on readme but exists on values.yaml.

Two options for a proposed MR:

  • remove imageRenderer.service.targetPort in favor of imageRenderer.service.port on service: targetPort
  • use imageRenderer.service.targetPort on deployment: containerPort and HTTP_PORT
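A minimal sketch of the first option (value paths are assumptions, not the chart's exact template) would reuse the service port as the target port so the two can never drift apart:

```yaml
# image-renderer-service.yaml (sketch of option 1; field paths are assumptions)
ports:
  - name: http
    port: {{ .Values.imageRenderer.service.port }}
    # reuse the service port so it always matches the deployment's containerPort
    targetPort: {{ .Values.imageRenderer.service.port }}
```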

Chart 5.6.6/5.6.7/5.6.8 not working with helm-operator

I tried to update with flux and helm-operator from 5.6.5 to 5.6.6 and, with the same values, I get this error:

{"caller":"release.go:316","component":"release","error":"installation failed: template: grafana/templates/deployment.yaml:46:10: executing \"grafana/templates/deployment.yaml\" at \u003cinclude \"grafana.pod\" .\u003e: error calling include: template: grafana/templates/_pod.tpl:334:23: executing \"grafana.pod\" at \u003c$value\u003e: wrong type for value; expected string; got bool","helmVersion":"v3","phase":"install","release":"grafana","resource":"kube-infrastructure:helmrelease/grafana","targetNamespace":"kube-infrastructure","ts":"2020-09-17T13:18:15.184678561Z"}
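The "expected string; got bool" error in _pod.tpl typically points at an env value that YAML parsed as a bare boolean. One common workaround (the variable name here is only an example, not taken from the report) is to quote such values so Helm receives strings:

```yaml
env:
  # quoted, so the template gets the string "true" rather than a bool
  GF_AUTH_ANONYMOUS_ENABLED: "true"
```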

Catalog migration

Good day. We are using Rancher and installed Grafana from its catalog.
How can we replace the old URL with the new one without losing data?
Thanks.

If you're seeing this Grafana has failed to load its application files

After deployment we are experiencing the following error:

If you're seeing this Grafana has failed to load its application files

 1. This could be caused by your reverse proxy settings.

 2. If you host grafana under subpath make sure your grafana.ini root_url setting includes subpath

 3. If you have a local dev build make sure you build frontend using: yarn start, yarn start:hot, or yarn build

 4. Sometimes restarting grafana-server can help

Grafana.ini

grafana.ini: |
  [analytics]
  check_for_updates = true
  [grafana_net]
  url = https://grafana.net
  [log]
  mode = console
  [paths]
  data = /var/lib/grafana/data
  logs = /var/log/grafana
  plugins = /var/lib/grafana/plugins
  provisioning = /etc/grafana/provisioning

Ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: metrics.mydomain
  namespace: grafana
  annotations:
    # nginx.ingress.kubernetes.io/rewrite-target: /$1
    # nginx.ingress.kubernetes.io/use-regex: "true"
    kubernetes.io/ingress.class: "nginx-mydomain-staging"
    cert-manager.io/cluster-issuer: letsencrypt-prod
    cert-manager.io/acme-challenge-type: dns01
    cert-manager.io/acme-dns01-provider: "cf-dns"
    nginx.ingress.kubernetes.io/server-snippet: |
      add_header Content-Security-Policy "default-src * data: 'unsafe-inline'; script-src * data: https://ssl.gstatic.com 'unsafe-inline'; img-src 'self' data: *;";
      add_header X-Frame-Options SAMEORIGIN;
      add_header X-XSS-Protection "1; mode=block;";
      add_header X-Content-Type-Options nosniff;
      add_header Referrer-Policy no-referrer;
      add_header Feature-Policy "geolocation 'self';";
spec:
  tls:
    - hosts:
      - metrics.mydomain.com
      secretName: metrics-mydomain-cert
  rules:
  - host: metrics.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 80

I believe it's related to the domain configuration in Grafana, as I can access the Grafana dashboard from my browser if I do a port-forward. The problem only happens when trying to access it via the external IP or the domain. However, I have attempted many configurations, such as:

[server]
domain = metrics.mydomain.com
[server]
domain = metrics.mydomain.com
root_url = "%(protocol)s://%(domain)s/"
[server]
domain = metrics.mydomain.com
root_url = "%(protocol)s://%(domain)s:%(http_port)s/"

Failed deployment with K8s 1.19.3

I've just upgraded my K8s from 1.18.6 to 1.19.3.

I'm seeing the Grafana pod fail to deploy due to an annotation that is getting added to the pod which does not match the PSP.

The PSP restricts the seccomp profiles to be "docker/default", whilst the pod (actually the replicaset that is managed by a deployment that creates the pod) has the annotation "runtime/default".

I can't see what is setting the annotation; my Grafana is deployed by the prometheus_operator helm chart so maybe it's this.

Should "runtime/default" be added as an allowed seccomp policy given that "docker/default" is deprecated since 1.11?

For reference, the error generated by the replicaset pod is:

Warning FailedCreate 3m9s replicaset-controller Error creating: pods "prom-grafana-7fc9d4d9b7-6j7tl" is forbidden: PodSecurityPolicy: unable to validate pod: [pod.metadata.annotations[container.seccomp.security.alpha.kubernetes.io/grafana-sc-datasources]: Forbidden: runtime/default is not an allowed seccomp profile. Valid values are docker/default pod.metadata.annotations[container.seccomp.security.alpha.kubernetes.io/grafana-sc-dashboard]: Forbidden: runtime/default is not an allowed seccomp profile. Valid values are docker/default pod.metadata.annotations[container.seccomp.security.alpha.kubernetes.io/grafana]: Forbidden: runtime/default is not an allowed seccomp profile. Valid values are docker/default]
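One way to unblock this (a sketch using the alpha seccomp annotations that PSPs of that era understood; adapt to your actual policy) is to add runtime/default to the allowed profile names:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: grafana
  annotations:
    # allow both the deprecated and the newer profile name
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: "docker/default,runtime/default"
    seccomp.security.alpha.kubernetes.io/defaultProfileName: "runtime/default"
spec:
  # ... rest of the policy unchanged
```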

migrate stable/grafana here

This issue documents the steps executed for the migration

@zanhsieh

Migration of git history

git clone git@github.com:helm/charts.git grafana-helm-charts
cd grafana-helm-charts
git filter-repo --path-glob 'stable/grafana/*' --path-rename stable/:charts/ --path LICENSE
git checkout -b main
# do not migrate deprecation PR
git reset --hard b984f87e1d932b6b1856910890068dac666c957e
git remote add origin git@github.com:grafana/helm2-grafana.git
git push origin main

With this we now have the complete history of the stable/grafana chart in the main branch of this repository.
I took main as I did not want to force push anything to master. The idea is to make main the default branch of this repository.

Check diff of master and main

git clone git@github.com:grafana/helm2-grafana.git
cd helm2-grafana
git checkout master
cd ..
git clone git@github.com:grafana/helm2-grafana.git helm-grafana
cd helm-grafana
git checkout main
cd ..

It looks as if main and master are already in sync, except for the deprecation commit, which was intentionally not merged.

diff -r helm2-grafana/ helm-grafana/charts/grafana/
Only in helm2-grafana/: .git

Create empty gh-pages branch

This is needed as it will act as helm chart repository.

git checkout --orphan gh-pages
git rm -rf .
git commit --allow-empty -m "root commit"
git push origin gh-pages

Next steps

  • create PR which contains the following changes #4

    • new version of the chart
    • updated README
    • ci pipeline
  • make main the default branch in the repository

  • protect main branch

  • protect gh-pages branch (prevent force pushes and deletions)

  • rename repository to helm-charts

issues with multiple PRs trying to use the same version number

We had two PRs, which both changed the chart version to 5.6.12:

For both of them the CI pipeline passed.

While merging #35 we did not notice that there was now a version conflict, and the release pipeline failed: https://github.com/grafana/helm-charts/runs/1190341908?check_suite_focus=true

The question is how to deal with this. One solution would be to require all branches to be up to date with the target branch.


@zanhsieh @rtluckie @maorfr should we enable that setting or do you have a better idea?

Unable to install loki stack when grafana is disabled

I have the loki stack installed in my k8s cluster and it works fine. Here are the config and install commands:

helm repo add loki https://grafana.github.io/loki/charts
helm repo update
helm upgrade --install loki loki/loki-stack --atomic --create-namespace --namespace cluster-management --values ./loki/loki.yaml

And loki.yaml

loki:
  persistence:
    enabled: true
    size: 20Gi
  config:
    table_manager:
      retention_deletes_enabled: true
      # retain logs for two weeks
      retention_period: 336h

promtail:
  enabled: false

fluent-bit:
  enabled: true

grafana:
  enabled: false

prometheus:
  enabled: false

This worked fine with 1.x but fails with 2.x for fluent-bit with the following error:

2020-12-11T07:09:02.805142608Z Fluent Bit v1.4.6
2020-12-11T07:09:02.805165872Z * Copyright (C) 2019-2020 The Fluent Bit Authors
2020-12-11T07:09:02.805172280Z * Copyright (C) 2015-2018 Treasure Data
2020-12-11T07:09:02.805175930Z * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
2020-12-11T07:09:02.805179296Z * https://fluentbit.io
2020-12-11T07:09:02.805182583Z 
2020-12-11T07:09:02.805185697Z Output plugin 'grafana-loki' cannot be loaded
2020-12-11T07:09:02.805189030Z Error: You must specify an output target. Aborting

We already have Grafana installed, so we don't want to reinstall it as part of loki-stack.

dashboards.<name>.file creates empty ConfigMap

The documentation for Import dashboards gives this example for importing a file from the local filesystem (relative to the values.yaml file, I would assume).

dashboards:
  default:
    custom-dashboard:
      # This is a path to a file inside the dashboards directory inside the chart directory
      file: dashboards/custom-dashboard.json

When trying this myself with helm2 and helm3, it results in a ConfigMap with empty values.

Below are the steps I took when trying to use this feature of the helmchart.

dashboards:
  default:
    Prometheus:
      datasource: default
      gnetId: 2
      revision: 2
    empty-dashboard:
      file: dashboards/empty-dashboard.json
    prometheus-alerts-firing:
      file: dashboards/prometheus-alerts-firing.json

empty-dashboard.json is just {} which is a copy of the example "dashboards/custom-dashboard.json" already in this git repository.

$ wc -c values.yaml dashboards/empty-dashboard.json dashboards/prometheus-alerts-firing.json
 6852 values.yaml
    3 dashboards/empty-dashboard.json
24251 dashboards/prometheus-alerts-firing.json

Launching helm install --dry-run grafana grafana/grafana -f values.yaml --debug shows the ConfigMap that would be sent to the cluster would have empty values for the dashboards, instead of the file contents.

---
# Source: grafana/templates/dashboards-json-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboards-default
  namespace: default
  labels:
    helm.sh/chart: grafana-5.6.7
    app.kubernetes.io/name: grafana
    app.kubernetes.io/instance: grafana
    app.kubernetes.io/version: "7.1.5"
    app.kubernetes.io/managed-by: Helm
    dashboard-provider: default
data:
  empty-dashboard.json:
    ""
  prometheus-alerts-firing.json:
    ""

If I install this without the --dry-run the ConfigMap I find is indeed empty, and grafana complains about being unable to load these dashboards.

Bizarrely, if I add custom-dashboard: {file: dashboards/custom-dashboard.json} exactly as it appears in the example documentation, this results in a non-empty ConfigMap, even though there is no such file "dashboards/custom-dashboard.json" on my local filesystem.

Maybe the {{.Files.Get "filename"}} in the template cannot access files outside of the helmchart package itself?
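That is indeed how `.Files.Get` behaves: it only reads files packaged inside the chart archive, so paths relative to a user's own values.yaml come back empty. A simplified sketch of how such a template might read the file (hypothetical, not the chart's exact code):

```yaml
data:
  {{- range $key, $value := .Values.dashboards.default }}
  {{- if $value.file }}
  # .Files.Get returns "" when the path is not inside the packaged chart,
  # which is why locally-referenced dashboards render as empty strings
  {{ $key }}.json: |
    {{- $.Files.Get $value.file | nindent 4 }}
  {{- end }}
  {{- end }}
```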

Question: How to set anonymous login via the helm chart?

I want to disable the login screen for anonymous users, so that anyone can get in as a read-only user. I know this is possible to set in grafana.ini, but I am failing to set it in values.yaml. This is what I have tried:

auth.anonymous:
  enabled: true
  org_role: VIEWER
  org_name: 'Main Org.'

Am I missing something? There is no example for this in values.yaml, so perhaps there is no support for this yet?
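Assuming the chart's top-level `grafana.ini` value (which the chart merges into the generated grafana.ini; verify the exact keys against your chart version), something like this should work:

```yaml
grafana.ini:
  auth.anonymous:
    enabled: true
    org_name: Main Org.
    # Grafana expects "Viewer" (capitalized), not "VIEWER"
    org_role: Viewer
```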

Restart grafana service within a container

Hello,
Tried to add Zabbix data source by installing related plugin
https://github.com/alexanderzobnin/grafana-zabbix
https://alexanderzobnin.github.io/grafana-zabbix/installation/
so, based on the documentation, I need to restart the Grafana service, but after recreating the container the new data source doesn't appear.
Is there any way to restart grafana-server without recreating the container?
Additionally, I attempted to solve the issue by changing the docker image from the alpine one to the ubuntu one, but it didn't help.

Any suggestions ?
Thank you in advance
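In a Kubernetes deployment the usual approach is to let the chart install the plugin at pod start rather than restarting Grafana inside a running container. Assuming the chart's `plugins` value (verify against your chart version):

```yaml
plugins:
  # installed on every pod start before Grafana launches
  - alexanderzobnin-zabbix-app
```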

Grafana deletes annotation/alerts database on pod start

Hi,

I started to use Grafana alerts, and I need to keep the current status of the alerts (and annotations, if possible) so that I do not receive multiple notifications from the same alert if the pod restarts or is deleted/rescheduled.

I first tried enabling storage in a PVC, and I saw the sqlite database was recreated each time a new pod started.
I also tried configuring Grafana to use a postgres database, and the content of the database is deleted each time a new pod starts.

I tried with the latest version of the helm chart, grafana version 7.2.1 and sidecar version 0.1.209.

Thanks for your time!

Add support for PVC's extra spec

I want to define different reclaim policy for the PVC I use but the chart does not allow me to do so.

I would like to request adding a persistence.extraSpec value to be appended to the spec of the created PVC.

Add plugin grafana-piechart-panel

Hello,

I am facing issues while trying to add the grafana-piechart-panel plugin to the Grafana helm chart; many of the dashboards I import use it. Can you please advise on the proper way to do this? Should I create a ConfigMap related to this plugin, or how should I do it?

Thank you
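No ConfigMap should be needed for panel plugins; assuming the chart's `plugins` value (worth verifying for your chart version), they are installed when the pod starts:

```yaml
plugins:
  - grafana-piechart-panel
```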

Import Dashboard features

Hello,

I am trying to import different dashboards into the Grafana interface using the various features provided, but I have not been able to find the approach that meets my requirements.

I want to be able to import the dashboards during the Grafana helm chart installation, and be able to modify the content of the Dashboards, update, delete, etc... without redeploying the configuration and recreating the pod.

I tried adding the dashboards under a specific volume on the pod; it worked properly and the dashboards appear in the Grafana interface, but I am unable to make modifications without recreating the pod.

I tried to use the sidecar, I created the dashboard as configmap with the appropriate labeling and attributes, but the dashboards were not appearing at all on Grafana GUI interface. And I was wondering if I am missing a specific configuration that was not clear enough for me.

In addition, I also tried the local-dashboard option: I set a GitLab URL pointing to the dashboard JSON file as the value, but the dashboard did not appear in the Grafana interface.

I am wondering if I am missing some specific information or detail. Can you please advise what I can do to get Grafana working as described?

Thank you

does not add a label for dashboards to configmap

I ran into this problem when loading dashboards (values.yaml below):

persistence:
  enable: true
  type: pvc
  existingClaim: grafana-pvc

ingress:
  enabled: true
  tls:
    - hosts:
      - my.hostname
  hosts:
    - my.hostname
rbac:
  create: true
  pspEnabled: false
  namespaced: true
testFramework:
  enabled: false

dashboards:
  dashboard1:
    dashboard1:
      file: dashboards/dashboard1.json
  dashboard2:
    dashboard2:
      file: dashboards/dashboard2.json
  dashboard3:
    dashboard3:
      file: dashboards/dashboard3.json
  dashboard4:
    dashboard4:
      file: dashboards/dashboard4.json
  dashboard5:
    dashboard5:
      file: dashboards/dashboard5.json

admin:
  existingSecret: grafana-admin-creds
  userKey: admin-user
  passwordKey: admin-password
  dashboards:
    enabled: true
  datasources:
    enabled: true

sidecar:
  dashboards:
    enabled: true
  datasources:
    enabled: true

I use the sidecar to load dashboards, and the sidecar looks for ConfigMaps with the label "grafana_dashboard".

But after the deployment, I see that the ConfigMaps are being created without the required label.

Looking at the template, all the labels there come from the grafana.labels helper, which does not include it.

How should this be handled?
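For reference, dashboards defined via the chart's `dashboards:` value are mounted directly and never flow through the sidecar; the sidecar only picks up separately created ConfigMaps carrying the watched label. A minimal hand-written example (names are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-dashboard
  labels:
    # the label the sidecar watches for
    grafana_dashboard: "1"
data:
  my-dashboard.json: |
    {}
```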

Unrecognized remote plugin message

Hi,

Running
grafana-6.1.5 7.3.1 (app)

Hitting error

t=2020-11-24T10:42:03+0000 lvl=warn msg="Running an unsigned backend plugin" logger=plugins pluginID=sbueringer-consul-datasource pluginDir=/var/lib/grafana/plugins/sbueringer-consul-datasource/dist
t=2020-11-24T10:42:03+0000 lvl=info msg="Registering plugin" logger=plugins id=sbueringer-consul-datasource
t=2020-11-24T10:42:03+0000 lvl=info msg="Registering plugin" logger=plugins id=xginn8-pagerduty-datasource
t=2020-11-24T10:42:03+0000 lvl=info msg="Registering plugin" logger=plugins id=briangann-datatable-panel
t=2020-11-24T10:42:03+0000 lvl=info msg="Registering plugin" logger=plugins id=grafana-image-renderer
t=2020-11-24T10:42:04+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=[::]:3000 protocol=http subUrl= socket=
t=2020-11-24T10:42:04+0000 lvl=eror msg="Stopped RenderingService" logger=server reason="Failed to start renderer plugin: Unrecognized remote plugin message: \n\nThis usually means that the plugin is either invalid or simply\nneeds to be recompiled to support the latest protocol."
t=2020-11-24T10:42:04+0000 lvl=eror msg="Failed to start plugin" logger=plugins.backend pluginId=sbueringer-consul-datasource error="fork/exec /var/lib/grafana/plugins/sbueringer-consul-datasource/dist/grafana-consul-plugin_linux_amd64: permission denied"
t=2020-11-24T10:42:04+0000 lvl=warn msg="plugin failed to exit gracefully" logger=plugins.backend pluginId=grafana-image-renderer
Failed to start renderer plugin: Unrecognized remote plugin message: 

This usually means that the plugin is either invalid or simply
needs to be recompiled to support the latest protocol.
t=2020-11-24T10:42:04+0000 lvl=eror msg="A service failed" logger=server err="Failed to start renderer plugin: Unrecognized remote plugin message: \n\nThis usually means that the plugin is either invalid or simply\nneeds to be recompiled to support the latest protocol."
t=2020-11-24T10:42:04+0000 lvl=eror msg="Server shutdown" logger=server reason="Failed to start renderer plugin: Unrecognized remote plugin message: \n\nThis usually means that the plugin is either invalid or simply\nneeds to be recompiled to support the latest protocol."

Not sure what has changed recently; I'm not loading the consul plugin anywhere in the helm chart.

Regards

Dashboard Sidecar with folderAnnotation insufficient privileges

When using the folderAnnotation option for loading dashboards using the sidecar I get the following error:

2020-11-27T15:19:53.372493130Z [2020-11-27 15:19:53] Working on configmap monitoring/my-config
2020-11-27T15:19:53.372537788Z [2020-11-27 15:19:53] Found a folder override annotation, placing the my-config in: SomeFolder
2020-11-27T15:19:53.372542311Z [2020-11-27 15:19:53] File in configmap home.json ADDED
2020-11-27T15:19:53.372766728Z [2020-11-27 15:19:53] Error: insufficient privileges to create SomeFolder. Skipping home.json.

values.yaml:

...
sidecar:
  image:
    repository: kiwigrid/k8s-sidecar
    tag: 1.1.0
  dashboards:
    enabled: true
    SCProvider: true
    label: grafana_dashboard
    folder: /tmp/dashboards
    defaultFolderName: null
    folderAnnotation: grafana_dashboard_folder

my-config.yaml:

apiVersion: v1
data:
  home.json: |
    {...}
kind: ConfigMap
metadata:
  name: my-config
  annotations:
    grafana_dashboard_folder: "SomeFolder"
  labels:
    grafana_dashboard: "1"
  • Chart version: 6.1.9
  • App version: 7.3.3

Grafana produces a lot of errors "/var/lib/grafana/dashboards: no such file or directory"

I have installed grafana using kube-prometheus. Grafana complains about a missing folder every few seconds...

grafana t=2020-10-19T13:23:13+0000 lvl=eror msg="Cannot read directory" logger=provisioning.dashboard type=file name=local error="stat /var/lib/grafana/dashboards: no such file or directory"
grafana t=2020-10-19T13:23:13+0000 lvl=eror msg="Failed to read content of symlinked path" logger=provisioning.dashboard type=file name=local path=/var/lib/grafana/dashboards error="lstat /var/lib/grafana/dashboards: no such file or directory"
grafana t=2020-10-19T13:23:13+0000 lvl=eror msg="failed to search for dashboards" logger=provisioning.dashboard type=file name=local error="stat /var/lib/grafana/dashboards: no such file or directory"

Dashboard import - Multiple configMap in one extraConfigmapMounts with subpath ?

Hi everyone,

does "extraConfigmapMounts" allow mounting multiple ConfigMaps (one per dashboard, in my case)?

I am trying to use this example:

apiVersion: v1
kind: Pod
metadata:
  name: config-single-file-volume-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "cat /etc/special-key" ]
      volumeMounts:
      - name: config-volume-1
        mountPath: /etc/special-keys
        subPath: cm1
      - name: config-volume-2
        mountPath: /etc/special-keys
        subPath: cm2
  volumes:
    - name: config-volume-1
      configMap:
        name: test-configmap1
    - name: config-volume-2
      configMap:
        name: test-configmap2
  restartPolicy: Never

The main goal is to use a dashboard provider whose path contains all the dashboards mounted from ConfigMaps (one per dashboard).
(https://grafana.com/docs/grafana/latest/administration/provisioning/#dashboards)

Helm v3 upgrade fails with imageRenderer due to clusterIP issue

It is a known Helm v3 issue that upgrades with --force fail if clusterIP: "" is rendered in a K8s Service manifest (helm/helm#7956). The correct solution is to render this field conditionally.

Steps to reproduce:

  • initial deploy with imageRenderer.enabled=true
  • perform helm upgrade

Results in:

Error: UPGRADE FAILED: failed to replace object: Service "monitoring-grafana-image-renderer" is invalid: spec.clusterIP: Invalid value: "": field is immutable
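A sketch of the conditional rendering (the value path is an assumption, not the chart's exact template):

```yaml
spec:
  {{- with .Values.imageRenderer.service.clusterIP }}
  # only emitted when a clusterIP is explicitly set, so --force upgrades
  # no longer try to replace the immutable empty-string field
  clusterIP: {{ . }}
  {{- end }}
  type: ClusterIP
```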

Unable to set maxUnavailable to 0

I'm trying to set maxUnavailable to 0, but it doesn't work since 0 is evaluated as false, as described under "If/Else" in https://helm.sh/docs/chart_template_guide/control_structures/

{{- if .Values.podDisruptionBudget.maxUnavailable }}
maxUnavailable: {{ .Values.podDisruptionBudget.maxUnavailable }}
{{- end }}

I think it would be simpler to map the whole .Values.podDisruptionBudget via toYaml directly, similar to the blackbox helm chart:
https://github.com/prometheus-community/helm-charts/blob/4b896d92693a3ce40a0727f1016c7e1925f64872/charts/prometheus-blackbox-exporter/templates/poddisruptionbudget.yaml#L18
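Short of the toYaml approach, the zero-vs-false pitfall can also be avoided with Sprig's hasKey, which distinguishes "set to 0" from "not set" (a sketch):

```yaml
{{- if hasKey .Values.podDisruptionBudget "maxUnavailable" }}
maxUnavailable: {{ .Values.podDisruptionBudget.maxUnavailable }}
{{- end }}
```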

how to rename existing datasource from the datasource configmap?

I tried to change the datasource name in my ConfigMap and then reload Grafana (using the kube-prometheus-stack helm chart), but it just added another datasource with the new name.

Is it possible to rename a datasource in Grafana via the ConfigMap?


kubectl get cm prom-kube-prometheus-stack-grafana-datasource -o yaml
apiVersion: v1
data:
  datasource.yaml: |-
    apiVersion: 1
    datasources:
    - access: proxy
      editable: false
      name: prometheus1
      type: prometheus
      url: http://prometheus:9090
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: prom
    meta.helm.sh/release-namespace: prometheus
  labels:
    app: kube-prometheus-stack-grafana
    app.kubernetes.io/managed-by: Helm
    chart: kube-prometheus-stack-11.1.1
    grafana_datasource: "1"
    heritage: Helm
    release: prom
  name: prom-kube-prometheus-stack-grafana-datasource
  namespace: prometheus

but if I kubectl edit this configmap and change the name from prometheus1 -> prometheusX, it will not actually rename the datasource in Grafana; it will add another one.

Is it possible to rename a Grafana datasource in k8s? (Reminder: when using a datasource from a ConfigMap, it is not possible to edit the datasource from the Grafana UI.)
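Grafana's provisioning format has a deleteDatasources list for exactly this: remove the entry under its old name while provisioning it under the new one (a sketch; adapt to the ConfigMap above):

```yaml
datasource.yaml: |-
  apiVersion: 1
  # delete the datasource under its old name first...
  deleteDatasources:
    - name: prometheus1
      orgId: 1
  # ...then provision it under the new name
  datasources:
    - name: prometheusX
      type: prometheus
      access: proxy
      url: http://prometheus:9090
      editable: false
```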

Question: How to add values to plugin datasource


I'm using alertmanager datasource and this is my config:

plugins: 
  - camptocamp-prometheus-alertmanager-datasource

datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
    - name: prometheus
      type: prometheus
      url: http://prometheus-server.monitoring.svc.cluster.local
      isDefault: true

    - name: 'Prometheus AlertManager'
      type: 'camptocamp-prometheus-alertmanager-datasource'
      url: http://prometheus-alertmanager.monitoring.svc.cluster.local
      isDefault: false
      editable: true

How can I edit severity levels or is this even possible?

Add support for `foldersFromFilesStructure`

As documented here, foldersFromFilesStructure allows you to replicate the file system structure in the dashboard folder structure as well.

Right now there is no parameter to do so:

data:
  provider.yaml: |-
    apiVersion: 1
    providers:
      - name: '{{ .Values.sidecar.dashboards.provider.name }}'
        orgId: {{ .Values.sidecar.dashboards.provider.orgid }}
        folder: '{{ .Values.sidecar.dashboards.provider.folder }}'
        type: {{ .Values.sidecar.dashboards.provider.type }}
        disableDeletion: {{ .Values.sidecar.dashboards.provider.disableDelete }}
        allowUiUpdates: {{ .Values.sidecar.dashboards.provider.allowUiUpdates }}
        options:
          path: {{ .Values.sidecar.dashboards.folder }}{{- with .Values.sidecar.dashboards.defaultFolderName }}/{{ . }}{{- end }}
{{- end}}
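The provisioning option itself is documented by Grafana; a chart-side sketch gated on a hypothetical sidecar.dashboards.foldersFromFilesStructure value (the value name is an assumption) could look like:

```yaml
options:
  path: {{ .Values.sidecar.dashboards.folder }}
  {{- if .Values.sidecar.dashboards.foldersFromFilesStructure }}
  # mirror the directory layout under path as Grafana folders
  foldersFromFilesStructure: true
  {{- end }}
```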

"range can't iterate over production" when running charts

I have installed the latest version of the helm chart and am trying to run it with my values.yml file in a specific namespace, but I keep getting the following error:

Error: template: grafana/templates/image-renderer-deployment.yaml:34:28: executing "grafana/templates/image-renderer-deployment.yaml" at <include (print $.Template.BasePath "/configmap.yaml") .>: error calling include: template: grafana/templates/configmap.yaml:15:33: executing "grafana/templates/configmap.yaml" at <$value>: range can't iterate over production

This happens whether I disable the service in the values.yml file or not. I can paste the values.yml file here if need be.
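For context, `range can't iterate over production` usually means that a values key the template ranges over as a map was instead set to the plain string `production` (for example by a typo'd `--set` flag or a misplaced key). A sketch of the kind of misconfiguration that triggers it; the key name here is only an illustration:

```yaml
# Broken: the template ranges over this key expecting a map,
# but it resolves to the string "production"
datasources: production

# Working: the key holds a map of provisioning files
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources: []
```

Running `helm template` locally with the same values file is a quick way to find which key resolved to a string.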

Dashboard sidecar stops working after 10 minutes

Describe the bug
kiwigrid/k8s-sidecar#85
After 10 minutes the dashboard sidecar stops receiving watch notifications

Originally posted on old repo: helm/charts#23565

Version of Helm and Kubernetes:
Helm version N/A
Kubernetes 1.16.10

Which chart:
stable/grafana

What happened:
After 10 minutes the dashboard sidecar stops receiving watch notifications

What you expected to happen:
The sidecar to always pick up new dashboards

How to reproduce it (as minimally and precisely as possible):
See kiwigrid/k8s-sidecar#85

New pod initialised on helm upgrade even without any changes

Any time I run helm upgrade, even without touching the config, the grafana pod is torn down and a new one is started in its place. If I run helm upgrade 5 times back-to-back, it goes through 5 pods. This seems strange.

I'm not sure what kind of information would help, please let me know.

I have a very basic setup. In Chart.yaml dependencies I have

  - name: grafana
    version: 5.6.2
    repository: https://grafana.github.io/helm-charts
  - name: prometheus
    version: 11.15.0
    repository: https://prometheus-community.github.io/helm-charts

and in values.yaml I have

grafana:
  datasources:
    datasources.yaml:
      apiVersion: 1
      datasources:
      - name: Prometheus
        type: prometheus
        url: 
        access: proxy
        isDefault: true
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
      - name: default
        orgId: 1
        folder: ""
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/default
  dashboards:
    default:
      kubernetes-cluster-monitoring-prometheus:
        gnetId: 315
        datasource: Prometheus
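One likely cause, though this is an assumption worth checking against your rendered manifests: the Grafana chart stamps a `checksum/secret` annotation on the pod template, and when `adminPassword` is not set, a random password is generated on every render, so the Secret (and therefore the checksum) changes on every upgrade and forces a rollout. Pinning the password avoids that:

```yaml
grafana:
  adminPassword: some-fixed-password   # or reference an existing Secret instead
```

Comparing `helm template` output from two consecutive runs (e.g. with `diff`) will show whether a checksum annotation is what changes.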

Zero Downtime Migration Guide from Old Helm Repository

Wondering if there is any documentation for users who were using the old Helm repositories for things such as Loki

https://grafana.github.io/loki/charts

and want to move to the new Helm repository at

https://grafana.github.io/helm-charts

No templating on ingress resource

The ingress resource does not allow templating of values.yaml entries, e.g.:

values.yaml

domain: tld.com
ingress:
  hosts:
  - grafana.{{ .Values.domain }}

The template for the ingress needs to be updated to pass these values through the tpl function before outputting.
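A minimal sketch of that fix, assuming the host loop in the ingress template is updated to pass each entry through `tpl` with the root context (`$`), so template syntax inside the value is rendered:

```yaml
rules:
{{- range .Values.ingress.hosts }}
  - host: {{ tpl . $ | quote }}
{{- end }}
```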

Support templating in `env` values

It would be helpful if values for environment variables via env could include template syntax. For example:

grafana:
  env:
    GF_RENDERING_SERVER_URL: "http://{{ .Release.Name}}-renderer:8081/render"
    GF_RENDERING_CALLBACK_URL: "http://{{ .Release.Name}}-grafana/"

Is your feature request related to a problem? Please describe.
Yes, when using values like above the {{ .Release.Name }} is not replaced with the release name. Because of this, I cannot dynamically configure the rendering server and callback URL. The user of my chart has to edit these values manually for my chart to work.

Describe the solution you'd like
If the pod template passed the values through the tpl function, they would be resolved in the pod yaml. Note that inside range the dot is rebound to the current element, so the root context $ must be passed to tpl:

{{- range $key, $value := .Values.env }}
      - name: "{{ tpl $key $ }}"
        value: "{{ tpl $value $ }}"
{{- end }}

sidecar.dashboards.enabled and persistence.enabled exclude each other

The following configuration doesn't make sense:

sidecar:
  dashboards:
    enabled: true
    label: grafana_dashboard
persistence:
  type: pvc
  enabled: true
  existingClaim: pvc-grafana-data

The first manages dashboards in ConfigMaps: the sidecar discovers ConfigMaps where dashboards are defined and loads them into the pod at /tmp/dashboards:

/dev/sda1               123.9G     22.0G    101.8G  18% /tmp/dashboards

This prevents the following mount points from being mounted:

/var/lib/grafana
/etc/grafana/grafana.ini

When persistence is enabled, Grafana writes to /var/lib/grafana, but this does not work with "sidecar.dashboards.enabled: true" because the PV is not mounted.

However, all the default k8s dashboards are created only when "sidecar.dashboards.enabled: true".

  1. It would be nice to have both: all the default k8s dashboards and persistence with a mounted PV.

For now I need to create a PV because persistence.existingClaim is mandatory when persistence is enabled, even though it is not used, since nothing is mounted at /var/lib/grafana.

If 1. is not possible, it would be good to make persistence.existingClaim optional, so we don't need to create a PV that is not used anyway.

Thank you

Add basic auth support to servicemonitor

Since the metrics endpoint is served on the same port as the main HTTP interface, it would be nice for the ServiceMonitor created by the chart to support basic auth, so the endpoint can be hidden from plain view.
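For reference, the Prometheus Operator's ServiceMonitor endpoints already accept a `basicAuth` block referencing a Secret, so the chart would mainly need to expose it through values. A hedged sketch of the rendered resource (the Secret name is hypothetical):

```yaml
spec:
  endpoints:
    - port: service
      path: /metrics
      basicAuth:
        username:
          name: grafana-metrics-auth   # hypothetical Secret in the same namespace
          key: username
        password:
          name: grafana-metrics-auth
          key: password
```

Grafana itself would also need to be configured to require credentials on the metrics endpoint for this to hide anything.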

Prepend default port names with "http-"

This will make Istio work out of the box and save time for anyone unaware of the rules behind Istio's protocol selection.

It will also reduce the number of overrides someone needs to provide to the chart (i.e. .Values.podPortName and .Values.service.portName).
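Istio infers the protocol from the port name prefix (`http-`, `grpc-`, `tcp-`, and so on). Until the chart defaults change, the workaround is to override the two values mentioned above; a sketch, with the exact names chosen here being illustrative:

```yaml
podPortName: http-grafana
service:
  portName: http-web
```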
