victoriametrics / helm-charts

Helm charts for VictoriaMetrics, VictoriaLogs and ecosystem

Home Page: https://victoriametrics.github.io/helm-charts/

License: Apache License 2.0


helm-charts's Introduction

Victoria Metrics Helm Charts


This repository contains helm charts for VictoriaMetrics and VictoriaLogs.

Add a chart helm repository

Access a Kubernetes cluster.

Add the chart helm repository with the following commands:

helm repo add vm https://victoriametrics.github.io/helm-charts/

helm repo update

List all charts and versions of the vm repository available for installation:

helm search repo vm/

The command should list the available helm charts, e.g.:

NAME                         	CHART VERSION	APP VERSION        	DESCRIPTION
vm/victoria-logs-single      	0.3.4        	v0.4.2-victorialogs	Victoria Logs Single version - high-performance...
vm/victoria-metrics-agent    	0.9.17       	v1.97.1            	Victoria Metrics Agent - collects metrics from ...
vm/victoria-metrics-alert    	0.8.7        	v1.97.1            	Victoria Metrics Alert - executes a list of giv...
vm/victoria-metrics-anomaly  	0.5.0        	v1.6.0             	Victoria Metrics Anomaly Detection - a service ...
vm/victoria-metrics-auth     	0.4.6        	v1.97.1            	Victoria Metrics Auth - is a simple auth proxy ...
vm/victoria-metrics-cluster  	0.11.11      	v1.97.1            	Victoria Metrics Cluster version - high-perform...
vm/victoria-metrics-gateway  	0.1.55       	v1.97.1            	Victoria Metrics Gateway - Auth & Rate-Limittin...
vm/victoria-metrics-k8s-stack	0.18.12      	v1.97.1            	Kubernetes monitoring on VictoriaMetrics stack....
vm/victoria-metrics-operator 	0.27.11      	0.41.1             	Victoria Metrics Operator
vm/victoria-metrics-single   	0.9.15       	v1.97.1            	Victoria Metrics Single version - high-performa...

Installing the chart

Export the default values of the victoria-metrics-cluster chart to the file values.yaml:

helm show values vm/victoria-metrics-cluster > values.yaml

Change the values in the values.yaml file according to the needs of your environment.
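As an example, a minimal override could look like this; the key names below are assumed from the chart's default values file, so verify them against the values.yaml you exported before using them:

```yaml
# Sketch: common overrides for victoria-metrics-cluster.
# Key names are assumptions; check the exported values.yaml.
vmstorage:
  retentionPeriod: 3        # keep data for 3 months
  persistentVolume:
    size: 16Gi
vmselect:
  replicaCount: 2
```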

Test the installation with the command:

helm install victoria-metrics vm/victoria-metrics-cluster -f values.yaml -n NAMESPACE --debug --dry-run

Install the chart with the command:

helm install victoria-metrics vm/victoria-metrics-cluster -f values.yaml -n NAMESPACE

List the pods by running one of these commands:

kubectl get pods -A | grep 'victoria-metrics'

# or list all resources of victoria-metrics

kubectl get all -n NAMESPACE | grep victoria

Get the application by running this command:

helm list -f victoria-metrics -n NAMESPACE

See the version history of the victoria-metrics application with the command:

helm history victoria-metrics -n NAMESPACE

How to uninstall VictoriaMetrics

Remove the application with the command:

helm uninstall victoria-metrics -n NAMESPACE

Kubernetes compatibility versions

The helm charts are tested against Kubernetes versions from 1.25 to 1.29.

List of Charts

helm-charts's People

Contributors

acondrat, aeciopires, afoninsky, amper, andrewchubatiuk, b-a-t, bon3o, brandshaide, cambaza, denisgolius, dependabot[bot], f41gh7, haleygo, iordachelm, k1rk, kevinvirs, krakazyabra, memberit, quite4work, rakesh-von, schndr, sergeimonakhov, shusugmt, tanelso2, tenmozes, valyala, victoriametrics-bot, weibo-zhao, weisdd, zekker6


helm-charts's Issues

Add "extraSecrets" to victoria-metrics-single to allow ingress basic auth

Hello,

I have set up a victoriametrics server + grafana + an ingress for the metrics sent by external agents. Everything works great. The only issue is that I had to create an extra helm chart just to create a secret for basic auth on the metrics ingress.

I have seen that the chart can already be extended via "extraContainers" and the like. Would you mind adding an "extraSecrets" option that allows deploying a basic auth secret along with the server?
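A hypothetical shape for such an "extraSecrets" value (illustrative only; this schema does not exist in the chart, and the secret name and data are placeholders) might be:

```yaml
# Hypothetical values.yaml schema for the requested feature.
extraSecrets:
  - name: metrics-basic-auth
    data:
      # htpasswd-style "user:hashed-password" entry (placeholder value)
      auth: dXNlcjokYXByMSRwbGFjZWhvbGRlcg==
```

The secret could then be referenced from the basic-auth annotation of the metrics ingress.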

Kind regards,
Michael.

Deduplication of data

Hello,

I deployed Victoria-Metrics Cluster and I need to make sure that the data collected and stored at the storage level is not duplicated. How can I verify this? For example, if we create 4 replicas for the storage pods, the data should not be stored 4 times; that is, the metrics from Prometheus should not be duplicated at the storage level.

Can you please explain the behavior of the below attributes?

  • replicationFactor (vminsert)
  • dedup.minScrapeInterval (vmselect)

Why am I unable to add VictoriaMetrics as a Prometheus datasource for Grafana once the dedup attribute is added?

In case we enable the persistentVolumeClaim for vmselect, will we have a PVC for each replica? Will there be any impact on Grafana dashboards in case of a pod failure?

Thank you.

vmagent kubelet cAdvisor scraping forbidden

On a GCE cluster, vmagent can't scrape the kubelet cAdvisor endpoint:

1s, error="unexpected status code returned when scraping \"https://10.138.0.121:10250/metrics\": 403; expecting 200; response body: \"Forbidden (user=system:serviceaccount:devops:vmagent-gcp-stg-victoria-metrics-agent, verb=get, resource=nodes, subresource=metrics)\""

The following patch fixes the problem:

 +++ b/charts/victoria-metrics-agent/templates/clusterrole.yaml
 @@ -15,6 +15,7 @@ rules:
    resources:
    - nodes
    - nodes/proxy
 +  - nodes/metrics
    - services
    - endpoints
    - pods

The Prometheus operator chart has this resource in its ClusterRole.

vmagent psp policy / runAsNonRoot

Hey,

it seems that vmagent can't run in clusters with PSP enabled.

If I want to run vmagent, it is not possible, as Kubernetes detects it running as root.

If I set the following, then the pod is allowed to start:

securityContext:
  runAsNonRoot: true

https://github.com/VictoriaMetrics/helm-charts/blob/master/charts/victoria-metrics-agent/values.yaml#L59

But then vmagent does not work properly: it seems that vmagent needs to persist a queue on disk.

...
2020-07-22T16:49:14.059Z        info    VictoriaMetrics/lib/logger/flag.go:20   flag "remoteWrite.urlRelabelConfig" = ""
2020-07-22T16:49:14.059Z        info    VictoriaMetrics/lib/logger/flag.go:20   flag "tls" = "false"
2020-07-22T16:49:14.059Z        info    VictoriaMetrics/lib/logger/flag.go:20   flag "tlsCertFile" = ""
2020-07-22T16:49:14.059Z        info    VictoriaMetrics/lib/logger/flag.go:20   flag "tlsKeyFile" = "secret"
2020-07-22T16:49:14.059Z        info    VictoriaMetrics/lib/logger/flag.go:20   flag "version" = "false"
2020-07-22T16:49:14.059Z        info    VictoriaMetrics/app/vmagent/main.go:77  starting vmagent at ":8429"...
2020-07-22T16:49:14.062Z        info    VictoriaMetrics/lib/memory/memory.go:35 limiting caches to 4898832384 bytes, leaving 3265888256 bytes to the OS according to -memory.allowedPercent=60
2020-07-22T16:49:14.062Z        error   VictoriaMetrics/lib/persistentqueue/persistentqueue.go:146      cannot open persistent queue at "vmagent-remotewrite-data/persistent-queue/696B6E463EF121CC": cannot create directory "vmagent-remotewrite-data/persistent-queue/696B6E463EF121CC": mkdir vmagent-remotewrite-data: permission denied; cleaning it up and trying again
2020-07-22T16:49:14.062Z        panic   VictoriaMetrics/lib/persistentqueue/persistentqueue.go:150      FATAL: cannot create directory "vmagent-remotewrite-data/persistent-queue/696B6E463EF121CC": mkdir vmagent-remotewrite-data: permission denied
panic: FATAL: cannot create directory "vmagent-remotewrite-data/persistent-queue/696B6E463EF121CC": mkdir vmagent-remotewrite-data: permission denied

goroutine 1 [running]:
github.com/VictoriaMetrics/VictoriaMetrics/lib/logger.logMessage(0xac7d32, 0x5, 0xc0001961b0, 0x8e, 0x4)
        github.com/VictoriaMetrics/VictoriaMetrics/lib/logger/logger.go:191 +0xa72
github.com/VictoriaMetrics/VictoriaMetrics/lib/logger.logLevelSkipframes(0x1, 0xac7d32, 0x5, 0xacb2b2, 0x9, 0xc00011fb90, 0x1, 0x1)
        github.com/VictoriaMetrics/VictoriaMetrics/lib/logger/logger.go:124 +0xd0
github.com/VictoriaMetrics/VictoriaMetrics/lib/logger.logLevel(...)
        github.com/VictoriaMetrics/VictoriaMetrics/lib/logger/logger.go:116
github.com/VictoriaMetrics/VictoriaMetrics/lib/logger.Panicf(...)
        github.com/VictoriaMetrics/VictoriaMetrics/lib/logger/logger.go:112
github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue.mustOpen(0xc0001a0040, 0x3a, 0x7ffdfcd53b94, 0x6c, 0x20000080, 0x2000000, 0x0, 0xc00019c000)
        github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue/persistentqueue.go:150 +0x367
github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue.MustOpen(...)
        github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue/persistentqueue.go:134
github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue.MustOpenFastQueue(0xc0001a0040, 0x3a, 0x7ffdfcd53b94, 0x6c, 0xc8, 0x0, 0x3a)
        github.com/VictoriaMetrics/VictoriaMetrics/lib/persistentqueue/fastqueue.go:40 +0x92
github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite.newRemoteWriteCtx(0x0, 0x7ffdfcd53b94, 0x6c, 0xc8, 0xc00019a050, 0xc, 0xc)
        github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite/remotewrite.go:171 +0x19b
github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite.Init()
        github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/remotewrite/remotewrite.go:76 +0x2a3
main.main()
        github.com/VictoriaMetrics/VictoriaMetrics/app/vmagent/main.go:79 +0x370

vmagent use env from secret for remoteWrite.basicAuth

I use FluxCD & Sealed Secrets to protect Kubernetes secrets stored in a private git repo, and the docs here say it's possible to provide environment variables instead of CLI flags.
So it should be possible to use this technique to provide basic auth credentials to vmagent, to avoid storing them in clear text in git.
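A sketch of what this could look like, assuming vmagent is started with -envflag.enable so that flags can be read from environment variables (with dots in flag names replaced by underscores, as the VictoriaMetrics docs describe); the secret and key names are examples:

```yaml
# Sketch: wiring a Kubernetes secret into vmagent's basic auth flags via env vars.
env:
  - name: remoteWrite_basicAuth_username
    valueFrom:
      secretKeyRef:
        name: vm-remote-write-auth
        key: username
  - name: remoteWrite_basicAuth_password
    valueFrom:
      secretKeyRef:
        name: vm-remote-write-auth
        key: password
```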

bump helm chart version in case of changes

Hello,

Please bump the helm chart version whenever the chart content changes.

My cluster is monitored by ArgoCD, which constantly syncs installed helm charts, and today it restarted the cluster because you changed docker image tags without changing the chart version; even though the chart version stayed the same, it had to restart the VM deployments.

There are no custom labels for pods

{{/*
Create unified labels for victoria-metrics components
*/}}
{{- define "victoria-metrics.common.matchLabels" -}}
app.kubernetes.io/name: {{ include "victoria-metrics.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}

{{- define "victoria-metrics.common.metaLabels" -}}
helm.sh/chart: {{ include "victoria-metrics.chart" . }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}

{{- define "victoria-metrics.server.labels" -}}
{{ include "victoria-metrics.server.matchLabels" . }}
{{ include "victoria-metrics.common.metaLabels" . }}
{{- end -}}

{{- define "victoria-metrics.server.matchLabels" -}}
app: {{ .Values.server.name }}
{{ include "victoria-metrics.common.matchLabels" . }}
{{- end -}}

Sometimes you need to configure additional labels for pods.
Example from grafana:

  template:
    metadata:
      labels:
        {{- include "grafana.selectorLabels" . | nindent 8 }}
{{- with .Values.podLabels }}
{{ toYaml . | indent 8 }}
{{- end }}

https://github.com/grafana/helm-charts/blob/86c43461578f0d53d9f3edc49a2a36e4a984e7a7/charts/grafana/templates/deployment.yaml#L30
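The same pattern could be applied to this chart's server pod template; a sketch, where server.podLabels is a proposed value rather than an existing one:

```yaml
# Sketch: merge a proposed .Values.server.podLabels into the server labels,
# mirroring the grafana chart's approach.
template:
  metadata:
    labels:
      {{- include "victoria-metrics.server.labels" . | nindent 8 }}
      {{- with .Values.server.podLabels }}
      {{- toYaml . | nindent 8 }}
      {{- end }}
```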

vmselect deployment not creating new volumeClaims.

The following part of the code does not create a new volume claim, but rather uses an already existing one:

{{- if .Values.vmselect.persistentVolume.enabled }}
persistentVolumeClaim:
  claimName: {{ if .Values.vmselect.persistentVolume.existingClaim }}{{ .Values.vmselect.persistentVolume.existingClaim }}{{- else }}{{ template "victoria-metrics.vmselect.fullname" . }}{{- end }}
{{- else }}
emptyDir: {}
{{- end -}}

But in the values.yaml it is stated:
persistentVolume:
  # -- Create/use Persistent Volume Claim for vmselect component. Empty dir if false. If true, vmselect will create/use a Persistent Volume Claim

It would be nice to get this fixed :)
I could also do a PR, just let me know.

Victoria metrics cluster monitoring and data access

Hello,

I am not sure if I missed something, but I am unable to monitor victoriametrics using Prometheus. Can you please advise whether, using the victoriametrics cluster helm chart, we can monitor the cluster both ways: self-monitoring and through Prometheus?

Furthermore, I need to check the data written to the storage; is there any option to do that?

Regards,

victoriametrics/vmstorage v1.54.1-cluster policy / runAsNonRoot

Hey,

it seems that vmstorage can't run in clusters with PSP enabled.

I applied these changes to values.yaml:

podSecurityContext:
  runAsUser: 1000
  runAsNonRoot: true
containerWorkingDir: "/tmp"

I'm getting the error below:

{"ts":"2021-02-18T18:39:04.605Z","level":"fatal","caller":"VictoriaMetrics/app/vmstorage/main.go:61","msg":"cannot open a storage at /storage with -retentionPeriod=1: cannot create lock file "/storage/flock.lock": open /storage/flock.lock: permission denied"}

Unexpected values when using extraArgs

Hi

I want to add extra args with the helm CLI (--set) or with a values file, but I'm facing unexpected values in the rendered template.

Test1: Testing with values file

helm template test . -f test-values.yml

test-values.yml:

vmstorage:
  extraArgs:
    search.maxUniqueTimeseries: 1000000

Result:

containers:
  - name: victoria-metrics-cluster-vmstorage
     ...
     args:
       - "--retentionPeriod=1"
       - "--storageDataPath=/storage"
       - --search.maxUniqueTimeseries=1e+06

Test2: Testing with --set argument

helm template test . --set vmstorage.extraArgs.search.maxUniqueTimeseries=1000000

containers:
  - name: victoria-metrics-cluster-vmstorage
     ...
     args:
       - "--retentionPeriod=1"
       - "--storageDataPath=/storage"
       - --search=map[maxUniqueTimeseries:1000000]

Test3: Second test with --set argument

helm template test . --set vmstorage.extraArgs=search.maxUniqueTimeseries=1000000

Error: template: victoria-metrics-cluster/templates/vmstorage-statefulset.yaml:45:43: executing "victoria-metrics-cluster/templates/vmstorage-statefulset.yaml" at <.Values.vmstorage.extraArgs>: range can't iterate over search.maxUniqueTimeseries=1000000

I'm new to the Kubernetes and Helm world, so I'm not sure about the syntax I'm using. I will try to make a PR so that values files and --set can be used without unexpected results.
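One workaround for Test1 is to quote large numbers as strings, so YAML does not coerce them to floats before the template renders them:

```yaml
vmstorage:
  extraArgs:
    # Quoted, so Helm renders --search.maxUniqueTimeseries=1000000
    # instead of the float notation 1e+06.
    search.maxUniqueTimeseries: "1000000"
```

For Test2, Helm treats unescaped dots in --set keys as nesting; escaping the dot and forcing a string should behave like the values file: `helm template test . --set-string 'vmstorage.extraArgs.search\.maxUniqueTimeseries=1000000'`.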

Label for statefulset more than 63 characters

When the helm release name is longer than 17 characters, this error happens:

create Pod victoriametrics-gcp-stg-victoria-metrics-cluster-vmstorage-0 in StatefulSet victoriametrics-gcp-stg-victoria-metrics-cluster-vmstorage failed error: Pod "victoriametrics-gcp-stg-victoria-metrics-cluster-vmstorage-0" is invalid: metadata.labels: Invalid value: "victoriametrics-gcp-stg-victoria-metrics-cluster-vmstorage-5dcccc548": must be no more than 63 characters

I am not sure whether this is OK.

For me it is not a problem to use a short name.

Possible issue with ServiceMonitor CRD ?

I'm getting the following error when using Terraform Helm provider version 1.1.1 (which uses Helm v3.1.2).

Using default values except for setting "serviceMonitor" to "True". Single Server helm chart version 0.0.6.

Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"

The problem is solved if I run the same TF again, so it looks like it tries to find the ServiceMonitor CRD too soon.

Fix error with missing "-" in configmap victoria-metrics-agent chart

https://golang.org/pkg/text/template/
victoria-metrics-agent/templates/configmap.yaml

{{ toYaml .Values.config | nindent 4 }}

with values:

config:
  global:
    scrape_interval: 10s
    external_labels:
      cluster: k8s

  # scrape self by default
  scrape_configs:

configmap result:

apiVersion: v1
data:
  scrape.yml: |2

        cluster: k8s
      external_labels: null
    global:
      scrape_interval: 10s
    scrape_configs:
      ...

After the fix:

{{- toYaml .Values.config | nindent 4 }}

the configmap renders correctly:

apiVersion: v1
data:
  scrape.yml: |
    global:
      external_labels:
        cluster: k8s-dv
      scrape_interval: 10s
    scrape_configs:
      ...

Duplicate Scrape Target Error on Kube-DNS

We're getting this error with the default configuration in helm when installing in k8s in AWS (EKS).

error VictoriaMetrics/lib/promscrape/scraper.go:269 skipping duplicate scrape target with identical labels; endpoint=http://10.100.30.94:9153/metrics, labels={eks_amazonaws_com_component="kube-dns", instance="10.100.30.94:9153", job="kubernetes-service-endpoints", k8s_app="kube-dns", k8s_cluster_name="k8s-devops", kubernetes_io_cluster_service="true", kubernetes_io_name="CoreDNS", kubernetes_name="kube-dns", kubernetes_namespace="kube-system", kubernetes_node="ip-10-100-30-216.us-east-2.compute.internal"}; make sure service discovery and relabeling is set up properly

I haven't figured out how to fix this one yet, as the documentation has suggestions for fixing this issue when it comes up on a pod. However, it would be great if the default config worked with core k8s components out of the box without logging errors.

Changing vmstorage.persistentVolume.size has no effect

Hey,

I am deploying the cluster version helm charts, in aws on ec2.
When trying to increase storage for vmstorage in configuration vmstorage.persistentVolume.size, it has no effect.
At the moment it has the value size: 8Gi and I can see there are 2 EBS volumes with a capacity of 8GiB (I have 2 replicas for vmstorage). When increasing it to size: 16Gi and running helm upgrade, the upgrade succeeds but nothing changes in EBS and the nodes still have EBS volumes of only 8GiB attached.

Am I doing something wrong?
Thanks!

UPDATE:
After playing with it a bit more, I am now getting this error when trying to update the size parameter:
Error: UPGRADE FAILED: cannot patch "victoria-victoria-metrics-cluster-vmstorage" with kind StatefulSet: StatefulSet.apps "victoria-victoria-metrics-cluster-vmstorage" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden

How can I increase the size of the EBS?

Several issues in victoria-metrics-single

Found several issues using the new chart version:

  • deployment mode exists (server.statefulSet.enabled = false) but there is deployment resource defined
  • ClusterIP is defined to None in the service resource defined in statefulSet mode
  • Most options aren't available in the service resource defined in statefulSet mode (type, ExternalIPs...)

Will provide PR soon.

Setting up scraping with victoria metrics cluster

Hey,

I wanted to setup monitoring of VM-cluster with another instance of VM-cluster, I saw that VM-single has this option:

VictoriaMetrics can be used as drop-in replacement for Prometheus for scraping targets configured in prometheus.yml config file according to the specification. Just set -promscrape.config command-line flag to the path to prometheus.yml config

How can I do this with the helm chart for VM-cluster? Or is this only supported in VM-single?

Thanks!

Possibility to use victoria-metrics-agent Helm chart without scrape.yml config map

Hey,

It would be great to have the possibility to provision vmagent without a scrape configuration, to use it as a buffering proxy between Prometheus (remote_write) and Victoria Metrics.

As I can see, for now the ConfigMap with the scrape.yml configuration is a mandatory component, since the Deployment's container definition in the pod template has

...
          args:
            - -promscrape.config=/config/scrape.yml
...

hardcoded.

Thank you for your time!

vmoperator always disables the prometheus converter

Bug on chart https://github.com/VictoriaMetrics/helm-charts/tree/master/charts/victoria-metrics-operator

The deployment of the operator is always disabling the prometheus converter.

My values.yaml

operator:
  # -- By default, operator converts prometheus-operator objects.
  disable_promethues_converter: "false"
  # -- By default, operator creates psp for its objects.
  psp_auto_creation_enabled: "false"
  # -- Enables ownership reference for converted prometheus-operator objects,
  # it will remove corresponding victoria-metrics objects in case of deletion prometheus one.
  enable_converter_ownership: "false"

After deploying, the spec of the deployment is:

spec:
      containers:
      - args:
        - --zap-log-level=info
        - --enable-leader-election
        command:
        - manager
        env:
        - name: WATCH_NAMESPACE
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: OPERATOR_NAME
          value: victoria-metrics-operator
        - name: VM_ENABLEDPROMETHEUSCONVERTER_PODMONITOR
          value: "false"
        - name: VM_ENABLEDPROMETHEUSCONVERTER_SERVICESCRAPE
          value: "false"
        - name: VM_ENABLEDPROMETHEUSCONVERTER_PROMETHEUSRULE
          value: "false"
        - name: VM_ENABLEDPROMETHEUSCONVERTER_PROBE
          value: "false"
        - name: VM_PSPAUTOCREATEENABLED
          value: "false"
        - name: VM_ENABLEDPROMETHEUSCONVERTEROWNERREFERENCES
          value: "false"
        image: victoriametrics/operator:v0.5.0
        imagePullPolicy: IfNotPresent
        name: victoria-metrics-operator

The expected behaviour:

  • VM_ENABLEDPROMETHEUSCONVERTER_PODMONITOR set to "true"
  • VM_ENABLEDPROMETHEUSCONVERTER_SERVICESCRAPE set to "true"
  • VM_ENABLEDPROMETHEUSCONVERTER_PROMETHEUSRULE set to "true"
  • VM_ENABLEDPROMETHEUSCONVERTER_PROBE set to "true"

Even when commenting out operator.disable_promethues_converter in the values file, it is still set to "false".

I noticed the change after upgrading the chart from 0.1.1 to 0.1.2.

Thanks

How to use replicationFactor in helm charts

Hey,

How can I set replicationFactor for the vminsert pods? Should I add it in the extraArgs field for vminsert and then also add -dedup.minScrapeInterval=1ms in extraArgs for vmselect?
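A values sketch along those lines; the flag names come from the VictoriaMetrics cluster docs, while the chart keys are assumed:

```yaml
# Sketch: replication on insert, deduplication on select.
vminsert:
  extraArgs:
    replicationFactor: 2
vmselect:
  extraArgs:
    dedup.minScrapeInterval: 1ms
```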

Thanks!

Unable to supply multiple storageNode

I am deploying a cluster where the vmstorage nodes run on traditional virtual machines, while vminsert and vmselect run in k8s, deployed via the Helm chart.

Attempting to supply multiple values for storageNode in extraArgs for either vminsert or vmselect results in only the last value being rendered, with no Helm warning being thrown. This appears to be caused by using the same key storageNode multiple times.

Supplying this in the values file:

  extraArgs:
    replicationFactor: 2
    storageNode: fqdn1.mydomain.com:8400
    storageNode: fqdn2.mydomain.com:8400
    storageNode: fqdn3.mydomain.com:8400
    storageNode: fqdn4.mydomain.com:8400

This results in the following for a given pod

Containers:
  victoria-metrics-cluster-vminsert:
    Container ID:  docker://[REDACTED]
    Image:         victoriametrics/vminsert:v1.40.0-cluster
    Image ID:      docker-pullable://victoriametrics/vminsert@sha256:d2683a56b51a560e93b3a7ed5e0bdc660b7349d99f206fb4ae4ee6627c89e366
    Port:          8480/TCP
    Host Port:     0/TCP
    Args:
      --replicationFactor=2
      --storageNode=fqdn4.mydomain.com:8400

I also attempted to pass in a value like storageNode: "fqdn1.mydomain.com:8400 --storageNode=fqdn2.mydomain.com:8400 ... " but that definitely made things quite unhappy.

Should storageNode be moved out of extraArgs and turned into an array instead that can accept multiple values?
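A hypothetical array-based schema (illustrative only; not the current chart API) could look like:

```yaml
# Proposed: a dedicated list the template expands into repeated
# --storageNode=... flags, instead of a map that cannot repeat keys.
vminsert:
  storageNodes:
    - fqdn1.mydomain.com:8400
    - fqdn2.mydomain.com:8400
    - fqdn3.mydomain.com:8400
    - fqdn4.mydomain.com:8400
```

As a stop-gap, VictoriaMetrics array flags also accept comma-separated values, so `storageNode: "fqdn1.mydomain.com:8400,fqdn2.mydomain.com:8400"` in extraArgs may work within the current schema.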

vmalert - how to specify more than 2 alert manager end points

Hello,
It seems like in the current version of the vmalert chart it is only possible to specify two endpoints for Alertmanager:
one via server.notifier.alertmanager.url and a second one via extraArgs, by specifying the notifier.url key.
But we have 3 Alertmanagers. How would I add a third one? It seems the current chart doesn't support it?
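Since -notifier.url is an array flag, and VictoriaMetrics array flags support comma-separated values, one possible workaround (hostnames are examples) is:

```yaml
# Sketch: pack the extra Alertmanager endpoints into one comma-separated flag.
server:
  extraArgs:
    notifier.url: "http://alertmanager-2:9093,http://alertmanager-3:9093"
```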

Thank you!

default values for vm/victoria-metrics-alert not work

actual:

> helm repo add vm https://victoriametrics.github.io/helm-charts/
...
> helm repo update
...
> helm show values vm/victoria-metrics-alert > alert.values.yaml
> helm install vmalert --dry-run vm/victoria-metrics-alert -f alert.values.yaml 
Error: template: victoria-metrics-alert/templates/server-deployment.yaml:2:49: executing "victoria-metrics-alert/templates/server-deployment.yaml" at <len .Values.server.config.alerts.groups>: error calling len: len of untyped nil

expected:
installation succeeds

vm-agent persistent storage support

vmagent uses the local fs to keep a buffer in case the remote_write endpoint is not available. Currently, there is no way to mount an emptyDir/persistent volume for this storage.

  1. Data will be lost after a pod restart.
  2. The service exits with an error if the following security option is set: "securityContext.readOnlyRootFilesystem = true"
  3. It's not good practice to write to the root fs in containers.

Recommended default behaviour:

  • create emptyDir with possibility to change to other storages
  • set reasonable default "remoteWrite.maxDiskUsagePerURL" to avoid huge disc usage
  • point "remoteWrite.tmpDataPath" to a mounted storage folder
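The recommended defaults above could be sketched as follows; the extraVolumes/extraVolumeMounts keys, mount path, and size limit are assumptions, not current chart values:

```yaml
# Sketch: buffer vmagent's persistent queue on an emptyDir volume
# and cap its disk usage. Paths and sizes are illustrative defaults.
extraArgs:
  remoteWrite.tmpDataPath: /tmpData
  remoteWrite.maxDiskUsagePerURL: "1073741824"   # cap buffer at 1GiB per URL
extraVolumes:
  - name: remote-write-buffer
    emptyDir: {}
extraVolumeMounts:
  - name: remote-write-buffer
    mountPath: /tmpData
```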

P.S.: the helm chart has a pod security policy, but it's not enabled in the role, so setting this flag in values.yaml will be ignored:
https://github.com/VictoriaMetrics/helm-charts/blob/master/charts/victoria-metrics-agent/templates/clusterrole.yaml

unused existingClaim for vmstorage persistentVolumeClaim in vmstorage-statefulset.yaml

The vmselect-deployment.yaml file uses existingClaim as below:

{{- if .Values.vmselect.persistentVolume.enabled }}
persistentVolumeClaim:
  claimName: {{ if .Values.vmselect.persistentVolume.existingClaim }}{{ .Values.vmselect.persistentVolume.existingClaim }}{{- else }}{{ template "victoria-metrics.vmselect.fullname" . }}{{- end }}
{{- else }}
emptyDir: {}
{{- end -}}

but I cannot find any use of existingClaim in vmstorage-statefulset.yaml.

Send InfluxDB data to Victoriametrics cluster

Hi all,

I reviewed the charts available in the repo and I can see that in single mode port 8428 is exposed, along with other ways of writing Influx data. But I can't find the same thing for the cluster mode. How can I achieve that using the cluster?

Thanks!!

authentication

Please add support for authentication.

It looks like those flags could be set via server.extraArgs, but support for Kubernetes secrets would be preferred.
https://github.com/VictoriaMetrics/VictoriaMetrics/wiki/Single-server-VictoriaMetrics#security

In case this should be handled exclusively at the ingress level (as mentioned here: VictoriaMetrics/VictoriaMetrics#263 (comment)), support for common ingress controllers (nginx, traefik, etc.) would be helpful, e.g. generating a secret and adding the annotation to the ingress.
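For ingress-nginx, for example, the generated secret and annotations could look like this (the secret name is a placeholder, and the chart's ingress values layout is assumed):

```yaml
# Sketch: basic-auth via ingress-nginx annotations pointing at a
# pre-created htpasswd secret.
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: victoria-metrics-basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
```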

Creating backups using the single helm chart

I've been using victoriametrics for a few months now and it's an absolute breeze to set up with these helm charts. But what I'm missing is a way to back up to a network drive (CIFS/SMB in my case). I'd gladly add this to the helm chart with a CronJob, but upon researching this I found that the only way to back up seems to be direct access to the data dir. Is there a way to back up metrics over TCP/UDP from another pod?

Bug: vmselect deployment should use selectNode argument to support metrics deletion

Hi,

deleting a metric via:
https://fqdn/delete/0/prometheus/api/v1/admin/tsdb/delete_series?match[]=metric-name

results in the following error in the vmselect log:

2020-11-09T08:23:49.299Z panic VictoriaMetrics/app/vmselect/prometheus/prometheus.go:304 BUG: missing -selectNode flag
2020-11-09T08:23:49.300Z error net/http/server.go:3093 http: panic serving 10.28.1.3:36772: BUG: missing -selectNode flag
goroutine 20556321 [running]:
net/http.(*conn).serve.func1(0xc0252c83c0)
        net/http/server.go:1801 +0x147
panic(0x9d2860, 0xc025739c80)
        runtime/panic.go:975 +0x3e9
github.com/VictoriaMetrics/VictoriaMetrics/lib/logger.logMessage(0xa4d4fa, 0x5, 0xc02574b480, 0x1d, 0x4)
        github.com/VictoriaMetrics/VictoriaMetrics/lib/logger/logger.go:191 +0xa0c
github.com/VictoriaMetrics/VictoriaMetrics/lib/logger.logLevelSkipframes(0x1, 0xa4d4fa, 0x5, 0xa5c44a, 0x1d, 0x0, 0x0, 0x0)
        github.com/VictoriaMetrics/VictoriaMetrics/lib/logger/logger.go:124 +0xd1
github.com/VictoriaMetrics/VictoriaMetrics/lib/logger.logLevel(...)
        github.com/VictoriaMetrics/VictoriaMetrics/lib/logger/logger.go:116
github.com/VictoriaMetrics/VictoriaMetrics/lib/logger.Panicf(...)
        github.com/VictoriaMetrics/VictoriaMetrics/lib/logger/logger.go:112
github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/prometheus.resetRollupResultCaches()
        github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/prometheus/prometheus.go:304 +0x3ef
github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/prometheus.DeleteHandler(0xbfe25d054d2d8760, 0x174d1191d6a4e, 0xda5540, 0xc02c041638, 0xc01fa24a00, 0x2a, 0xc025739ba0)
        github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/prometheus/prometheus.go:294 +0x3eb
main.deleteHandler(0xbfe25d054d2d8760, 0x174d1191d6a4e, 0xda5540, 0xaebbc0, 0xc025743a10, 0xc01fa24a00, 0xc025743a40, 0xc02c041638, 0x34)
        github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/main.go:311 +0xc5
main.requestHandler(0xaebbc0, 0xc025743a10, 0xc01fa24a00, 0x0)
        github.com/VictoriaMetrics/VictoriaMetrics/app/vmselect/main.go:178 +0x44b
github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver.handlerWrapper(0xc0003103b0, 0xaebbc0, 0xc025743a10, 0xc01fa24a00, 0xa78ea0)
        github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver/httpserver.go:221 +0x267
github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver.gzipHandler.func1(0xaebbc0, 0xc025743a10, 0xc01fa24a00)
        github.com/VictoriaMetrics/VictoriaMetrics/lib/httpserver/httpserver.go:156 +0xa7
net/http.HandlerFunc.ServeHTTP(0xc000302220, 0xaebec0, 0xc038840700, 0xc01fa24a00)
        net/http/server.go:2042 +0x44
net/http.serverHandler.ServeHTTP(0xc00033c000, 0xaebec0, 0xc038840700, 0xc01fa24a00)
        net/http/server.go:2843 +0xa3
net/http.(*conn).serve(0xc0252c83c0, 0xaed480, 0xc13d824380)
        net/http/server.go:1925 +0x8ad
created by net/http.(*Server).Serve
        net/http/server.go:2969 +0x36c

This argument is needed so that the vmselect instance receiving the delete can tell the other vmselect pods to update/invalidate their caches.

From the man page:

-selectNode array
        Addresses of vmselect nodes; usage: -selectNode=vmselect-host1:8481 -selectNode=vmselect-host2:8481
        Supports array of values separated by comma or specified via multiple flags.
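Since the flag supports comma-separated values, the chart could pass the peer addresses through extraArgs; a sketch with hypothetical pod and headless-service names:

```yaml
# Sketch: point each vmselect at its peers so cache invalidation propagates.
vmselect:
  extraArgs:
    selectNode: "vmselect-0.vmselect-headless:8481,vmselect-1.vmselect-headless:8481"
```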

Use VictoriaMetrics as datasource in Grafana

Hello,

Based on the VictoriaMetrics documentation we can use VictoriaMetrics as a Prometheus datasource in Grafana. I am trying to do it with no luck; I am not sure if I am adding the values correctly. Can you please advise?

From VictoriaMetrics documentation:
"Grafana setup
Create Prometheus datasource in Grafana with the following url:

http://<victoriametrics-addr>:8428
Substitute <victoriametrics-addr> with the hostname or IP address of VictoriaMetrics.

Then build graphs with the created datasource using PromQL or MetricsQL. VictoriaMetrics supports Prometheus querying API, which is used by Grafana."
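The same datasource can also be declared via Grafana's file-based provisioning instead of the UI. A minimal sketch, assuming a single-node deployment reachable at the in-cluster service name victoria-metrics-single-server (adjust the name to your release):

```yaml
# Grafana provisioning file, e.g. /etc/grafana/provisioning/datasources/victoriametrics.yaml
apiVersion: 1
datasources:
  - name: VictoriaMetrics
    type: prometheus          # VictoriaMetrics speaks the Prometheus querying API
    access: proxy
    url: http://victoria-metrics-single-server:8428
```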

Configuration of Grafana section on Kube-prometheus-stack values.yaml:

additionalDataSources:
  - name: prometheus-Victoria-metrics
    type: prometheus
    url: http://IP:Port/select/0/prometheus/
    editable: true
    orgId: 1
    version: 1

Note that I deployed the victoria-metrics-cluster chart.

Could you also advise which datasource types can be used for Grafana dashboards?

Thank you.

Incorrect relabel_config keep_if_equal for service endpoints.

Chart version 0.7.6 for victoria-metrics-agent has the following relabel_config for the kubernetes-pods, kubernetes-service-endpoints, and kubernetes-service-endpoints-slow jobs:

      - action: keep_if_equal
        source_labels:
        - __meta_kubernetes_pod_annotation_prometheus_io_port
        - __meta_kubernetes_pod_container_port_number

This is correct for the kubernetes-pods job, but for the service-endpoint jobs it means endpoints are not scraped when the backing pods lack the Prometheus scrape annotations, even if the service itself carries them. To correct this, the service-endpoint jobs should compare against the service annotation instead:

      - action: keep_if_equal
        source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_port
        - __meta_kubernetes_pod_container_port_number
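For context, the corrected rule would sit in the service-endpoints scrape job roughly as below (a sketch; the surrounding relabel rules from the chart are elided):

```yaml
- job_name: kubernetes-service-endpoints
  kubernetes_sd_configs:
    - role: endpoints
  relabel_configs:
    # Keep a target only when the service's prometheus.io/port annotation
    # matches the container port discovered for the backing pod.
    - action: keep_if_equal
      source_labels:
        - __meta_kubernetes_service_annotation_prometheus_io_port
        - __meta_kubernetes_pod_container_port_number
```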

Using vmagent as datasource in Grafana

Hello,

I'm using VictoriaMetrics in Kubernetes, deployed via the operator, with vmagent as a drop-in replacement for Prometheus.

Which datasource should I use in Grafana to add vmagent?

Can't install chart version 0.3.0

helm install my-release vm/victoria-metrics-cluster --dry-run
Error: YAML parse error on victoria-metrics-cluster/templates/clusterrole.yaml: error converting YAML to JSON: yaml: line 7: did not find expected key

This was with Helm 3, but I do get the same error with Helm 2. (Not during dry run - Helm 2 happily does a dry run but gives the error above when you attempt a real install. Helm 3 gives the error either way.)

vminsert service doesn't give a way to expose UDP 2003

I enabled the Graphite receiver with:

vminsert:
  extraArgs:
    graphiteListenAddr: ":2003"

However, there's no way in vminsert-service.yaml to add UDP/2003 to the vminsert Service, so that it can be forwarded to the vminsert Pods.
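For reference, the Service entry the template would need to render looks roughly like this (the port name is an assumption):

```yaml
# Sketch of the desired additional port on the vminsert Service
ports:
  - name: graphite-udp
    port: 2003
    targetPort: 2003
    protocol: UDP
```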
