percona / percona-helm-charts

Collection of Helm charts for Percona Kubernetes Operators.

Home Page: https://www.percona.com/software/percona-kubernetes-operators

License: Other

Shell 4.98% Dockerfile 13.26% Mustache 73.85% Smarty 7.91%

percona-helm-charts's Introduction


Percona Helm Charts

Percona is committed to simplifying the deployment and management of databases on Kubernetes. Helm enables users to package, run, share, and manage even complex applications. This repository contains Helm charts for the following Percona products.

Useful links:

Installing Charts from this Repository

You will need Helm v3 for the installation. See detailed installation instructions in the README file of each chart.

Contributing

Percona welcomes and encourages community contributions to help improve Percona Kubernetes Operators as well as other Percona projects.

See the Contribution Guide for more information.

Join Percona Kubernetes Squad!

                    %                        _____                
                   %%%                      |  __ \                                          
                 ###%%%%%%%%%%%%*           | |__) |__ _ __ ___ ___  _ __   __ _             
                ###  ##%%      %%%%         |  ___/ _ \ '__/ __/ _ \| '_ \ / _` |            
              ####     ##%       %%%%       | |  |  __/ | | (_| (_) | | | | (_| |            
             ###        ####      %%%       |_|   \___|_|  \___\___/|_| |_|\__,_|           
           ,((###         ###     %%%        _      _          _____                       _
          (((( (###        ####  %%%%       | |   / _ \       / ____|                     | | 
         (((     ((#         ######         | | _| (_) |___  | (___   __ _ _   _  __ _  __| | 
       ((((       (((#        ####          | |/ /> _ </ __|  \___ \ / _` | | | |/ _` |/ _` |
      /((          ,(((        *###         |   <| (_) \__ \  ____) | (_| | |_| | (_| | (_| |
    ////             (((         ####       |_|\_\\___/|___/ |_____/ \__, |\__,_|\__,_|\__,_|
   ///                ((((        ####                                  | |                  
 /////////////(((((((((((((((((########                                 |_|   Join @ percona.com/k8s   

You can get early access to new product features, invite-only "ask me anything" sessions with Percona Kubernetes experts, and monthly swag raffles. Interested? Fill in the form at percona.com/k8s.

Submitting Bug Reports

If you find a bug related to one of these Helm charts, please submit a report to the appropriate project's Jira issue tracker:

Learn more about submitting bugs, new feature ideas, and improvements in the Contribution Guide.

percona-helm-charts's People

Contributors

2zz, agelwarg, alex-souslik-hs, alexdga, appunni-dishq, baurmatt, br0var, bupychuk, cap1984, dalbani, denisok, dragosboca, egegunes, hlesesne, hors, johnwc, kyriosgn0, mkatana-silkycoders, nmarukovich, paulczar, pbabilas, qmiinh, ratio2, sachinhr, slavautesinov, spron-in, tabhay, thomaspetit, tplavcic, yevhenkizin


percona-helm-charts's Issues

Helm values for MongoDB Helm charts should allow override of individual replica set configurations, like size, requests, limits, etc.

Copied from the roadmap project.

The current Helm values file for MongoDB uses an array structure for configuring replica sets. This is difficult to override using the helm --set option, as it overrides the entire array. Please refer to the recommended way, i.e. using maps, here. Helm charts should allow overriding individual replica set configurations.
The current configuration for replsets is as follows:
replsets:
  - name: rs0
    size: 3
    affinity:
      affinityEnabled: true
      advanced:
        nodeAffinity:
          reqdForSchIgnForExec:
            nodeSelTerms:
              matchExpr:

It could be changed to a map structure:

replsets:
  rs0:
    size: 3
    affinity:
      affinityEnabled: true
      advanced:
        nodeAffinity:
          reqdForSchIgnForExec:
            nodeSelTerms:
              matchExpr:

This would be very useful for overriding default configurations in Helm using deployment tools like Porter.
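To illustrate the difference (a hedged sketch; the exact keys depend on the chart), Helm deep-merges maps from an overlay values file but replaces lists wholesale, so with the current array layout a user has to repeat the whole replica set entry to change one field:

# overlay with the current array layout: the whole rs0 element must be repeated,
# otherwise fields omitted here are lost when Helm replaces the list
replsets:
  - name: rs0
    size: 5

# overlay with the proposed map layout: only the changed key is needed,
# the rest of rs0 is merged from the chart defaults
replsets:
  rs0:
    size: 5

The same applies on the command line, e.g. --set replsets.rs0.size=5 would address a single replica set by name.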

[CLOUD-824] Helm charts are missing annotations capabilities

It is not possible right now to set annotations on core objects when deploying the operators and custom resources.

As an example, it causes the problem when I want to use argocd and helm charts, as I need to add annotations to resources to specify waves (ordering).

So we should have the capability to set annotations on CRs and operator deployments.
We already have a JIRA issue: https://jira.percona.com/browse/CLOUD-824
Creating a GitHub issue for better visibility.
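A hedged sketch of what such a capability could look like in a chart's values file (these keys are hypothetical, not the current chart API):

# hypothetical values.yaml additions
operator:
  annotations:
    argocd.argoproj.io/sync-wave: "0"   # applied to the operator Deployment
annotations:
  argocd.argoproj.io/sync-wave: "1"     # propagated to the custom resource rendered by the chart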

PMM: provide a way to supply sensitive configuration items

At the moment the only way to configure, for example, Grafana is to add env variables to pmmEnv. Normally we supply sensitive configuration items using Kubernetes Secrets, bypassing Helm. This could be made possible by adding an optional value
pmmEnvExistingSecret which would expand into envFrom with this secret if not empty.
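A minimal sketch of how the proposed value could render in the PMM pod spec, assuming a new pmmEnvExistingSecret value as described above (this is not an existing chart option):

# values.yaml (proposed)
pmmEnvExistingSecret: pmm-sensitive-env

# deployment template (sketch)
{{- if .Values.pmmEnvExistingSecret }}
envFrom:
  - secretRef:
      name: {{ .Values.pmmEnvExistingSecret }}
{{- end }}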

Percona Operator mysql: `Value for x-amz-checksum-crc32c header is invalid` on pitr pod

Fresh Helm installation

REPO URL: https://percona.github.io/percona-helm-charts/
CHART: pxc-db:1.14.3
CHART: pxc-operator:1.14.1

Logs from percona/percona-xtradb-cluster-operator:1.14.0-pxc8.0-backup-pxb8.0.35

2024/06/07 18:01:12 run binlog collector
2024-06-07T11:01:13.890069886-07:00 2024/06/07 18:01:13 Reading binlogs from pxc with hostname= mysql-01-pxc-db-pxc-0.mysql-01-pxc-db-pxc.percona-mysql-op.svc.cluster.local
2024-06-07T11:01:14.012974199-07:00 2024/06/07 18:01:14 Starting to process binlog with name binlog.000002
2024-06-07T11:01:14.773274568-07:00 2024/06/07 18:01:14 ERROR: collect binlog files: manage binlog: put binlog.000002 object: put object binlog_1717782524_167aee546bf864cf19bb33c8c4ee9da9: Value for x-amz-checksum-crc32c header is invalid.

Values

backup:
  enabled: true
  image:
    repository: percona/percona-xtradb-cluster-operator
    tag: 1.14.0-pxc8.0-backup-pxb8.0.35
  pitr:
    enabled: true
    storageName: s3-wasabi
    timeBetweenUploads: 60
    timeoutSeconds: 60
    resources:
      requests: {}
      limits: {}
  storages: 
    s3-wasabi:
      type: s3
      s3:
        bucket: 01-percona-mysql-backup
        credentialsSecret: cluster1-s3-credentials
        endpointUrl: https://s3.ca-central-1.wasabisys.com/
        prefix: ""

I also tried with 1.14.0-pxc8.0.36-backup-pxb8.0.35, same error.
I also get the same error when I add region: ca-central-1

I have looked at https://docs.percona.com/percona-operator-for-mysql/pxc/backups-storage.html and https://docs.percona.com/percona-operator-for-mysql/pxc/operator.html#backup-section as well.

How to add SMTP settings to grafana.ini / Percona

How can email settings be applied to the PMM deployment for alerting notifications?

Should a ConfigMap with grafana.ini settings, including SMTP, be mounted as a volume to overwrite the default and applied to the deployment?
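One possible approach, assuming the chart exposes pmmEnv for extra environment variables (as mentioned in another issue above): Grafana maps GF_<SECTION>_<KEY> environment variables onto grafana.ini settings, so SMTP can be configured without mounting a file. A hedged sketch with placeholder values:

pmmEnv:
  GF_SMTP_ENABLED: "true"
  GF_SMTP_HOST: "smtp.example.com:587"
  GF_SMTP_USER: "alerts@example.com"
  GF_SMTP_FROM_ADDRESS: "alerts@example.com"
  # GF_SMTP_PASSWORD should come from a Kubernetes Secret rather than plain values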

Pod doesn't start with pg-operator v2.3.0 with `watchAllNamespaces: true` set: `panic: WATCH_NAMESPACE must be set`

Description

Upgrading the pg-operator Helm chart from v2.2.2 to v2.3.0 results in the pod crashing. The only value in place is watchAllNamespaces: true

This appears to be related to changes to the operator that make the WATCH_NAMESPACE environment variable mandatory, as the pod starts and crashes straight away:

> kubectl logs -n db-operator-system   pg-operator-7bbb8cc449-9bp5q
2024-01-05T08:10:03.804Z	INFO	feature gates enabled	{"PGO_FEATURE_GATES": "TablespaceVolumes=false,BridgeIdentifiers=false,InstanceSidecars=true,PGBouncerSidecars=false,AllAlpha=false,AllBeta=false"}
panic: WATCH_NAMESPACE must be set

goroutine 1 [running]:
main.assertNoError(...)
	/go/src/github.com/percona/percona-postgresql-operator/cmd/postgres-operator/main.go:53
main.main()
	/go/src/github.com/percona/percona-postgresql-operator/cmd/postgres-operator/main.go:118 +0x78d

Looking at the template, that env var is excluded when watchAllNamespaces: true. The log message above suggests it is now required, so the Helm chart can create an invalid deployment with this version of pg-operator.
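A hedged sketch of the template logic that would produce this, and one possible fix, assuming the operator accepts an empty WATCH_NAMESPACE to mean "watch all namespaces" (an assumption, not confirmed chart/operator behavior):

# current behavior (sketch): the variable is only rendered in single-namespace mode
{{- if not .Values.watchAllNamespaces }}
- name: WATCH_NAMESPACE
  value: {{ .Values.watchNamespace | default .Release.Namespace | quote }}
{{- end }}

# possible fix (sketch): always set the variable, empty when watching all namespaces
- name: WATCH_NAMESPACE
  value: {{ ternary "" (.Values.watchNamespace | default .Release.Namespace) .Values.watchAllNamespaces | quote }}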

psmdb-operator crashes when psmdb-db is deployed

I'm using both the psmdb-operator and psmdb-db helm charts. I have deployed the operator (without the db deployment) and it was working fine, without errors/crashes.

However, now that I have deployed the db, the operator enters a crashloop. When the operator starts crashlooping, it causes a complete restart of all the pods from psmdb-db as well.

Logs from operator:

2024-06-12T21:42:34.549Z        INFO    setup   Manager starting up     {"gitCommit": "54e1b18dd9dac8e0ed5929bb2c91318cd6829a48", "gitBranch": "release-1-16-0", "goVersion": "go1.22.3", "os": "linux", "arch": "amd64"}
2024-06-12T21:42:34.565Z        INFO    server version  {"platform": "kubernetes", "version": "v1.28.7+k3s1"}
2024-06-12T21:42:34.570Z        INFO    controller-runtime.metrics      Starting metrics server
2024-06-12T21:42:34.570Z        INFO    starting server {"name": "health probe", "addr": "[::]:8081"}
2024-06-12T21:42:34.570Z        INFO    controller-runtime.metrics      Serving metrics server  {"bindAddress": ":8080", "secure": false}
I0612 21:42:34.570960       1 leaderelection.go:250] attempting to acquire leader lease mongodb/08db0feb.percona.com...
I0612 21:42:53.320941       1 leaderelection.go:260] successfully acquired lease mongodb/08db0feb.percona.com
2024-06-12T21:42:53.321Z        INFO    Starting EventSource    {"controller": "psmdb-controller", "source": "kind source: *v1.PerconaServerMongoDB"}
2024-06-12T21:42:53.321Z        INFO    Starting Controller     {"controller": "psmdb-controller"}
2024-06-12T21:42:53.321Z        INFO    Starting EventSource    {"controller": "psmdbrestore-controller", "source": "kind source: *v1.PerconaServerMongoDBRestore"}
2024-06-12T21:42:53.321Z        INFO    Starting EventSource    {"controller": "psmdbbackup-controller", "source": "kind source: *v1.PerconaServerMongoDBBackup"}
2024-06-12T21:42:53.321Z        INFO    Starting EventSource    {"controller": "psmdbrestore-controller", "source": "kind source: *v1.Pod"}
2024-06-12T21:42:53.321Z        INFO    Starting Controller     {"controller": "psmdbrestore-controller"}
2024-06-12T21:42:53.321Z        INFO    Starting EventSource    {"controller": "psmdbbackup-controller", "source": "kind source: *v1.Pod"}
2024-06-12T21:42:53.321Z        INFO    Starting Controller     {"controller": "psmdbbackup-controller"}
2024-06-12T21:42:53.444Z        INFO    Starting workers        {"controller": "psmdbbackup-controller", "worker count": 1}
2024-06-12T21:42:53.445Z        INFO    Starting workers        {"controller": "psmdb-controller", "worker count": 1}
2024-06-12T21:42:53.445Z        INFO    Starting workers        {"controller": "psmdbrestore-controller", "worker count": 1}
E0612 21:42:53.685207       1 runtime.go:79] Observed a panic: "assignment to entry in nil map" (assignment to entry in nil map)
goroutine 313 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1f11320, 0x298b1f0})
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:75 +0x85
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000802fc0?})
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:49 +0x6b
panic({0x1f11320?, 0x298b1f0?})
        /usr/local/go/src/runtime/panic.go:770 +0x132
github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).setUpdateMongosFirst.func1()
        /go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/smart.go:226 +0xd0
k8s.io/client-go/util/retry.OnError.func1()
        /go/pkg/mod/k8s.io/[email protected]/util/retry/util.go:51 +0x30
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0x411b9b?)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:145 +0x3e
k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff({0x989680, 0x4014000000000000, 0x3fb999999999999a, 0x4, 0x0}, 0xc000baaa18)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:461 +0x5a
k8s.io/client-go/util/retry.OnError({0x989680, 0x4014000000000000, 0x3fb999999999999a, 0x4, 0x0}, 0x4171ba?, 0x0?)
        /go/pkg/mod/k8s.io/[email protected]/util/retry/util.go:50 +0xa5
k8s.io/client-go/util/retry.RetryOnConflict(...)
        /go/pkg/mod/k8s.io/[email protected]/util/retry/util.go:104
github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).setUpdateMongosFirst(0x1ef45e0?, {0x29affe0?, 0xc0011ad140?}, 0x6?)
        /go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/smart.go:220 +0xbc
github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).createSSLByCertManager(0xc000b882d0, {0x29affe0, 0xc0011ad140}, 0xc000dcaf08)
        /go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/ssl.go:187 +0x794
github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).reconcileSSL(0xc000b882d0, {0x29affe0, 0xc0011ad140}, 0xc000dcaf08)
        /go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/ssl.go:66 +0x30d
github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile(0xc000b882d0, {0x29affe0, 0xc0011ad140}, {{{0xc0006dade8?, 0x5?}, {0xc0006dade0?, 0xc000d25d10?}}})
        /go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:368 +0x16d0
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x29b4dc8?, {0x29affe0?, 0xc0011ad140?}, {{{0xc0006dade8?, 0xb?}, {0xc0006dade0?, 0x0?}}})
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:114 +0xb7
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000b820b0, {0x29b0018, 0xc0009c03c0}, {0x1fdf1a0, 0xc000dd27a0})
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:311 +0x3bc
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000b820b0, {0x29b0018, 0xc0009c03c0})
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:261 +0x1be
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:222 +0x79
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2 in goroutine 141
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218 +0x486
2024-06-12T21:42:53.730Z        INFO    Observed a panic in reconciler: assignment to entry in nil map  {"controller": "psmdb-controller", "object": {"name":"psmdb-db","namespace":"mongodb"}, "namespace": "mongodb", "name": "psmdb-db", "reconcileID": "7676acba-b62f-4d00-a4dc-51c0e17bc27c"}
panic: assignment to entry in nil map [recovered]
        panic: assignment to entry in nil map [recovered]
        panic: assignment to entry in nil map

goroutine 313 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:111 +0x1e5
panic({0x1f11320?, 0x298b1f0?})
        /usr/local/go/src/runtime/panic.go:770 +0x132
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000802fc0?})
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:56 +0xcd
panic({0x1f11320?, 0x298b1f0?})
        /usr/local/go/src/runtime/panic.go:770 +0x132
github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).setUpdateMongosFirst.func1()
        /go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/smart.go:226 +0xd0
k8s.io/client-go/util/retry.OnError.func1()
        /go/pkg/mod/k8s.io/[email protected]/util/retry/util.go:51 +0x30
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0x411b9b?)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:145 +0x3e
k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff({0x989680, 0x4014000000000000, 0x3fb999999999999a, 0x4, 0x0}, 0xc000efea18)
        /go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:461 +0x5a
k8s.io/client-go/util/retry.OnError({0x989680, 0x4014000000000000, 0x3fb999999999999a, 0x4, 0x0}, 0x4171ba?, 0x0?)
        /go/pkg/mod/k8s.io/[email protected]/util/retry/util.go:50 +0xa5
k8s.io/client-go/util/retry.RetryOnConflict(...)
        /go/pkg/mod/k8s.io/[email protected]/util/retry/util.go:104
github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).setUpdateMongosFirst(0x1ef45e0?, {0x29affe0?, 0xc0011ad140?}, 0x6?)
        /go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/smart.go:220 +0xbc
github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).createSSLByCertManager(0xc000b882d0, {0x29affe0, 0xc0011ad140}, 0xc000dcaf08)
        /go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/ssl.go:187 +0x794
github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).reconcileSSL(0xc000b882d0, {0x29affe0, 0xc0011ad140}, 0xc000dcaf08)
        /go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/ssl.go:66 +0x30d
github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb.(*ReconcilePerconaServerMongoDB).Reconcile(0xc000b882d0, {0x29affe0, 0xc0011ad140}, {{{0xc0006dade8?, 0x5?}, {0xc0006dade0?, 0xc000d25d10?}}})
        /go/src/github.com/percona/percona-server-mongodb-operator/pkg/controller/perconaservermongodb/psmdb_controller.go:368 +0x16d0
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x29b4dc8?, {0x29affe0?, 0xc0011ad140?}, {{{0xc0006dade8?, 0xb?}, {0xc0006dade0?, 0x0?}}})
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:114 +0xb7
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000b820b0, {0x29b0018, 0xc0009c03c0}, {0x1fdf1a0, 0xc000dd27a0})
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:311 +0x3bc
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000b820b0, {0x29b0018, 0xc0009c03c0})
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:261 +0x1be
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:222 +0x79
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2 in goroutine 141
        /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218 +0x486

I'm using version 1.16.1 in both charts.
My values files are as follows:

perconaMongodb:
  enabled: true
  version: 1.16.1
  values:
    backup:
      enabled: false
      pitr:
        enabled: true
      storages:
        gcp:
          type: s3
          s3:
            credentialsSecret: gcp-backup-credentials
            bucket:  redacted
            region: us
            prefix: dev-onprem/mongodb
            endpointUrl: https://storage.googleapis.com
      tasks:
      - name: daily-gcp-us
        enabled: true
        schedule: "0 0 * * *"
        keep: 3
        storageName: gcp
        compressionType: gzip
    pmm:
      enabled: true
    replsets:
      rs0:
        volumeSpec:
          pvc:
            storageClassName: ceph-block
            resources:
              requests:
                storage: 10Gi
      rs1:
        resources:
          limits:
            cpu: "300m"
            memory: "0.5G"
          requests:
            cpu: "300m"
            memory: "0.5G"
        size: 3
        volumeSpec:
          pvc:
            storageClassName: ceph-block
            resources:
              requests:
                storage: 10Gi
    secrets:
      users: percona-mongodb-credentials
    sharding:
      configrs:
        volumeSpec:
          pvc:
            storageClassName: ceph-block
            resources:
              requests:
                storage: 10Gi
    tls:
      issuerConf:
        name: redacted
        kind: ClusterIssuer
perconaMongodbOperator:
  enabled: true
  version: 1.16.1
  values:
    watchNamespace: "mongodb"

Everything else is using the default values.

Unable to override replica size while deploying using percona/psmdb-db

Hi,

Thanks for the wonderful work on mongodb and other opensource databases.

I have been trying to get MongoDB deployed in a Kubernetes cluster. I am using the following values to override some defaults according to my use case.
But the value set for the size property doesn't seem to have any effect; it always tries to create three replicas. Below are the contents of pmdb.yml.

replsets:
  - name: rs0
    size: 2
    podSecurityContext:
      fsGroup: 1001
      runAsGroup: 1001
      runAsUser: 1001
    resources:
      limits:
        cpu: "3900m"
        memory: 14Gi
      requests:
        cpu: 2
        memory: 10Gi

This is how I am trying to deploy,

helm install my-db percona/psmdb-db --namespace=percona -f ./deploy/mongodb/pmdb.yml 

Update:

This seems to be an issue in the operator itself. The reason I say this is that the values are correctly submitted to Kubernetes, but the operator always creates replica sets with 3 pods (screenshot attached in the original issue).
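If the cause is the operator enforcing its safe-configuration rules (a replica set is kept at a minimum of three members unless unsafe configurations are allowed), a hedged sketch of an override, assuming the chart passes an allowUnsafeConfigurations value through to the custom resource (check the chart's values.yaml for the exact key in your version):

# pmdb.yml (sketch)
allowUnsafeConfigurations: true
replsets:
  - name: rs0
    size: 2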

Using own SSL certs does not work; pod cannot run unless the default self-signed certs are used

cp: cannot create regular file '/srv/nginx/certificate.conf': Read-only file system

2023-10-25 00:26:24,946 INFO exited: pmm-update-perform-init (exit status 1; not expected)
2023-10-25 00:26:25,852 INFO spawned: 'pmm-update-perform-init' with pid 16379
2023-10-25 00:26:26,850 INFO success: pmm-update-perform-init entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-10-25 00:26:30,029 INFO exited: pmm-update-perform-init (exit status 1; not expected)
2023-10-25 00:26:30,856 INFO spawned: 'pmm-update-perform-init' with pid 16543
2023-10-25 00:26:31,854 INFO success: pmm-update-perform-init entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

When trying to use or mount certs from the Helm installation, the deployments crash and fail. Creating the secret independently and trying to mount it is also ignored; only the self-signed certs are installed, which allows the deployment to come online.

Trying to install our own certs does not appear to work correctly.

pg-db - Syntax issue while helm install

While setting up the secret in backups, helm install is not able to detect the secret 'pg-backup-secret'.

Issue: (screenshot attached in the original issue)

Fix:
The syntax needs to be changed in values.yaml (screenshot attached in the original issue).

Percona Postgresql Operator Helm Chart doesn't reference namespace in templates

The Percona Helm Chart should follow the same pattern as the other operator helm charts regarding setting the namespace:

namespace: {{ .Release.Namespace }}

as is shown in the pxc operator example here: https://github.com/percona/percona-helm-charts/blob/878d860ab641e628b48d39725444bd33b3dd6322/charts/pxc-operator/templates/deployment.yaml#L5C3-L5C38

By contrast, the PostgreSQL operator Helm chart templates have no namespace reference.

This becomes an issue when installing via kustomize and should be changed in any case for the sake of consistency.
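A minimal sketch of what the pg-operator templates could adopt, mirroring the pxc-operator pattern linked above (the kind, name helper, and surrounding fields are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "pg-operator.fullname" . }}
  namespace: {{ .Release.Namespace }}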

pxc-db: backup.enabled: false leads to values.yaml overlay not being correctly merged

Hello all.
I am trying to customize the pxc-db chart with my own values file, i.e.

helm install pxc-db ./pxc-db -n dev-percona -f cvalues.yaml

I am trying to make some modifications to the backup part:

backup:
    enabled: false
    storages:
        s3-us-west:
            type: s3
            verifyTLS: false
            s3:
                bucket: dev-percona
                credentialsSecret: s3-backup-secret-working
                region: us-east-1
                endpointUrl: https://minio.example.org

Unfortunately, when setting backup.enabled: false, the entire backup block is not merged into the resulting YAML.

I would expect it to be merged, because I want to be able to perform manual backups against my S3 storage. This currently seems to work only if I also enable scheduled backups.
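A hedged guess at the template pattern that would cause this: if the rendered custom resource wraps the whole backup block in a single condition on backup.enabled, the storages are dropped together with the schedules (a sketch, not the actual chart source):

# cr.yaml template (sketch)
{{- if .Values.backup.enabled }}
backup:
  storages:
    {{- .Values.backup.storages | toYaml | nindent 4 }}
{{- end }}

A possible change would be to always render backup.storages and gate only the scheduled-backup section on backup.enabled, so manual on-demand backups keep working.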

pg-db chart does not support tolerations as list

chart version: 2.3.18

There are multiple examples that demonstrate the usage of tolerations in the values file:

# tolerations:
# - effect: NoSchedule
# key: role
# operator: Equal
# value: connection-poolers

But the Helm template does not handle them well; passing a tolerations list when installing a Helm release fails:

{{- if $instance.tolerations }}
tolerations:
- effect: {{ $instance.tolerations.effect }}
key: {{ $instance.tolerations.key }}
operator: {{ $instance.tolerations.operator }}
value: {{ $instance.tolerations.value }}
{{- end }}

The current workaround is passing a single tolerations map:

  tolerations: 
    effect: NoSchedule 
    key: role 
    operator: Equal 
    value: connection-poolers
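A minimal sketch of a template change that would accept the documented list form (the indentation width is illustrative and would need to match the surrounding template):

{{- with $instance.tolerations }}
tolerations:
  {{- toYaml . | nindent 2 }}
{{- end }}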

"pmmserverkey" key doesn't exist in the secrets

Can't enable PMM: either "pmmserverkey" key doesn't exist in the secrets, or secrets and internal secrets are out of sync {"controller": "pxc-controller", "namespace": "test", "name": "mysql-pxc-db", "reconcileID": "57b64d65-c849-496e-b073-9e0dba63c1ee", "secrets": "mysql-pxc-db-secrets", "internalSecrets": "internal-mysql-pxc-db"}

Hello, where can I get pmmserver-api-key? I want to monitor Percona MySQL/MongoDB from PMM.
The "pmm-secret" secret has only the "admin" password.

helm chart versions:
pxc-db version: 1.13.4 (PMM enabled)
pmm version: 1.3.8
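For reference, a hedged sketch of one way to supply the key, using the secret and namespace names from the log above (the /graph/api/auth/keys endpoint is Grafana's API-key API as exposed by PMM and may differ between PMM versions):

# create an Admin-role API key via the PMM (Grafana) API
curl -sk -X POST -H 'Content-Type: application/json' \
  -d '{"name":"operator","role":"Admin"}' \
  https://admin:<admin-password>@<pmm-server>/graph/api/auth/keys

# add the returned key to the cluster secret under the expected field
kubectl -n test patch secret mysql-pxc-db-secrets \
  --type merge -p '{"stringData":{"pmmserverkey":"<api-key>"}}'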

Server pods name

I'm using the https://percona.github.io/percona-helm-charts/ chart.

These are the pods that came out of it. All are working:
local-percona-main-ps-rs0-0
local-percona-main-ps-rs0-1

The pod names start with the chart name
local-percona-main

The cluster name rs0

and the cluster number
-0
-1

Where is this "ps" coming for? The problem is sometimes it does not come with "ps" but something else entirely.

The names of the secrets respect this naming as well.

Can I make it respect a single name like "local-percona-main-rs0-0" (ignoring the random prefix)?
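If the chart in use follows the common nameOverride/fullnameOverride convention (an assumption; check the chart's values.yaml), the generated resource name can be pinned so it no longer depends on the chart name, e.g.:

fullnameOverride: local-percona-main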

Constant OutOfSync warning on psmdb-db Chart

Hey there,
thanks a lot for your outstanding work creating and maintaining these charts.

I have a problem with the psmdb-db chart: applying it with Argo CD results in an OutOfSync status (although all the resources were created), while everything otherwise seems OK.
Do you know what can cause this problem or how I can get rid of the warning?

Here is my values file:

psmdb-db:
  secrets:
    users: mongodb-secrets

  replsets:
    - name: rs0
      size: 3
      nodeSelector:
        pool-type: mongodb-spot
      tolerations:
        - key: pool-type
          operator: Equal
          value: "mongodb-spot"
          effect: NoSchedule
      resources:
        limits:
          cpu: 3000m
          memory: 13G
        requests:
          cpu: 3000m
          memory: 13G
      volumeSpec:
        pvc:
          storageClassName: premium-rwo # ssd
          resources:
            requests:
              storage: 100Gi

  backup:
    enabled: false

This is how it looks in the Argo CD UI (screenshot attached in the original issue).
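If the OutOfSync status comes from fields the operator mutates after creation, one common mitigation is an ignoreDifferences rule on the Argo CD Application; the group/kind below match the psmdb custom resource, and the path is a placeholder to be replaced with whatever field Argo CD actually reports as drifted (a sketch, not a confirmed fix):

# Application spec fragment (sketch)
ignoreDifferences:
  - group: psmdb.percona.com
    kind: PerconaServerMongoDB
    jsonPointers:
      - /spec   # placeholder: narrow this to the specific drifting field(s)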

crd content too long to be applied

perconaservermongodbs.psmdb.percona.com is too long

CustomResourceDefinition.apiextensions.k8s.io "perconaservermongodbs.psmdb.percona.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

This may be related to the way Argo CD applies the CRD: with client-side apply, the full object is stored in the kubectl.kubernetes.io/last-applied-configuration annotation, which hits the 262144-byte limit for large CRDs.
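Two commonly used Argo CD options that avoid the last-applied-configuration annotation for large CRDs (a sketch; typically only one of the two is set):

# on the Application
syncPolicy:
  syncOptions:
    - ServerSideApply=true   # or: Replace=true

# or per resource, as an annotation on the CRD manifest
metadata:
  annotations:
    argocd.argoproj.io/sync-options: ServerSideApply=true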

PMM Client Helm Chart

Is there a Helm chart for PMM Client? We have some environments with RDS databases and would like to try out PMM, but our infrastructure is exclusively Kubernetes.

Setting up the psmdb-db Helm chart similar to how I setup MongoDB replica set

Hi, as I saw that it's a drop-in replacement, I'm about to swap MongoDB Community for the Percona psmdb-db in my Kubernetes cluster, and I need a bit of clarification about the chart's values.

  1. I need to specify 2 PVCs (one for data and one for logs) per pod, but I see only parameters for one replsets[0].volumeSpec.pvc. How do I add more than one, if that's possible?
  2. In the chart's values file I don't see any replsets[0].podSecurityContext, while I do see it for nonvoting and sharded, so I guess it's just not present in the chart's values file but available as a parameter to set.
  3. How do I specify a db name? It will be the db name in the driver's connection string, so I'd need to know what db name to use.
  4. Is backup remote storage possible on GCP for Firebase storage buckets?
  5. Is the namespace definable only at chart install, or is there a parameter to specify it?

Many thanks.
At the moment the manifest for MongoDBCommunity resource is:

apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: {{ .Values.replica_set.name}} #mongo-rs
  namespace: {{ .Values.namespace}} #default
spec:
  members: {{ .Values.replica_set.replicas}} #1
  type: ReplicaSet
  # mongo version
  version: {{ .Values.replica_set.mongo_version}}
  security:
    authentication:
      modes:
        - SCRAM
  users:
    - name: {{ .Values.replica_set.admin_user.name}} #admin-user
      db: {{ .Values.replica_set.db_name}} #fixit
      passwordSecretRef:
        name: {{ .Values.secret.name}} #mongo-secret
      roles:
        - name: {{ .Values.replica_set.admin_user.role_1}} #clusterAdmin
          db: {{ .Values.replica_set.db_name}} #fixit
        - name: {{ .Values.replica_set.admin_user.role_2}} #userAdminAnyDatabase
          db: {{ .Values.replica_set.db_name}}
        - name: {{ .Values.replica_set.admin_user.role_3}} #ReadWriteAnyDatabase
          db: {{ .Values.replica_set.db_name}}   #fixit
      scramCredentialsSecretName: {{ .Values.replica_set.admin_user.scramCredentialsSecretName}} #my-scram-mg-fixit
  additionalMongodConfig:
    storage.wiredTiger.engineConfig.journalCompressor: zlib
  statefulSet:
    spec:
      # You can specify a name for the service object created by the operator, by default it generates one if not specified.
      # serviceName: mongo-rs-svc
      # get them with kubectl -n default get secret mongo-rs-fixit-admin-user -o json  and then decode them with echo "value" | base64 -d
      # standard connection string-> mongodb://admin-user:[email protected]:27017/fixit?replicaSet=mongo-rs&ssl=false
      # stadard srv connection string -> mongodb+srv://admin-user:[email protected]/fixit?replicaSet=mongo-rs&ssl=false
      
      template:
        spec:
          automountServiceAccountToken: false
          securityContext:
            privileged: false
            allowPrivilegeEscalation: false
            runAsNonRoot: true
            runAsUser: 1000
            readOnlyRootFilesystem: true
          containers:
            - name: mongod
              resources:
                limits:
                  cpu: {{ .Values.resources.mongod.limits.cpu}}
                  memory: {{ .Values.resources.mongod.limits.memory}}
                requests:
                  cpu: {{ .Values.resources.mongod.requests.cpu}}
                  memory: {{ .Values.resources.mongod.requests.memory}}
            - name: mongodb-agent
              resources:
                limits:
                  cpu: {{ .Values.resources.mongodb_agent.limits.cpu}}
                  memory: {{ .Values.resources.mongodb_agent.limits.memory}}
                requests:
                  cpu: {{ .Values.resources.mongodb_agent.requests.cpu}}
                  memory: {{ .Values.resources.mongodb_agent.requests.memory}}
          # nodeSelector:
          #   server-type: mongodb
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchExpressions:
                      - key: app
                        operator: In
                        values:
                          - mongo-replicaset
                  topologyKey: 'kubernetes.io/hostname'

      volumeClaimTemplates:
        - metadata:
            name: {{ .Values.volume_claim_templates.data.name}} #data-volume
          spec:
            accessModes:
              - {{ .Values.volume_claim_templates.data.access_mode}} #ReadWriteOnce
              # - ReadWriteMany
            storageClassName: {{ .Values.storage_class.data.name}} #mongo-sc-data
            resources:
              requests:
                storage: {{ .Values.volume_claim_templates.data.storage}} #16Gi
        - metadata:
            name: {{ .Values.volume_claim_templates.logs.name}} #logs-volume
          spec:
            accessModes:
              - {{ .Values.volume_claim_templates.logs.access_mode}} #ReadWriteOnce
              # - ReadWriteMany
            storageClassName: {{ .Values.storage_class.logs.name}} #mongo-sc-logs
            resources:
              requests:
                storage: {{ .Values.volume_claim_templates.logs.storage}} #4Gi

PMM: Add value to change log level

Hi,

Great product! I've been using the Percona Operator for MySQL based on Percona XtraDB Cluster in combination with PMM; this is my configuration.

pmm:
  enabled: true
  serverHost: monitoring-service

This is all running great; the only thing I'm noticing is that the pmm-client container is generating a lot of info logs.
For example:

time="2024-03-12T06:55:00.052+00:00" level=info msg="Sending 48 buckets." agentID=/agent_id/xxx component=agent-builtin type=qan_mysql_perf
schema_agent

It would be great if there were a way to change the log level, for example with:

pmm:
  enabled: true
  serverHost: monitoring-service
  log:
    level: error

pg-db chart schedule cloud backups

Hi. I'm trying to set up backups on S3 for a PostgreSQL database, but I can't understand how to schedule them using the Helm chart.

In line 300 I can see that the schedule option can be used only on repo1, which is used for local backup. If I use one of the other repos, there is no way to configure the schedule.

Am I missing something, or is this a problem to be solved, for example by moving the schedule section out of the name check?

Thanks

Enabling ingress causes install to fail

Installation Command

helm -n pmm install -f values.yaml pmm percona/pmm --create-namespace

values.yaml (ingress section only)

ingress:
  enabled: true
  nginxInc: true
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  community:
    annotations: {}
  ingressClassName: "nginx"

  hosts:
    - host: qa-pmm.olympiafinancial.ca
      paths:
        - /
  pathType: Prefix
  tls:
    - secretName: qa-pmm.olympiafinancial.ca-tls

Error

W0403 12:29:03.606199   15978 warnings.go:70] path /server. cannot be used with pathType Prefix
Error: INSTALLATION FAILED: 1 error occurred:
	* admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "qa-pmm.olympiafinancial.ca" and path "/" is already defined in ingress percona/pmm-server

I will end up creating the ingress manually. But this configuration doesn't have anything wrong on its face, and there's no documentation to suggest what I am doing wrong. Please provide either documentation as to how to correctly configure the ingress, or fix the ingress creation.

PMM: allow setting the resources from the chart.

Currently there is no way to set the resources for PMM; it would be great if we could set the PMM resources:

pmm:
  enabled: false
  image:
    repository: percona/pmm-client
    tag: 2.41.2
  serverHost: monitoring-service
  resources:
    limits:
      cpu:
      memory:
    requests:
      cpu:
      memory:
