timescale / helm-charts

Configuration and Documentation to run TimescaleDB in your Kubernetes cluster

License: Apache License 2.0

Languages: Makefile 9.56%, Shell 51.62%, Mustache 15.29%, Smarty 23.52%
Topics: charts, hacktoberfest, helm, helm-charts, kubernetes, promscale, timescaledb

helm-charts's People

Contributors

0xffox, adamdang, agronholm, alice-sawatzky, arajkumar, cevian, drpebcak, feikesteenbergen, ggwpp, hiddeco, jocrau, jpigree, kaarolch, kadaffy, linki, mathisve, mfreed, nhudson, onprem, paulfantom, pierrebesson, prydonius, ramonguiu, renovate[bot], robatticus, robgordon89, sposetti, ssola, thedodd, throrin19


helm-charts's Issues

Migrate from 0.5.4 to 0.6.X with Upgrade script and nameOverride changed

Hi,

I'm really looking forward to upgrading to the 0.6 version of the timescaledb-single helm chart to get TimescaleDB 1.7.

I just finished reading through the upgrade guide and grabbed the shell script to migrate the secrets across. I have run into a problem: I have changed the nameOverride field in values.yaml from "timescaledb" to "mydbname-timescale".

Along with that, I have specified a specific namespace to place the helm chart into. The namespace is called "databases".

The current script provided doesn't seem to take these changes into account. What sort of changes need to be made to the script to successfully migrate across?
Will it involve more than just changing the namespace + name of the secrets?

Cheers,
Nick
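
For what it's worth, a rough sketch of the kind of adjustment that seems needed, assuming the only differences are the secret name prefix and the target namespace (the secret suffixes below are illustrative, not taken from the actual upgrade script):

# Hypothetical adaptation of the migration: copy each secret under its new
# name, operating in the "databases" namespace instead of the default one.
NAMESPACE=databases
OLD_NAME=timescaledb          # default nameOverride
NEW_NAME=mydbname-timescale   # custom nameOverride

for suffix in credentials certificate pgbackrest; do
  kubectl get secret "${OLD_NAME}-${suffix}" -n "${NAMESPACE}" -o yaml \
    | sed "s/${OLD_NAME}-${suffix}/${NEW_NAME}-${suffix}/g" \
    | kubectl apply -n "${NAMESPACE}" -f -
done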

Missing securityContext to fulfil podSecurityPolicy requirements

I have been trying to get this deployed to my cluster; however, I need to manually re-add securityContext sections to numerous containers because I am using a Pod Security Policy to ensure all containers run as non-root.

Some examples include, but are not limited to:

  • initContainer tstune
  • container timescaledb
  • container pgbackrest
  • backup cronJob timescaledb-full-daily
  • backup cronJob timescaledb-incremental-hourly

I'm happy to assist with the configuration and create a PR; however, I am not sure about Helm best practices here. If the image can be changed in values.yaml, the securityContext values should be configurable there too, right?
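
For reference, a sketch of what such an exposed value could look like (hypothetical; this key does not exist in the chart today), which the templates would then render into the pod spec:

# Hypothetical values.yaml addition for PSP-restricted clusters:
securityContext:
  runAsUser: 1000
  runAsGroup: 1000
  fsGroup: 1000
  runAsNonRoot: true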

Requested WAL segment has already been removed (?)

So in my master timescale instance, I'm getting the following errors:

timescaledb /var/run/postgresql:5432 - rejecting connections
timescaledb 2020-03-28 19:45:04 UTC [3155]: [5e7fa940.c53-1] [unknown]@[unknown],app=[unknown] [00000] LOG:  connection received: host=[local]
timescaledb 2020-03-28 19:45:04 UTC [3155]: [5e7fa940.c53-2] postgres@postgres,app=[unknown] [57P03] FATAL:  the database system is starting up
timescaledb 2020-03-28 19:45:04 UTC [3156]: [5e7fa940.c54-1] [unknown]@[unknown],app=[unknown] [00000] LOG:  connection received: host=[local]
timescaledb 2020-03-28 19:45:04 UTC [3156]: [5e7fa940.c54-2] postgres@postgres,app=[unknown] [57P03] FATAL:  the database system is starting up
timescaledb 2020-03-28 19:45:04,856 WARNING: Retry got exception: 'connection problems'
timescaledb 2020-03-28 19:45:05 UTC [3165]: [5e7fa941.c5d-1] @,app= [00000] LOG:  started streaming WAL from primary at 1/20000000 on timeline 3
timescaledb 2020-03-28 19:45:05 UTC [3165]: [5e7fa941.c5d-2] @,app= [XX000] FATAL:  could not receive data from WAL stream: ERROR:  requested WAL segment 000000030000000100000020 has already been removed

Looking at the last line, it says the WAL segment could not be found. I went inside the pods (the master and replica instances) to inspect, and indeed there is no such WAL file.
I then realized that backup is disabled by default, according to this doc.

Looking into the source code, I found this:

  # If no backup is configured, archive_command would normally fail. A failing archive_command on a cluster
  # is going to cause WAL to be kept around forever, meaning we'll fill up Volumes we have quite quickly.
  #
  # Therefore, if the backup is disabled, we always return exitcode 0 when archiving

Connecting all those dots, if I understand correctly, this means that by default TimescaleDB will delete WAL files (due to the exit code 0) even with no backup configured.
Doesn't this mean TimescaleDB can potentially end up in a corrupted database state (if used without backing up WAL files), as in the first snippet above?
If so, I think maybe we should enable backup by default, or have the option to disable the use of WAL for recovery (if that's possible).

EDIT:
Apparently basebackup is already the default replica creation method: https://github.com/timescale/timescaledb-kubernetes/blob/286b1fb9239679060ea8b6caf06b5016c01cea66/charts/timescaledb-single/values.yaml#L179
Now it is odd that I was getting the missing WAL segment error, given I hadn't modified the WAL or WAL archive files.
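
One mitigation sometimes suggested for this class of error is to keep more WAL on the primary so a lagging replica can still catch up over streaming replication. A sketch only, under the assumption that the chart passes these values through patroni.bootstrap.dcs like the other PostgreSQL parameters mentioned in these issues, and not a confirmed fix for this report:

patroni:
  bootstrap:
    dcs:
      postgresql:
        parameters:
          wal_keep_segments: 64   # keep more WAL on the primary (PostgreSQL <= 12)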

Database initialization

I'm relatively new to Kubernetes and Helm, but I don't see a way to inject initialization scripts or, more generally, initialize a database/tables/indices on install.

In the postgres helm chart, there is an initdbScriptsConfigMap value that may be provided, or you can simply include content directly via files/docker-entrypoint-initdb.d/ (which will be used to create a ConfigMap).
In either case, the result is that the content of the files is mounted into the postgres container (in /docker-entrypoint-initdb.d/) and used during database initialization.

Is there a similar capability with the timescaledb-kubernetes helm chart?
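
Not a chart feature, but one generic workaround sketch is to run a one-off Kubernetes Job that applies the schema once the database is up; every name below (release, service, secret, ConfigMap) is an assumption to adapt:

apiVersion: batch/v1
kind: Job
metadata:
  name: timescaledb-init
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: init
          image: postgres:11
          command: ["psql", "-f", "/init/schema.sql"]
          env:
            - name: PGHOST
              value: my-release-timescaledb          # the release's primary service (assumed name)
            - name: PGUSER
              value: postgres
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-release-timescaledb-passwords   # assumed credentials secret
                  key: postgres
          volumeMounts:
            - name: init-sql
              mountPath: /init
      volumes:
        - name: init-sql
          configMap:
            name: timescaledb-init-sql               # ConfigMap holding schema.sql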

Externalize the handling of Secrets

Currently, database credentials are created via Helm configuration values:

    credentials:
      admin: cola
      postgres: tea
      standby: pinacolada

which Helm turns into a K8s Secret. The Secret is then referenced in the StatefulSet, e.g.:

    - name: PATRONI_REPLICATION_PASSWORD
      valueFrom:
        secretKeyRef:
          name: {{ template "timescaledb.fullname" . }}-passwords
          key: standby

This makes it difficult to integrate with custom vaults (AWS Secrets Manager) or externally generated secrets. Also, the default passwords must be overridden; if an admin forgets to do so, they grant access to the DB.

I propose to completely externalize the management of secrets. They would have to be generated separately (which I assume will happen in any reasonably complex deployment scenario).

I will submit a PR for this tomorrow (which will be dependent on #111), which makes it easier to discuss the proposed change.
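
For comparison, with externally managed secrets the credentials could be created out of band and only referenced by the chart (for example via a secretNames.credentials value, as used elsewhere in these issues). The key names below mirror the current admin/postgres/standby layout and are otherwise an assumption:

kubectl create secret generic timescaledb-credentials \
  --from-literal=postgres="$(openssl rand -hex 16)" \
  --from-literal=standby="$(openssl rand -hex 16)" \
  --from-literal=admin="$(openssl rand -hex 16)"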

TimeScaleDB SingleNode helm chart 0.5.8 fails to come up

I am trying to set up the timescaledb-single helm chart 0.5.8 with 3 replicas and backup disabled.

The first node itself fails to come up. I deleted the StatefulSet and the PVC and tried to bring it back up; it still fails to come up.

My values.yaml is as follows:

    replicaCount: 3

    secretnames:
      credentials: timescaledb-credentials
      certificate: timescaledb-certificate

    persistentVolumes:
      data:
        enabled: True
        storageClass: 'standard'
        size: 200Gi
      wal:
        enabled: False
        size: 20G
        storageClass: 'standard'
    resources:
      limits:
        cpu: 500m
        memory: 4Gi
      requests:
        cpu: 500m
        memory: 4Gi

The node constantly logs the following at an interval of 10 seconds:

2020-05-08 07:21:07,751 ERROR: failed to bootstrap (without leader)
2020-05-08 07:21:17,750 ERROR: Error creating replica using method pgbackrest: /etc/timescaledb/scripts/pgbackrest_restore.sh exited with code=1
2020-05-08 07:21:17,751 ERROR: failed to bootstrap (without leader)
2020-05-08 07:21:27,751 ERROR: Error creating replica using method pgbackrest: /etc/timescaledb/scripts/pgbackrest_restore.sh exited with code=1

container crashes in single node deployment

The postgresql container enters a crash loop after helm install with the following error:

install: cannot change owner and permissions of ‘/var/lib/postgresql/wal/pg_wal’: Operation not permitted

Unable to install

I tried to install timescaledb in a kubernetes cluster, according to the instructions:

random_password () { < /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c32; }
helm install --name my-release charts/timescaledb-single \
  --set credentials.postgres="$(random_password)" \
  --set credentials.admin="$(random_password)" \
  --set credentials.standby="$(random_password)"

but got the following error:

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(StatefulSet.spec.template.spec.volumes[0].emptyDir): unknown field "defaultMode" in io.k8s.api.core.v1.EmptyDirVolumeSource

Then I checked the file charts/timescaledb-single/templates/statefulset-timescaledb.yaml and found this:

volumes:
  - name: socket-directory
    emptyDir:
      defaultMode: 488 # 0750 permissions

Is this right?
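
For context, the Kubernetes EmptyDirVolumeSource has no defaultMode field (that field exists on ConfigMap and Secret volumes), which is why stricter API servers reject the manifest. A sketch of the volume without the invalid field; directory permissions would then have to be handled by the container itself:

volumes:
  - name: socket-directory
    emptyDir: {}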

Backrest pod is failing

I have followed the admin guide to enable the backup and manually triggered the cronjob. The backrest pod kept failing. I removed the --silent flag from the curl command, and the error is "invalid json document".

I saw that in this commit (6e2ea58?diff=unified) wget was replaced by curl. I reverted the command back to wget, but now the backup server listening on port 8081 returns "Internal Server Error".

Use of clusterName can be limiting in a few cases.

The clusterName helper template defined here, which we use in a few different locations, turns out to be a bit limiting.

  • Its use here in the pgbackrest secret has a few issues.
    • We should wrap this entire line in a conditional. If the user has not specified a value for repo1-path, then we can certainly use this line. However, if a user is attempting to take control of the backup path, it will actually inhibit the startup process and log errors like: ERROR: [031]: option 'repo1-path' cannot be set multiple times
    • As a fallback, if the value is not set, using the current pattern should work.
  • By extension, the service defined here uses the clusterName, which is arguably fine, but the fact that it is used here means we cannot haphazardly adjust the value of clusterName in order to control the repo1-path. The values are coupled in a bad way right now.

Allowing repo1-path to be directly overridden by a user, and wrapping its current template line in some conditional logic, should address all of the above issues; a rough sketch of the conditional follows.
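
An untested sketch of what such a conditional could look like in the pgbackrest secret template (the value path and helper name are assumptions, not the chart's actual identifiers):

{{- if not (index .Values.backup.pgBackRest "repo1-path") }}
repo1-path=/{{ template "clusterName" . }}/
{{- end }}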

Unable to add helm repo

I get the following error when I try to add the repo. My Helm client is v3.

helm repo add timescalesdb 'https://github.com/timescale/timescaledb-kubernetes'
Error: looks like "https://github.com/timescale/timescaledb-kubernetes" is not a valid chart repository or cannot be reached: failed to fetch https://github.com/timescale/timescaledb-kubernetes/index.yaml : 404 Not Found
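
For reference, the GitHub URL is the source repository rather than a Helm chart repository; the published charts are served from a separate chart repo (at the time of writing, https://charts.timescale.com):

helm repo add timescale 'https://charts.timescale.com'
helm repo update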

Looking for Dockerized HA version of TimeScaleDB

Hi,
I am trying to set up an HA dockerized version of TimescaleDB, referencing the following blog: https://blog.timescale.com/blog/high-availability-timescaledb-postgresql-patroni-a4572264a831/
I see that https://github.com/zalando/spilo is a project that comprises Patroni + PostgreSQL and also has the TimescaleDB extension enabled.
I see that this is something the TimescaleDB team is probably also coming out with soon, as per this thread: #88
My question is: should I pursue the Spilo approach, or is the TimescaleDB team going to release an official version of the dockerized timescaledb-docker-ha soon?
Thanks

Originally posted by @rushabh-harness in #88 (comment)

timescaledb log_autovacuum_min_duration: 0 setting creates too many logs

The postgresql setting log_autovacuum_min_duration: 0 at https://github.com/timescale/timescaledb-kubernetes/blob/master/charts/timescaledb-single/values.yaml#L140 creates too many log events for autovacuum activity: one event for each hypertable chunk.
log_autovacuum_min_duration: 0 means log all autovacuum events.

It is disabled by default in the stock postgresql.conf, and we can disable it in the chart values as well (see the sketch after the sample logs below).

Default value:

#log_autovacuum_min_duration = -1	# -1 disables, 0 logs all actions and

Here are the sample logs:

_internal._hyper_535_37056_chunk" system usage: CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.04 s
2020-05-20 23:26:55 UTC [19097]: [5ec5bcb0.4a99-103] @,app= [00000] LOG:  automatic analyze of table "postgres._timescaledb_internal._hyper_575_37082_chunk" system usage: CPU: user: 0.05 s, system: 0.00 s, elapsed: 0.19 s
2020-05-20 23:26:56 UTC [19097]: [5ec5bcb0.4a99-104] @,app= [00000] LOG:  automatic analyze of table "postgres._timescaledb_internal._hyper_625_37088_chunk" system usage: CPU: user: 0.06 s, system: 0.01 s, elapsed: 0.30 s
2020-05-20 23:26:56 UTC [19097]: [5ec5bcb0.4a99-105] @,app= [00000] LOG:  automatic analyze of table "postgres._timescaledb_internal._hyper_739_37097_chunk" system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.04 s
2020-05-20 23:26:56 UTC [19097]: [5ec5bcb0.4a99-106] @,app= [00000] LOG:  automatic analyze of table "postgres._timescaledb_internal._hyper_1605_37199_chunk" system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
2020-05-20 23:26:56 UTC [19097]: [5ec5bcb0.4a99-107] @,app= [00000] LOG:  automatic analyze of table "postgres._timescaledb_internal._hyper_1431_37232_chunk" system usage: CPU: user: 0.01 s, system: 0.00 s, elapsed: 0.04 s
2020-05-20 23:26:56 UTC [19097]: [5ec5bcb0.4a99-108] @,app= [00000] LOG:  automatic analyze of table "postgres._timescaledb_internal._hyper_1413_37230_chunk" system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
2020-05-20 23:26:56 UTC [19097]: [5ec5bcb0.4a99-109] @,app= [00000] LOG:  automatic analyze of table "postgres._timescaledb_internal._hyper_2185_37241_chunk" system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.02 s
2020-05-20 23:26:56 UTC [19097]: [5ec5bcb0.4a99-110] @,app= [00000] LOG:  automatic analyze of table "postgres._timescaledb_internal._hyper_2201_37242_chunk" system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.01 s
2020-05-20 23:26:56 UTC [19097]: [5ec5bcb0.4a99-111] @,app= [00000] LOG:  automatic analyze of table "postgres._timescaledb_internal._hyper_1613_37283_chunk" system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.01 s
2020-05-20 23:26:57 UTC [19097]: [5ec5bcb0.4a99-112] @,app= [00000] LOG:  automatic analyze of table "postgres._timescaledb_internal._hyper_1491_37315_chunk" system usage: CPU: user: 0.02 s, system: 0.00 s, elapsed: 0.07 s
2020-05-20 23:26:57 UTC [19097]: [5ec5bcb0.4a99-113] @,app= [00000] LOG:  automatic analyze of table "postgres._timescaledb_internal._hyper_1421_37
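
A sketch of overriding the setting through the chart values, assuming the parameter is passed through patroni.bootstrap.dcs like the other PostgreSQL parameters in these issues:

patroni:
  bootstrap:
    dcs:
      postgresql:
        parameters:
          log_autovacuum_min_duration: -1   # -1 disables autovacuum logging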

Cannot use patronictl to change the temp_file_limit config

I was deploying a TimescaleDB cluster in my k8s cluster with the image timescaledev/timescaledb-ha:pg11-ts1.5.

I ran:

$ patronictl edit-config -p temp_file_limit=10GB

but it reports an error:

Traceback (most recent call last):
  File "/usr/bin/patronictl", line 11, in <module>
    load_entry_point('patroni==1.6.0', 'console_scripts', 'patronictl')()
  File "/usr/lib/python3/dist-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/decorators.py", line 27, in new_func
    return f(get_current_context().obj, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/patroni/ctl.py", line 1198, in edit_config
    show_diff(before_editing, after_editing)
  File "/usr/lib/python3/dist-packages/patroni/ctl.py", line 1033, in show_diff
    cdiff.markup_to_pager(cdiff.PatchStream(buf), opts)
  File "/usr/lib/python3/dist-packages/cdiff.py", line 640, in markup_to_pager
    pager_cmd, stdin=subprocess.PIPE, stdout=sys.stdout)
  File "/usr/lib/python3.7/subprocess.py", line 775, in __init__
    restore_signals, start_new_session)
  File "/usr/lib/python3.7/subprocess.py", line 1522, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'less': 'less'

Is patronictl not installed correctly in the Docker image, or is the command wrong?
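
The traceback ends in FileNotFoundError for 'less': patronictl tries to page the configuration diff, and the image apparently ships without a pager. One workaround sketch (not an official fix) is to extend the image and install less:

FROM timescaledev/timescaledb-ha:pg11-ts1.5

USER root
RUN apt update \
    && apt install -y less \
    && rm -rf /var/lib/apt/lists/*
USER postgres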

How to install postgis?

I tried to use apt in my custom Dockerfile to install postgis, but get this error:

could not open extension control file "/usr/share/postgresql/11/extension/postgis.control": No such file or directory

Can you provide the source of your Dockerfile, or explain how to install PostgreSQL extensions?

The Dockerfile:

FROM timescaledev/timescaledb-ha:pg11-ts1.5

USER root
RUN apt update \
    && apt install -y postgis=2.5.1+dfsg-1 \
    && rm -rf /var/lib/apt/lists/*
USER postgres
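
The error suggests the installed postgis package does not target PostgreSQL 11's extension directory. A possible fix sketch, assuming the Debian naming convention for the version-specific package (untested against this image):

FROM timescaledev/timescaledb-ha:pg11-ts1.5

USER root
RUN apt update \
    && apt install -y postgresql-11-postgis-2.5 \
    && rm -rf /var/lib/apt/lists/*
USER postgres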

timescaledb service endpoint

I have tried this helm chart two or three times, but it is not exposing port 5432 properly. Even with the default configuration the LoadBalancer remains down; I am testing on DigitalOcean.

Is there a way I can connect to it directly?
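
As a stop-gap while the LoadBalancer is being debugged, a direct connection sketch via port-forwarding (the release and service names are assumptions):

kubectl port-forward svc/my-release-timescaledb 5432:5432
psql -h localhost -p 5432 -U postgres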

Trying to change PVC size fails

Trying to change persistentVolumes.data.size results in this error:

Error: UPGRADE FAILED: cannot patch "timescale-timescaledb" with kind StatefulSet: StatefulSet.apps "timescale-timescaledb" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden

On closer examination, it looks like this is because that value changes the volumeClaimTemplates section of the StatefulSet, which Kubernetes does not allow to be updated.
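
A commonly used workaround sketch (not chart-specific, and assuming the StorageClass has allowVolumeExpansion: true): expand the PVCs directly, then recreate the StatefulSet object without deleting the pods. The PVC and release names below are assumptions:

kubectl patch pvc storage-volume-timescale-timescaledb-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"500Gi"}}}}'
kubectl delete statefulset timescale-timescaledb --cascade=false
helm upgrade timescale timescaledb/timescaledb-single \
  --set persistentVolumes.data.size=500Gi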

is endpoints/restricted in role definition absolutely necessary?

I don't have cluster-wide access to the Kubernetes cluster I'm deploying to, so the chart won't install, saying:

attempting to grant RBAC permissions not currently held: {APIGroups:[""], Resources:["endpoints/restricted"], Verbs:["create" "get" "patch" "update" "list" "watch" "delete"]}

Can there be a solution for this?

Adjust memory available to shm

Currently there is no direct way to accomplish this, but if the chart supported mounting a tmpfs at /dev/shm it could resolve some issues; a sketch of the idea follows.
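
A sketch of what such support could render into the pod spec (hypothetical; not a chart option today):

volumes:
  - name: dshm
    emptyDir:
      medium: Memory
      sizeLimit: 1Gi
containers:
  - name: timescaledb
    volumeMounts:
      - name: dshm
        mountPath: /dev/shm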

how to restore from s3 when db is re-init?

If my database is totally destroyed (e.g. rm -rf) and I need to recreate it, how can I restore the data from S3?

If I just reuse the old config but with a new PVC, I get:

ERROR: [028]: backup and archive info files exist but do not match the database
HINT: is this the correct stanza?
HINT: did an error occur during stanza-upgrade?

And if I manually call pgbackrest restore, it seems I need to run the command before database startup. Can I use your image and change the command in the YAML to do that?
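
For reference, a restore does indeed have to run before PostgreSQL starts, roughly along these lines (stanza name taken from the bootstrap logs elsewhere in these issues; options and paths will need adjusting, so treat this as a sketch only):

pgbackrest --stanza=poddb --log-level-console=info restore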

recent releases not in helm repo

The most recent releases haven't been pushed to the Helm repo. Is this deliberate, or could they be pushed?

edit

I'm using Helm 2 (not by my choice); is it just that they are not being pushed to the repo for Helm 2?

Cron backups broken since 0.5.1

First of all, thanks for this great chart!

Issue

Backups triggered by CronJobs are broken because of bad JSON formatting.

Logs from job

❯ kubectl logs -f timescaledb-full-weekly-1579399920-h778d
curl: (22) The requested URL returned error: 400 Bad Request

Manifest of the job

❯ kubectl get cronjob timescaledb-incremental-daily -o yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  creationTimestamp: "2020-01-13T12:11:22Z"
  labels:
    app: timescaledb
    backup-type: incr
    chart: timescaledb-single-0.5.1
    cluster-name: timescaledb
    heritage: Tiller
    release: timescaledb
  name: timescaledb-incremental-daily
  namespace: timescaledb
  resourceVersion: "30260985"
  selfLink: /apis/batch/v1beta1/namespaces/timescaledb/cronjobs/timescaledb-incremental-daily
  uid: d027edd8-35fd-11ea-9c6a-0aca0a52dc65
spec:
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 1
  jobTemplate:
    metadata:
      creationTimestamp: null
      labels:
        app: timescaledb
        backup-type: incr
        chart: timescaledb-single-0.5.1
        cluster-name: timescaledb
        heritage: Tiller
        release: timescaledb
    spec:
      activeDeadlineSeconds: 60
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: timescaledb
            backup-type: incr
            chart: timescaledb-single-0.5.1
            cluster-name: timescaledb
            heritage: Tiller
            release: timescaledb
        spec:
          containers:
          - args:
            - --connect-timeout
            - "10"
            - --include
            - --silent
            - --show-error
            - --fail
            - --request
            - POST
            - --data
            - |
              {"type": "incr"
            - http://timescaledb-backup:8081/backups/
            command:
            - /usr/bin/curl
            image: curlimages/curl
            imagePullPolicy: Always
            name: timescaledb-incr
            resources: {}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
          dnsPolicy: ClusterFirst
          restartPolicy: OnFailure
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
  schedule: 12 02 * * 1-6
  successfulJobsHistoryLimit: 3
  suspend: false

The POST data is missing the closing }.
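
Presumably the intended argument is a complete JSON document, e.g.:

            - --data
            - |
              {"type": "incr"}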

Remove the need for plaintext PostgreSQL credentials

Currently, in values.yaml, the credentials for the PostgreSQL users are specified as follows:

credentials:
  admin: cola
  # the postgres and standby users are vital for Patroni, and therefore should not be
  # removed altogether.
  postgres: tea
  standby: pinacolada

The password for postgres is not strictly needed - Patroni connects using a socket locally and uses the standby user remotely.
pgBackrest also connects locally through the socket, which again does not require a password.

The standby user does need a way to authenticate over the network - we might be able to use certificates here instead of passwords.

There is no easy solution for the admin user: Somehow an application will need to connect with a secret.

Some possibilities:

Allow a secret to be referenced for the passwords

Inform users that providing a hash also works

This does not seem very user friendly, but it does already work, and would remove the strict need for a literal password in the values.yaml:

credentials:
  admin: SCRAM-SHA-256$4096:PXzHoUaqasigcbsBiXi5MQ==$Eguayat8bXgwC19CvH1dAlsmh2Mkj2C7wtOOLsISkEI=:YtyaUsh7sYrdYZlcHVB/HUlIzg/du67PFiOILfcrf1c=

Remove postgres password altogether

The major downside of this approach is that some extensions require superuser privileges to be created/updated etc.
That mostly can be mitigated by configuring pgextwlist.

Allow standby to authenticate using an SSL certificate.

As all pods of the StatefulSet use the same Secret as their TLS certificate, this sounds easy; however, we need to take care of the case where the certificate changes, for example:

  • Cert A is issued to the StatefulSet.
  • Every PostgreSQL server will allow standby to connect using certificate A
  • Cert A is replaced with Cert B
    • If a replica pod is rescheduled, it cannot connect to the primary, as it only accepts A
    • If a master pod is rescheduled, a replica will become master, which only accepts A

Possible solution: Keep a list of certificates that are allowed, not only the last one.
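
For the certificate option, a rough sketch of the pg_hba entry that would be needed (untested; assumes the standby client certificate's CN matches the standby user, or that a suitable username map is configured):

patroni:
  postgresql:
    pg_hba:
      - hostssl   replication     standby            all                cert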

Provide instructions on how to install on OpenShift

Currently it's not clear whether or how to install the timescaledb-single chart on OpenShift 4.x.

Running it unchanged gives the following error, even when running as anyuid: ERROR: ObjectCache.run ApiException()

A better explanation of the steps to take would be very useful.

Support for Azure and gcp

Hey guys,

I see in the readme that this chart only supports AWS. Can anyone expand on what might be needed for support to be extended to Azure and/or GCP?

Thanks
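
One partial possibility for GCP (untested with this chart): pgBackRest can talk to Google Cloud Storage through its S3-compatible XML API using HMAC interoperability keys, so a values sketch could look like the following. Azure would need native pgBackRest repository support instead:

backup:
  enabled: true
  pgBackRest:
    repo1-type: s3
    repo1-s3-endpoint: storage.googleapis.com
    repo1-s3-region: us-east1
    repo1-s3-bucket: <backup_bucket>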

Allow to load env vars using `envFrom` references

The current Helm values allow referencing single entries in ConfigMaps or Secrets using env. A more convenient way to reference all entries at once is envFrom.

This would be especially helpful for processing auto-generated secrets as a whole without changing the Helm values.

I will submit a PR for this shortly.
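
A sketch of the proposed value (the envFrom key is the proposal here, not an existing chart value), loading every entry of a Secret or ConfigMap at once; the referenced names are illustrative:

envFrom:
  - secretRef:
      name: timescaledb-generated-credentials
  - configMapRef:
      name: timescaledb-extra-env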

Patroni list all timescaleDB servers but I have no leaders

Hello,

I am trying to launch the single-node chart in my Kubernetes cluster, but I have a big problem: connections to the leader service through the AWS NLB time out.

I connected to the servers and ran patronictl list. It lists all my TimescaleDB servers, but none of them is the leader and the cluster is uninitialized:

+ Cluster: timescaledb-single (uninitialized) +---------+----+-----------+
|        Member        |      Host     | Role |  State  | TL | Lag in MB |
+----------------------+---------------+------+---------+----+-----------+
| timescaledb-single-0 | 172.31.13.213 |      | running |  1 |         0 |
| timescaledb-single-1 | 172.31.32.227 |      | running |  1 |         0 |
| timescaledb-single-2 |  172.31.25.84 |      | running |  1 |         0 |
+----------------------+---------------+------+---------+----+-----------+

Is this normal?

This is my values.yaml passed to the chart:

replicaCount: 3
loadBalancer:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
persistentVolumes:
  data:
    size: 10Gi
secretNames:
  credentials: credentials
  certificate: certificate
  # pgbackrest: pgbackrest
patroni:
  postgresql:
    pg_hba:
    - local     all             all                                   peer
    - hostssl   all             all                127.0.0.1/32       md5
    - hostssl   all             all                ::1/128            md5
    - hostssl   replication     standby            all                md5
    - hostssl   all             all                all                md5
    - host      all             all                all                md5

Upgrading a Helm release gets stuck at "PENDING_UPGRADE" due to backup job pods (which are completed, so not in the READY state)

Description

We had an existing timescale release. We decided to up the max_connections by running:

$ helm upgrade -n xyz --install <name>  timescaledb/timescaledb-single ... --set patroni.bootstrap.dcs.postgresql.parameters.max_connections=750 ... --version=0.5.5

However, it's perpetually stuck at this state until the timeout, when it fails:

helm history timescaledb-live
REVISION	UPDATED                 	STATUS         	CHART                   	APP VERSION	DESCRIPTION
1       	Wed Mar 11 19:53:04 2020	SUPERSEDED     	timescaledb-single-0.5.3	           	Install complete
2       	Thu Apr  2 10:00:39 2020	DEPLOYED       	timescaledb-single-0.5.5	           	Upgrade complete
3       	Mon Apr 13 14:24:21 2020	FAILED         	timescaledb-single-0.5.5	           	Upgrade "timescaledb-xyz" failed: timed out waiting for ...

Before revision 3 switched to FAILED after timing out, it was at:

3       	Mon Apr 13 XX:XX:XX 2020	PENDING_UPGRADE	timescaledb-single-0.5.5	           	Preparing upgrade

Environment

Chart version: 0.5.5
Helm version: v2.16.5
Kubernetes version: 1.14

Current Workaround

It looks like this job never gets triggered: https://github.com/timescale/timescaledb-kubernetes/blob/master/charts/timescaledb-single/templates/job-update-patroni.yaml

To work around this, I tried to manually do these steps:

  1. Update patroni configmap to change value for "max_connections" from 100 to 300:

$ kubectl edit configmap timescaledb-xyz-patroni -n live
configmap/timescaledb-xyz-patroni edited

  2. Now, curl the patroni server to get the current configuration:

$ kubectl exec -it timescaledb-xyz-0 -c timescaledb  -n live -- bash
postgres@timescaledb-xyz-0:~$ curl http://timescaledb-xyz-config:8008/config

  3. Step (2) spits out a JSON document; copy it, edit max_connections to 300, then PATCH it back to the server:

postgres@timescaledb-xyz-0:~$ curl --connect-timeout 10 --include --silent --show-error --fail -X PATCH --data '<TheJSONFromAbove>' -H "content-type: application/json"  "http://timescaledb-xyz-config:8008/config"

  4. Delete the pod so the statefulset can bring it back up again with the new config.

Other notes

I noticed --set of the dcs config worked fine on a test cluster where timescaledb was deployed on the default namespace. Not sure if the fact this error occurs on clusters where it's deployed on a non-default namespace is related.

pgBackRest fails if encryption is enabled and repo1-cipher-pass is not set

https://github.com/timescale/timescaledb-kubernetes/blob/master/charts/timescaledb-single/values.yaml#L55
By default pgBackRest encryption is disabled. If we want to use the aes-256-cbc cipher, a passphrase is required, set via the repo1-cipher-pass value.

Example:

backup:
  enabled: true
  pgBackRest:
    # https://pgbackrest.org/configuration.html
    process-max: 4
    start-fast: "y"
    repo1-retention-diff: 2
    repo1-retention-full: 2
    repo1-s3-region: eu-central-1
    repo1-s3-bucket: <backup_name>
    repo1-s3-endpoint: s3.amazonaws.com
    repo1-type: s3
    repo1-cipher-type: aes-256-cbc
    repo1-cipher-pass: <long_passphrase_for_encryption>

Upgrade disk space forbidden

Hello,

I tried to upgrade the disk space directly through the Helm values. I use EBS storage with
allowVolumeExpansion: true in the StorageClass used.

When I upgrade my deployed chart, I get this error:

Failed to install app timescaledb. Error: UPGRADE FAILED: failed to replace object: StatefulSet.apps "timescaledb" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden

Unable to upgrade to chart version 0.5.4 (on Helm2 client)

Steps to reproduce

  1. Have a timescale v0.5.3 on your cluster (see "Environment" below):

    $ helm list
    NAME                    	REVISION	UPDATED                 	STATUS  	CHART                       	APP VERSION	NAMESPACE
    ...
    timescale               	9       	Tue Mar 3 19:03:37 2020	DEPLOYED	timescaledb-single-0.5.3    	           	<redacted>
    
  2. Try to upgrade to v0.5.4:

    $ helm upgrade --install timescale  timescaledb/timescaledb-single --set  ... --version=0.5.4
    

    Results in:

    Error: parse error in "timescaledb-single/templates/statefulset-timescaledb.yaml": template: timescaledb-single/templates/statefulset-timescaledb.yaml:225: function "concat" not defined
    

Workaround available

Stick to 0.5.3; we're unable to upgrade our helm client/server to v3 due to a deployment system that currently requires v2 (as v3 is not backwards compatible)

Environment

Kubernetes version 1.14.9
Helm Client/Server: 2.14.2

Deployment Failed on Kubernetes

Hi,

Thanks for the Helm chart. I'm trying to deploy this on a local cluster with the following add-ons enabled for Kubernetes:

  1. DNS
  2. Registry
  3. Dashboard

After running microk8s helm install --name timescale-db charts/timescaledb-single --set credentials.postgres="Password" --set credentials.admin="Password" --set credentials.standby="Password", I can see various services being created, but the default pod fails to start, with the following logs:

microk8s kubectl logs pod/timescale-db-timescaledb-0 -n default
install: cannot change owner and permissions of ‘/var/lib/postgresql/data’: No such file or directory
install: cannot change owner and permissions of ‘/var/lib/postgresql/wal/pg_wal’: No such file or directory.

Is there anything which I need to change in config?

Running generate_kustomization.sh fails on mac

I downloaded the helm chart via the suggested command:

helm pull timescale/timescaledb-single --untar

And then attempted to run generate_kustomization.sh on my Mac.

The following errors occurred:

sed: ./kustomize/example/kustomization.yaml: No such file or directory
tr: Illegal byte sequence
tr: Illegal byte sequence
tr: Illegal byte sequence
Generating a RSA private key
..............................................................................................................++++
............++++

Not only is the example directory missing, but macOS doesn't like the tr invocation. To get around the example error I had to download the example directory manually, and to get around the tr error I had to add this to the generate_kustomization.sh script:

export LC_CTYPE=C

pgbackrest container can't start when running on slave pod

When using the backup feature of the helm chart, only the master pod's pgbackrest container can start up; the log of the slave container is as below:

ERROR: [027]: primary database not found
       HINT: check indexed pg-path/pg-host configurations
2019-12-23 06:34:28 - bootstrap - Creating pgBackrest stanza
INFO: stanza-create command begin 2.19: --compress-level=3 --config=/etc/pgbackrest/pgbackrest.conf --log-level-stderr=info --pg1-path=/var/lib/postgresql/data --pg1-port=5432 --pg1-socket-path=/var/run/postgresql --repo1-cipher-type=none --repo1-path=/timescaledb-single/pg-test --repo1-s3-bucket=timescaledb-backup --repo1-s3-endpoint=minio.haorun.win --repo1-s3-key=<redacted> --repo1-s3-key-secret=<redacted> --repo1-s3-region=us-east-2 --no-repo1-s3-verify-tls --repo1-type=s3 --stanza=poddb
ERROR: [056]: unable to find primary cluster - cannot proceed
INFO: stanza-create command end: aborted with exception [056]

I checked the pgbackrest issue pgbackrest/pgbackrest#878; by default only the master DB can be backed up.

But in a StatefulSet, when pgbackrest fails to start, the timescaledb-single-replica service won't have available endpoints, making it impossible to connect to the slave.

nodeport or hostport support?

NodePort or hostPort is useful for debugging or daily maintenance from a PC. I tried to write a NodePort service, but sometimes I get an error like:

cannot execute CREATE TABLE in a read-only transaction

How do I write a correct NodePort service? And can you add official support for it?
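
For reference, a NodePort service sketch that targets only the current primary, so writes don't land on a read-only replica. The selector assumes the Patroni-managed role: master label (and an app: timescaledb label); verify both against your pods' actual labels before using it:

apiVersion: v1
kind: Service
metadata:
  name: timescaledb-nodeport
spec:
  type: NodePort
  selector:
    app: timescaledb
    role: master
  ports:
    - port: 5432
      targetPort: 5432
      nodePort: 30432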

timescale can't start up due to /var/lib/postgresql/data not created

I'm using this helm chart to create an HA timescale cluster, but the timescale pod can't run.

Here are my helm values and the pod log.

---
persistentVolumes:
  wal:
    storageClass: "timescaledb-wal-local-volume"
    enabled: "true"
  data:
    storageClass: "timescaledb-data-local-volume"
    enabled: "true"
loadBalancer:
  enabled: "false"
prometheus:
  enabled: "true"

pod log

install: cannot change owner and permissions of ‘/var/lib/postgresql/data’: No such file or directory
install: cannot change owner and permissions of ‘/var/lib/postgresql/wal/pg_wal’: No such file or directory
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "C.UTF-8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
initdb: could not create directory "/var/lib/postgresql/data": Permission denied
pg_ctl: database system initialization failed
Process Process-1:
Traceback (most recent call last):
  File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3/dist-packages/patroni/__init__.py", line 160, in patroni_main
    patroni.run()
  File "/usr/lib/python3/dist-packages/patroni/__init__.py", line 125, in run
    logger.info(self.ha.run_cycle())
  File "/usr/lib/python3/dist-packages/patroni/ha.py", line 1344, in run_cycle
    info = self._run_cycle()
  File "/usr/lib/python3/dist-packages/patroni/ha.py", line 1253, in _run_cycle
    return self.post_bootstrap()
  File "/usr/lib/python3/dist-packages/patroni/ha.py", line 1149, in post_bootstrap
    self.cancel_initialization()
  File "/usr/lib/python3/dist-packages/patroni/ha.py", line 1144, in cancel_initialization
    raise PatroniException('Failed to bootstrap cluster')
patroni.exceptions.PatroniException: 'Failed to bootstrap cluster'
creating directory /var/lib/postgresql/data ...

Changes in the dcs configuration are never applied

When the dcs section of the Helm chart values is changed, for example:

patroni:
  bootstrap:
    dcs:
      postgresql:
        parameters:
          max_wal_size: 1234MB

This change is correctly set on the ConfigMap that feeds Patroni, but the changes are not applied to the RELEASE-config endpoint, which is watched by Patroni.

To make this work correctly, we need to somehow patch the configuration of the RELEASE-config endpoint if and only if the configuration has changed:

kubectl get ep/RELEASE-config -o json \
  | jq '.metadata.annotations.config' -r \
  | jq '.postgresql.parameters.work_mem'
"16MB"
