
zammad / zammad-helm


Zammad Helm chart for Kubernetes

Home Page: https://artifacthub.io/packages/helm/zammad/zammad

License: GNU Affero General Public License v3.0

Mustache 100.00%
zammad kubernetes helm docker ruby


zammad-helm's People

Contributors

annismckenzie, aslubsky, cabillman, dependabot[bot], devplayer0, galexrt, grafjo, guyguy333, hanneshal, ifrido, jasjukaitis, jhlav, juliusrickert, klml, mgruner, monotek, mrgeneration, ohemelaar, pavels, pcfens, playworker, possani, rkaldung, sniper7kills, stijnbrouwers, swistaczek, timoschwarzer, zifeo, ziodave, zoomoid


zammad-helm's Issues

persistence.storageClass value is not applied to every PersistentVolumeClaim

Version of Helm and Kubernetes:
1.24, helm 3.8.2

What happened:
After installing chart versions 6.7.0 and 6.7.1, I've noticed that the only PVC that gets the storageClassName attribute configured in values.yml (as persistence.storageClass) is the one for the release-name-zammad StatefulSet.

What you expected to happen:
All four PVCs need to have the correct storageClass defined via values.

How to reproduce it (as minimally and precisely as possible):
with values:

persistence:
  storageClass: host-storage #This is the name of my storageClass

run:
helm template --debug -f values.yml zammad/zammad

Anything else we need to know:
I have defined storageclass and volumes like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: host-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zammad-postgres-pv
spec:
  storageClassName: host-storage
  capacity:
    storage: 8Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/pv/zammad/postgres"

The other PVs are the same, with their own metadata and path.

postgresql.postgresqlUsername is in the values file, but isn't used in the chart

What happened:
I tried to use postgresql.postgresqlUsername

What you expected to happen:
The services should connect to the (externally deployed) database with this username, but they still attempt to connect as the "postgres" user.

Anything else we need to know:
Searching your repo, "postgresqlUsername" is only found in values.yaml and README.md
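
As a possible workaround (an assumption based on the envConfig block that shows up in other issues on this page, not a confirmed fix), the chart seems to read the database user from envConfig.postgresql.user rather than postgresql.postgresqlUsername; a minimal values sketch:

envConfig:
  postgresql:
    host: my-external-db.example.com   # placeholder hostname
    port: 5432
    db: zammad
    user: zammad_custom_user           # the key that appears to be consumed by the templates
    pass: changeme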

apps/v1

C:\Users\shaba>helm upgrade --install zammad zammad/zammad --namespace=zammad
Release "zammad" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: [unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta1", unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta1", unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta2"]

I can't find any references to apps/v1beta1 in the chart itself; it seems to be coming from remote dependencies. Unfortunately, this means I can't install it on my cluster, since it runs a later Kubernetes version.

Version of Helm and Kubernetes:

C:\Users\shaba>helm version
version.BuildInfo{Version:"v3.0.1", GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa", GitTreeState:"clean", GoVersion:"go1.13.4"}

C:\Users\shaba>kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}

What happened:
Deployment failed

What you expected to happen:
Deployment succeeds

How to reproduce it (as minimally and precisely as possible):
Deploy on a cluster where only apps/v1 is served (the v1beta APIs have been removed)

Anything else we need to know:
The apps/v1beta1/v1beta2 APIs for StatefulSet were removed around 1.16, I believe. You may be able to replicate it there if that helps.
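
To check which apps API versions a cluster still serves before installing (plain kubectl, no chart-specific assumptions):

kubectl api-versions | grep '^apps/'
# on a 1.16+ cluster this typically prints only apps/v1,
# so manifests templated as apps/v1beta1 or apps/v1beta2 will fail as above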

Splitting the services and horizontal scaling

Hi,

Hope it's OK I'm skipping the template, as this is more of an architectural thing.

I've been running Zammad for a couple of months now, in Kubernetes, with rather high adoption within our organisation. Both the users and I have been very pleased with the application.

However, the Kubernetes implementation is lacking in several respects when it comes to creating something stable, especially splitting the services inside the zammad StatefulSet into individual workloads. I assume the StatefulSet does not scale horizontally as it is, right? The hardcoded spec.replicas gives a hint ;)

I'm not very well versed in Rails, but if there's something I can offer in terms of Kubernetes and Helm charts, I'm happy to help out.

I have a lot of various smaller systems running in Kubernetes, basically using it as a PaaS to rapidly deploy things like Zammad. In order to have databases uniform and backed up, I use KubeDB, which offers Elasticsearch, Postgres and memcached, so it was very easy to apply to Zammad. I would recommend making this a requirement for the Zammad Helm chart, to relieve some of the burden on the Helm chart and focus on the actual Zammad deployment.

If someone who's well versed in Zammad has an interest in getting this improved, please reach out and I will help with my mediocre Kubernetes knowledge :)

rsync permission denied (13) zammad-init container

Is this a request for help?:

yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

bug report?

Version of Helm and Kubernetes:

Kubernetes 1.20.5
Helm 3.5.4

What happened:

Zammad-init container is throwing permission denied errors

What you expected to happen:

working zammad pods

How to reproduce it (as minimally and precisely as possible):

I created a PersistentVolume, then used:

helm repo add zammad https://zammad.github.io/zammad-helm
helm upgrade --install zammad zammad/zammad --namespace=zammad -f values.yml (because I added the persistentVolume)

Anything else we need to know:

All I did was use the commands from https://docs.zammad.org/en/latest/install/kubernetes.html, then create a pv.yaml with the content below and use its storageClassName in values.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 15Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /opt
  storageClassName: pv1

The zammad-0 pod log then shows permission denied (13) errors for /opt/zammad.
Since I'm very new to Kubernetes and Helm, it could very well be that I just don't know what I'm doing and my configuration breaks things, or that I'm missing something about how to get Zammad working via Helm, so please bear with me.
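
A common workaround for this kind of hostPath permission error (an assumption, not an official fix from the chart) is to make the host directory writable by the UID/GID the Zammad containers actually run as:

# check which UID/GID the zammad image runs as (the 'zammad' user; often 1000, but verify)
kubectl -n zammad exec zammad-0 -c zammad-init -- id
# then, on the node that backs the hostPath volume (replace <uid>/<gid> with the values shown above):
sudo chown -R <uid>:<gid> /opt

Using a dedicated subdirectory (e.g. /data/zammad) instead of /opt as the hostPath also avoids mixing Zammad data with existing system files.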

Helm chart stability issue, constant pod restarts unless scheduler pod locks up

Is this a request for help?: Yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG

Version of Helm and Kubernetes: 3, 1.19

What happened:
We have been experiencing stability issues with the system, causing users to only be able to use it in 10-15 minute bursts. In order to stop the sidecars from restarting every 5-10 minutes, we disabled the health checks. With health checks disabled we prevent the 502s; however, the system alternates between bursts of speed and periods of extreme slowness (1-2 minute page loads).

What you expected to happen: System is speedy 24/7

How to reproduce it (as minimally and precisely as possible):

Install the Helm chart, set up multiple email groups and checks.

We also have LDAP enabled.

Anything else we need to know: We believe we have narrowed the issue down to the scheduler sidecar.

When the scheduler sidecar fails to run the jobs (when everything is in sleep and the entire pod needs to be restarted), the system stays stable and the sidecars stop restarting. However, we then stop receiving inbound tickets.

zammad-scheduler I, [2021-02-20T13:30:31.879427 #1-47083127478620] INFO -- : Running job thread for 'Check Channels' (Channel.fetch) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.879579 #1-47083127478620] INFO -- : Running job thread for 'Import OTRS diff load' (Import::OTRS.diff_worker) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.879653 #1-47083127478620] INFO -- : Running job thread for 'Generate Session data' (Sessions.jobs) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.879722 #1-47083127478620] INFO -- : Running job thread for 'Process pending tickets' (Ticket.process_pending) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.879819 #1-47083127478620] INFO -- : Running job thread for 'Process escalation tickets' (Ticket.process_escalation) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.879902 #1-47083127478620] INFO -- : Running job thread for 'Process auto unassign tickets' (Ticket.process_auto_unassign) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.880004 #1-47083127478620] INFO -- : Running job thread for 'Check streams for Channel' (Channel.stream) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.880069 #1-47083127478620] INFO -- : Running job thread for 'Import Jobs' (ImportJob.start_registered) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.880825 #1-47083127478620] INFO -- : Running job thread for 'Delete old online notification entries.' (OnlineNotification.cleanup) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.880909 #1-47083127478620] INFO -- : Running job thread for 'Closed chat sessions where participients are offline.' (Chat.cleanup_close) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.880977 #1-47083127478620] INFO -- : Running job thread for 'Execute jobs' (Job.run) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.881042 #1-47083127478620] INFO -- : Running job thread for 'Generate user based stats.' (Stats.generate) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.881360 #1-47083127478620] INFO -- : Running job thread for 'Handle data privacy tasks.' (DataPrivacyTaskJob.perform_now) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.881434 #1-47083127478620] INFO -- : Running job thread for 'Cleanup closed sessions.' (Chat.cleanup) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.881533 #1-47083127478620] INFO -- : Running job thread for 'Cleanup expired sessions' (SessionHelper.cleanup_expired) status is: sleep


We had 105 restarts in 17h

Fix release notes URL construction

Is this a request for help?: No


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Version of Helm and Kubernetes: v1.21.1+k3s1

What happened: On deployment using an ingress, the URL constructed in NOTES.txt is wrong due to a schema change from a single string type to a map containing pathType and path values.

What you expected to happen: A fully qualified URL to reach the zammad frontend

How to reproduce it (as minimally and precisely as possible): Enable ingress in values.yaml, add a path and set any pathType. Then deploy the chart and wait for the output.
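
For reference, a sketch of the newer ingress schema involved (the path/pathType key names are taken from the issue description; the exact chart schema should be checked against its values.yaml):

ingress:
  enabled: true
  hosts:
    - host: zammad.example.com
      paths:
        - path: /
          pathType: Prefix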

Anything else we need to know:

Backup doesn't work on k8s

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Version of Helm and Kubernetes:

  • Minikube 1.14.1
  • Helm 2.13.1

What happened:
I wanted to run a backup from any Zammad container (checked all: railsserver, websocket and scheduler).

What you expected to happen:
The backup works from at least one of the containers, or there is an option to configure a separate pod that can do backups periodically.

How to reproduce it (as minimally and precisely as possible):

  • deploy zammad on minikube
  • exec into any zammad container
  • configure backup (rename file and create backup dir)
  • try to do a backup

Anything else we need to know: -

Problem with helm deployment

Is this a request for help?: Yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Version of Helm and Kubernetes: helm v3.3.1 and Kubernetes 1.19

What happened: When I deploy the chart with the basic values, the zammad-0 pod of the zammad StatefulSet doesn't go live. I can see the following errors in the nginx container:

(Screenshot of the nginx container errors, 2020-11-29 20:01:12)

Note that the railsserver and websocket containers do not log any messages.

How to reproduce it (as minimally and precisely as possible): just a basic deployment with the latest helm chart and the default values.yaml

Thank you

This is the result of kubectl describe on the pod

Name:         zammad-0
Namespace:    zammad
Priority:     0
Node:         scw-internal-polynom-cluster-default-df9480afb/10.64.140.135
Start Time:   Sat, 28 Nov 2020 06:29:26 +0100
Labels:       app.kubernetes.io/instance=zammad
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=zammad
              app.kubernetes.io/version=3.6.0
              controller-revision-hash=zammad-6bc6965c8
              helm.sh/chart=zammad-3.1.0
              statefulset.kubernetes.io/pod-name=zammad-0
Annotations:  <none>
Status:       Running
IP:           100.64.6.78
IPs:
  IP:           100.64.6.78
Controlled By:  StatefulSet/zammad
Init Containers:
  zammad-init:
    Container ID:   docker://2c7a23893e0c2c83287125c56cd71cae2626d40d52e5af113d6d427eb8a39c23
    Image:          zammad/zammad-docker-compose:zammad-3.6.0-1
    Image ID:       docker-pullable://zammad/zammad-docker-compose@sha256:24b942e6c200a0acdeab31936715757d3a18d6db8007333949b0d990ef0eb4dd
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sat, 28 Nov 2020 06:30:11 +0100
      Finished:     Sat, 28 Nov 2020 06:30:13 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /docker-entrypoint.sh from zammad-init (ro,path="zammad-init")
      /opt/zammad from zammad (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kmsdp (ro)
  postgresql-init:
    Container ID:   docker://ad626c966124d9ba024f50039b565e6225d7ca9fa7ea1dbf4a849c83b1d75df9
    Image:          zammad/zammad-docker-compose:zammad-3.6.0-1
    Image ID:       docker-pullable://zammad/zammad-docker-compose@sha256:24b942e6c200a0acdeab31936715757d3a18d6db8007333949b0d990ef0eb4dd
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sat, 28 Nov 2020 06:30:14 +0100
      Finished:     Sat, 28 Nov 2020 06:30:51 +0100
    Ready:          True
    Restart Count:  0
    Environment:
      POSTGRESQL_PASS:  <set to the key 'postgresql-pass' in secret 'zammad-postgresql-pass'>  Optional: false
    Mounts:
      /docker-entrypoint.sh from zammad-init (ro,path="postgresql-init")
      /opt/zammad from zammad (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kmsdp (ro)
  elasticsearch-init:
    Container ID:   docker://dd53ec2a7b37e8f05f9d07109e2367bbc9b30aeb4c63c218c17470c6e049cab0
    Image:          zammad/zammad-docker-compose:zammad-3.6.0-1
    Image ID:       docker-pullable://zammad/zammad-docker-compose@sha256:24b942e6c200a0acdeab31936715757d3a18d6db8007333949b0d990ef0eb4dd
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       OOMKilled
      Exit Code:    0
      Started:      Sat, 28 Nov 2020 06:39:03 +0100
      Finished:     Sat, 28 Nov 2020 06:40:00 +0100
    Ready:          True
    Restart Count:  5
    Environment:    <none>
    Mounts:
      /docker-entrypoint.sh from zammad-init (ro,path="elasticsearch-init")
      /opt/zammad from zammad (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kmsdp (ro)
Containers:
  zammad-nginx:
    Container ID:  docker://75ba2b23de4a80ce18cfb57dae232ffa680e61d040963e721790530eb613690a
    Image:         zammad/zammad-docker-compose:zammad-3.6.0-1
    Image ID:      docker-pullable://zammad/zammad-docker-compose@sha256:24b942e6c200a0acdeab31936715757d3a18d6db8007333949b0d990ef0eb4dd
    Port:          8080/TCP
    Host Port:     0/TCP
    Command:
      /usr/sbin/nginx
      -g
      daemon off;
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 29 Nov 2020 20:05:57 +0100
      Finished:     Sun, 29 Nov 2020 20:06:36 +0100
    Ready:          False
    Restart Count:  715
    Limits:
      cpu:     100m
      memory:  64Mi
    Requests:
      cpu:        50m
      memory:     32Mi
    Liveness:     http-get http://:8080/ delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:8080/ delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/nginx/sites-enabled from zammad-nginx (rw)
      /opt/zammad from zammad (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kmsdp (ro)
  zammad-railsserver:
    Container ID:  docker://f15cc3c0ebb2d05adebc50d81c6196c6ee041634aa1e3d7f2f63fb7eaca49521
    Image:         zammad/zammad-docker-compose:zammad-3.6.0-1
    Image ID:      docker-pullable://zammad/zammad-docker-compose@sha256:24b942e6c200a0acdeab31936715757d3a18d6db8007333949b0d990ef0eb4dd
    Port:          3000/TCP
    Host Port:     0/TCP
    Command:
      bundle
      exec
      rails
      server
      puma
      -b
      [::]
      -p
      3000
      -e
      production
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 29 Nov 2020 20:02:33 +0100
      Finished:     Sun, 29 Nov 2020 20:03:17 +0100
    Ready:          False
    Restart Count:  698
    Limits:
      cpu:     200m
      memory:  1Gi
    Requests:
      cpu:        100m
      memory:     512Mi
    Liveness:     http-get http://:3000/ delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:3000/ delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /opt/zammad from zammad (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kmsdp (ro)
  zammad-scheduler:
    Container ID:  docker://da66b8d3237941e1b20509ad29afc6f618b75ed190a7883ef22cd8c09b7dc831
    Image:         zammad/zammad-docker-compose:zammad-3.6.0-1
    Image ID:      docker-pullable://zammad/zammad-docker-compose@sha256:24b942e6c200a0acdeab31936715757d3a18d6db8007333949b0d990ef0eb4dd
    Port:          <none>
    Host Port:     <none>
    Command:
      bundle
      exec
      script/scheduler.rb
      run
    State:          Running
      Started:      Sat, 28 Nov 2020 06:40:02 +0100
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  512Mi
    Requests:
      cpu:        100m
      memory:     256Mi
    Environment:  <none>
    Mounts:
      /opt/zammad from zammad (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kmsdp (ro)
  zammad-websocket:
    Container ID:  docker://b70d0679484b11d997b5722d889ab56deba9d56393abb8e74374d2f358da325e
    Image:         zammad/zammad-docker-compose:zammad-3.6.0-1
    Image ID:      docker-pullable://zammad/zammad-docker-compose@sha256:24b942e6c200a0acdeab31936715757d3a18d6db8007333949b0d990ef0eb4dd
    Port:          6042/TCP
    Host Port:     0/TCP
    Command:
      bundle
      exec
      script/websocket-server.rb
      -b
      0.0.0.0
      -p
      6042
      start
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 29 Nov 2020 20:05:30 +0100
      Finished:     Sun, 29 Nov 2020 20:06:09 +0100
    Ready:          False
    Restart Count:  697
    Limits:
      cpu:     200m
      memory:  512Mi
    Requests:
      cpu:        100m
      memory:     256Mi
    Liveness:     tcp-socket :6042 delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:    tcp-socket :6042 delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /opt/zammad from zammad (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kmsdp (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  zammad:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  zammad-zammad-0
    ReadOnly:   false
  zammad-nginx:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      zammad-nginx
    Optional:  false
  zammad-init:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      zammad-init
    Optional:  false
  default-token-kmsdp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kmsdp
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                      From     Message
  ----     ------     ----                     ----     -------
  Warning  BackOff    57m (x10255 over 37h)    kubelet  Back-off restarting failed container
  Warning  Unhealthy  52m (x2046 over 37h)     kubelet  Liveness probe failed: HTTP probe failed with statuscode: 502
  Warning  Unhealthy  32m (x1901 over 37h)     kubelet  Readiness probe failed: HTTP probe failed with statuscode: 502
  Warning  Unhealthy  17m (x2080 over 37h)     kubelet  Liveness probe failed: Get "http://100.64.6.78:3000/": dial tcp 100.64.6.78:3000: connect: connection refused
  Warning  Unhealthy  7m56s (x2224 over 37h)   kubelet  Readiness probe failed: dial tcp 100.64.6.78:6042: connect: connection refused
  Warning  BackOff    2m46s (x10685 over 37h)  kubelet  Back-off restarting failed container

Why StatefulSet?

Why did you choose a StatefulSet for deploying Zammad, instead of a Deployment?

Using existingClaim with Helm chart

Is this a request for help?:


this is a BUG REPORT

Version of Helm and Kubernetes: helm v3.4.0 , EKS v1.17

What happened:

k get po
NAME                                      READY   STATUS    RESTARTS   AGE
zammad-dev-0                              4/4     Running   0          2d10h
zammad-memcached-674dbf5d47-789w6         1/1     Running   0          2d10h
zammad-master-0                           1/1     Running   0          2d10h

I need to specify a specific PVC for elasticsearch (zammad-master-0) when performing an install via the helm chart with the existingClaim value.

The PVC has been mounted by zammad-dev-0 and not by zammad-master-0.

What you expected to happen:

I expect the PVC to be mounted only by zammad-master-0.

How to reproduce it (as minimally and precisely as possible):
1- Installed zammad

helm install zammad-dev zammad/zammad  --namespace zammad --values=zammad-values.yaml  --version=3.4.0

Two PVCs were created for the StatefulSet components:

k get pvc

NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
zammad-dev-zammad-0             Bound    pvc-5896006c-3efa-492b-b55d-6e3ad3a7ce0d   15Gi       RWO            gp2            2d19h
zammad-master-zammad-master-0   Bound    pvc-2a10fb93-aea0-4169-b7aa-76f94fe5c522   30Gi       RWO            gp2            2d18h

2-Uninstall zammad

 helm uninstall zammad-dev

3-Delete elasticsearch PVC

k delete pvc zammad-master-zammad-master-0

4-Create my own PVC

k get pvc
NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
My-pvc-zammad-master            Bound    pv-zammad                                  30Gi       RWO            gp2            4d18h
zammad-dev-zammad-0             Bound    pvc-5896006c-3efa-492b-b55d-6e3ad3a7ce0d   15Gi       RWO            gp2            2d19h

5-Reinstall Zammad with existingClaim = My-pvc-zammad-master

helm install zammad-dev zammad/zammad  --namespace zammad --values=zammad-values.yaml  --version=3.4.0

My-pvc-zammad-master is mounted by zammad-dev-0, and zammad-master-0 created a new PVC with a 30Gi storage size.

k describe pvc zammad-master-zammad-master-0
Name:          zammad-master-zammad-master-0
Namespace:     zammad
StorageClass:  gp2
Status:        Bound
Volume:        pvc-2a10fb93-aea0-4169-b7aa-76f94fe5c522
Labels:        app=zammad-master
Finalizers:    [kubernetes.io/pvc-protection snapshot.storage.kubernetes.io/pvc-as-source-protection]
Capacity:      30Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    zammad-master-0
Events:        <none>
k describe pvc My-pvc-zammad-master 
Name:          pvc-zammad-bilel-master
Namespace:     zammad
StorageClass:  gp2
Status:        Bound
Volume:        pv-zammad
Labels:        <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      30Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    zammad-dev-0
Events:        <none>

Anything else we need to know:

cat zammad-values.yaml 
envConfig:
  postgresql:
    db: zammad
    host: xxxxx
    pass: xxxxx
    port: 5432
    user: zammad
ingress:
  enabled: true
  hosts:
  - host: xxxxx
    paths:
    - /
persistence:
  existingClaim: My-pvc-zammad-master

How can I specify an existingClaim for each of the StatefulSet objects (zammad-dev-0 and zammad-master-0)?
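
One possible approach (a sketch built on assumptions: persistence.existingClaim is only consumed by the zammad StatefulSet, and the bundled Elasticsearch subchart honours a volumeClaimTemplate block as shown in another issue on this page) is to bind zammad-master-0 to a pre-created PV via a label selector instead:

elasticsearch:
  volumeClaimTemplate:
    selector:
      matchLabels:
        app: zammad-es-data   # hypothetical label set on the pre-created PersistentVolume
    resources:
      requests:
        storage: 30Gi
persistence:
  existingClaim: ""           # leave empty unless the claim is meant for zammad-dev-0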

livenessProbe timeouts too low

Version of Helm and Kubernetes:

Kubernetes Version: v1.18.3

What happened:

The zammad-railsserver gets restarted often and prematurely due to low livenessProbe timeouts. The helm chart does not set a timeout for the livenessProbe (so it defaults to 1) and does not supply a possibility to customize the timeouts.

What you expected to happen:

The helm chart should provide a sensible timeout for zammad, or a method to customize the livenessProbes (instead of turning them off completely).

How to reproduce it (as minimally and precisely as possible):

Run zammad with this helm chart.

Anything else we need to know:
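
Until the chart exposes probe settings, a stop-gap (a sketch, assuming the railsserver is the second container of the zammad StatefulSet as the pod description in another issue here suggests; note that the next helm upgrade will revert it) is to patch the timeout directly:

kubectl -n zammad patch statefulset zammad --type=json -p='[
  {"op": "replace",
   "path": "/spec/template/spec/containers/1/livenessProbe/timeoutSeconds",
   "value": 5}
]'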

Allow modification of pooled connections / service crash

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:
3, 1.20

What happened:
zammad-scheduler pod goes into crashloopbackoff crashing entire service

What you expected to happen:
zammad-scheduler stays up

How to reproduce it (as minimally and precisely as possible):
Please create an option to increase the workers and connection pool. The current limit of 5 on the zammad-scheduler causes the entire service to crash leading to service disruption.

ActiveRecord::ConnectionTimeoutError: could not obtain a connection from the pool within 5.000 seconds (waited 5.023 seconds); all pooled connections were in use.

Pod restarts

Anything else we need to know:

API : Error ID rn0Mel5q: Please contact your administrator


Is this a BUG REPORT?:

Version of Helm and Kubernetes:

Chart version: zammad-3.0.0
Kubernetes version: v1.15.11
DB:Postgresql 11.5

What happened:
We get a random error when we perform a search for tickets / groups.

"Error ID rn0Mel5q: Please contact your administrator"

Database logs:

[2020-12-24T07:49:40.588575 #1-69853656656080] ERROR -- : Error ID dd-TPKCZ: PG::InFailedSqlTransaction: ERROR: current transaction is aborted, commands ignored until end of transaction block
: SELECT "active_job_locks".* FROM "active_job_locks" WHERE "active_job_locks"."lock_key" = $1 LIMIT $2 FOR UPDATE

What you expected to happen:

Get the list of tickets / groups

Upgrade from 4.0.7 to 4.1.0 not possible

It throws an error; it seems like something in the template is broken.

zammad$ helm upgrade zammad zammad/zammad -f values.yaml
coalesce.go:165: warning: skipped value for extraEnv: Not a table.
Error: UPGRADE FAILED: template: zammad/templates/ingress.yaml:45:21: executing "zammad/templates/ingress.yaml" at <.path>: can't evaluate field path in type interface {}

Running zammad as NONROOT still has an issue in the postgresql-init container

Is this a request for help?: YES


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG

Version of Helm and Kubernetes:
helm version

version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}

kubectl version

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.3-dhc", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"dirty", BuildDate:"2020-10-15T07:10:10Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}

What happened:
Trying to install zammad as a NONROOT user in a private cloud. It gets stuck in the postgresql-init container at Ruby-related commands.

What you expected to happen:
Zammad should be installed successfully.

How to reproduce it (as minimally and precisely as possible):
Downloaded the zammad-helm project from Git to local.
Created a myValues.yaml file:

image:
  repository: myRepoPath/zammad-docker-compose
  tag: zammad-3.6.0-65
  pullPolicy: Always
  imagePullSecrets: 
   - name: "my-pull-secret"

elasticsearch:
  enabled: false
  enableInitialisation: false
  
memcached:
  image:
      registry: myRegistry
      repository: myPath/memcached

postgresql:
  image:
      registry: myRegistry
      repository: myPath/postgresql

Install zammad in the private cloud using the helm command below:

helm install zammad . --values=myValues.yaml -n zammad

Anything else we need to know:
I am trying with the latest zammad v3.6.0-65, which has the fix for bug #82 in it.
First of all, thanks for your fixes in bug #82; the updated commands in the zammad-init container are working fine now.

Now when it reaches postgresql-init, it gets stuck at the "bundle exec rake db:migrate" step.

I have added a few echo statements to the templates/configmap-init.yaml file as shown below (in and around the if statement):

..........
  postgresql-init: |-
    #!/bin/bash
    set -e
    sed -e "s#.*adapter:.*#  adapter: postgresql#g" -e "s#.*database:.*#  database: {{ .Values.envConfig.postgresql.db }}#g" -e "s#.*username:.*#  username: {{ .Values.envConfig.postgresql.user }}#g" -e "s#.*password:.*#  password: ${POSTGRESQL_PASS}\\n  host: {{ if .Values.postgresql.enabled }}{{ .Release.Name }}-postgresql{{ else }}{{ .Values.envConfig.postgresql.host }}{{ end }}\\n  port: {{ .Values.envConfig.postgresql.port }}#g" < contrib/packager.io/database.yml.pkgr > config/database.yml
    echo "level 1"
    if ! (bundle exec rails r 'puts User.any?' 2> /dev/null | grep -q true); then
        echo "level 2"
        bundle exec rake db:migrate
        bundle exec rake db:seed
    else
        echo "level 3"
        bundle exec rake db:migrate
    fi
    echo "postgresql init complete :)"

If I check the logs, only the two statements below are printed (refer to the echo statements in the file above):

C:\Ajeet\zammad-helm-master-nonroot\zammad>kubectl logs zammad-0 -c postgresql-init -n zammad
level 1
level 2

I even tried to execute it manually as shown below, but it exits (with code 137) after a long time (>10 mins) and nothing shows up in the logs.

C:\Ajeet\zammad-helm-master-nonroot\zammad>kubectl exec -it zammad-0 -c postgresql-init -n zammad bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
zammad@zammad-0:~$ bundle exec rake db:migrate
command terminated with exit code 137

This is pod status

C:\Ajeet\zammad-helm-master-nonroot\zammad>kubectl get pod -n zammad
NAME                                READY   STATUS     RESTARTS   AGE
zammad-0                            0/4     Init:1/2   1          20m
zammad-memcached-5fbc5dc6db-hfxl6   1/1     Running    0          20m
zammad-postgresql-0                 1/1     Running    0          20m

When I describe pod zammad-0, it shows postgresql-init Terminated with exit code 1.

...........
  postgresql-init:
    Container ID:   containerd://d36149fd04a99e462ec33cef9d373e1f496d39aa666d9e9c7eb48581f9812610
    Image:          myRepoPath/zammad-docker-compose:zammad-3.6.0-65
    Image ID:       myRepoPath/zammad-docker-compose@sha256:743a6a93e0744738f396438869f082068a3e627747ed96a0c6000ce890485933
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 08 Mar 2021 23:20:29 +0530
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 08 Mar 2021 23:09:16 +0530
      Finished:     Mon, 08 Mar 2021 23:20:29 +0530
    Ready:          False
    Restart Count:  1
    Environment:
      POSTGRESQL_PASS:  <set to the key 'postgresql-pass' in secret 'zammad-postgresql-pass'>  Optional: false
    Mounts:
      /docker-entrypoint.sh from zammad-init (ro,path="postgresql-init")
      /opt/zammad from zammad (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-df875 (ro)

zammad-init fails with rsync error

Is this a request for help?: yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Version of Helm and Kubernetes: Helm 3.4.2, Kubernetes 1.19.7

What happened: the zammad-init container fails with the following errors:

... long list of files ...
rsync: failed to set times on "/opt/zammad/vendor/assets/stylesheets/.gitkeep.WccGxi": Operation not permitted (1)
rsync: failed to set times on "/opt/zammad/vendor/plugins/.gitkeep.vKQix2": Operation not permitted (1)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1207) [sender=3.1.3]

What you expected to happen: I expect zammad-init to run successfully

How to reproduce it (as minimally and precisely as possible): install zammad using helm

Anything else we need to know: ES and Postgres are provided, I can share my-values.yaml if needed.

AUTOWIZARD_JSON not base64 decoding.

Is this a request for help?:
No

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Version of Helm and Kubernetes:
helm :

version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}

Kubernetes:

>kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:14Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}

What happened:
Autowizard is trying to decode a secret which is already base64 decoded.

What you expected to happen:
Run to completion and pull in the config and start up completely.

How to reproduce it (as minimally and precisely as possible):

  1. Create a file autowizard.json
{
    "Token": "secret_zammad_autowizard_token",
    "TextModuleLocale": {
    "Locale": "en-us"
    },
    "Users": [
    {
        "login": "[email protected]",
        "firstname": "Zammad",
        "lastname": "Admin",
        "email": "[email protected]",
        "organization": "Shadowcom Test",
        "password": "testtest"
    }
    ],
    "Settings": [
    {
        "name": "product_name",
        "value": "ZammadTestSystem"
    },
    {
        "name": "system_online_service",
        "value": true
    }
    ],
    "Organizations": [
    {
        "name": "ZammadTest"
    }
    ]
}
  2. Created a secret from the config json file, as in the values file.
kubectl -n zammad create secret generic autowizard \
        --from-file=autowizard=${DEVDIR}/autowizard.json
  3. Set the values in values.yaml
secrets:
  autowizard:
    useExisting: true
autowizard:
  enable: true
  4. Install via helm
helm upgrade --install zammad-test zammad/zammad -n zammad --create-namespace \
    -f values.yaml

This appears to be caused by the following lines in zammad/templates/configmap-init.yaml. In the zammad-init
section there is the following code:

    if [ -n "${AUTOWIZARD_JSON}" ]; then
        echo "${AUTOWIZARD_JSON}" | base64 -d > auto_wizard.json
    fi

This value does not need to be base64 decoded, as the variable is mounted from a secret in zammad/templates/statefulset.yaml and is already decoded:

env:
  {{ if .Values.autoWizard.enabled }}
  - name: "AUTOWIZARD_JSON"
    valueFrom:
      secretKeyRef:
        name: {{ template "zammad.autowizardSecretName" . }}
        key: {{ .Values.secrets.autowizard.secretKey }}
  {{ end }}

I believe that to fix it, all you need to do is remove "| base64 -d" from the code above. I confirmed by modifying the chart that the variable already contains the base64-decoded text.
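
For clarity, the zammad-init snippet with that suggested change applied would read:

    if [ -n "${AUTOWIZARD_JSON}" ]; then
        echo "${AUTOWIZARD_JSON}" > auto_wizard.json
    fi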

Anything else we need to know:

The documentation in the chart's values.yaml file is confusing. It makes it appear that the user can simply uncomment the config section and it will be used. After looking at the code, it is clear the intention is to keep the config in a secret. I would recommend updating the comments to indicate that the config must be kept in a secret.

zammad-init fails with rsync command not found

Is this a request for help?: yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Version of Helm and Kubernetes: helm v3.6.3 and kubernetes v1.21.4

What happened:
the zammad-init container fails with the following error:

/usr/local/bin/docker-entrypoint.sh: line 3: rsync: command not found

kubectl logs -f pod/zammad-0 zammad-init -n zammad   # command used to get the logs

What you expected to happen:
I expect zammad-init to run successfully

How to reproduce it (as minimally and precisely as possible):
Used k8s Kubespray with a Flannel network,
and deployed via Helm with an existing database and Elasticsearch running standalone on a different machine.

Anything else we need to know:

Elasticsearch is not reachable

Is this a request for help?: Yes

Version of Helm and Kubernetes:

Helm v3.5.2
---
k3s version v1.20.4+k3s1 (838a906a)
go version go1.15.8
---
kubectl:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4+k3s1", GitCommit:"838a906ab5eba62ff529d6a3a746384eba810758", GitTreeState:"clean", BuildDate:"2021-02-22T19:49:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4+k3s1", GitCommit:"838a906ab5eba62ff529d6a3a746384eba810758", GitTreeState:"clean", BuildDate:"2021-02-22T19:49:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

What happened: Installation does not run successfully. Log from elasticsearch-init:
Unable to process GET request to elasticsearch URL 'http://zammad-master:9200'. Elasticsearch is not reachable, probably because it's not running or even installed.

How to reproduce it (as minimally and precisely as possible):
helm install zammad zammad/zammad --namespace zammad

Anything else we need to know:
I tried to change the Elasticsearch host but it doesn't change anything:
helm upgrade --set-string envConfig.elasticsearch.host=127.0.0.1 zammad zammad/zammad --namespace zammad

UPGRADE FAILED: no Secret with the name "zammad-postgresql-pass" found

Is this a request for help?:
Unfortunately the update does not work; I always get the error message:
Failed to install app zammad. Error: UPGRADE FAILED: no Secret with the name "zammad-postgresql-pass" found

The secret is there, though.

Thank you, it is due to this naming mismatch between the two templates:
name: "{{ .Release.Name }}-postgresql-pass"

name: {{ template "zammad.fullname" . }}-postgresql-pass

Is this a BUG REPORT

Version of Helm and Kubernetes:

Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.4", GitCommit:"f49fa022dbe63faafd0da106ef7e05a29721d3f1", GitTreeState:"clean", BuildDate:"2018-12-14T06:59:37Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}

What happened:
After an update, Helm always fails with the error message "Failed to install app zammad. Error: UPGRADE FAILED: no Secret with the name "zammad-postgresql-pass" found", but unfortunately this is not correct (the secret exists).

How to reproduce it (as minimally and precisely as possible):
Update from version 1.0.0 to 1.0.2.

Problem after upgrading zammad from version 3.2 to 3.4

Is this a request for help?: yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Version of Helm and Kubernetes:
helm version 3.2.4
Kubernetes 1.16.10 (Google Cloud GKE v1.16.10-gke.8)

What happened:
I'm upgrading a zammad 3.2 (helm chart zammad-1.2.1 / zammad/zammad-docker-compose:3.2.0-13) installation to zammad 3.4 (helm chart zammad-2.2.0 / zammad/zammad-docker-compose:3.4.0-11).
After the upgrade something weird happens to the user/groups/roles config. There are several problems I've noticed so far:

  • unable to assign an existing ticket to another agent (list of agents is empty)
  • "insufficient rights" to open my tickets
  • can't see any tickets at all
  • parts of the "roles" in the roles config menu seem to be gone
  • ...

What you expected to happen:
The user/groups/roles config should keep working, access to existing tickets should be available, and tickets should be assignable to other agents.

How to reproduce it (as minimally and precisely as possible):
What I did was:

  1. Installed zammad helm chart 2.2.0 in a new namespace (helm install zammad-test -n zammad-test zammad/zammad) and let the system initially come up.
  2. Made a database dump on the old installation (stopped the rails server before doing so).
  3. Shut down the rails server on the new installation.
  4. Imported the db dump on the new postgresql pod (dropped the existing database, created an empty one, imported the dump into it; see the command sketch after this list).
  5. Started the rails pod again, let it go through all init containers (works without problems) and waited until the system was ready.
  6. Logged on to the new system (users from the old system are working) and "clicked around" a bit - especially tried to reassign tickets and open/edit/save groups/roles configs.
  7. Very soon I cannot view any tickets anymore, and parts of the "edit users" and "edit roles" dialogues are gone.
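
A rough sketch of the dump/import commands implied by steps 2 and 4 (pod, database and user names are assumptions and have to match the actual releases; authentication flags/PGPASSWORD are omitted for brevity):

# old installation: dump the zammad database from its postgresql pod
kubectl exec zammad-postgresql-0 -- pg_dump -U zammad -Fc zammad > zammad.dump
# new installation: drop, recreate and restore
kubectl exec -i zammad-test-postgresql-0 -- dropdb -U postgres zammad
kubectl exec -i zammad-test-postgresql-0 -- createdb -U postgres -O zammad zammad
kubectl exec -i zammad-test-postgresql-0 -- pg_restore -U zammad -d zammad < zammad.dump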

Anything else we need to know:
I have now tried several times to import different db dumps created on the 3.2 system into the 3.4 system.
The dumping/importing itself runs without problems. The app server also starts without issues on the newly imported database - only after a short while does the system seem to degenerate.
I'm wondering if there were some changes to the db schema in 3.4 that are somehow missing?
I can't find any obvious errors in the logs (neither on the database nor on the zammad app server).

[FR] Support Redis with user

Right now it is not possible to specify an external Redis that requires a user to be provided.
However, Redis 6.x introduces support for different users (and privileges).

Many cloud vendors tend to only offer Redis as a Service with custom users, without providing any fallback for the old connection strings. Therefore I ask for Redis usernames to be supported as an input parameter, similar to what is already supported for PostgreSQL or Elasticsearch.

https://github.com/zammad/zammad-helm/blob/main/zammad/templates/statefulset.yaml
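
A hypothetical values sketch of what such a parameter could look like, mirroring the existing envConfig.postgresql block (all key names below are assumptions for illustration, not current chart options):

envConfig:
  redis:
    host: my-managed-redis.example.com
    port: 6379
    user: zammad          # requested new option for Redis 6+ ACL users
    pass: changeme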

Feature Request: Disable elasticsearch-init

I know it's not recommended to run Zammad without Elasticsearch, but the documentation says it's possible for small teams etc.
Right now, it's not possible to disable the Elasticsearch config / the elasticsearch-init container in the helm chart.
Let me know if you like the idea, so I can open a pull request for this feature.
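
For reference, the values used in another issue on this page already show flags along these lines:

elasticsearch:
  enabled: false
  enableInitialisation: false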

Allow empty strings pvc storageClassName

In some situations, it could be useful to define the PVC storageClassName as an empty string (not null) to disable dynamic provisioning.

Right now, it's not fully possible in this chart, and the way of doing it varies depending on the chart/subchart:

PostgreSQL (using the storageClass "-" de facto standard) :

postgresql:
  ...
  persistence:
    ...
    storageClass: "-"

Elasticsearch (using full "volumeClaimTemplate" block) :

elasticsearch:
  ...
  volumeClaimTemplate:
    ...
    storageClassName: ""

Zammad itself:

persistence:
  ...
  storageClass: ""

In this last example, zammad is using half of the de facto standard :)
First, storageClass is used instead of storageClassName (which, in my opinion, is a good thing).
But then, the special value "-" is not handled, and worse, if storageClass is an empty string, it leads to no storageClassName at all because of the "with" usage right here: https://github.com/zammad/zammad-helm/blob/master/zammad/templates/statefulset.yaml#L298
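
For comparison, the widespread template idiom implementing that de facto standard (a sketch of the convention described above, not the chart's current code) looks like:

{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: {{ .Values.persistence.storageClass | quote }}
{{- end }}
{{- end }}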

how to use SMTP

Is this a request for help?:
I'm an IT beginner.

I need to use SMTP but get the error "Host not reachable".

SMTP settings:
- host: smtp.gmail.com
- address: my@address
- password: Google app password (16 characters)
- port: 589

I think the pod and service ports need to be opened. Which pod and service should be opened?
Please help me.


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:

  • helm -
    version.BuildInfo{Version:"v3.5.0"
  • k8s -
    client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2"
    Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.10-gke.301"
    i use gke

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

health check problems with helm chart 6.7.0

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Version of Helm and Kubernetes:
Helm 3.9 / Kubernetes 1.22

What happened:
Since upgrading the zammad helm chart from 6.6.0 to 6.7.0, the monitoring inside Zammad (Admin --> Monitoring, under System) tells me "scheduler may not run (last execution of Stats.generate about 2 months ago) - please contact your system administrator".


I'm also using the provided health check URL in our monitoring system to check Zammad's health, and of course I'm getting alarms there as well.

When I look at the scheduler pod's log, it's not looking too bad - at least there is some OK-looking activity there:

I, [2022-06-29T15:58:01.015498 #1-109200]  INFO -- : execute Channel.fetch (try_count 0)...
I, [2022-06-29T15:58:01.017337 #1-109200]  INFO -- : ended Channel.fetch took: 0.018719892 seconds.
I, [2022-06-29T15:58:31.026244 #1-109200]  INFO -- : execute Channel.fetch (try_count 0)...
I, [2022-06-29T15:58:31.027890 #1-109200]  INFO -- : ended Channel.fetch took: 0.007268574 seconds.
I, [2022-06-29T15:59:01.034358 #1-109200]  INFO -- : execute Channel.fetch (try_count 0)...
I, [2022-06-29T15:59:01.035733 #1-109200]  INFO -- : ended Channel.fetch took: 0.006543665 seconds.
I, [2022-06-29T15:59:13.740571 #1-108860]  INFO -- : ProcessScheduledJobs running...
I, [2022-06-29T15:59:13.748378 #1-120320]  INFO -- : execute ImportJob.start_registered (try_count 0)...
I, [2022-06-29T15:59:13.752938 #1-120320]  INFO -- : ended ImportJob.start_registered took: 0.009868007 seconds.
I, [2022-06-29T15:59:23.752695 #1-108860]  INFO -- : Running job thread for 'Process ticket escalations.' (Ticket.process_escalation) status is: sleep
I, [2022-06-29T15:59:31.043167 #1-109200]  INFO -- : execute Channel.fetch (try_count 0)...
I, [2022-06-29T15:59:31.044938 #1-109200]  INFO -- : ended Channel.fetch took: 0.008018813 seconds.
I, [2022-06-29T15:59:33.762666 #1-108860]  INFO -- : Running job thread for 'Check 'Channel' streams.' (Channel.stream) status is: sleep
I, [2022-06-29T15:59:43.770872 #1-120400]  INFO -- : execute Ticket.process_pending (try_count 0)...
I, [2022-06-29T15:59:43.973586 #1-120400]  INFO -- : ended Ticket.process_pending took: 0.208151392 seconds.
I, [2022-06-29T15:59:53.780674 #1-120540]  INFO -- : execute Ticket.process_auto_unassign (try_count 0)...
I, [2022-06-29T15:59:53.784191 #1-120540]  INFO -- : ended Ticket.process_auto_unassign took: 0.008527539 seconds.
I, [2022-06-29T16:00:01.052044 #1-109200]  INFO -- : execute Channel.fetch (try_count 0)...
I, [2022-06-29T16:00:01.053573 #1-109200]  INFO -- : ended Channel.fetch took: 0.00745371 seconds.
I, [2022-06-29T16:00:03.785260 #1-108860]  INFO -- : Running job thread for 'Check channels.' (Channel.fetch) status is: sleep
I, [2022-06-29T16:00:09.836285 #1-109540]  INFO -- : execute Job.run (try_count 0)...
I, [2022-06-29T16:00:09.838140 #1-109540]  INFO -- : ended Job.run took: 0.010392464 seconds.
I, [2022-06-29T16:00:13.795866 #1-120640]  INFO -- : execute SessionTimeoutJob.perform_now (try_count 0)...
I, [2022-06-29T16:00:13.859815 #1-120640]  INFO -- : SessionTimeoutJob removed session '36821947' for user id '' (last ping: '', timeout: '-1')

Could this issue be associated with the recent switch from scheduler to background-worker?

What you expected to happen:
Zammad health check should report actual system status

How to reproduce it (as minimally and precisely as possible):
Just use helm chart 6.7.0 to set up a zammad instance and you should see the health check issue.

Anything else we need to know:

Can't set persistent storage

Is this a request for help?: Yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug? (or maybe user error)

Version of Helm and Kubernetes:
$ helm version --short
v3.2.0+ge11b7ce
$ kubectl version --short
Client Version: v1.15.9
Server Version: v1.15.9-gke.24

What happened: Attempting to set an existingClaim fails:

persistence:
  enabled: true
  existingClaim: zammad-nfs-pvc
  accessModes:
    - ReadWriteOnce
  storageClass: nfs
  size: 15Gi
  annotations: {}

with error:
$ helm upgrade zammad zammad/zammad -f zammad-values-3.3.0-19.yaml --dry-run --debug

upgrade.go:120: [debug] preparing upgrade for zammad
Error: UPGRADE FAILED: template: zammad/templates/statefulset.yaml:237:31: executing "zammad/templates/statefulset.yaml" at <.Values.persistence.existingClaim>: can't evaluate field Values in type string
helm.go:84: [debug] template: zammad/templates/statefulset.yaml:237:31: executing "zammad/templates/statefulset.yaml" at <.Values.persistence.existingClaim>: can't evaluate field Values in type string
UPGRADE FAILED
main.newUpgradeCmd.func1
	/private/tmp/helm-20200423-41927-615sa8/src/helm.sh/helm/cmd/helm/upgrade.go:146
github.com/spf13/cobra.(*Command).execute
	/private/tmp/helm-20200423-41927-615sa8/pkg/mod/github.com/spf13/[email protected]/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
	/private/tmp/helm-20200423-41927-615sa8/pkg/mod/github.com/spf13/[email protected]/command.go:950
github.com/spf13/cobra.(*Command).Execute
	/private/tmp/helm-20200423-41927-615sa8/pkg/mod/github.com/spf13/[email protected]/command.go:887
main.main
	/private/tmp/helm-20200423-41927-615sa8/src/helm.sh/helm/cmd/helm/helm.go:83
runtime.main
	/usr/local/Cellar/[email protected]/1.13.10_1/libexec/src/runtime/proc.go:203
runtime.goexit
	/usr/local/Cellar/[email protected]/1.13.10_1/libexec/src/runtime/asm_amd64.s:1357

What you expected to happen: For it to use my existing persistent claim.

How to reproduce it (as minimally and precisely as possible):
Create NFS pvc and try to attach it by its name

Anything else we need to know: No

Wrong VolumeBinding with "zammad-0" and "zammad-zammad-0"?

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG

Version of Helm and Kubernetes:

helm v3.2.1
kubelet v1.18.2 (microk8)

What happened:
I wanted to start a default helm install test.

What you expected to happen:
It can't start without errors; see the screenshots below.

How to reproduce it (as minimally and precisely as possible):
helm upgrade --install zammad zammad/zammad --namespace zammad

Anything else we need to know:

(Screenshots: 2020-05-28 11:43:30 and 2020-05-28 11:43:42)

Documentation/CRD for autowizard config

FEATURE REQUEST

Version of Helm and Kubernetes:
irrelevant

What happened:

While experimenting with the helm chart, I noticed that there is an autowizard config that is able to configure most settings out of the box. However, I soon hit a dead end when I tried to configure an email channel using the autowizard, since I was unsure what the autowizard JSON should look like.

What you expected to happen:

A Custom Resource Definition defining what the autowizard config should look like would greatly reduce confusion and make it easier for users to set up their zammad instance. At the very least, documentation describing which fields can be set in the autowizard should be available.

Elastic reindex every start

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes: latest helm chart with 1.19

What happened:
Starting the StatefulSet always triggers a full Elasticsearch reindex, causing our startup time to increase as time / usage expands.

What you expected to happen:

Full index is not performed every start

How to reproduce it (as minimally and precisely as possible):

Scale the StatefulSet down and up.

Anything else we need to know:


Safe to change image

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:
1.18

Is there any issue with upgrading the Docker image to 3.5?

Installing zammad failed because elasticsearch-init is running as ROOT (zammad-master-0).

Is this a request for help?: YES


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG

Version of Helm and Kubernetes:
Helm version

version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}

Kubectl version

Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.3-XXX", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"dirty", BuildDate:"2020-10-15T07:10:10Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}

What happened:
Zammad installation fails because the elasticsearch-init image runs as ROOT, and the cluster has a pod security policy that does not allow root or privileged containers.

What you expected to happen:
The elasticsearch-init container should run as a NON ROOT image.

How to reproduce it (as minimally and precisely as possible):
Install using helm with below images:
zammad/zammad-docker-compose:zammad-4.0.0-7
zammad/zammad-docker-compose:zammad-elasticsearch-4.0.0-7

Results in error

create Pod zammad-master-0 in StatefulSet zammad-master failed error: pods "zammad-master-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.initContainers[0].securityContext.runAsUser: Invalid value: 0: running with the root UID is forbidden spec.initContainers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.initContainers[0].securityContext.runAsUser: Invalid value: 0: running with the root UID is forbidden spec.initContainers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]

Anything else we need to know:
A similar issue was fixed for the zammad-init container in bug #82.

Persistent volumes - how to control what is created and which size

Hi,

I would like to control how persistent volumes are created during the setup process.
I can see that the template supports:
persistence.storageClass
persistence.existingClaim

When running the helm on rancher beside this PV (zammad-data) that I can manually specifiy in template variables, I can see that there are two additional volumes created so in total it looks like:

zammad-postgresql (container in workload zammad-postgresql) is claiming: zammad:data-zammad-postgresql-0
elasticsearch (container in workload zammad-master) is claiming: zammad:zammad-master-zammad-master-0
zammad-railsserver (container in workload zammad) is claiming: zammad:zammad-data - THIS ONE IS CONTROLLED BY HELM TEMPLATE

I would like to have control over postgresql and elasticsearch claims so that I can attach existing preconfigured volumes.
Thanx,
D
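
The two additional claims come from the bundled PostgreSQL and Elasticsearch subcharts, so they are controlled through those subcharts' values rather than the top-level persistence.* keys. A rough sketch, assuming the usual Bitnami PostgreSQL and elastic/elasticsearch value names (verify against the bundled subchart versions; names below are placeholders):

postgresql:
  persistence:
    existingClaim: my-postgres-pvc    # or set storageClass/size instead
elasticsearch:
  volumeClaimTemplate:
    storageClassName: my-storage-class
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 30Gi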

Ability to add custom nginx configuration

Would you be willing to accept a patch that allows for inserting additional nginx configuration into the nginx configmap from helm values?

We run our instance of Zammad behind a Google IAP and would like the ability to rewrite some of the headers before they hit Zammad. I'm specifically targeting this area - https://github.com/zammad/zammad-helm/blob/master/zammad/templates/configmap-nginx.yaml#L52

I'm trying to avoid the need to run an additional instance of nginx in front just for the purpose of rewriting a few headers.
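
For illustration, the kind of values hook this could become — nginx.extraServerConfig is a hypothetical key that does not exist in the chart today, and the IAP header below is only an example of the sort of directive that would be injected into the server block:

nginx:
  extraServerConfig: |
    # rewrite/strip headers set by the Google IAP before they reach the rails server (illustrative)
    proxy_set_header X-Forwarded-User $http_x_goog_authenticated_user_email;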

How To Configure Zammad with exsisting postgress DB / or AWS RDS

Is this a request for help?:
Yes

Is this a BUG REPORT or FEATURE REQUEST? (choose one): FEATURE REQUEST

Version of Helm and Kubernetes:

Version 3

What happened:
I'm testing Zammad on Kubernetes using Helm charts. It works fine, but I already have a separate Postgres database.

What you expected to happen:

I want to integrate Zammad with an existing Postgres DB, or Amazon RDS DB

Anything else we need to know:

Yes, I want to know how to set up Zammad on Kubernetes with a separate Postgres database. I hope someone can help me with this.
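
A minimal sketch of the values involved, based on the postgresql.enabled switch and the envConfig.postgresql.* keys that appear elsewhere in these issues; the user/db keys and the RDS endpoint are placeholders to check against the chart's values.yaml:

postgresql:
  enabled: false                 # don't deploy the bundled PostgreSQL
envConfig:
  postgresql:
    host: zammad-db.xxxxxxxx.eu-central-1.rds.amazonaws.com   # placeholder endpoint
    port: 5432
    user: zammad
    pass: supersecret
    db: zammad_production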

How to install zammad as NONROOT user for a private cloud

Is this a request for help?: Yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Version of Helm and Kubernetes:
output of helm version commad:
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}

output of kubectl version commad:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:41:49Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}

What happened: Zammad runs as the root user, which is not allowed in our private cloud for security reasons.

What you expected to happen: To be able to install Zammad as a non-root user.

How to reproduce it (as minimally and precisely as possible): NA

Anything else we need to know:
I tried to create new images from the current image, just adding USER 1000 to start as.
My Dockerfile has only two lines:
FROM image-name-here
USER 1000

With this I can deploy, but the pods don't come up, because internally they run into lots of other issues since the user is not root.

I was expecting to find a deployment instead of a stateful set

Version of Helm and Kubernetes:
1.19, helm 3

What happened:
I took a brief look at the chart and noticed that it deploys a StatefulSet with replicas set to 1.

What you expected to happen:
To be able to scale the StatefulSet without needing NFS or another persistence layer; in other words, to be able to scale the Zammad replicas easily.

How to reproduce it (as minimally and precisely as possible):
The replicas spec is hard-coded to 1 replica

Anything else we need to know:

I am taking a look at the chart and the zammad repo as well. I know I am missing something, and that the application is complex, but with PostgreSQL as the data store I was hoping to find a Deployment that can easily be scaled to more than 1 replica, without needing any persistence layer. Are there any other possibilities to scale the whole application? I am not talking about the dependencies.

bug: railsserver not starting up again when running off of PVC

Is this a request for help?: No


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Version of Helm and Kubernetes: helm 3.6.0, kubernetes v1.21.4-gke.1801

What happened: When the pod gets rescheduled for whatever reason and Zammad is running with a persistent volume claim, its server.pid file remains at /opt/zammad/tmp/pids/server.pid, preventing the rails server container from starting up again. The same goes for container restarts due to failed health checks.

What you expected to happen: The rails server should successfully start up again after container restart, i.e., its temporary directory should be clean

How to reproduce it (as minimally and precisely as possible): Force the statefulset onto another node, for example by cordoning a node (just make sure there's nothing else of value running on there...)

Anything else we need to know: The missing piece here is a line of shell code that is present in the entrypoint script for zammad-rails in the docker-compose repository, but not in the command field for the container spec here. I'm opening a pull request to fix this issue.
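
For reference, the missing piece boils down to removing the stale pid file before the server starts, roughly like this in the railsserver container command (a sketch; the actual rails invocation should stay as the chart already defines it, the port shown is an assumption):

command:
  - /bin/bash
  - -c
  - "rm -f /opt/zammad/tmp/pids/server.pid && bundle exec rails server -b 0.0.0.0 -p 3000"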

Refactor elasticsearch-init

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:

What happened:
When restarting Zammad in an attempt to reset the scheduler sidecar, elasticsearch-init reindexes every ticket in the system, causing extreme restart times that get longer by the hour.

What you expected to happen:
Zammad restarts immediately, and Elasticsearch is reindexed via a separate job.

How to reproduce it (as minimally and precisely as possible):
Restart zammad stateful set

Anything else we need to know:
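
A sketch of the kind of Job this could become — names and env wiring are assumptions, and in a real chart it would be templated and reuse the same secrets/environment as the rails container (the image tag is one mentioned elsewhere in these issues):

apiVersion: batch/v1
kind: Job
metadata:
  name: zammad-searchindex-rebuild
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: searchindex-rebuild
          image: zammad/zammad-docker-compose:zammad-4.0.0-7
          command: ["bundle", "exec", "rake", "searchindex:rebuild"]
          # envFrom/volumeMounts for the database and Elasticsearch credentials omitted for brevity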

Scaling statefulset zammad / jobs run on multiple instances at the same time

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:
1.18.6
image 3.4.0-18

What happened:
When scaling the Zammad StatefulSet, it appears the scheduler runs the same jobs on every replica at the same time.

This results in duplicate email tickets, etc.

What you expected to happen:

The scheduler should validate that jobs are not already running on another replica.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

unknown field "fsGroup" in io.k8s.api.core.v1.SecurityContext when installing zammad

Is this a request for help?: Yes


Version of Helm and Kubernetes:

kubectl version 1.15
helm version 3.1.1

What happened:
When I installed Zammad with helm install using this command,
helm install --set postgresql.enable=false --set envConfig.postgresql.host=sqlproxy-service-ds1 --set env.Config.postgresql.port=3306 --set envConfig.postgresql.pass=xxxxxx my-zammad zammad/zammad --version 2.0.5 --namespace zammad

I got this error:

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(StatefulSet.spec.template.spec.containers[1].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.SecurityContext, ValidationError(StatefulSet.spec.template.spec.containers[2].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.SecurityContext, ValidationError(StatefulSet.spec.template.spec.containers[3].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.SecurityContext, ValidationError(StatefulSet.spec.template.spec.initContainers[1].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.SecurityContext, ValidationError(StatefulSet.spec.template.spec.initContainers[2].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.SecurityContext]
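
For context: fsGroup is only a field of the pod-level PodSecurityContext, not of a container's SecurityContext, which is why the manifest fails validation. The valid placement looks roughly like this (sketch):

spec:
  securityContext:          # pod-level PodSecurityContext: fsGroup belongs here
    fsGroup: 1000
  containers:
    - name: zammad-railsserver
      securityContext:      # container-level SecurityContext: no fsGroup allowed
        runAsUser: 1000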

What you expected to happen:
Zammad will be running on my cluster in GKE

How to reproduce it (as minimally and precisely as possible):

  • Create new cluster and connect it
  • helm repo add zammad https://zammad.github.io
  • helm repo update
  • Deploy on cluster with command above

Anything else we need to know:
I want to use our postgresql instance v9.6 @ GCP CloudSQL for zammad prod db via cloudsqlproxy (same namespace).

sqlproxy-deployment.yaml

# https://github.com/GoogleCloudPlatform/cloudsql-proxy/blob/master/Kubernetes.md
# 1. kubectl create secret generic service-account-token --from-file=credentials.json=$HOME/kubernetes/cloudsql_proxy/credentials.json --namespace zammad
# 2. kubectl apply -f sqlproxy-deployment.yaml --namespace zammad
# 3. kubectl apply -f sqlproxy-services.yaml --namespace zammad

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cloudsqlproxy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: cloudsqlproxy
    spec:
      containers:
       # Make sure to specify image tag in production
       # Check out the newest version in release page
       # https://github.com/GoogleCloudPlatform/cloudsql-proxy/releases
      - image: b.gcr.io/cloudsql-docker/gce-proxy:latest
       # 'Always' if imageTag is 'latest', else set to 'IfNotPresent'
        imagePullPolicy: Always
        name: cloudsqlproxy
        command:
        - /cloud_sql_proxy
        - -dir=/cloudsql
        - -instances=<PROJECT_ID>:<ZONE>:<SQL_INSTANCE>=tcp:0.0.0.0:3306
        - -credential_file=/credentials/credentials.json
        # set term_timeout if you require graceful handling of shutdown
        # NOTE: proxy will stop accepting new connections; only wait on existing connections
        - -term_timeout=10s
        lifecycle:
          preStop:
            exec:
              # (optional) add a preStop hook so that termination is delayed
              # this is required if your server still require new connections (e.g., connection pools)
              command: ['sleep', '10']
        ports:
        - name: port-ds1
          containerPort: 3306
        volumeMounts:
        - mountPath: /cloudsql
          name: cloudsql
        - mountPath: /credentials
          name: service-account-token
      volumes:
      - name: cloudsql
        emptyDir: {}
      - name: service-account-token
        secret:
          secretName: service-account-token

sqlproxy-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: sqlproxy-service-ds1
spec:
  ports:
  - port: 3306
    targetPort: port-ds1
  selector:
    app: cloudsqlproxy

Helm chart checks for wrong Elasticsearch credentials

Is this a request for help?:
No

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Bug

Version of Helm and Kubernetes:
v2

What happened:
Zammad is unable to authenticate against the Elasticsearch API because the user and password are not set properly.

What you expected to happen:
The Helm template should not check .Values.elasticsearch.pass, but .Values.envConfig.elasticsearch.pass and .Values.envConfig.elasticsearch.user respectively.

How to reproduce it (as minimally and precisely as possible):

  1. Deploy chart version 2.6.1 with an external Elasticsearch cluster configured
  2. Check the "elasticsearch-init" container:
I, [2020-10-05T15:00:18.539348 #6-47041080449380]  INFO -- : Setting.set('models_searchable', ["Organization", "KnowledgeBase::Answer::Translation", "User", "Chat::Session", "Ticket"])
I, [2020-10-05T15:00:19.430781 #6-47041080449380]  INFO -- : Setting.set('es_url', "https://elasticsearch-es-master.elk.svc.cluster.local:9200")
rake aborted!
Unable to process GET request to elasticsearch URL 'https://elasticsearch-es-master.elk.svc.cluster.local:9200'. Check the response and payload for detailed information: 

Response:
#<UserAgent::Result:0x00005625c2492878 @success=false, @body="{\"error\":{\"root_cause\":[{\"type\":\"security_exception\",\"reason\":\"missing authentication credentials for REST request [/]\",\"header\":{\"WWW-Authenticate\":[\"Bearer realm=\\\"security\\\"\",\"ApiKey\",\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"]}}],\"type\":\"security_exception\",\"reason\":\"missing authentication credentials for REST request [/]\",\"header\":{\"WWW-Authenticate\":[\"Bearer realm=\\\"security\\\"\",\"ApiKey\",\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"]}},\"status\":401}", @data=nil, @code="401", @content_type=nil, @error="Client Error: #<Net::HTTPUnauthorized 401 Unauthorized readbody=true>!">

The Elasticsearch configmap-init.yaml does not contain the relevant section for setting the user and the password, because the evaluation always fails. Either a default has to be added or (IMHO the better way) the check should default to .Values.envConfig.elasticsearch.user.

apiVersion: v1
kind: ConfigMap
metadata:
  name: zammad-cluster-dev-01-init
  namespace: default
data:
  elasticsearch-init: >-
    #!/bin/bash

    set -e

    bundle exec rails r 'Setting.set("es_url",
    "https://elasticsearch-es-master.elk.svc.cluster.local:9200")'

    bundle exec rake searchindex:rebuild


    echo "elasticsearch init complete :)"
...
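
A sketch of the kind of template change the report suggests, keying the check off envConfig.elasticsearch instead (es_user/es_password are the Zammad setting names; verify against the chart's actual configmap-init template):

{{- if .Values.envConfig.elasticsearch.user }}
bundle exec rails r "Setting.set('es_user', '{{ .Values.envConfig.elasticsearch.user }}')"
bundle exec rails r "Setting.set('es_password', '{{ .Values.envConfig.elasticsearch.pass }}')"
{{- end }}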

Zammad image tag does not update zammad-master

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:
k8 1.18.6

What happened:
Running helm upgrade with --set image.tag=3.4.0-18 does not update the image of the StatefulSet for zammad-master.

What you expected to happen:

All images are upgraded to the specified image tag.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:
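
zammad-master is the Elasticsearch StatefulSet created by the bundled Elasticsearch subchart (see the workload naming in the other issues above), so image.tag only applies to the Zammad StatefulSet itself; the Elasticsearch image is set through the subchart's own values. A sketch, assuming the elastic/elasticsearch chart's image/imageTag keys and an illustrative tag:

image:
  tag: 3.4.0-18
elasticsearch:
  image: zammad/zammad-docker-compose
  imageTag: zammad-elasticsearch-3.4.0-18   # illustrative; match the chart's appVersion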

Feature Request: use existing secret

I want to use an existing secret, so we can use this chart with encrypted secrets, e.g. sealed-secrets or sops.
Let me know if you like the idea, and I can open a pull request for this feature.
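
Purely to illustrate the idea — none of these keys exist in the chart today, the names are hypothetical:

secrets:
  postgresql:
    useExisting: true
    secretName: zammad-postgresql-sealed   # created out-of-band via sealed-secrets or sops
    secretKey: postgresql-pass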

Make liveness/readiness Probe Optional

I've noticed cases where the railsserver container restarts because the livenessProbe fails, seemingly only when Elasticsearch is slow to respond. Do any of the liveness or readiness probes (GET / on nginx and railsserver, TCP to 6042 on the websocket container) cause requests to be generated to the database, Elasticsearch, or memcached? If they do, we should probably work on getting dedicated healthcheck endpoints added, so that a slowdown in Elasticsearch doesn't cause a cascading failure down to services that are otherwise healthy.

Regardless, I'd like to make the probes optional (it's useful at times, but the defaults will leave everything enabled), and am happy to open a PR with the required changes, but I'm curious about where the values fit best within the chart. Ideally livenessProbe and readinessProbe would be configurable independently for each container. Does nesting them under envConfig make sense despite the naming mismatch of rails vs. railsserver?

Thanks!
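
One possible shape for the values, configurable per container — the keys are hypothetical; the websocket port is the 6042 mentioned above, the rails port is an assumption:

railsserver:
  livenessProbe: {}          # empty map could mean "probe disabled" (hypothetical convention)
  readinessProbe:
    httpGet:
      path: /
      port: 3000
    initialDelaySeconds: 30
    periodSeconds: 10
websocket:
  livenessProbe:
    tcpSocket:
      port: 6042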

Postgresql - FATAL remaining connection slots are reserved for non-replication

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:
Current

What happened:
The system repeatedly goes up and down when this error occurs:
FATAL: remaining connection slots are reserved for non-replication superuser connections

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:
I attempted to fix this by setting postgresql.postgresqlConfiguration.maxConnections="1000"; however, the zammad-postgres-init then fails.
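
For reference, with the bundled Bitnami PostgreSQL subchart the less invasive route is usually to extend postgresql.conf rather than replace it wholesale (postgresqlConfiguration overrides the entire file, which can break the init scripts). A sketch, assuming the subchart exposes postgresqlExtendedConf — verify the key against the bundled subchart's values.yaml:

postgresql:
  postgresqlExtendedConf:
    maxConnections: "1000"   # rendered as max_connections in the extended config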

helm upgrade fails due to error in values

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG Report

Version of Helm and Kubernetes:
Helm 3.5.3 Kubernetes 1.15.5

What happened:

helm upgrade failed

What you expected to happen:

helm upgrade succeeds

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

Console log:

2021-03-31T13:09:02.9237634Z [command]C:\A\B1_work_tool\helm\3.5.3\x64\windows-amd64\helm.exe upgrade --namespace zammadendusertest --force --values C:\A\B1_work\r28\a_Zammad_Enduser\CubeZammadEnduser\zammad-values.yaml --wait zammadendusertest zammad/zammad
2021-03-31T13:09:04.6614548Z coalesce.go:163: warning: skipped value for extraEnv: Not a table.
2021-03-31T13:09:04.6615127Z coalesce.go:163: warning: skipped value for extraEnv: Not a table.
2021-03-31T13:09:04.6615845Z Error: UPGRADE FAILED: failed to replace object: Service "zammad-master" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "zammadendusertest-memcached" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "zammadendusertest-postgresql" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "zammadendusertest" is invalid: spec.clusterIP: Invalid value: "": field is immutable
2021-03-31T13:09:04.6653914Z ##[error]coalesce.go:163: warning: skipped value for extraEnv: Not a table.
coalesce.go:163: warning: skipped value for extraEnv: Not a table.
Error: UPGRADE FAILED: failed to replace object: Service "zammad-master" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "zammadendusertest-memcached" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "zammadendusertest-postgresql" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "zammadendusertest" is invalid: spec.clusterIP: Invalid value: "": field is immutable

(image attachment omitted)

I think it is related to this issue, but I already searched for the cause in this chart and didn't find anything:
helm/helm#8283
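
The spec.clusterIP: Invalid value: "" errors are the usual symptom of helm upgrade --force, which replaces objects instead of patching them and therefore tries to clear the immutable clusterIP of existing Services. Dropping --force from the pipeline's upgrade step is the most likely fix, e.g.:

helm upgrade --namespace zammadendusertest --values zammad-values.yaml --wait zammadendusertest zammad/zammad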
