A Helm chart to install Zammad on Kubernetes
Please see zammad/README.md for detailed information & instructions.
Please see our contributing guidelines.
Zammad Helm chart for Kubernetes
Home Page: https://artifacthub.io/packages/helm/zammad/zammad
License: GNU Affero General Public License v3.0
Your postgres version is docker.io/bitnami/postgresql:11.14.0-debian-10-r28, which is vulnerable to CVE-2022-1552.
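One hedged workaround, assuming the chart still wraps the bitnami subchart, is to pin a patched image via values (CVE-2022-1552 is fixed in PostgreSQL 11.16; the exact tag name below is an assumption and should be verified on Docker Hub first):

```yaml
# Hypothetical values override; verify the tag exists before deploying.
postgresql:
  image:
    registry: docker.io
    repository: bitnami/postgresql
    tag: 11.16.0-debian-11-r0  # assumed tag; CVE-2022-1552 is fixed in 11.16
```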
Version of Helm and Kubernetes:
Kubernetes 1.24, Helm 3.8.2
What happened:
After installing chart versions 6.7.0 and 6.7.1, I've noticed that the only PVC that gets the storageClassName attribute configured in values.yml (as persistence.storageClass) is the one for the release-name-zammad StatefulSet.
What you expected to happen:
All four PVCs need to have the correct storageClass defined via values.
How to reproduce it (as minimally and precisely as possible):
with values:
persistence:
  storageClass: host-storage # this is the name of my storageClass
run:
helm template --debug -f values.yml zammad/zammad
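As a possible workaround until the chart wires persistence.storageClass into every claim, the storage class can be set on each subchart directly. The subchart key names below are assumptions about the bundled postgresql and elasticsearch charts and should be checked against their respective values.yaml files:

```yaml
persistence:
  storageClass: host-storage        # picked up by the zammad StatefulSet
postgresql:
  persistence:
    storageClass: host-storage      # assumed bitnami/postgresql key
elasticsearch:
  volumeClaimTemplate:
    storageClassName: host-storage  # assumed elastic/elasticsearch key
```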
Anything else we need to know:
I have defined storageclass and volumes like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: host-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zammad-postgres-pv
spec:
  storageClassName: host-storage
  capacity:
    storage: 8Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/pv/zammad/postgres"
The other PVs are the same, with their own metadata and path.
What happened:
I tried to use postgresql.postgresqlUsername
What you expected to happen:
The services would connect to the (externally deployed) database with this username; instead, they still attempt to connect as the "postgres" user.
Anything else we need to know:
Searching your repo, "postgresqlUsername" is only found in values.yaml and README.md
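Since "postgresqlUsername" never appears in the templates, the chart presumably reads the database user from its envConfig block instead (the same structure that appears in other values files in these reports); a sketch, with hypothetical host and credentials:

```yaml
envConfig:
  postgresql:
    host: my-external-postgres.example.com  # hypothetical external host
    port: 5432
    user: zammad_app                        # user the services should connect as
    pass: changeme
    db: zammad
```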
C:\Users\shaba>helm upgrade --install zammad zammad/zammad --namespace=zammad
Release "zammad" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: [unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta1", unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta1", unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta2"]
I can't find any references to apps/v1beta1 in the chart. It seems like it's coming from remote dependencies. Unfortunately, this means I can't install this on my cluster, since it's on a later version.
Version of Helm and Kubernetes:
C:\Users\shaba>helm version
version.BuildInfo{Version:"v3.0.1", GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa", GitTreeState:"clean", GoVersion:"go1.13.4"}
C:\Users\shaba>kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
What happened:
Deployment failed
What you expected to happen:
Deployment succeeds
How to reproduce it (as minimally and precisely as possible):
Deploy on a cluster where only apps/v1 is served (the apps/v1beta APIs have been removed)
Anything else we need to know:
The apps/v1beta1 and apps/v1beta2 APIs were removed around 1.16, I believe. You may be able to replicate there if that helps.
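A quick way to find which rendered manifests still use the removed API groups is to grep the output of helm template; the commands below simulate that with a sample file so they are self-contained (with the real chart you would pipe `helm template zammad/zammad` instead):

```shell
# Sample of what helm template might render, including one deprecated manifest:
cat > /tmp/rendered.yaml <<'EOF'
apiVersion: apps/v1beta1
kind: Deployment
---
apiVersion: apps/v1
kind: StatefulSet
EOF
# Count manifests using API versions removed in Kubernetes 1.16:
grep -c 'apiVersion: apps/v1beta' /tmp/rendered.yaml
```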
Hi,
Hope it's OK I'm skipping the template, as this is more of an architectural thing.
I've been running Zammad for a couple of months now in Kubernetes, with rather high adoption within our organisation. Both the users and I have been very pleased with the application.
However, the Kubernetes implementation has a lot of shortcomings when it comes to building something stable, especially splitting the services inside the zammad StatefulSet into individual containers. I assume the StatefulSet does not scale horizontally as it is, right? The hardcoded spec.replicas gives a hint ;)
I'm not very well versed in Rails, but if there's something I can offer in terms of Kubernetes and Helm charts, I'm happy to help out.
I have a lot of various smaller systems running in Kubernetes, basically using it as a PaaS to rapidly deploy things like Zammad. In order to have databases uniform and backed up, I use KubeDB, which offers Elasticsearch, Postgres and memcached, so it was very easy to apply to Zammad. I would recommend making this a requirement for the Zammad Helm chart, to relieve some of the burden on the chart and focus on the actual Zammad deployment.
If someone who's well versed in Zammad has an interest in getting this improved, please reach out and I will help with my mediocre Kubernetes knowledge :)
Is this a request for help?:
yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
bug report?
Version of Helm and Kubernetes:
Kubernetes 1.20.5
Helm 3.5.4
What happened:
Zammad-init container is throwing permission denied errors
What you expected to happen:
working zammad pods
How to reproduce it (as minimally and precisely as possible):
I created persistentvolume
then used
helm repo add zammad https://zammad.github.io/zammad-helm
helm upgrade --install zammad zammad/zammad --namespace=zammad -f values.yml (because I added the persistentVolume)
Anything else we need to know:
All I did was use the commands from https://docs.zammad.org/en/latest/install/kubernetes.html, then create a pv.yaml with this content and use the storageClassName in values.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 15Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /opt
  storageClassName: pv1
and then the zammad-0 pod log throws permission denied (13) errors for /opt/zammad.
Since I'm very new to Kubernetes and Helm, it could very well be that I just don't know what I'm doing and my configuration breaks things, or that I'm missing something about how to get zammad working via helm, so please bear with me.
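A common cause of "permission denied" on a hostPath volume is that the directory is owned by root while the zammad containers run as a non-root user. One hedged workaround (the UID 1000 below is an assumption; check the actual UID with `kubectl exec zammad-0 -- id`) is an init container that fixes ownership of the volume before zammad starts:

```yaml
# Hypothetical patch to the zammad StatefulSet pod spec:
initContainers:
  - name: volume-permissions
    image: busybox
    command: ["sh", "-c", "chown -R 1000:1000 /opt/zammad"]
    securityContext:
      runAsUser: 0          # must run as root to change ownership
    volumeMounts:
      - name: zammad
        mountPath: /opt/zammad
```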
Is this a request for help?: Yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG
Version of Helm and Kubernetes: Helm 3, Kubernetes 1.19
What happened:
We have been experiencing stability issues causing users to only be able to use the system in 10-15 minute bursts. To stop the sidecars from restarting every 5-10 minutes, we disabled health checks. With health checks disabled we prevent the 502s; however, the system alternates between bursts of speed and instances of extreme slowness (1-2 minute page loads).
What you expected to happen: System is speedy 24/7
How to reproduce it (as minimally and precisely as possible):
Install the helm chart, set up multiple email groups and checks.
We also have LDAP enabled.
Anything else we need to know: We believe we have narrowed the issue down to the scheduler sidecar.
When the scheduler sidecar fails to run the jobs (when everything is in sleep and the entire pod needs to be restarted), the system stays stable and the sidecars stop restarting. However, we stop receiving inbound tickets.
zammad-scheduler I, [2021-02-20T13:30:31.879427 #1-47083127478620] INFO -- : Running job thread for 'Check Channels' (Channel.fetch) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.879579 #1-47083127478620] INFO -- : Running job thread for 'Import OTRS diff load' (Import::OTRS.diff_worker) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.879653 #1-47083127478620] INFO -- : Running job thread for 'Generate Session data' (Sessions.jobs) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.879722 #1-47083127478620] INFO -- : Running job thread for 'Process pending tickets' (Ticket.process_pending) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.879819 #1-47083127478620] INFO -- : Running job thread for 'Process escalation tickets' (Ticket.process_escalation) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.879902 #1-47083127478620] INFO -- : Running job thread for 'Process auto unassign tickets' (Ticket.process_auto_unassign) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.880004 #1-47083127478620] INFO -- : Running job thread for 'Check streams for Channel' (Channel.stream) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.880069 #1-47083127478620] INFO -- : Running job thread for 'Import Jobs' (ImportJob.start_registered) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.880825 #1-47083127478620] INFO -- : Running job thread for 'Delete old online notification entries.' (OnlineNotification.cleanup) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.880909 #1-47083127478620] INFO -- : Running job thread for 'Closed chat sessions where participients are offline.' (Chat.cleanup_close) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.880977 #1-47083127478620] INFO -- : Running job thread for 'Execute jobs' (Job.run) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.881042 #1-47083127478620] INFO -- : Running job thread for 'Generate user based stats.' (Stats.generate) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.881360 #1-47083127478620] INFO -- : Running job thread for 'Handle data privacy tasks.' (DataPrivacyTaskJob.perform_now) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.881434 #1-47083127478620] INFO -- : Running job thread for 'Cleanup closed sessions.' (Chat.cleanup) status is: sleep
zammad-scheduler I, [2021-02-20T13:30:31.881533 #1-47083127478620] INFO -- : Running job thread for 'Cleanup expired sessions' (SessionHelper.cleanup_expired) status is: sleep
Is this a request for help?: No
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes: v1.21.1+k3s1
What happened: On deployment using an ingress, the URL constructed in NOTES.txt is wrong due to a schema change from a single string type to a map containing pathType and path values.
What you expected to happen: A fully qualified URL to reach the zammad frontend
How to reproduce it (as minimally and precisely as possible): Enable ingress in values.yaml, add a path and set any pathType. Then deploy the chart and wait for the output.
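For reference, the new-style values that trigger this look roughly like the following (the host is a placeholder); the NOTES.txt template still expects `paths` to be a list of plain strings rather than maps with `path` and `pathType` keys:

```yaml
ingress:
  enabled: true
  hosts:
    - host: zammad.example.com  # hypothetical host
      paths:
        - path: /
          pathType: Prefix
```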
Anything else we need to know:
Is this a request for help?:
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Version of Helm and Kubernetes:
What happened:
I want to run a backup from any zammad container (checked all of them: railsserver, websocket and scheduler).
What you expected to happen:
Backup works from some container, or there is an option to configure a separate pod that can run backups periodically.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know: -
Is this a request for help?: Yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes: helm v3.3.1 and Kubernetes 1.19
What happened: When I deploy the chart with the basic values, the zammad-0 pod of the zammad StatefulSet doesn't go live. I can see the following errors in the nginx container:
Note that the railsserver container and the websocket do not log any messages.
How to reproduce it (as minimally and precisely as possible): just a basic deployment with the basic latest helm chart and values.yaml
Thank you
This is the result of kubectl describe
on the pod
Name: zammad-0
Namespace: zammad
Priority: 0
Node: scw-internal-polynom-cluster-default-df9480afb/10.64.140.135
Start Time: Sat, 28 Nov 2020 06:29:26 +0100
Labels: app.kubernetes.io/instance=zammad
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=zammad
app.kubernetes.io/version=3.6.0
controller-revision-hash=zammad-6bc6965c8
helm.sh/chart=zammad-3.1.0
statefulset.kubernetes.io/pod-name=zammad-0
Annotations: <none>
Status: Running
IP: 100.64.6.78
IPs:
IP: 100.64.6.78
Controlled By: StatefulSet/zammad
Init Containers:
zammad-init:
Container ID: docker://2c7a23893e0c2c83287125c56cd71cae2626d40d52e5af113d6d427eb8a39c23
Image: zammad/zammad-docker-compose:zammad-3.6.0-1
Image ID: docker-pullable://zammad/zammad-docker-compose@sha256:24b942e6c200a0acdeab31936715757d3a18d6db8007333949b0d990ef0eb4dd
Port: <none>
Host Port: <none>
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 28 Nov 2020 06:30:11 +0100
Finished: Sat, 28 Nov 2020 06:30:13 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/docker-entrypoint.sh from zammad-init (ro,path="zammad-init")
/opt/zammad from zammad (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kmsdp (ro)
postgresql-init:
Container ID: docker://ad626c966124d9ba024f50039b565e6225d7ca9fa7ea1dbf4a849c83b1d75df9
Image: zammad/zammad-docker-compose:zammad-3.6.0-1
Image ID: docker-pullable://zammad/zammad-docker-compose@sha256:24b942e6c200a0acdeab31936715757d3a18d6db8007333949b0d990ef0eb4dd
Port: <none>
Host Port: <none>
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 28 Nov 2020 06:30:14 +0100
Finished: Sat, 28 Nov 2020 06:30:51 +0100
Ready: True
Restart Count: 0
Environment:
POSTGRESQL_PASS: <set to the key 'postgresql-pass' in secret 'zammad-postgresql-pass'> Optional: false
Mounts:
/docker-entrypoint.sh from zammad-init (ro,path="postgresql-init")
/opt/zammad from zammad (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kmsdp (ro)
elasticsearch-init:
Container ID: docker://dd53ec2a7b37e8f05f9d07109e2367bbc9b30aeb4c63c218c17470c6e049cab0
Image: zammad/zammad-docker-compose:zammad-3.6.0-1
Image ID: docker-pullable://zammad/zammad-docker-compose@sha256:24b942e6c200a0acdeab31936715757d3a18d6db8007333949b0d990ef0eb4dd
Port: <none>
Host Port: <none>
State: Terminated
Reason: OOMKilled
Exit Code: 0
Started: Sat, 28 Nov 2020 06:39:03 +0100
Finished: Sat, 28 Nov 2020 06:40:00 +0100
Ready: True
Restart Count: 5
Environment: <none>
Mounts:
/docker-entrypoint.sh from zammad-init (ro,path="elasticsearch-init")
/opt/zammad from zammad (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kmsdp (ro)
Containers:
zammad-nginx:
Container ID: docker://75ba2b23de4a80ce18cfb57dae232ffa680e61d040963e721790530eb613690a
Image: zammad/zammad-docker-compose:zammad-3.6.0-1
Image ID: docker-pullable://zammad/zammad-docker-compose@sha256:24b942e6c200a0acdeab31936715757d3a18d6db8007333949b0d990ef0eb4dd
Port: 8080/TCP
Host Port: 0/TCP
Command:
/usr/sbin/nginx
-g
daemon off;
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 29 Nov 2020 20:05:57 +0100
Finished: Sun, 29 Nov 2020 20:06:36 +0100
Ready: False
Restart Count: 715
Limits:
cpu: 100m
memory: 64Mi
Requests:
cpu: 50m
memory: 32Mi
Liveness: http-get http://:8080/ delay=10s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8080/ delay=10s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/nginx/sites-enabled from zammad-nginx (rw)
/opt/zammad from zammad (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kmsdp (ro)
zammad-railsserver:
Container ID: docker://f15cc3c0ebb2d05adebc50d81c6196c6ee041634aa1e3d7f2f63fb7eaca49521
Image: zammad/zammad-docker-compose:zammad-3.6.0-1
Image ID: docker-pullable://zammad/zammad-docker-compose@sha256:24b942e6c200a0acdeab31936715757d3a18d6db8007333949b0d990ef0eb4dd
Port: 3000/TCP
Host Port: 0/TCP
Command:
bundle
exec
rails
server
puma
-b
[::]
-p
3000
-e
production
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Sun, 29 Nov 2020 20:02:33 +0100
Finished: Sun, 29 Nov 2020 20:03:17 +0100
Ready: False
Restart Count: 698
Limits:
cpu: 200m
memory: 1Gi
Requests:
cpu: 100m
memory: 512Mi
Liveness: http-get http://:3000/ delay=10s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:3000/ delay=10s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/opt/zammad from zammad (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kmsdp (ro)
zammad-scheduler:
Container ID: docker://da66b8d3237941e1b20509ad29afc6f618b75ed190a7883ef22cd8c09b7dc831
Image: zammad/zammad-docker-compose:zammad-3.6.0-1
Image ID: docker-pullable://zammad/zammad-docker-compose@sha256:24b942e6c200a0acdeab31936715757d3a18d6db8007333949b0d990ef0eb4dd
Port: <none>
Host Port: <none>
Command:
bundle
exec
script/scheduler.rb
run
State: Running
Started: Sat, 28 Nov 2020 06:40:02 +0100
Ready: True
Restart Count: 0
Limits:
cpu: 200m
memory: 512Mi
Requests:
cpu: 100m
memory: 256Mi
Environment: <none>
Mounts:
/opt/zammad from zammad (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kmsdp (ro)
zammad-websocket:
Container ID: docker://b70d0679484b11d997b5722d889ab56deba9d56393abb8e74374d2f358da325e
Image: zammad/zammad-docker-compose:zammad-3.6.0-1
Image ID: docker-pullable://zammad/zammad-docker-compose@sha256:24b942e6c200a0acdeab31936715757d3a18d6db8007333949b0d990ef0eb4dd
Port: 6042/TCP
Host Port: 0/TCP
Command:
bundle
exec
script/websocket-server.rb
-b
0.0.0.0
-p
6042
start
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Sun, 29 Nov 2020 20:05:30 +0100
Finished: Sun, 29 Nov 2020 20:06:09 +0100
Ready: False
Restart Count: 697
Limits:
cpu: 200m
memory: 512Mi
Requests:
cpu: 100m
memory: 256Mi
Liveness: tcp-socket :6042 delay=10s timeout=1s period=10s #success=1 #failure=3
Readiness: tcp-socket :6042 delay=10s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/opt/zammad from zammad (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kmsdp (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
zammad:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: zammad-zammad-0
ReadOnly: false
zammad-nginx:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: zammad-nginx
Optional: false
zammad-init:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: zammad-init
Optional: false
default-token-kmsdp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-kmsdp
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 57m (x10255 over 37h) kubelet Back-off restarting failed container
Warning Unhealthy 52m (x2046 over 37h) kubelet Liveness probe failed: HTTP probe failed with statuscode: 502
Warning Unhealthy 32m (x1901 over 37h) kubelet Readiness probe failed: HTTP probe failed with statuscode: 502
Warning Unhealthy 17m (x2080 over 37h) kubelet Liveness probe failed: Get "http://100.64.6.78:3000/": dial tcp 100.64.6.78:3000: connect: connection refused
Warning Unhealthy 7m56s (x2224 over 37h) kubelet Readiness probe failed: dial tcp 100.64.6.78:6042: connect: connection refused
Warning BackOff 2m46s (x10685 over 37h) kubelet Back-off restarting failed container
Why did you choose a StatefulSet for deploying Zammad, instead of a Deployment?
Is this a request for help?:
this is a BUG REPORT
Version of Helm and Kubernetes: helm v3.4.0 , EKS v1.17
What happened:
k get po
NAME READY STATUS RESTARTS AGE
zammad-dev-0 4/4 Running 0 2d10h
zammad-memcached-674dbf5d47-789w6 1/1 Running 0 2d10h
zammad-master-0 1/1 Running 0 2d10h
I need to specify a specific PVC for elasticsearch (zammad-master-0) when performing an install via the helm chart with the existingClaim value.
The PVC has been mounted by zammad-dev-0 and not zammad-master-0.
What you expected to happen:
I'm expecting that the pvc will be mounted only by zammad-master-0 .
How to reproduce it (as minimally and precisely as possible):
1- Installed zammad
helm install zammad-dev zammad/zammad --namespace zammad --values=zammad-values.yaml --version=3.4.0
2 PVCs have been created for the sts component
k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
zammad-dev-zammad-0 Bound pvc-5896006c-3efa-492b-b55d-6e3ad3a7ce0d 15Gi RWO gp2 2d19h
zammad-master-zammad-master-0 Bound pvc-2a10fb93-aea0-4169-b7aa-76f94fe5c522 30Gi RWO gp2 2d18h
2-Uninstall zammad
helm uninstall zammad-dev
3-Delete elasticsearch PVC
k delete pvc zammad-master-zammad-master-0
4-Create my own PVC
k get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
My-pvc-zammad-master Bound pv-zammad 30Gi RWO gp2 4d18h
zammad-dev-zammad-0 Bound pvc-5896006c-3efa-492b-b55d-6e3ad3a7ce0d 15Gi RWO gp2 2d19h
5-Reinstall Zammad with existingClaim = My-pvc-zammad-master
helm install zammad-dev zammad/zammad --namespace zammad --values=zammad-values.yaml --version=3.4.0
My-pvc-zammad-master is mounted by zammad-dev-0, and zammad-master-0 created a new pvc with 30Gi storage size.
k describe pvc zammad-master-zammad-master-0
Name: zammad-master-zammad-master-0
Namespace: zammad
StorageClass: gp2
Status: Bound
Volume: pvc-2a10fb93-aea0-4169-b7aa-76f94fe5c522
Labels: app=zammad-master
Finalizers: [kubernetes.io/pvc-protection snapshot.storage.kubernetes.io/pvc-as-source-protection]
Capacity: 30Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: zammad-master-0
Events: <none>
k describe pvc My-pvc-zammad-master
Name: pvc-zammad-bilel-master
Namespace: zammad
StorageClass: gp2
Status: Bound
Volume: pv-zammad
Labels: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 30Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: zammad-dev-0
Events: <none>
Anything else we need to know:
cat zammad-values.yaml
envConfig:
  postgresql:
    db: zammad
    host: xxxxx
    pass: xxxxx
    port: 5432
    user: zammad
ingress:
  enabled: true
  hosts:
    - host: xxxxx
      paths:
        - /
persistence:
  existingClaim: My-pvc-zammad-master
How do I specify an existingClaim for each StatefulSet object (zammad-dev-0 and zammad-master-0)?
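From the values above, persistence.existingClaim apparently targets only the zammad StatefulSet, not the elasticsearch one. If the elasticsearch dependency is the elastic/elasticsearch chart, its claims are generated from a volumeClaimTemplate rather than bound to an existing claim, so a per-component setup might look roughly like this (all key names here are assumptions; check each subchart's values.yaml):

```yaml
persistence:
  existingClaim: My-pvc-zammad-app    # hypothetical claim for zammad-dev-0 only
elasticsearch:
  volumeClaimTemplate:                # assumed elastic/elasticsearch key
    storageClassName: gp2
    resources:
      requests:
        storage: 30Gi
```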
Version of Helm and Kubernetes:
Kubernetes Version: v1.18.3
What happened:
The zammad-railsserver gets restarted often and prematurely due to low livenessProbe timeouts. The helm chart does not set a timeout for the livenessProbe (so it defaults to 1 second) and does not provide a way to customize the timeouts.
What you expected to happen:
The helm chart provides a sensible timeout for zammad, or provides a method to customize the livenessProbes (instead of turning them off completely).
How to reproduce it (as minimally and precisely as possible):
Run zammad with this helm chart.
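A sketch of what configurable probes could look like in values.yaml; these keys do not exist in the chart yet, they are precisely the feature being requested here:

```yaml
# Hypothetical values keys, not currently in the chart:
livenessProbe:
  initialDelaySeconds: 30
  timeoutSeconds: 5     # the Kubernetes default is 1 second
  failureThreshold: 5
```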
Anything else we need to know:
Is this a request for help?:
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Version of Helm and Kubernetes:
Helm 3, Kubernetes 1.20
What happened:
zammad-scheduler pod goes into CrashLoopBackOff, crashing the entire service
What you expected to happen:
zammad-scheduler stays up
How to reproduce it (as minimally and precisely as possible):
Please create an option to increase the workers and the connection pool. The current limit of 5 on the zammad-scheduler causes the entire service to crash, leading to service disruption.
ActiveRecord::ConnectionTimeoutError: could not obtain a connection from the pool within 5.000 seconds (waited 5.023 seconds); all pooled connections were in use.
Pod restarts
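Rails raises ActiveRecord::ConnectionTimeoutError when more threads request connections than database.yml's pool allows (the default is 5). A chart option could template the pool size into config/database.yml; a hedged sketch of the rendered fragment, with an assumed value:

```yaml
# Hypothetical database.yml fragment rendered by the chart:
production:
  adapter: postgresql
  pool: 50  # assumed value; should be sized to the scheduler's thread count
```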
Anything else we need to know:
**Is this a BUG REPORT **:
Version of Helm and Kubernetes:
Chart version: zammad-3.0.0
Kubernetes version: v1.15.11
DB:Postgresql 11.5
What happened:
We get a random error when we perform a search for tickets / groups:
"Error ID rn0Mel5q: Please contact your administrator"
Database logs:
[2020-12-24T07:49:40.588575 #1-69853656656080] ERROR -- : Error ID dd-TPKCZ: PG::InFailedSqlTransaction: ERROR: current transaction is aborted, commands ignored until end of transaction block
: SELECT "active_job_locks".* FROM "active_job_locks" WHERE "active_job_locks"."lock_key" = $1 LIMIT $2 FOR UPDATE
What you expected to happen:
Get the list of tickets / groups
It throws. Seems like something in the template is broken.
zammad$ helm upgrade zammad zammad/zammad -f values.yaml
coalesce.go:165: warning: skipped value for extraEnv: Not a table.
Error: UPGRADE FAILED: template: zammad/templates/ingress.yaml:45:21: executing "zammad/templates/ingress.yaml" at <.path>: can't evaluate field path in type interface {}
Is this a request for help?: YES
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG
Version of Helm and Kubernetes:
helm version
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}
kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.3-dhc", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"dirty", BuildDate:"2020-10-15T07:10:10Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
What happened:
Trying to install zammad as a NONROOT user in a private cloud. It gets stuck in the postgresql-init container at Ruby-related commands.
What you expected to happen:
Zammad should be installed successfully.
How to reproduce it (as minimally and precisely as possible):
Downloaded the zammad-helm project from Git to local.
Created a myValues.yaml file:
image:
  repository: myRepoPath/zammad-docker-compose
  tag: zammad-3.6.0-65
  pullPolicy: Always
imagePullSecrets:
  - name: "my-pull-secret"
elasticsearch:
  enabled: false
  enableInitialisation: false
memcached:
  image:
    registry: myRegistry
    repository: myPath/memcached
postgresql:
  image:
    registry: myRegistry
    repository: myPath/postgresql
Installed zammad in the private cloud using the helm command below.
helm install zammad . --values=myValues.yaml -n zammad
Anything else we need to know:
I am trying with the latest zammad v3.6.0-65, which has the fix for bug #82 in it.
First of all, thanks for your fixes in bug #82; the updated commands in the zammad-init container are working fine now.
Now when it reaches postgresql-init, it gets stuck at the "bundle exec rake db:migrate" step.
I have added a few echo statements in the templates/configmap-init.yaml file as shown below (in and around the if statement):
..........
postgresql-init: |-
  #!/bin/bash
  set -e
  sed -e "s#.*adapter:.*# adapter: postgresql#g" -e "s#.*database:.*# database: {{ .Values.envConfig.postgresql.db }}#g" -e "s#.*username:.*# username: {{ .Values.envConfig.postgresql.user }}#g" -e "s#.*password:.*# password: ${POSTGRESQL_PASS}\\n host: {{ if .Values.postgresql.enabled }}{{ .Release.Name }}-postgresql{{ else }}{{ .Values.envConfig.postgresql.host }}{{ end }}\\n port: {{ .Values.envConfig.postgresql.port }}#g" < contrib/packager.io/database.yml.pkgr > config/database.yml
  echo "level 1"
  if ! (bundle exec rails r 'puts User.any?' 2> /dev/null | grep -q true); then
    echo "level 2"
    bundle exec rake db:migrate
    bundle exec rake db:seed
  else
    echo "level 3"
    bundle exec rake db:migrate
  fi
  echo "postgresql init complete :)"
If I check the logs, only the two statements below are printed (cf. the echo statements in the file above):
C:\Ajeet\zammad-helm-master-nonroot\zammad>kubectl logs zammad-0 -c postgresql-init -n zammad
level 1
level 2
I even tried to execute it manually as shown below, but it exits (with code 137, i.e. 128+9 = SIGKILL, which often indicates the OOM killer) after a long time (>10 mins) and nothing shows in the logs.
C:\Ajeet\zammad-helm-master-nonroot\zammad>kubectl exec -it zammad-0 -c postgresql-init -n zammad bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
zammad@zammad-0:~$ bundle exec rake db:migrate
command terminated with exit code 137
This is pod status
C:\Ajeet\zammad-helm-master-nonroot\zammad>kubectl get pod -n zammad
NAME READY STATUS RESTARTS AGE
zammad-0 0/4 Init:1/2 1 20m
zammad-memcached-5fbc5dc6db-hfxl6 1/1 Running 0 20m
zammad-postgresql-0 1/1 Running 0 20m
When I describe pod zammad-0, it shows postgresql-init terminated with exit code 1.
...........
postgresql-init:
Container ID: containerd://d36149fd04a99e462ec33cef9d373e1f496d39aa666d9e9c7eb48581f9812610
Image: myRepoPath/zammad-docker-compose:zammad-3.6.0-65
Image ID: myRepoPath/zammad-docker-compose@sha256:743a6a93e0744738f396438869f082068a3e627747ed96a0c6000ce890485933
Port: <none>
Host Port: <none>
State: Running
Started: Mon, 08 Mar 2021 23:20:29 +0530
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 08 Mar 2021 23:09:16 +0530
Finished: Mon, 08 Mar 2021 23:20:29 +0530
Ready: False
Restart Count: 1
Environment:
POSTGRESQL_PASS: <set to the key 'postgresql-pass' in secret 'zammad-postgresql-pass'> Optional: false
Mounts:
/docker-entrypoint.sh from zammad-init (ro,path="postgresql-init")
/opt/zammad from zammad (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-df875 (ro)
Is this a request for help?: yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes: Helm 3.4.2, Kubernetes 1.19.7
What happened: the zammad-init container fails with the following errors:
... long list of files ...
rsync: failed to set times on "/opt/zammad/vendor/assets/stylesheets/.gitkeep.WccGxi": Operation not permitted (1)
rsync: failed to set times on "/opt/zammad/vendor/plugins/.gitkeep.vKQix2": Operation not permitted (1)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1207) [sender=3.1.3]
What you expected to happen: I expect zammad-init to run successfully
How to reproduce it (as minimally and precisely as possible): install zammad using helm
Anything else we need to know: ES and Postgres are provided, I can share my-values.yaml if needed.
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Version of Helm and Kubernetes:
helm :
version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}
Kubernetes:
>kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:14Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
What happened:
Autowizard is trying to base64-decode a secret value that is already decoded.
What you expected to happen:
Run to completion and pull in the config and start up completely.
How to reproduce it (as minimally and precisely as possible):
autowizard.json
{
  "Token": "secret_zammad_autowizard_token",
  "TextModuleLocale": {
    "Locale": "en-us"
  },
  "Users": [
    {
      "login": "[email protected]",
      "firstname": "Zammad",
      "lastname": "Admin",
      "email": "[email protected]",
      "organization": "Shadowcom Test",
      "password": "testtest"
    }
  ],
  "Settings": [
    {
      "name": "product_name",
      "value": "ZammadTestSystem"
    },
    {
      "name": "system_online_service",
      "value": true
    }
  ],
  "Organizations": [
    {
      "name": "ZammadTest"
    }
  ]
}
kubectl -n zammad create secret generic autowizard \
--from-file=autowizard=${DEVDIR}/autowizard.json
values.yaml
secrets:
  autowizard:
    useExisting: true
autowizard:
  enable: true
helm upgrade --install zammad-test zammad/zammad -n zammad --create-namespace \
-f values.yaml
This appears to be caused by the following lines in zammad/templates/configmap-init.yaml. In the zammad-init section, the following code:
if [ -n "${AUTOWIZARD_JSON}" ]; then
  echo "${AUTOWIZARD_JSON}" | base64 -d > auto_wizard.json
fi
This value does not need to be base64 decoded, as the variable is mounted from a secret in zammad/templates/statefulset.yaml
and is already decoded:
env:
{{ if .Values.autoWizard.enabled }}
- name: "AUTOWIZARD_JSON"
valueFrom:
secretKeyRef:
name: {{ template "zammad.autowizardSecretName" . }}
key: {{ .Values.secrets.autowizard.secretKey }}
{{ end }}
I believe that to fix it, all you need to do is remove "| base64 -d" from the code above. I confirmed by modifying the chart that the variable already contains the base64-decoded text.
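A minimal sketch of the corrected zammad-init snippet (the JSON value here is only an illustration): Kubernetes already base64-decodes secret data before injecting it as an environment variable, so the value must be written out as-is.

```shell
# Sketch of the fixed init step: write the env var straight to the file,
# without "| base64 -d", because the variable already holds plain JSON.
AUTOWIZARD_JSON='{"Token": "secret_zammad_autowizard_token"}'
if [ -n "${AUTOWIZARD_JSON}" ]; then
  echo "${AUTOWIZARD_JSON}" > /tmp/auto_wizard.json
fi
cat /tmp/auto_wizard.json
```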
Anything else we need to know:
The documentation in the chart's values.yaml file is confusing. It makes it appear that the user can simply uncomment the config section and it will be used. After looking at the code, it is clear the intention is to keep the config in a secret. I would recommend updating the comments to indicate that the config must be kept in a secret.
Is this a request for help? Yes
Is this a BUG REPORT or FEATURE REQUEST? BUG REPORT
Version of Helm and Kubernetes: helm v3.6.3 and kubernetes v1.21.4
What happened:
the zammad-init container fails with the following error:
/usr/local/bin/docker-entrypoint.sh: line 3: rsync: command not found
kubectl logs -f pod/zammad-0 zammad-init -n zammad # command used to get the logs
What you expected to happen:
I expect zammad-init to run successfully
How to reproduce it (as minimally and precisely as possible):
Used kubespray with the Flannel network, and deployed with Helm against an existing database and Elasticsearch running standalone on a different machine.
Anything else we need to know:
Is this a request for help?: Yes
Version of Helm and Kubernetes:
Helm v3.5.2
---
k3s version v1.20.4+k3s1 (838a906a)
go version go1.15.8
---
kubectl:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4+k3s1", GitCommit:"838a906ab5eba62ff529d6a3a746384eba810758", GitTreeState:"clean", BuildDate:"2021-02-22T19:49:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4+k3s1", GitCommit:"838a906ab5eba62ff529d6a3a746384eba810758", GitTreeState:"clean", BuildDate:"2021-02-22T19:49:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
What happened: Installation does not run successfully. Log from elasticsearch-init:
Unable to process GET request to elasticsearch URL 'http://zammad-master:9200'. Elasticsearch is not reachable, probably because it's not running or even installed.
How to reproduce it (as minimally and precisely as possible):
helm install zammad zammad/zammad --namespace zammad
Anything else we need to know:
I tried to change the Elasticsearch host but it doesn't change anything:
helm upgrade --set-string envConfig.elasticsearch.host=127.0.0.1 zammad zammad/zammad --namespace zammad
Is this a request for help?:
Unfortunately, the update does not work; I always get the error message:
Failed to install app zammad. Error: UPGRADE FAILED: no Secret with the name "zammad-postgresql-pass" found
The secret is there, though.
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes:
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.4", GitCommit:"f49fa022dbe63faafd0da106ef7e05a29721d3f1", GitTreeState:"clean", BuildDate:"2018-12-14T06:59:37Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
What happened:
After an update, Helm always aborts with the error message "Failed to install app zammad. Error: UPGRADE FAILED: no Secret with the name "zammad-postgresql-pass" found", but unfortunately this is not correct.
How to reproduce it (as minimally and precisely as possible):
Update from version 1.0.0 to 1.0.2.
Is this a request for help?: yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes:
helm version 3.2.4
Kubernetes 1.16.10 (Google Cloud GKE v1.16.10-gke.8)
What happened:
I'm upgrading a zammad 3.2 (helm chart zammad-1.2.1 / zammad/zammad-docker-compose:3.2.0-13) installation to zammad 3.4 (helm chart zammad-2.2.0 / zammad/zammad-docker-compose:3.4.0-11).
After the upgrade something weird happens to the user/groups/roles config. There are several problems I've noticed so far:
What you expected to happen:
user/groups/roles config should keep working. access to existing tickets should be available. ticket should be assignable to other agents
How to reproduce it (as minimally and precisely as possible):
What I did was:
Anything else we need to know:
I have now tried several times to import different DB dumps created on the 3.2 system into the 3.4 system.
The dumping/importing itself runs without problems, and the app server starts without issues on the newly imported database; only after a short while does the system seem to degrade.
I'm wondering if there were some changes to the DB schema in 3.4 that are somehow missing?
I can't find any obvious errors in the logs (neither on the database nor on the Zammad app server).
Right now it is not possible to specify an external Redis that requires a username.
However, Redis 6.x introduces support for different users (and privileges).
Many cloud vendors tend to only offer Redis as a service with custom users, without providing any fallback for the old connection strings. Therefore I ask to support Redis usernames as an input parameter, similar to what is already supported for PostgreSQL and Elasticsearch.
https://github.com/zammad/zammad-helm/blob/main/zammad/templates/statefulset.yaml
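A hypothetical sketch of how the requested values could look, mirroring the existing `envConfig.postgresql` and `envConfig.elasticsearch` blocks (the `user` key is the proposed addition, not current chart API):

```yaml
envConfig:
  redis:
    host: my-redis.example.com   # assumption: external managed Redis
    port: 6379
    user: zammad                 # proposed new key for Redis 6.x ACL users
    pass: "changeme"
```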
I know it's not recommended to run Zammad without Elasticsearch, but the documentation says that it's possible for small teams etc.
Right now, it's not possible to disable the Elasticsearch config / the elasticsearch-init container in the Helm chart.
Let me know if you like the idea, so I can pull request this feature.
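A hypothetical values layout for this feature request, mirroring the existing `postgresql.enabled` toggle (the `elasticsearch.enabled` key is an assumption, not the chart's current API):

```yaml
elasticsearch:
  enabled: false   # proposed: skip ES config and the elasticsearch-init container
```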
In some situations, it could be useful to define the PVC storageClassName as an empty string (not null) to disable dynamic provisioning.
Right now, that's not fully possible in this chart, and the way of doing it varies depending on the chart/subchart:
PostgreSQL (using the storageClass "-" de facto standard) :
postgresql:
...
persistence:
...
storageClass: "-"
Elasticsearch (using full "volumeClaimTemplate" block) :
elasticsearch:
...
volumeClaimTemplate:
...
storageClassName: ""
Zammad itself:
persistence:
...
storageClass: ""
In this last example, zammad is using half of the de facto standard :)
First, storageClass is used instead of storageClassName (which, in my opinion, is a good thing).
But then, the special value "-" is not handled, and worse, if storageClass is an empty string, it leads to no storageClassName at all because of the "with" usage right here: https://github.com/zammad/zammad-helm/blob/master/zammad/templates/statefulset.yaml#L298
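The de facto standard handling distinguishes three cases: unset (omit the field), the special value "-" (emit an empty string to disable dynamic provisioning), and a real class name. A hypothetical template sketch of that pattern, not the chart's current code:

```yaml
{{- if .Values.persistence.storageClass }}
  {{- if (eq "-" .Values.persistence.storageClass) }}
# "-" means: explicitly disable dynamic provisioning
storageClassName: ""
  {{- else }}
storageClassName: {{ .Values.persistence.storageClass | quote }}
  {{- end }}
{{- end }}
```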
Is this a request for help?:
I am an IT beginner.
I need to use SMTP but get the error "Host not reachable".
My SMTP settings:
- host: smtp.gmail.com
- address: my@address
- password: 16-character Google app password
- port: 589
I think the pod and service ports need to be opened. Which pod and service should be open?
Please help me.
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Version of Helm and Kubernetes:
Helm 3.9 / Kubernetes 1.22
What happened:
Since upgrading the Zammad Helm chart from 6.6.0 to 6.7.0, the monitoring inside Zammad (Admin --> Monitoring (under System)) tells me "scheduler may not run (last execution of Stats.generate about 2 months ago) - please contact your system administrator".
I'm also using the provided healthcheck URL in our monitoring system to check Zammad's health, and of course I'm also getting alarms there.
When I look in the scheduler pod's log file, it's not looking too bad - at least there's some OK-looking activity there:
I, [2022-06-29T15:58:01.015498 #1-109200] INFO -- : execute Channel.fetch (try_count 0)...
I, [2022-06-29T15:58:01.017337 #1-109200] INFO -- : ended Channel.fetch took: 0.018719892 seconds.
I, [2022-06-29T15:58:31.026244 #1-109200] INFO -- : execute Channel.fetch (try_count 0)...
I, [2022-06-29T15:58:31.027890 #1-109200] INFO -- : ended Channel.fetch took: 0.007268574 seconds.
I, [2022-06-29T15:59:01.034358 #1-109200] INFO -- : execute Channel.fetch (try_count 0)...
I, [2022-06-29T15:59:01.035733 #1-109200] INFO -- : ended Channel.fetch took: 0.006543665 seconds.
I, [2022-06-29T15:59:13.740571 #1-108860] INFO -- : ProcessScheduledJobs running...
I, [2022-06-29T15:59:13.748378 #1-120320] INFO -- : execute ImportJob.start_registered (try_count 0)...
I, [2022-06-29T15:59:13.752938 #1-120320] INFO -- : ended ImportJob.start_registered took: 0.009868007 seconds.
I, [2022-06-29T15:59:23.752695 #1-108860] INFO -- : Running job thread for 'Process ticket escalations.' (Ticket.process_escalation) status is: sleep
I, [2022-06-29T15:59:31.043167 #1-109200] INFO -- : execute Channel.fetch (try_count 0)...
I, [2022-06-29T15:59:31.044938 #1-109200] INFO -- : ended Channel.fetch took: 0.008018813 seconds.
I, [2022-06-29T15:59:33.762666 #1-108860] INFO -- : Running job thread for 'Check 'Channel' streams.' (Channel.stream) status is: sleep
I, [2022-06-29T15:59:43.770872 #1-120400] INFO -- : execute Ticket.process_pending (try_count 0)...
I, [2022-06-29T15:59:43.973586 #1-120400] INFO -- : ended Ticket.process_pending took: 0.208151392 seconds.
I, [2022-06-29T15:59:53.780674 #1-120540] INFO -- : execute Ticket.process_auto_unassign (try_count 0)...
I, [2022-06-29T15:59:53.784191 #1-120540] INFO -- : ended Ticket.process_auto_unassign took: 0.008527539 seconds.
I, [2022-06-29T16:00:01.052044 #1-109200] INFO -- : execute Channel.fetch (try_count 0)...
I, [2022-06-29T16:00:01.053573 #1-109200] INFO -- : ended Channel.fetch took: 0.00745371 seconds.
I, [2022-06-29T16:00:03.785260 #1-108860] INFO -- : Running job thread for 'Check channels.' (Channel.fetch) status is: sleep
I, [2022-06-29T16:00:09.836285 #1-109540] INFO -- : execute Job.run (try_count 0)...
I, [2022-06-29T16:00:09.838140 #1-109540] INFO -- : ended Job.run took: 0.010392464 seconds.
I, [2022-06-29T16:00:13.795866 #1-120640] INFO -- : execute SessionTimeoutJob.perform_now (try_count 0)...
I, [2022-06-29T16:00:13.859815 #1-120640] INFO -- : SessionTimeoutJob removed session '36821947' for user id '' (last ping: '', timeout: '-1')
Could this issue be related to the recent switch from scheduler to background-worker?
What you expected to happen:
Zammad's health check should report the actual system status.
How to reproduce it (as minimally and precisely as possible):
Just use Helm chart 6.7.0 to set up a Zammad instance and you should see the health check issue.
Anything else we need to know:
Is this a request for help?: Yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug? (or maybe user error)
Version of Helm and Kubernetes:
$ helm version --short
v3.2.0+ge11b7ce
$ kubectl version --short
Client Version: v1.15.9
Server Version: v1.15.9-gke.24
What happened: Attempting to set an existingClaim fails:
persistence:
enabled: true
existingClaim: zammad-nfs-pvc
accessModes:
- ReadWriteOnce
storageClass: nfs
size: 15Gi
annotations: {}
with error:
$ helm upgrade zammad zammad/zammad -f zammad-values-3.3.0-19.yaml --dry-run --debug
upgrade.go:120: [debug] preparing upgrade for zammad
Error: UPGRADE FAILED: template: zammad/templates/statefulset.yaml:237:31: executing "zammad/templates/statefulset.yaml" at <.Values.persistence.existingClaim>: can't evaluate field Values in type string
helm.go:84: [debug] template: zammad/templates/statefulset.yaml:237:31: executing "zammad/templates/statefulset.yaml" at <.Values.persistence.existingClaim>: can't evaluate field Values in type string
UPGRADE FAILED
main.newUpgradeCmd.func1
/private/tmp/helm-20200423-41927-615sa8/src/helm.sh/helm/cmd/helm/upgrade.go:146
github.com/spf13/cobra.(*Command).execute
/private/tmp/helm-20200423-41927-615sa8/pkg/mod/github.com/spf13/[email protected]/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
/private/tmp/helm-20200423-41927-615sa8/pkg/mod/github.com/spf13/[email protected]/command.go:950
github.com/spf13/cobra.(*Command).Execute
/private/tmp/helm-20200423-41927-615sa8/pkg/mod/github.com/spf13/[email protected]/command.go:887
main.main
/private/tmp/helm-20200423-41927-615sa8/src/helm.sh/helm/cmd/helm/helm.go:83
runtime.main
/usr/local/Cellar/[email protected]/1.13.10_1/libexec/src/runtime/proc.go:203
runtime.goexit
/usr/local/Cellar/[email protected]/1.13.10_1/libexec/src/runtime/asm_amd64.s:1357
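The error `can't evaluate field Values in type string` is a classic Helm symptom: inside a `range` or `with` block, `.` is rebound to the current element (here apparently a plain string), so `.Values` no longer resolves to the root context. A hedged sketch of the usual fix, using `$` to reach the root (a hypothetical excerpt; the actual chart code may differ):

```yaml
{{- range .Values.persistence.accessModes }}
# Inside range, "." is the list element (a string), so ".Values" fails.
# "$" always refers to the root context:
- name: zammad-data
  persistentVolumeClaim:
    claimName: {{ $.Values.persistence.existingClaim }}
{{- end }}
```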
What you expected to happen: For it to use my existing persistent claim.
How to reproduce it (as minimally and precisely as possible):
Create NFS pvc and try to attach it by its name
Anything else we need to know: No
Is this a request for help?:
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG
Version of Helm and Kubernetes:
helm v3.2.1
kubelet v1.18.2 (microk8)
What happened:
I wanted to start a default Helm install as a test.
What you expected to happen:
It can't start without errors; see the attached screenshots.
How to reproduce it (as minimally and precisely as possible):
helm upgrade --install zammad zammad/zammad --namespace zammad
Anything else we need to know:
FEATURE REQUEST
Version of Helm and Kubernetes:
irrelevant
What happened:
While experimenting with the Helm chart, I noticed that there is an autowizard config that is able to configure most settings out of the box. However, I soon hit a dead end when I tried to configure an email channel using the autowizard, since I was unsure what the autowizard JSON should look like.
What you expected to happen:
A Custom Resource Definition defining what the autowizard config should look like would greatly reduce confusion and would make it easier for users to set up their Zammad instance. At the very least, documentation describing which fields can be set in the autowizard should be available.
Is this a request for help?:
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Version of Helm and Kubernetes: latest helm chart with 1.19
What happened:
Starting the stateful set always triggers a full Elasticsearch index, causing our startup time to increase as time/usage expands.
What you expected to happen:
A full index is not performed on every start.
How to reproduce it (as minimally and precisely as possible):
Scale the stateful set down and up.
Anything else we need to know:
Is this a request for help?:
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Version of Helm and Kubernetes:
1.18
Is there any issue in upgrading the docker image to 3.5 ?
It would be useful to be able to set resources for the init containers. Not being able to set these resources prevents zammad from running in a namespace with resource quotas.
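A hypothetical values layout for this feature request (the `initContainers.resources` key names are assumptions, not the chart's current API):

```yaml
initContainers:
  resources:
    requests:
      cpu: 50m
      memory: 64Mi
    limits:
      cpu: 200m
      memory: 256Mi
```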
Is this a request for help?: YES
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG
Version of Helm and Kubernetes:
Helm version
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}
Kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.3-XXX", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"dirty", BuildDate:"2020-10-15T07:10:10Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
What happened:
Zammad installation fails due to the root image of elasticsearch-init, as the cluster has a pod security policy that does not allow root or privileged images.
What you expected to happen:
Elastic search should run as NON ROOT image
How to reproduce it (as minimally and precisely as possible):
Install using helm with below images:
zammad/zammad-docker-compose:zammad-4.0.0-7
zammad/zammad-docker-compose:zammad-elasticsearch-4.0.0-7
Results in error
create Pod zammad-master-0 in StatefulSet zammad-master failed error: pods "zammad-master-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.initContainers[0].securityContext.runAsUser: Invalid value: 0: running with the root UID is forbidden spec.initContainers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed spec.initContainers[0].securityContext.runAsUser: Invalid value: 0: running with the root UID is forbidden spec.initContainers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
Anything else we need to know:
Similar issue was fixed for zammad-init container for bug #82.
Hi,
I would like to control how persistent volumes are created during the setup process.
I can see that template is supporting:
persistence.storageClass:
persistence.existingClaim:
When running Helm on Rancher, besides this PV (zammad-data) that I can manually specify in the template variables, I can see that two additional volumes are created, so in total it looks like:
zammad-postgresql (container in workload zammad-postgresql) is claiming: zammad:data-zammad-postgresql-0
elasticsearch (container in workload zammad-master) is claiming: zammad:zammad-master-zammad-master-0
zammad-railsserver (container in workload zammad) is claiming: zammad:zammad-data - THIS ONE IS CONTROLLED BY HELM TEMPLATE
I would like to have control over postgresql and elasticsearch claims so that I can attach existing preconfigured volumes.
Thanx,
D
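A hedged sketch of values that could attach existing claims to the subcharts; the key layouts are assumed from the Bitnami PostgreSQL chart and the Elasticsearch chart structures shown elsewhere in these reports, so verify them against the chart versions you deploy:

```yaml
postgresql:
  persistence:
    existingClaim: my-postgres-pvc   # assumption: pre-created PVC name
elasticsearch:
  volumeClaimTemplate:
    storageClassName: host-storage   # assumption: pre-provisioned class
    resources:
      requests:
        storage: 8Gi
```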
Would you be willing to accept a patch that allows for inserting additional nginx configuration into the nginx configmap from helm values?
We run our instance of Zammad behind a Google IAP and would like the ability to rewrite some of the headers before it hits zammad. I'm specifically targeting this area - https://github.com/zammad/zammad-helm/blob/master/zammad/templates/configmap-nginx.yaml#L52
I'm trying to avoid the need to run an additional instance of nginx in front just for the purpose of rewriting a few headers.
Is this a BUG REPORT or FEATURE REQUEST? (choose one): FEATURE REQUEST
Version of Helm and Kubernetes:
Version 3
What happened:
I'm testing Zammad on Kubernetes using Helm charts. It works fine. But, I already have a separate Postgres database.
What you expected to happen:
I want to integrate Zammad with an existing Postgres DB, or Amazon RDS DB
Anything else we need to know:
Yes, I want to know how to set up Zammad on Kubernetes with a separate Postgres database. I hope someone can help me with this.
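A hedged sketch of values for an external database: the `envConfig.postgresql.*` keys are assumed from the install commands quoted in other reports here, and the host/credential values are placeholders, so check them against the chart's values.yaml:

```yaml
postgresql:
  enabled: false   # disable the bundled PostgreSQL subchart
envConfig:
  postgresql:
    host: my-rds-instance.example.com   # placeholder: your external DB host
    port: 5432
    user: zammad
    pass: "changeme"
    db: zammad_production
```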
Is this a request for help?: Yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes:
Output of the helm version command:
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}
Output of the kubectl version command:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:41:49Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
What happened: Zammad runs as the root user, which is not allowed in our private cloud for security reasons.
What you expected to happen: To be able to install Zammad as a non-root user.
How to reproduce it (as minimally and precisely as possible): NA
Anything else we need to know:
I tried to create a new image from the current one, just adding USER 1000 to run as.
My Dockerfile has only two lines:
FROM image-name-here
USER 1000
With this I can deploy, but the pods don't come up, as Zammad internally runs into lots of other issues because the user is not root.
Version of Helm and Kubernetes:
1.19, helm 3
What happened:
I am taking a brief look at the chart, and I noticed that it deploys a stateful set with replicas set to 1.
What you expected to happen:
Be able to scale the stateful set without needing NFS or other persistence layers; in other words, to be able to scale the Zammad replicas easily.
How to reproduce it (as minimally and precisely as possible):
The replicas spec is hard-coded to 1 replica
Anything else we need to know:
I am taking a look at the chart and the Zammad repo as well. I know I am missing something, and that the application is complex, but with a PostgreSQL database I was hoping to find a Deployment and to be able to scale it to more than 1 replica easily, without having any persistence layer. Are there any other possibilities to scale the whole application? I am not talking about the dependencies.
Is this a request for help?: No
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes: helm 3.6.0, kubernetes v1.21.4-gke.1801
What happened: When the pod gets rescheduled for whatever reason, and zammad is running with a persistent volume claim,
its server.pid file remains in /opt/zammad/tmp/pids/server.pid
, causing the rails server container to not successfully start up again. Same goes for container restarts due to health checks.
What you expected to happen: The rails server should successfully start up again after container restart, i.e., its temporary directory should be clean
How to reproduce it (as minimally and precisely as possible): Force the statefulset onto another node, for example by cordoning a node (just make sure there's nothing else of value running on there...)
Anything else we need to know: The missing piece here is a line of shell code that is present in the entrypoint script for zammad-rails in the docker-compose repository, but not in the command field for the container spec here. I'm opening a pull request to fix this issue.
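The missing cleanup can be sketched as a one-line step before starting the rails server; the snippet below simulates it against a temp directory (the real path in the image is /opt/zammad/tmp/pids/server.pid, per the report):

```shell
# Minimal sketch of the stale-PID cleanup, simulated in /tmp.
PIDS_DIR="/tmp/zammad-demo/tmp/pids"
mkdir -p "${PIDS_DIR}"
touch "${PIDS_DIR}/server.pid"   # simulate a stale file left by a killed container
rm -f "${PIDS_DIR}/server.pid"   # the cleanup line missing from the chart's command
test ! -e "${PIDS_DIR}/server.pid" && echo "stale pid removed"
```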
Is this a request for help?:
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Version of Helm and Kubernetes:
What happened:
When restarting Zammad in an attempt to reset the scheduler sidecar, elasticsearch-init runs for every ticket in the system, causing extreme restart times that get longer by the hour.
What you expected to happen:
Zammad restarts immediately, and Elasticsearch is reindexed via a Job.
How to reproduce it (as minimally and precisely as possible):
Restart zammad stateful set
Anything else we need to know:
Is this a request for help?:
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Version of Helm and Kubernetes:
1.18.6
image 3.4.0-18
What happened:
When scaling the Zammad stateful set, it appears the scheduler runs the same jobs at the same time.
This results in duplicate email tickets etc.
What you expected to happen:
The scheduler should validate that jobs are not already running.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
Is this a request for help?: Yes
Version of Helm and Kubernetes:
kubectl version 1.15
helm version 3.1.1
What happened:
When I installed Zammad with helm install using this command,
helm install --set postgresql.enable=false --set envConfig.postgresql.host=sqlproxy-service-ds1 --set env.Config.postgresql.port=3306 --set envConfig.postgresql.pass=xxxxxx my-zammad zammad/zammad --version 2.0.5 --namespace zammad
i got this error:
Error: unable to build kubernetes objects from release manifest: error validating "": error vali
dating data: [ValidationError(StatefulSet.spec.template.spec.containers[1].securityContext): unk
nown field "fsGroup" in io.k8s.api.core.v1.SecurityContext, ValidationError(StatefulSet.spec.tem
plate.spec.containers[2].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.Securit
yContext, ValidationError(StatefulSet.spec.template.spec.containers[3].securityContext): unknown
field "fsGroup" in io.k8s.api.core.v1.SecurityContext, ValidationError(StatefulSet.spec.templat
e.spec.initContainers[1].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.Securit
yContext, ValidationError(StatefulSet.spec.template.spec.initContainers[2].securityContext): unk
nown field "fsGroup" in io.k8s.api.core.v1.SecurityContext]
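The rejected field has a structural cause: in the Kubernetes API, `fsGroup` exists only on the pod-level `securityContext` (PodSecurityContext), never on per-container security contexts, so strict server-side validation rejects the rendered StatefulSet. A sketch of the distinction (container name illustrative):

```yaml
spec:
  # pod-level securityContext: fsGroup IS valid here
  securityContext:
    fsGroup: 1000
  containers:
    - name: zammad-railsserver
      # container-level securityContext: fsGroup is NOT a valid field here
      securityContext:
        runAsUser: 1000
        runAsNonRoot: true
```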
What you expected to happen:
Zammad will be running on my cluster in GKE
How to reproduce it (as minimally and precisely as possible):
helm repo add zammad https://zammad.github.io
helm repo update
Anything else we need to know:
I want to use our postgresql instance v9.6 @ GCP CloudSQL for zammad prod db via cloudsqlproxy (same namespace).
sqlproxy-deployment.yaml
# https://github.com/GoogleCloudPlatform/cloudsql-proxy/blob/master/Kubernetes.md
# 1. kubectl create secret generic service-account-token --from-file=credentials.json=$HOME/kubernetes/cloudsql_proxy/credentials.json --namespace zammad
# 2. kubectl apply -f sqlproxy-deployment.yaml --namespace zammad
# 3. kubectl apply -f sqlproxy-services.yaml --namespace zammad
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: cloudsqlproxy
spec:
replicas: 1
template:
metadata:
labels:
app: cloudsqlproxy
spec:
containers:
# Make sure to specify image tag in production
# Check out the newest version in release page
# https://github.com/GoogleCloudPlatform/cloudsql-proxy/releases
- image: b.gcr.io/cloudsql-docker/gce-proxy:latest
# 'Always' if imageTag is 'latest', else set to 'IfNotPresent'
imagePullPolicy: Always
name: cloudsqlproxy
command:
- /cloud_sql_proxy
- -dir=/cloudsql
- -instances=<PROJECT_ID>:<ZONE>:<SQL_INSTANCE>=tcp:0.0.0.0:3306
- -credential_file=/credentials/credentials.json
# set term_timeout if require graceful handling of shutdown
# NOTE: proxy will stop accepting new connections; only wait on existing connections
- term_timeout=10s
lifecycle:
preStop:
exec:
# (optional) add a preStop hook so that termination is delayed
# this is required if your server still require new connections (e.g., connection pools)
command: ['sleep', '10']
ports:
- name: port-ds1
containerPort: 3306
volumeMounts:
- mountPath: /cloudsql
name: cloudsql
- mountPath: /credentials
name: service-account-token
volumes:
- name: cloudsql
emptyDir:
- name: service-account-token
secret:
secretName: service-account-token
sqlproxy-service.yaml
apiVersion: v1
kind: Service
metadata:
name: sqlproxy-service-ds1
spec:
ports:
- port: 3306
targetPort: port-ds1
selector:
app: cloudsqlproxy
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Bug
Version of Helm and Kubernetes:
v2
What happened:
Zammad is unable to authenticate against the Elasticsearch API, because the user and password are not set properly.
What you expected to happen:
The Helm template should not check .Values.elasticsearch.pass, but .Values.envConfig.elasticsearch.pass (and .user respectively).
How to reproduce it (as minimally and precisely as possible):
I, [2020-10-05T15:00:18.539348 #6-47041080449380] INFO -- : Setting.set('models_searchable', ["Organization", "KnowledgeBase::Answer::Translation", "User", "Chat::Session", "Ticket"])
I, [2020-10-05T15:00:19.430781 #6-47041080449380] INFO -- : Setting.set('es_url', "https://elasticsearch-es-master.elk.svc.cluster.local:9200")
rake aborted!
Unable to process GET request to elasticsearch URL 'https://elasticsearch-es-master.elk.svc.cluster.local:9200'. Check the response and payload for detailed information:
Response:
#<UserAgent::Result:0x00005625c2492878 @success=false, @body="{\"error\":{\"root_cause\":[{\"type\":\"security_exception\",\"reason\":\"missing authentication credentials for REST request [/]\",\"header\":{\"WWW-Authenticate\":[\"Bearer realm=\\\"security\\\"\",\"ApiKey\",\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"]}}],\"type\":\"security_exception\",\"reason\":\"missing authentication credentials for REST request [/]\",\"header\":{\"WWW-Authenticate\":[\"Bearer realm=\\\"security\\\"\",\"ApiKey\",\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"]}},\"status\":401}", @data=nil, @code="401", @content_type=nil, @error="Client Error: #<Net::HTTPUnauthorized 401 Unauthorized readbody=true>!">
The Elasticsearch configmap-init.yaml does not contain the relevant section for setting the user and the password, because the evaluation always fails. Either a default has to be added or (IMHO the better way) the check should use .Values.envConfig.elasticsearch.user:
apiVersion: v1
kind: ConfigMap
name: zammad-cluster-dev-01-init
namespace: default
data:
elasticsearch-init: >-
#!/bin/bash
set -e
bundle exec rails r 'Setting.set("es_url",
"https://elasticsearch-es-master.elk.svc.cluster.local:9200")'
bundle exec rake searchindex:rebuild
echo "elasticsearch init complete :)"
...
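A hedged sketch of what the rendered elasticsearch-init script could contain once `envConfig.elasticsearch.user`/`pass` are honored. The `es_user`/`es_password` setting names mirror Zammad's documented Elasticsearch settings, but the credential values are placeholders:

```yaml
data:
  elasticsearch-init: >-
    #!/bin/bash
    set -e
    bundle exec rails r "Setting.set('es_url', 'https://elasticsearch-es-master.elk.svc.cluster.local:9200')"
    bundle exec rails r "Setting.set('es_user', 'elastic')"        # placeholder user
    bundle exec rails r "Setting.set('es_password', 'changeme')"   # placeholder password
    bundle exec rake searchindex:rebuild
    echo "elasticsearch init complete :)"
```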
Is this a request for help?:
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Version of Helm and Kubernetes:
k8 1.18.6
What happened:
Running helm upgrade with --set image.tag=3.4.0-18 does not update the image of the stateful set for zammad-master.
What you expected to happen:
All images are upgraded to the specified tag.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
I want to use an existing secret, so we can use this chart with encrypted secrets, e.g. sealed-secrets or SOPS.
Let me know if you like the idea, so I can pull request this feature.
I've noticed cases where the railsserver container restarts because the livenessProbe fails, seemingly only when Elasticsearch is slow to respond. Do any of the liveness or readiness probes (GET / on nginx and railsserver, TCP to 6042 on the websocket container) cause requests to be generated to the database, Elasticsearch, or memcached? If they do, we should probably work on getting dedicated healthcheck endpoints added, so that a slowdown in Elasticsearch doesn't cause a cascading failure down to services that are otherwise healthy.
Regardless, I'd like to make the probes optional (they're useful at times, but the defaults will leave everything enabled), and am happy to open a PR with the required changes, but am curious about where the values fit best within the chart. Ideally, livenessProbe and readinessProbe will be configurable independently for each container. Does nesting them under envConfig make sense despite the naming mismatch of rails vs. railsserver?
Thanks!
Is this a request for help?:
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Version of Helm and Kubernetes:
Current
What happened:
System goes up and down when this error occurs:
FATAL: remaining connection slots are reserved for non-replication superuser connections
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
I attempted to fix it with the following, however the zammad-postgres-init job fails:
set postgresql.postgresqlConfiguration.maxConnections="1000"
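The `--set` flag above corresponds to this values.yaml fragment, assuming the Bitnami PostgreSQL subchart's `postgresqlConfiguration` key (verify against the subchart version in use):

```yaml
postgresql:
  postgresqlConfiguration:
    maxConnections: "1000"
```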
Is this a request for help?:
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG Report
Version of Helm and Kubernetes:
Helm 3.5.3 Kubernetes 1.15.5
What happened:
helm upgrade failed
What you expected to happen:
helm upgrade succeeds
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
Console log:
2021-03-31T13:09:02.9237634Z [command]C:\A\B1_work_tool\helm\3.5.3\x64\windows-amd64\helm.exe upgrade --namespace zammadendusertest --force --values C:\A\B1_work\r28\a_Zammad_Enduser\CubeZammadEnduser\zammad-values.yaml --wait zammadendusertest zammad/zammad
2021-03-31T13:09:04.6614548Z coalesce.go:163: warning: skipped value for extraEnv: Not a table.
2021-03-31T13:09:04.6615127Z coalesce.go:163: warning: skipped value for extraEnv: Not a table.
2021-03-31T13:09:04.6615845Z Error: UPGRADE FAILED: failed to replace object: Service "zammad-master" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "zammadendusertest-memcached" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "zammadendusertest-postgresql" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "zammadendusertest" is invalid: spec.clusterIP: Invalid value: "": field is immutable
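Separately, the repeated `coalesce.go:163: warning: skipped value for extraEnv: Not a table` lines in the log mean the `extraEnv` override in the supplied values file has a different YAML type than the chart's default, so Helm drops it during value merging. A hedged sketch of what the warning implies (`EXAMPLE_VAR` is a hypothetical name; match whatever shape the chart's own values.yaml uses):

```yaml
# values.yaml -- hedged sketch: "Not a table" suggests the chart's
# default extraEnv is a map (table), so an override given as a list or
# string is skipped. A map-shaped override would coalesce cleanly:
extraEnv:
  EXAMPLE_VAR: "example"
```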
I think it is related to the following issue, but I already searched this chart for the cause and didn't find anything:
helm/helm#8283
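If the root cause is indeed helm/helm#8283, the usual workaround is to drop `--force`: with `--force`, Helm replaces objects rather than patching them, and a replaced Service manifest resubmits an empty `spec.clusterIP`, which the API server rejects as immutable. A sketch of the adjusted invocation, assuming the same release name and values file as the log above:

```shell
# Same upgrade as in the CI log, minus --force; a three-way merge patch
# preserves each Service's already-allocated clusterIP instead of
# resubmitting an empty (immutable) spec.clusterIP.
helm upgrade --namespace zammadendusertest \
  --values zammad-values.yaml \
  --wait zammadendusertest zammad/zammad
```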