stevehipwell / helm-charts
Helm chart repository.
License: MIT License
Currently, the host/TLS env vars are only set if the ingress is enabled.
In my case we use Traefik, so we use CRDs to configure our own IngressRoute, thus ingress.enabled=false is used in the chart.
Because of that, ATL_TOMCAT_SECURE and the related variables are no longer set on the instance.
One would need to add:
env:
  - name: ATL_PROXY_PORT
    value: "443"
  - name: ATL_TOMCAT_SECURE
    value: "true"
  - name: ATL_TOMCAT_SCHEME
    value: "https"
  - name: ATL_PROXY_NAME
    value: "myjira.tld"
I understand that this is perfectly possible. I would, however, suggest exposing the host/TLS settings as common values, not specific to the ingress.
Something like hostname and enabledTls at the top level, then using those to populate ATL_PROXY_PORT/ATL_TOMCAT_SECURE/ATL_TOMCAT_SCHEME/ATL_PROXY_NAME.
Does that make sense to you?
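A minimal sketch of what those proposed top-level values could look like (hostname and enabledTls are the hypothetical names suggested above, not existing chart values):

```yaml
# proposed top-level values the chart could use to derive
# ATL_PROXY_NAME, ATL_PROXY_PORT, ATL_TOMCAT_SECURE and ATL_TOMCAT_SCHEME
hostname: "myjira.tld"
enabledTls: true
```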
The current Helm chart does not support adding additional labels to the pods. In Azure, assigning an MSI to a pod is done via aad-pod-identity and the aadpodidbinding label.
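A sketch of the kind of value that would enable this (podLabels is a hypothetical key name and my-identity is a placeholder identity selector):

```yaml
# pass-through labels applied to the pod template
podLabels:
  aadpodidbinding: my-identity
```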
When metrics and anonymous access are enabled, the current Helm chart is hard-coded to add the "nx-anonymous" policy to the anonymous user. This allows anonymous reads of all repositories. If enabling anonymous metrics, the only policy that needs to be applied is "nx-metrics", but there is currently no way to do that.
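One possible shape for making this configurable (the roles key under config.anonymous is hypothetical, not an existing chart value):

```yaml
config:
  anonymous:
    enabled: true
    # let the user pick which policies/roles the anonymous user gets,
    # instead of the hard-coded nx-anonymous
    roles:
      - nx-metrics
```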
When using the 2.8.1 plantuml chart, I have the configuration below:
ingress:
  enabled: true
  annotations: {}
  ingressClassName: "traefik"
  hosts:
    - plantuml.local
After helm install, I got this error. Where should I define paths?
Error: template: plantuml/templates/NOTES.txt:4:12: executing "plantuml/templates/NOTES.txt" at <.paths>: can't evaluate field paths in type interface {}
Hi,
Are you planning to add S3 blobstore functionality for nexus3?
The configuration would look something like:
blobStores:
  - name: s3-blobstore
    type: s3
    bucketName:
    region:
    accessKeyID:
    secretAccessKey:
and so on.
We're looking to use your SonarQube Helm chart (it's the most featured and trusted one); however, we require adding an extra container to the SonarQube deployment (a proxy to connect to external DBs).
@stevehipwell I've opened #297 with the required changes to support this, can you review and let me know if it's OK?
The printf "jdbc:postgresql://%s:%d/%s" with %d was rendered as float64 instead of int, so the URL looks like:
jdbc:postgresql://ot-sonar-postgresql:%!d(float64=5433)/sonardb
Therefore SonarQube doesn't start with PostgreSQL.
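This happens because Helm parses YAML numbers into float64 before handing them to templates, so %d fails on the port. A sketch of a fix, assuming the port comes from a value such as .Values.psql.port (the value path is illustrative), is to cast with Sprig's int function before formatting:

```yaml
# cast the float64 port to int so %d formats correctly
url: {{ printf "jdbc:postgresql://%s:%d/%s" .Values.psql.host (int .Values.psql.port) .Values.psql.database | quote }}
```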
Chart version: nexus3-2.5.1
helm_values.yaml:
image:
  tag: "3.25.0"
  pullPolicy: "IfNotPresent"
properties:
  enabled: true
  values:
    - nexus.scripts.allowCreation=true
config:
  enabled: true
  rootPassword:
    secret: "root-password"
    key: "password"
  ldap:
    enabled: false
  anonymous:
    enabled: false
  realms:
    enabled: false
  repos:
    - name: "docker-registry"
      type: "docker-hosted"
      online: true
      attributes:
        storage:
          blobStorageName: "default"
          strictContentTypeValidation: true
          writePolicy: "ALLOW"
    - name: "helm-registry"
      type: "helm-hosted"
      online: true
      attributes:
        storage:
          blobStorageName: "default"
          strictContentTypeValidation: true
          writePolicy: "ALLOW"
persistence:
  enabled: true
  existingClaim: "pvc-nexus3"
leads to error during script configuration phase:
-------------------------------------------------
Started Sonatype Nexus OSS 3.25.0-03
-------------------------------------------------
2020-07-14 14:59:08,913+0000 INFO [qtp1378315085-45] *UNKNOWN org.apache.shiro.session.mgt.AbstractValidatingSessionManager - Enabling session validation scheduler...
2020-07-14 14:59:08,928+0000 INFO [qtp1378315085-45] *UNKNOWN org.sonatype.nexus.internal.security.anonymous.AnonymousManagerImpl - Using default configuration: OrientAnonymousConfiguration{enabled=true, userId='anonymous', realmName='NexusAuthorizingRealm'}
Updating root password.
The root user's password was updated sucessfully.
2020-07-14 14:59:09,065+0000 INFO [qtp1378315085-44] admin org.sonatype.nexus.internal.security.anonymous.AnonymousManagerImpl - Saving configuration: OrientAnonymousConfiguration{enabled=false, userId='anonymous', realmName='NexusAuthorizingRealm'}
Anonymous access configured.
Updating script /opt/sonatype/nexus/conf/cleanup.groovy.
2020-07-14 14:59:09,094+0000 INFO [qtp1378315085-47] admin org.sonatype.nexus.internal.script.ScriptEngineManagerProvider - Detected 2 engine-factories
2020-07-14 14:59:09,095+0000 INFO [qtp1378315085-47] admin org.sonatype.nexus.internal.script.ScriptEngineManagerProvider - Engine-factory: Oracle Nashorn v1.8.0_252; language=ECMAScript, version=ECMA - 262 Edition 5.1, names=[nashorn, Nashorn, js, JS, JavaScript, javascript, ECMAScript, ecmascript], mime-types=[application/javascript, application/ecmascript, text/javascript, text/ecmascript], extensions=[js]
2020-07-14 14:59:09,096+0000 INFO [qtp1378315085-47] admin org.sonatype.nexus.internal.script.ScriptEngineManagerProvider - Engine-factory: Groovy Scripting Engine v2.0; language=Groovy, version=2.4.17, names=[groovy, Groovy], mime-types=[application/x-groovy], extensions=[groovy]
2020-07-14 14:59:09,096+0000 INFO [qtp1378315085-47] admin org.sonatype.nexus.internal.script.ScriptEngineManagerProvider - Default language: groovy
2020-07-14 14:59:09,120+0000 WARN [qtp1378315085-47] admin org.sonatype.nexus.siesta.internal.WebappExceptionMapper - (ID 6bac27bf-bb2a-4792-bc08-94ca2abf395d) Response: [404] (no entity/body); mapped from: javax.ws.rs.NotFoundException: Script with name: 'cleanup' not found
Could not update script cleanup.
My first assumption was that this might be due to https://issues.sonatype.org/browse/NEXUS-23205; however, as seen in the values.yaml above, I manually enabled scripts via the properties settings.
I am trying to import custom CAs into the Nexus3 trust store.
For this I set caCerts.secret to a Kubernetes secret containing multiple CA certificates.
Applying the settings causes an error. From looking at the deployment.yaml, my first guess would be that the parent folder does not exist.
Is this the official Helm Chart for the operator, hosted at https://docs.projectcalico.org/charts?
identified at least one change, exiting with non-zero exit code (detailed-exitcode parameter enabled)
Upgrading stevehipwell/nexus3
Creating tiller namespace (if missing): kube-system
Release "nexustest" does not exist. Installing it now.
in helmfile.d/202.nexus.yaml: failed processing release nexustest: helm exited with status 1:
Error: release nexustest failed: Ingress.extensions "nexustest-nexus3" is invalid: spec: Invalid value: []networking.IngressRule(nil): either backend or rules must be specified
Error: plugin "tiller" exited with error
In the current SonarQube chart version, the PostgreSQL configuration is not working because of wrong environment variable names.
Please change the environment variable names in this file:
https://github.com/stevehipwell/helm-charts/blob/master/charts/sonarqube/templates/deployment.yaml#L113-L144
Currently the SonarQube run/config script is not reading those values.
So please change:
SONAR_JDBC_URL
SONAR_JDBC_USERNAME
SONAR_JDBC_PASSWORD
to:
SONARQUBE_JDBC_URL
SONARQUBE_JDBC_USERNAME
SONARQUBE_JDBC_PASSWORD
I'm not sure about the rest of them for now.
Hi,
Applying your default Confluence Helm configuration causes 500 errors after entering the database connection information.
Traefik logs:
{"BackendName":"www.tt.ai/","ClientHost":"10.233.70.0","DownstreamStatus":500,"Duration":1106302817,"OriginDuration":1105784036,"OriginStatus":500,"Overhead":518781,"RequestContentSize":0,"RequestHost":"www.tt.ai","RequestMethod":"GET","RequestPath":"/setup/setupdbtype.action?dbConfigInfo.simple=false\u0026database=postgresql\u0026dbConfigInfo.databaseType=postgresql\u0026forceOverwriteExistingData=true\u0026dbConfigInfo.driverClassName=org.postgresql.Driver\u0026dbConfigInfo.databaseUrl=jdbc%3Apostgresql%3A%2F%2Fpostgresql-debug.postgresql-debug.svc%3A5432%2Fconfluence\u0026dbConfigInfo.userName=root\u0026dbConfigInfo.password=asdf1234\u0026dbConfigInfo.dialect=com.atlassian.confluence.impl.hibernate.dialect.PostgreSQLDialect\u0026atl_token=bc4d17af71abf865ea8b40056be8e9cadea19580","level":"info","msg":"","request_Referer":"http://www.tt.ai/setup/setupdbtype.action","request_User-Agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36","time":"2020-07-22T08:46:51Z"}
Hi @stevehipwell!
A question about your confluence-server Helm chart: at the moment we can only connect to PostgreSQL. Can you tell me if support for connecting to a SQL Server database is planned?
Best regards
I want to install the Composer plugin for my nexus3 without a custom Docker image, but it is not possible to install plugins with this Helm chart. Please add support for installing plugins, like the SonarQube Helm chart has.
values.yml:
persistence:
  enabled: true
  existingClaim: jirapvc
postgresql:
  enabled: true
$ helm -n jira upgrade -i jira stevehipwell/jira-software -f jira.yml
Release "jira" does not exist. Installing it now.
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta2"
values.yml:
persistence:
  enabled: true
  accessMode: ReadWriteMany
  storageClass: nfsdefault
  existingClaim: jirapvc
psql:
  host:
  port: 5432
  database: jiradb
  username: jiraadmin
  password:
    secret: jirapsql
    key: password
kubectl -n jira logs pod/jira-jira-software-7b6b5c8f7-gd2rx
INFO:root:Generating /etc/container_id from template container_id.j2
INFO:root:Generating /opt/atlassian/jira/conf/server.xml from template server.xml.j2
INFO:root:Generating /opt/atlassian/jira/atlassian-jira/WEB-INF/classes/seraph-config.xml from template seraph-config.xml.j2
INFO:root:/var/atlassian/application-data/jira/dbconfig.xml exists; skipping.
Traceback (most recent call last):
File "/entrypoint.py", line 26, in <module>
start_app(f'{JIRA_INSTALL_DIR}/bin/start-jira.sh -fg', JIRA_HOME, name='Jira')
File "/entrypoint_helpers.py", line 85, in start_app
set_perms(home_dir, env['run_user'], env['run_group'], 0o700)
File "/entrypoint_helpers.py", line 34, in set_perms
shutil.chown(path, user=user, group=group)
File "/usr/lib/python3.8/shutil.py", line 1296, in chown
os.chown(path, _user, _group)
PermissionError: [Errno 1] Operation not permitted: '/var/atlassian/application-data/jira'
Hi Steve,
Thank you for your Helm repo!
In order to fix the "Available CPU" warning in the Nexus application, one would need to pass -XX:ActiveProcessorCount=<NUMBER_OF_CORES> to INSTALL4J_ADD_VM_PARAMS.
It would be nice to introduce a jvmAdditionalOptions value to facilitate passing additional parameters via INSTALL4J_ADD_VM_PARAMS.
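A sketch of the proposed value (jvmAdditionalOptions is the hypothetical name suggested above; the core count is a placeholder):

```yaml
# extra JVM flags the chart would append to INSTALL4J_ADD_VM_PARAMS
jvmAdditionalOptions:
  - "-XX:ActiveProcessorCount=2"
```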
We are currently in the process of migrating Jira from a different cluster to Kubernetes.
During this we require an init container to perform some migration steps.
We are also looking into the option of creating a preview environment, for which we also require initContainers to do some tasks before Jira starts.
I have implemented this already for our purposes and opened a pull request with the solution we currently use at work.
The same would also need to be implemented in the Confluence chart in the future.
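A minimal sketch of the kind of value this would enable (initContainers as a pass-through list; the image and command are placeholders):

```yaml
# extra init containers added to the pod spec before the Jira container starts
initContainers:
  - name: migrate-data
    image: busybox:1.36
    command: ["sh", "-c", "echo running migration steps"]
```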
Hi team,
helm install my-release stable/confluence-server
When I use this command to deploy Confluence on OpenShift 4.4, I get a CrashLoopBackOff error.
Here is the log:
INFO:root:Generating /opt/atlassian/confluence/conf/server.xml from template server.xml.j2
INFO:root:Generating /opt/atlassian/confluence/confluence/WEB-INF/classes/seraph-config.xml from template seraph-config.xml.j2
INFO:root:Generating /opt/atlassian/confluence/confluence/WEB-INF/classes/confluence-init.properties from template confluence-init.properties.j2
INFO:root:/var/atlassian/application-data/confluence/confluence.cfg.xml exists; skipping.
INFO:root:User is currently root. Will downgrade run user to confluence
INFO:root:Running Confluence with command '/bin/su', arguments ['/bin/su', 'confluence', '-c', '/opt/atlassian/confluence/bin/start-confluence.sh -fg']
su: System error
Please tell me how to deal with this problem.
Thank you
I recently discovered that when using the istio-operator chart, the Istio Operator fails to reconcile changes to IstioOperator CRs, preventing removal of Istio components when uninstalling the chart.
This is due to a missing RBAC permission in the operator's ClusterRole, which has been fixed in upcoming Istio releases but is not yet present in version 1.11.4 (which this chart currently uses).
Refer to the following issue:
istio/istio#35016 (comment)
This can be fixed now by adding pods/portforward to the ClusterRole template, or will eventually be taken care of when you update to a later release of Istio.
I'll submit a PR.
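A sketch of the rule to add to the ClusterRole template (assuming, per the linked issue, that the operator only needs to create port-forwards):

```yaml
# grant the operator permission to port-forward into pods
- apiGroups: [""]
  resources: ["pods/portforward"]
  verbs: ["create"]
```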
The tigera-operator chart fails to install Calico components on fresh k3s clusters (both v1.21.5+k3s2 and v1.22.3+k3s1). Logs recorded during the failure are below.
I suspect this is happening because of the changes to CRDs made here: 991ff6d
My suspicion is that installs to a cluster with a version <= 1.2.3 will work, and the operator will continue to work on upgrades to later versions, because CRDs are not reinstalled by Helm 3 on upgrade: https://helm.sh/docs/chart_best_practices/custom_resource_definitions/
values.yaml:
installation:
  enabled: true
  spec:
    calicoNetwork:
      bgp: Enabled
      ipPools:
        - cidr: 10.100.0.0/16
          encapsulation: None
          natOutgoing: Disabled
image:
  repository: quay.io/tigera/operator
  tag: v1.23.1-arm64
tolerations:
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoSchedule"
Cluster state during failure:
$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system metrics-server-9cf544f65-9gzkq 0/1 ContainerCreating 0 9m6s <none> pi01.pico <none> <none>
kube-system coredns-85cb69466-z9hpq 0/1 ContainerCreating 0 9m6s <none> pi01.pico <none> <none>
tigera-operator tigera-operator-54d4bd67c6-9rqlt 1/1 Running 0 7m51s 10.10.5.105 pi05.pico <none> <none>
tigera-operator logs:
2021/11/12 00:40:57 [INFO] Version: v1.23.1
2021/11/12 00:40:57 [INFO] Go Version: go1.16.7
2021/11/12 00:40:57 [INFO] Go OS/Arch: linux/arm64
I1112 00:40:58.556969 1 request.go:645] Throttling request took 1.044751558s, request: GET:https://10.101.0.1:443/apis/batch/v1?timeout=32s
2021/11/12 00:41:00 [INFO] Active operator: proceeding
{"level":"info","ts":1636677662.1938167,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8484"}
{"level":"info","ts":1636677662.2239914,"logger":"setup","msg":"Checking type of cluster","provider":""}
{"level":"info","ts":1636677662.2308564,"logger":"setup","msg":"Checking if TSEE controllers are required","required":true}
{"level":"info","ts":1636677662.3535101,"logger":"typha_autoscaler","msg":"Starting typha autoscaler","syncPeriod":10}
{"level":"dpanic","ts":1636677662.3748791,"logger":"controller_monitor","msg":"odd number of arguments passed as key-value pairs for logging","ignored key":"the server could not find the requested resource","stacktrace":"github.com/go-logr/zapr.handleFields\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:100\ngithub.com/go-logr/zapr.(*zapLogger).Info\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:127\ngithub.com/tigera/operator/pkg/controller/monitor.waitToAddWatch\n\t/go/src/github.com/tigera/operator/pkg/controller/monitor/prometheus.go:153"}
{"level":"info","ts":1636677662.374828,"logger":"controller_monitor","msg":"%v. monitor-controller will retry."}
{"level":"info","ts":1636677662.377021,"logger":"setup","msg":"starting manager"}
I1112 00:41:02.377286 1 leaderelection.go:243] attempting to acquire leader lease tigera-operator/operator-lock...
{"level":"info","ts":1636677662.377634,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
I1112 00:41:02.408137 1 leaderelection.go:253] successfully acquired lease tigera-operator/operator-lock
{"level":"info","ts":1636677662.4098072,"logger":"controller-runtime.manager.controller.authentication-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.4117415,"logger":"controller-runtime.manager.controller.monitor-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.4117448,"logger":"controller-runtime.manager.controller.log-storage-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.4141288,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.4154627,"logger":"controller-runtime.manager.controller.apiserver-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.415749,"logger":"controller-runtime.manager.controller.cmanager-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.4159749,"logger":"controller-runtime.manager.controller.intrusiondetection-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.4160686,"logger":"controller-runtime.manager.controller.clusterconnection-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.4162962,"logger":"controller-runtime.manager.controller.logcollector-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.4166799,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.4169226,"logger":"controller-runtime.manager.controller.amazoncloudintegration-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.5118752,"logger":"controller-runtime.manager.controller.authentication-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.5122,"logger":"controller-runtime.manager.controller.authentication-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.5158572,"logger":"controller-runtime.manager.controller.monitor-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.5160706,"logger":"controller-runtime.manager.controller.monitor-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.5169108,"logger":"controller-runtime.manager.controller.log-storage-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.5171344,"logger":"controller-runtime.manager.controller.log-storage-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.5184586,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.5202954,"logger":"controller-runtime.manager.controller.apiserver-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.520441,"logger":"controller-runtime.manager.controller.apiserver-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=ConfigMap"}
{"level":"info","ts":1636677662.5206282,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.5210829,"logger":"controller-runtime.manager.controller.cmanager-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.521193,"logger":"controller-runtime.manager.controller.cmanager-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.5212553,"logger":"controller-runtime.manager.controller.cmanager-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.5219283,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.5224407,"logger":"controller-runtime.manager.controller.logcollector-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.522566,"logger":"controller-runtime.manager.controller.logcollector-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.5226636,"logger":"controller-runtime.manager.controller.intrusiondetection-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.523238,"logger":"controller-runtime.manager.controller.clusterconnection-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.5233364,"logger":"controller-runtime.manager.controller.clusterconnection-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.5251477,"logger":"controller-runtime.manager.controller.amazoncloudintegration-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.525359,"logger":"controller-runtime.manager.controller.amazoncloudintegration-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
E1112 00:41:02.530327 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.Job: failed to list *v1.Job: jobs.batch is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "jobs" in API group "batch" at the cluster scope
{"level":"info","ts":1636677662.6133475,"logger":"controller-runtime.manager.controller.authentication-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6140475,"logger":"controller-runtime.manager.controller.authentication-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6144626,"logger":"controller-runtime.manager.controller.authentication-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.614848,"logger":"controller-runtime.manager.controller.authentication-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6153228,"logger":"controller-runtime.manager.controller.authentication-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6158385,"logger":"controller-runtime.manager.controller.authentication-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6162727,"logger":"controller-runtime.manager.controller.authentication-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6168122,"logger":"controller-runtime.manager.controller.monitor-controller","msg":"Starting Controller"}
{"level":"info","ts":1636677662.6167839,"logger":"controller-runtime.manager.controller.authentication-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6174731,"logger":"controller-runtime.manager.controller.log-storage-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.6186554,"logger":"controller-runtime.manager.controller.authentication-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.619435,"logger":"controller-runtime.manager.controller.authentication-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.619914,"logger":"controller-runtime.manager.controller.authentication-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.6200671,"logger":"controller-runtime.manager.controller.authentication-controller","msg":"Starting Controller"}
{"level":"info","ts":1636677662.6205263,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=ConfigMap"}
{"level":"info","ts":1636677662.6208122,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=ConfigMap"}
{"level":"info","ts":1636677662.62107,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=ConfigMap"}
{"level":"info","ts":1636677662.6212695,"logger":"controller-runtime.manager.controller.apiserver-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.6214352,"logger":"controller-runtime.manager.controller.apiserver-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.621548,"logger":"controller-runtime.manager.controller.apiserver-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.6215708,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=ConfigMap"}
{"level":"info","ts":1636677662.621665,"logger":"controller-runtime.manager.controller.apiserver-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6215153,"logger":"controller-runtime.manager.controller.cmanager-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6218238,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.621916,"logger":"controller-runtime.manager.controller.apiserver-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6219692,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.622154,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.6224272,"logger":"controller-runtime.manager.controller.apiserver-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.6224866,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.6226687,"logger":"controller-runtime.manager.controller.logcollector-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6227536,"logger":"controller-runtime.manager.controller.cmanager-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
E1112 00:41:02.623348 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
{"level":"info","ts":1636677662.6235983,"logger":"controller-runtime.manager.controller.clusterconnection-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6245081,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.624538,"logger":"controller-runtime.manager.controller.apiserver-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6247797,"logger":"controller-runtime.manager.controller.logcollector-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6251442,"logger":"controller-runtime.manager.controller.logcollector-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6254306,"logger":"controller-runtime.manager.controller.logcollector-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6257234,"logger":"controller-runtime.manager.controller.logcollector-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6261225,"logger":"controller-runtime.manager.controller.logcollector-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6261623,"logger":"controller-runtime.manager.controller.amazoncloudintegration-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.626359,"logger":"controller-runtime.manager.controller.apiserver-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6264236,"logger":"controller-runtime.manager.controller.amazoncloudintegration-controller","msg":"Starting Controller"}
{"level":"info","ts":1636677662.6267672,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.627125,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6274922,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.627925,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6280577,"logger":"controller-runtime.manager.controller.cmanager-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.628371,"logger":"controller-runtime.manager.controller.clusterconnection-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.6283948,"logger":"controller-runtime.manager.controller.apiserver-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6285136,"logger":"controller-runtime.manager.controller.clusterconnection-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.628628,"logger":"controller-runtime.manager.controller.clusterconnection-controller","msg":"Starting Controller"}
{"level":"info","ts":1636677662.6286788,"logger":"controller-runtime.manager.controller.apiserver-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.6289165,"logger":"controller-runtime.manager.controller.apiserver-controller","msg":"Starting Controller"}
{"level":"info","ts":1636677662.628919,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.629197,"logger":"controller-runtime.manager.controller.logcollector-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6295617,"logger":"controller-runtime.manager.controller.logcollector-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=ConfigMap"}
{"level":"info","ts":1636677662.6297507,"logger":"controller-runtime.manager.controller.logcollector-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=ConfigMap"}
{"level":"info","ts":1636677662.629886,"logger":"controller-runtime.manager.controller.logcollector-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677662.630297,"logger":"controller-runtime.manager.controller.cmanager-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.630734,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.63078,"logger":"controller-runtime.manager.controller.cmanager-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.630928,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6311326,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.631313,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6314733,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6316185,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.631774,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6319294,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.63208,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6322203,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.632373,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6325212,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677662.6326635,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=ConfigMap"}
{"level":"info","ts":1636677662.6328561,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Starting Controller"}
{"level":"info","ts":1636677662.7314732,"logger":"controller-runtime.manager.controller.logcollector-controller","msg":"Starting Controller"}
{"level":"info","ts":1636677662.736159,"logger":"controller-runtime.manager.controller.cmanager-controller","msg":"Starting Controller"}
{"level":"info","ts":1636677663.4425952,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677663.443027,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677663.4431915,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /V1, Kind=Secret"}
{"level":"info","ts":1636677663.443489,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting EventSource","source":"kind source: /, Kind="}
{"level":"info","ts":1636677663.4436684,"logger":"controller-runtime.manager.controller.tigera-installation-controller","msg":"Starting Controller"}
E1112 00:41:03.860160 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1112 00:41:03.887805 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.Job: failed to list *v1.Job: jobs.batch is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "jobs" in API group "batch" at the cluster scope
E1112 00:41:06.661885 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.Job: failed to list *v1.Job: jobs.batch is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "jobs" in API group "batch" at the cluster scope
E1112 00:41:06.746559 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I1112 00:41:08.563248 1 request.go:645] Throttling request took 2.993760677s, request: GET:https://10.101.0.1:443/apis/crd.projectcalico.org/v1?timeout=32s
E1112 00:41:10.452401 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.Job: failed to list *v1.Job: jobs.batch is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "jobs" in API group "batch" at the cluster scope
E1112 00:41:11.320501 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I1112 00:41:18.606951 1 request.go:645] Throttling request took 2.540810159s, request: GET:https://10.101.0.1:443/apis/certificates.k8s.io/v1?timeout=32s
E1112 00:41:22.094773 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1112 00:41:22.598269 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.Job: failed to list *v1.Job: jobs.batch is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "jobs" in API group "batch" at the cluster scope
I1112 00:41:29.254312 1 request.go:645] Throttling request took 1.044857982s, request: GET:https://10.101.0.1:443/apis/admissionregistration.k8s.io/v1?timeout=32s
I1112 00:41:39.614575 1 request.go:645] Throttling request took 1.04318486s, request: GET:https://10.101.0.1:443/apis/crd.projectcalico.org/v1?timeout=32s
E1112 00:41:47.224726 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.Job: failed to list *v1.Job: jobs.batch is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "jobs" in API group "batch" at the cluster scope
E1112 00:41:47.432665 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I1112 00:42:04.772109 1 request.go:645] Throttling request took 1.045461343s, request: GET:https://10.101.0.1:443/apis/rbac.authorization.k8s.io/v1?timeout=32s
{"level":"dpanic","ts":1636677726.5062828,"logger":"controller_monitor","msg":"odd number of arguments passed as key-value pairs for logging","ignored key":"the server could not find the requested resource","stacktrace":"github.com/go-logr/zapr.handleFields\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:100\ngithub.com/go-logr/zapr.(*zapLogger).Info\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:127\ngithub.com/tigera/operator/pkg/controller/monitor.waitToAddWatch\n\t/go/src/github.com/tigera/operator/pkg/controller/monitor/prometheus.go:153"}
{"level":"info","ts":1636677726.5061483,"logger":"controller_monitor","msg":"%v. monitor-controller will retry."}
E1112 00:42:15.168914 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.Job: failed to list *v1.Job: jobs.batch is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "jobs" in API group "batch" at the cluster scope
E1112 00:42:25.670800 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I1112 00:42:34.773724 1 request.go:645] Throttling request took 1.043376358s, request: GET:https://10.101.0.1:443/apis/k3s.cattle.io/v1?timeout=32s
I1112 00:43:04.775512 1 request.go:645] Throttling request took 1.044682171s, request: GET:https://10.101.0.1:443/apis/discovery.k8s.io/v1beta1?timeout=32s
E1112 00:43:06.114124 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1112 00:43:13.825587 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.Job: failed to list *v1.Job: jobs.batch is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "jobs" in API group "batch" at the cluster scope
I1112 00:43:34.768985 1 request.go:645] Throttling request took 1.043094103s, request: GET:https://10.101.0.1:443/apis/helm.cattle.io/v1?timeout=32s
E1112 00:43:56.855100 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I1112 00:44:04.774253 1 request.go:645] Throttling request took 1.045804154s, request: GET:https://10.101.0.1:443/apis/policy/v1beta1?timeout=32s
E1112 00:44:05.158737 1 reflector.go:127] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:156: Failed to watch *v1.Job: failed to list *v1.Job: jobs.batch is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "jobs" in API group "batch" at the cluster scope
{"level":"dpanic","ts":1636677857.6515176,"logger":"controller_monitor","msg":"odd number of arguments passed as key-value pairs for logging","ignored key":"the server could not find the requested resource","stacktrace":"github.com/go-logr/zapr.handleFields\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:100\ngithub.com/go-logr/zapr.(*zapLogger).Info\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:127\ngithub.com/tigera/operator/pkg/controller/monitor.waitToAddWatch\n\t/go/src/github.com/tigera/operator/pkg/controller/monitor/prometheus.go:153"}
{"level":"info","ts":1636677857.6514492,"logger":"controller_monitor","msg":"%v. monitor-controller will retry."}
I1112 00:44:34.777013 1 request.go:645] Throttling request took 1.042652461s, request: GET:https://10.101.0.1:443/apis/helm.cattle.io/v1?timeout=32s
Hello,
This issue is linked to #232. Jira Server in Kubernetes fails to complete a full re-index because the Jira default page returns a 503 HTTP code while re-indexing is in progress; the page displayed to the user is an error page explaining that Jira is re-indexing and that the application cannot be accessed.
Kubernetes also gets a 503 HTTP code on /status, so it considers the application failed and restarts the pod before the re-index has finished.
The issue occurs on Jira 8.13.3 and 8.17.0.
I think this HTTP code is intentional, but in a Kubernetes context it is unwanted.
The only way I found to work around this is to change the Helm chart itself, rather than only editing parameters in values.yaml.
I tried to set httpGet: null
and other similar syntaxes, but I always got an error about duplicated parameters.
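For illustration, here is a hedged sketch of the kind of values override that would help, assuming the chart exposed the probe configuration (these keys are an assumption, not current chart values):

```yaml
# Hypothetical values — assumes the chart let users replace the probe.
livenessProbe:
  httpGet: null        # drop the HTTP check that returns 503 while re-indexing
  tcpSocket:
    port: 8080         # only verify that Tomcat is still listening
  initialDelaySeconds: 300
```

A TCP check would keep the pod alive through a long re-index, at the cost of not detecting an unhealthy-but-listening Tomcat.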
I raised a ticket to Atlassian Support but it is private I think.
Do you see a better way to inform the Atlassian Jira developers?
Thank you.
I have deployed SonarQube 4.0.0 with the Helm chart. Everything is set up and works correctly except GitHub authentication. I completed all the fields using the same client ID etc. as used when setting up SonarQube with GitHub Actions.
When I try to log in with the GitHub account, it redirects me back to the login page. Have you seen this before, or is there something I missed?
Decoded, the redirect below shows GitHub's redirect_uri_mismatch error ("The redirect_uri MUST match the registered callback URL for this application"):
sessions/new?return_to=%2F%3Ferror%3Dredirect_uri_mismatch%26error_description%3DThe%2Bredirect_uri%2BMUST%2Bmatch%2Bthe%2Bregistered%2Bcallback%2BURL%2Bfor%2Bthis%2Bapplication.%26error_uri%3Dhttps%253A%252F%252Fdocs.github.com%252Fapps%252Fmanaging-oauth-apps%252Ftroubleshooting-authorization-request-errors%252F%2523redirect-uri-mismatch%26state%3Db0jodbgpfpbvu9rencok4om9n1
In order for a role to be created that contains a privilege for a repo, the repo must first exist. The role configuration needs to be moved after the repo configuration.
When changing the database credentials, and thus ATL_JDBC_PASSWORD,
the dbconfig.xml
is not updated; the old one is used and the connection no longer works.
This can be very surprising and lead to unexpected issues. Is there any reason you do not regenerate it?
Compared to Confluence, in Jira this file can usually be regenerated on each start without any issues, so it is much simpler to fix.
Hello.
I discovered your Helm charts today via the Nexus Helm chart issue from Sonatype.
Here's my opinion.
Thanks,
helm 3
gke 1.16.9-gke.6
Error: unable to build kubernetes objects from release manifest: unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta2"
Thank you for providing this chart; it seems to be actively maintained and provides many options to control the deployed instance of Nexus.
While not setting CPU requests or limits is usually not a big problem in Kubernetes, it does cause problems with memory: under memory pressure, containers without a memory limit are targeted first by the OOM killer.
Since the default heap values are set at 1GB and MaxDirectMemorySize=2048m, you will want to set a default limit that accounts for both.
There are several things to take into account with memory limits in Kubernetes: the kernel, disk cache and processes all use memory, so you need to pad significantly beyond what the Java process itself needs. In this case I think a 4GB limit by default should be adequate.
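A hedged sketch of what the suggested default could look like in values.yaml, using the standard Kubernetes resources schema (the 4Gi figure is the suggestion above, not a chart default):

```yaml
resources:
  requests:
    memory: 4Gi
  limits:
    memory: 4Gi   # heap (1GB) + MaxDirectMemorySize (2GB) + padding
```

Setting the request equal to the limit gives the pod the Guaranteed QoS class, which further lowers its OOM-kill priority.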
Hello!
It would be great to add the creation of secrets and extra volumes to the fluentd-aggregator chart, to mount certificates from secrets, for example.
I'd like to report a bug regarding your Jira chart, specifically its integrated postgres chart.
On Kubernetes 1.16, the kind "Deployment" and many others have been moved from extensions/v1beta1 to apps/v1. Your chart is updated properly, but the postgres dependency is not.
Without postgres enabled, the chart works fine though!
Is it worth adding functionality to allow the removal of the default repos and blob stores? I thought about adding it as Groovy to be run via the API, but perhaps it's better to let it be controlled by configure.sh.
For example, one could add the following to values.yaml:
removeRepos:
  - name: nuget-group
  - name: nuget-hosted
  - name: nuget.org-proxy
And perhaps something like this to configure.sh:
for json_file in "${base_dir}"/conf/*-repo-remove.json
do
  if [ -f "${json_file}" ]
  then
    name="$(grep -Pio '(?<="name":)\s*"[^"]+"' "${json_file}" | xargs)"
    echo "Removing repo '${name}'..."
    status_code=$(curl -s -o /dev/null -w "%{http_code}" -X DELETE -H 'Content-Type: application/json' -u "${root_user}:${root_password}" "${nexus_host}/service/rest/v1/repositories/${name}")
    if [ "${status_code}" -ne 204 ]
    then
      echo "Could not remove repo '${name}'." >&2
      exit 1
    fi
    echo "Repo '${name}' removed."
  fi
done
If this is something that you'd consider for this, do you have any preference on the way it's done?
Hello,
I've encountered a problem when trying to install the SonarQube Helm chart:
> helm install sonarqube stevehipwell/sonarqube -f sonarqube-values.yaml --namespace sonarqube --create-namespace --debug
install.go:172: [debug] Original chart version: ""
install.go:189: [debug] CHART PATH: /Users/admin/Library/Caches/helm/repository/sonarqube-1.0.0.tgz
Error: template: sonarqube/templates/deployment.yaml:122:61: executing "sonarqube/templates/deployment.yaml" at <include "sonarqube.postgresql.fullname" .>: error calling include: template: sonarqube/templates/_helpers.tpl:90:12: executing "sonarqube.postgresql.fullname" at <{{template "postgresql.fullname" $postgresContext}}>: template "postgresql.fullname" not defined
helm.go:81: [debug] template: sonarqube/templates/deployment.yaml:122:61: executing "sonarqube/templates/deployment.yaml" at <include "sonarqube.postgresql.fullname" .>: error calling include: template: sonarqube/templates/_helpers.tpl:90:12: executing "sonarqube.postgresql.fullname" at <{{template "postgresql.fullname" $postgresContext}}>: template "postgresql.fullname" not defined
My helm version:
> helm version
version.BuildInfo{Version:"v3.4.2", GitCommit:"23dd3af5e19a02d4f4baa5b2f242645a1a3af629", GitTreeState:"dirty", GoVersion:"go1.15.5"}
Effective from 2 February 2021, Atlassian will end new server licence sales and cease new feature development for its server product line. Do you know if there will still be updated (security-patched) Docker images from Atlassian? If so, will you keep the Helm charts up to date?
Thank you for all the work on the charts. Really appreciate them!
As mentioned in multiple PlantUML issues (#64, #163), there is an issue when deploying PlantUML behind a reverse proxy, where it redirects back to http instead of https.
It seems the solution is to add --module=http-forwarded
as an argument to the PlantUML container.
Is it possible to add support for this in the chart, or simply add support for any custom extra arguments?
An example for this can be found in this no-longer-maintained chart:
https://gitlab.com/gitlab-org/charts/plantuml/-/blob/master/templates/deployment.yaml
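For illustration, a hypothetical values extension (the extraArgs key is an assumption, not a current chart value):

```yaml
# Hypothetical values — assumes the chart passed extraArgs through to the container.
extraArgs:
  - "--module=http-forwarded"
```

A generic extraArgs list would cover this case and any future Jetty module flags without further chart changes.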
If I deploy jira-software, the ingress can't be deployed, because there is a templating issue in helm-charts/charts/jira-software/templates/ingress.yaml on line 20.
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
There is a missing spec key. That part of the code should look like this:
{{- end }}
spec:
  rules:
  {{- range .Values.ingress.hosts }}
I need to add the following parameter to CATALINA_OPTS:
"-Dofficeconnector.spreadsheet.xlsxmaxsize=134217728"
Do I modify the deployment file for JVM_SUPPORT_RECOMMENDED_ARGS, or is there a better way to do this?
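If the chart's envVars.jvmAdditionalOptions value feeds into the JVM arguments (as it appears to elsewhere in these charts), a sketch of the override might look like this; treat the key name as an assumption for this particular chart:

```yaml
# Hypothetical values override — assumes jvmAdditionalOptions reaches the JVM.
envVars:
  jvmAdditionalOptions: "-Dofficeconnector.spreadsheet.xlsxmaxsize=134217728"
```

Using a values key keeps the setting declarative and survives chart upgrades, unlike editing the deployment directly.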
I'm having the following issue:
Thanks!
After installing the Jira Helm chart on my lab I get gadget.common.error.500
and __MSG_gadget.project.title__
displayed on the system dashboard, along with a warning in Administration > System > Troubleshooting and support tools about a problem with the gadget feed URL.
The configuration overrides used are:
---
caCerts:
  enabled: "true"
  secret: "jira-custom-ca"
envVars:
  jvmAdditionalOptions: "-Dhttp.nonProxyHosts=\"*.docker.internal|localhost\" -Dhttps.nonProxyHosts=\"*.docker.internal|localhost\""
ingress:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  enabled: "true"
  hosts:
    - "jira.docker.internal"
  tls:
    - secretName: "tls-docker-internal"
      hosts:
        - "jira.docker.internal"
persistence:
  enabled: "true"
  existingClaim: "pvc-jira-data"
postgresql:
  enabled: "true"
  persistence:
    enabled: "true"
    existingClaim: "pvc-jira-database"
Thank you in advance for your assistance!
When using this chart we ran into an issue when using an AWS ALB with Sonarqube. By default, the ingress path '/' does not support the wildcard * in AWS (the case may be different in other cloud providers). This means that after the Ingress ALB has been created, we need to manually go into the AWS console and add the wildcard (*) after the path '/'. This allows Sonarqube to function normally. Without doing this, none of the Sonarqube pages will load.
YAML for Ingress resource:
# Source: sonarqube/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sonarqube
  labels:
    helm.sh/chart: sonarqube-4.3.1
    app.kubernetes.io/name: sonarqube
    app.kubernetes.io/instance: sonarqube
    app.kubernetes.io/version: "9.3.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    alb.ingress.kubernetes.io/group.name: sonarqube
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    kubernetes.io/ingress.class: alb
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: [REDACTED]
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-type: alb
spec:
  rules:
    - host: "[REDACTED]"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sonarqube
                port:
                  number: 9000
Kubernetes version: v1.20.11-eks-f17b81
Helm Version: v3.8.0
Installed the chart using the following command: helm upgrade --install --namespace sonarqube -f values.yaml sonarqube stevehipwell/sonarqube
Hello,
Like in #136, I also receive a templating error in jira-software.
Error: template: jira-software/templates/deployment.yaml:79:43: executing "jira-software/templates/deployment.yaml" at <include "jira-software.postgresql.fullname" .>: error calling include: template: jira-software/templates/_helpers.tpl:77:12: executing "jira-software.postgresql.fullname" at <{{template "postgresql.fullname" $postgresContext}}>: template "postgresql.fullname" not defined
I have a temporary workaround: cloning the repo locally and editing _helpers.tpl on line 77 to {{ template "postgresql.primary.fullname" $postgresContext }}
Hello :)
I'm using this jira-software chart on my local server. It's not very fast, and I'm using the internal database, so sometimes the pod restarts before initialization completes. I know I can patch the deployment parameters with kubectl apply after deploying the chart, but maybe you could add these options to the chart. Thank you!
Just a heads up on the configuration features I plan on implementing. @stevehipwell please stop me if any of them are already implemented or in the pipeline.
Additionally: a health check for Kubernetes.
Nexus requires basic auth to scrape the metrics, but the ServiceMonitor resource created by the chart does not currently support adding that field.
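For reference, the Prometheus Operator's ServiceMonitor does support basic auth per endpoint via secret key references; a sketch of what the chart could render (the secret name is illustrative):

```yaml
# ServiceMonitor endpoint fragment — secret name is a placeholder.
endpoints:
  - port: http
    path: /service/metrics/prometheus
    basicAuth:
      username:
        name: nexus3-metrics-auth   # illustrative Secret holding the credentials
        key: username
      password:
        name: nexus3-metrics-auth
        key: password
```

Exposing the secret name as a chart value would be enough, since the endpoint structure is fixed by the operator's CRD.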
Changing ATL_JDBC_PASSWORD
or any other DB-related setting does not regenerate confluence.cfg.xml.
I understand this is a little more tricky with Confluence than with Jira (#426), since actual runtime data, like the license, is saved in that file.
In both cases we could use xmlstarlet to inject just the values we need, as in https://github.com/EugenMayer/docker-image-atlassian-confluence/blob/master/bin/docker-entrypoint.sh#L81
Any reason not to do that?
In configure.sh, as of the most recent commit fixing the repository secrets, there is now a requirement that the sonatype/nexus3 image have 'jq' installed. However, it is not installed in the image maintained by Sonatype. See the following code:
repo_name="$(jq -r '.name' "${json_file}")"
repo_password_file="${base_dir}/secret/repo-credentials/${repo_name}"
if [ -f "${repo_password_file}" ]
then
  repo_password="$(cat "${repo_password_file}")"
  sed -i "s/PASSWORD/${repo_password}/g" "${json_file}"
fi
Yes, one can create a downstream image, but I suspect there might be other ways to deal with this using only what is already installed in the image.
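For instance, the stock image does ship sed, so a hypothetical jq-free extraction of the repo name could look like this (the file path and JSON content are illustrative):

```shell
# Extract the "name" field from a one-object JSON file using sed only.
# The temp file stands in for "${json_file}" from configure.sh.
json_file="$(mktemp)"
printf '{"name": "nuget-hosted", "online": true}\n' > "${json_file}"
repo_name="$(sed -n 's/.*"name"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "${json_file}")"
echo "${repo_name}"
rm -f "${json_file}"
```

This only works reliably for the flat, single-line JSON the chart writes; for arbitrary JSON, jq (and thus a downstream image) remains the safer option.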
Hey,
I'm trying to get my head around the istio-operator. I've set the istio profile to default, but I also need to specify Kiali as an additional spec.
I assume this comes under controlPlane.spec and the additionalComponents value, but I can't seem to find the format for specifying this.
I would like to mount the "kube-root-ca.crt" config map in the metrics-server so I can validate against the CA and use the '--kubelet-certificate-authority' argument.
As of the v2.1.0 Helm chart, tigera-operator
is unable to list *v1beta1.PodSecurityPolicy:
{"level":"info","ts":1644161221.9355175,"logger":"status_manager.calico","msg":"Status manager is not ready to report component statuses."}
E0206 15:27:06.207604 1 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1beta1.PodSecurityPolicy: failed to list *v1beta1.PodSecurityPolicy: podsecuritypolicies.policy is forbidden: User "system:serviceaccount:tigera-operator:tigera-operator" cannot list resource "podsecuritypolicies" in API group "policy" at the cluster scope
This prevents the operator from deploying Calico. It occurs both on clean installs with the provided Helm chart and CRDs, and on upgrade from 2.0.1
with the provided CRDs applied with kubectl apply
pre-upgrade.
The following .Values.rbac.customRules
fix this:
rbac:
  customRules:
    - apiGroups:
        - policy
      resources:
        - podsecuritypolicies
      verbs:
        - create
        - get
        - list
        - update
        - delete
        - watch
These should probably be integrated into the default policy rule here:
helm-charts/charts/tigera-operator/templates/clusterrole.yaml
Lines 152 to 162 in 7e786ab
There have been some changes within the Nexus3 codebase, causing some of the Groovy scripts to stop working.
So far, I have been able to confirm that repo.groovy and cleanup.groovy do not work.
In the case of the repo.groovy script, this seems to be due to https://issues.sonatype.org/browse/NEXUS-22819.
I am currently working on fixing these issues. Unless @stevehipwell can think of a reason not to, I will try to use the REST API instead (whenever possible), as I assume it will change less frequently.
This comes from this discussion.
I'd like to confirm whether your Nexus Helm chart supports HTTPS access. I did a quick search, and it looks like it does not.
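As a hedged alternative, TLS is often terminated at the ingress rather than inside Nexus itself; a sketch using common chart ingress values (the hostname and secret name are illustrative):

```yaml
# Illustrative values — hostname and secret name are placeholders.
ingress:
  enabled: true
  hosts:
    - nexus.example.com
  tls:
    - secretName: nexus-tls   # Secret containing the certificate and key
      hosts:
        - nexus.example.com
```

With this approach Nexus keeps serving plain HTTP inside the cluster and the ingress controller handles the certificate.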
Hi there,
One of the disadvantages of executing configure.sh in the background is that if it fails, there is no feedback to Helm, so Helm cannot recognise the failure. I've experimented with a postStart lifecycle command (see below), and it works well: if the command fails, Helm recognises the failure and rolls the release back. Is it worth considering replacing both the command
and args
keys (plus removing the & from configure.sh) for the nexus3 container?
lifecycle:
  postStart:
    exec:
      command:
        - "sh"
        - "-c"
        - "${SONATYPE_DIR}/nexus/conf/configure.sh"
Thanks for this chart.
Regards,