k8s-at-home / library-charts
⚠️ Deprecated: Helm library charts for the k8s@home Helm charts
Home Page: https://docs.k8s-at-home.com
License: Apache License 2.0
common-4.3.0
I set the externalTrafficPolicy to Local for a chart using common, and it was ignored
service:
  https:
    enabled: true
    primary: true
    externalTrafficPolicy: "Local"
    type: NodePort
    ports:
      https:
        enabled: true
        port: 443
      http:
        enabled: false
externalTrafficPolicy on the returned service should be set to Local
This is a simple one. My linter caught it.
visting → visiting
Helm chart name:
common
Describe the solution you'd like:
pathTpl to match the other tpl options in ingress settings
Anything else you would like to add:
N/A
Additional Information:
Would allow for templating of path-based standards, e.g.
pathTpl: '/{{ include "common.names.fullname" . }}'
Helm chart name and version:
k8s-at-home: 7.3.1
What steps did you take and what happened:
When I set env.NODE_TLS_REJECT_UNAUTHORIZED = 0 (or false), the environment variable is not deployed
Relevant Helm values:
env:
NODE_TLS_REJECT_UNAUTHORIZED: 0
What did you expect to happen:
Must be present.
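A likely cause (an assumption on my part, not confirmed in the chart code) is that the bare 0 is parsed as an integer and then dropped by a truthiness check during templating; quoting the value so YAML treats it as a string is a common workaround:

```yaml
env:
  # Quoted so YAML parses the value as a string rather than the integer 0,
  # which templates often treat as falsy (assumption, not verified here).
  NODE_TLS_REJECT_UNAUTHORIZED: "0"
```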
Helm chart name and version:
unifi: 4.6.1
common: 4.3.0
What steps did you take and what happened:
Adding a custom persistence configuration of type configMap
does not apply common.names.fullname
to the provided configMap name, while a configMap created via Values.configmap
does prepend the fullname to the created configMap.
Many charts apply manual workarounds to this problem (example1, example2).
However, there seems to be no documented way to achieve the same result just in a values.yaml
file.
Relevant Helm values:
Abbreviated example values.yaml
What did you expect to happen:
unifi-site-config is generated:
- configMap:
    defaultMode: 420
    items:
      - key: config.gateway.json
        path: config.gateway.json
    name: unifi-site-config # The actual value being generated here is currently site-config
  name: site-config
NOTE: The site-config configMap is an additional one I'm trying to add and is not originally part of the unifi chart
Helm chart name:
common
Describe the solution you'd like:
allow having no suffix by giving a negate option
Anything else you would like to add:
N/A
Additional Information:
Relevant code:
library-charts/charts/stable/common/templates/_pvc.tpl
Lines 17 to 19 in a062712
changing L17 to:
{{- if or (not $persistenceValues.nameSuffix) (eq $persistenceValues.nameSuffix "-") }}
should resolve the issue
I would like to have support for generic ephemeral volumes in the common chart.
https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes
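A hypothetical values shape for this (the `type: ephemeral` key and its fields below are illustrative assumptions, not an existing chart API) could mirror the persistence entries the chart already uses:

```yaml
persistence:
  scratch:
    enabled: true
    type: ephemeral        # hypothetical new type, not currently supported
    accessMode: ReadWriteOnce
    size: 10Gi
    storageClass: fast-ssd # illustrative storage class name
```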
Helm chart name:
Describe the solution you'd like:
Anything else you would like to add:
It's not currently supported at the moment.
Additional Information:
Changing to alpine:3.6 provides multiple architectures. Alternately, alpine:3 would be further future-proofed.
Describe the solution you'd like:
Add configmaps as an integrated common value, both as a map under Values.configmaps.[0] and under Values.persistence.[0].
Anything else you would like to add:
configmaps:
  map1: |
    stuff
    and
    things
  map22: |
    stuff
    and
    things
persistence:
  map1:
    type: configmap
    configmap: map1
    mountPath: /some/path.txt
  map2:
    type: configmap
    existingConfigMap: map2
    mountPath: /some/path.jpeg
  map3:
    type: configmap
    mountPath: /some/path.gif
    config: |
      stuff
      and
      more
Additional Information:
It "feels" like this could be a modified version of custom
Helm chart name: common
Describe the solution you'd like:
Add support for the container fields terminationMessagePath and terminationMessagePolicy. They can be kept flat to follow the Container spec, or structured, e.g.:
# structured
termination:
  path: '/dev/termination-log' # default
  policy: 'File' # default

# or flat
terminationMessagePath: '/dev/termination-log' # default
terminationMessagePolicy: 'File' # default
See the docs Customizing the termination message and Pod Lifecycle (note: container termination is in the "Lifecycle" section, but is not within the lifecycle fields).
Anything else you would like to add:
When specifying a /dev volume as read-only, the container will fail to start because it expects /dev/termination-log local to the container to be writable. The only way to mount /dev without changing terminationMessagePath is with readOnly: false:
# works, since the host path is both readable and writable
persistence:
  devfs:
    enabled: true
    type: hostPath
    hostPath: /dev

# does NOT work, termination-log is not writable
persistence:
  devfs:
    enabled: true
    type: hostPath
    hostPath: /dev
    readOnly: true
Overriding the termination log path is useful for containers that do not support a termination file or that provide their own termination log path configuration. Altogether it would look something like this:
termination:
  path: '/var/termination-log' # change to `/var`
  policy: 'FallbackToLogsOnError' # additionally, for containers without direct instrumentation, a log policy can be used instead
persistence:
  devfs:
    enabled: true
    type: hostPath
    hostPath: /dev
    readOnly: true
Additional Information:
Although not related to the container termination configuration, there is also terminationGracePeriodSeconds within the PodSpec (see the Probe section of the PodSpec docs). The termination grace period applies to the entire pod, not to distinct containers within it. If using the structured approach, including a gracePeriodSeconds field in the termination structure may cause confusion, but it is worth considering.
Helm chart name:
common
Describe the solution you'd like:
Ability to specify the mountPropagation key in a volumeMount entry.
Anything else you would like to add:
Backstory: for some volumes, we need to specify mountPropagation in addition to mountPath, readOnly, subPath, etc.
For example, an rclone container can mount a google drive or other remote into a shared emptyDir volume, and the other container (the main one from the helm chart, for example, sonarr, radarr, plex) should be able to see that sub-mount in that emptyDir volumeMount.
Referring to https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation, this use case requires setting Bidirectional mountPropagation for the custom/emptyDir volume on the rclone container (which I can do right now, since the additionalContainers yaml is passed as-is to the rendered resource), but the volumeMount in the main container still needs mountPropagation set to either Bidirectional or HostToContainer. This is currently not possible in the persistence/storage of the common library, since the template explicitly sets only the known-so-far keys (mountPath, subPath handling, readOnly).
The resulting volumeMount I want to be able to render is (assuming the example of plex):
containers:
  - name: rclone # whole entry added as-is via additionalContainers yaml
    image: rclone/rclone:1.55
    args:
      - mount
      - "remote:location/media"
      - "/media/"
    volumeMounts:
      - name: rclone-shared-mount
        mountPath: /media
        mountPropagation: Bidirectional
    [...]
  - name: plex # rendered within helm chart
    [...]
    volumeMounts:
      - name: rclone-shared-mount
        mountPath: /media
        mountPropagation: HostToContainer # need to be able to specify this from the persistence.<name>.mountPropagation values
volumes:
  - name: rclone-shared-mount # rendered from persistence.rclone-shared-mount.*
    emptyDir: {}
Additional Information:
This isn't limited to only emptyDir volumes specified inline/in-helm, since we should be able to mount an existingClaim PVC or a hostPath with a specified mountPropagation.
Adding something like
{{- with $persistenceItem.mountPropagation }}
mountPropagation: {{ . }}
{{- end }}
to both sections of https://github.com/k8s-at-home/library-charts/blob/main/charts/stable/common/templates/lib/controller/_volumemounts.tpl should do it.
Thanks!
Helm chart name:
common
Describe the solution you'd like:
Add possibility to specify a hostPort
Anything else you would like to add:
It can be useful
Additional Information:
none
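As a sketch, a hostPort could slot into the existing per-port values (the key name and placement are assumptions, not an existing chart option):

```yaml
service:
  main:
    ports:
      http:
        enabled: true
        port: 8080
        hostPort: 8080 # hypothetical: bind the port directly on the node
```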
Helm chart name: common
Describe the solution you'd like: To add binary data to a ConfigMap, there is the binaryData key. To include e.g. assets in homer without needing to deploy extra resources manually, support for the binaryData key would be helpful.
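A sketch of how this might look in values, assuming a binaryData map alongside the existing data key (the exact shape is an assumption):

```yaml
configmap:
  config:
    enabled: true
    data:
      config.yml: |
        title: "My dashboard"
    binaryData:
      # base64-encoded asset; value truncated for illustration
      logo.png: iVBORw0KGgoAAAANSUhEUgAAAAEAAAAB...
```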
Helm chart name:
Library Chart Template
library-charts/charts/stable/common/templates/_deployment.tpl
Describe the solution you'd like:
I want to be able to programmatically scale my Deployments across namespaces and without knowing the exact name of every Deployment. I would like to be able to add labels on Deployments and run commands like kubectl scale deployment -l environment=dev --replicas=1
Anything else you would like to add:
No
Additional Information:
Seems straightforward enough.
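A sketch of how chart-level labels might be supplied (the top-level key name is hypothetical, not an existing chart option):

```yaml
# Hypothetical values key: labels rendered onto the Deployment metadata,
# enabling selection with `kubectl scale deployment -l environment=dev --replicas=1`
labels:
  environment: dev
```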
Helm chart name and version:
common 3.0.0
blocky 6.4.0
What steps did you take and what happened:
I took the blocky chart and updated common to 3.0.0 in Chart.yaml.
I then ran "helm template ."
Error: template: blocky/charts/common/templates/lib/chart/_names.tpl:12:15: executing "common.names.fullname" at <include "common.names.name" .>: error calling include: template: blocky/charts/common/templates/lib/chart/_names.tpl:3:63: executing "common.names.name" at <.Values.global.nameOverride>: nil pointer evaluating interface {}.nameOverride
Relevant Helm values:
No changes to the values.yaml of the blocky chart.
What did you expect to happen:
No error since the breaking change was mainly about services and ingresses
Additional Information:
Helm v3.5.4
Helm chart name:
common
Describe the solution you'd like:
I'd like cert-manager Issuers to be created, matching the name of the main service, and optionally used for any service ingress.
Anything else you would like to add:
I notice that you have a cluster guide in another repo and it seems to use cert-manager, but I'm not seeing where that would get used, and I don't find any documentation for it. It's my understanding that Issuers are scoped to their namespace, thus any service with its own namespace, which I think is common, would need a dedicated Issuer created.
I would like the main common chart to have options for doing MutatingWebhooks.
Helm chart name: common
Describe the solution you'd like:
The current promtail addon scrape configs are limited to static configs with a job and __path__ label. This makes it difficult to identify what pod a scrape job comes from, or to distinguish multiple instances of a job.
Anything else you would like to add:
Add -config.expand-env=true to allow pod environment variables to be used in labels.
Additional Information:
I'm somewhat new to using promtail. If there's a reason this doesn't make sense, or a better way to do it, I'd be glad to hear.
Currently, the additionalContainers key is a list of dicts.
This could turn into an issue when merges are done from library, add-on, chart and user values, since merging lists in Helm is not fun and often leads to things being unintentionally overwritten.
Potentially affects:
additionalContainers functionality
Helm chart name and version:
appdaemon 5.1.0
What steps did you take and what happened:
I enabled persistence and it didn't work.
Relevant Helm values:
persistence:
  config:
    enabled: true
What did you expect to happen:
The configuration is taken from my pv/pvc
Anything else you would like to add:
The problem is that appdaemon doesn't expect a mount at /config but instead a mount at /conf.
If I create a values config with
persistence:
  conf:
    enabled: true
    accessMode: ReadWriteOnce
    size: 1Gi
It works
The following used to be possible.
persistence:
  config:
    enabled: true
  media:
    enabled: true
    existingClaim: media
additionalVolumeMounts:
  - name: media
    mountPath: /downloads
  - name: media
    mountPath: /series
    subPath: series
Please restore additionalVolumeMounts or provide clear documentation on how to achieve this end state.
common library:
https://github.com/k8s-at-home/library-charts/blob/main/charts/stable/common/values.yaml
Describe the solution you'd like:
There are several ways projects have accomplished this; open to ideas. The last way I accomplished this was to create a Secret with one item, "ca.crt", and then mount it in such a way as to overwrite things.
extraVolumeMounts:
  - name: certificate
    mountPath: /etc/ssl/certs/ca.crt # your self-signed CA cert inside the secret
    subPath: ca.crt
extraVolumes:
  - name: certificate
    secret:
      secretName: ca-bundle
      defaultMode: 420
Though something more specific, like "specify your TLS value here and it will be added in the right place internally", would be nice. This would be helpful with a container like keycloak, which uses python internally and needs the ca-bundle in a different location. With a common library you could always specify it the same way and it would end up in the right place every time.
Anything else you would like to add:
Just allowing extra volume mounts would work. I could use them to overwrite the existing ca.crt, or mount the bundle and then use an init container to run update-ca-certificates. Though, if a container required the ca-bundle in a different location, one would have to figure out where it needs to go, which can be difficult.
Additional Information:
A use case for this is using heimdall in an environment which rewrites https for monitoring purposes; the local CA needs to be trusted, otherwise https requests come back with errors similar to "self-signed certificate detected".
Describe the solution you'd like:
I searched our chart and couldn't find this referenced anywhere.
Add support for configuring lifecycle for pods.
Helm chart name:
common
Describe the solution you'd like:
I agree the current persistence code works very well for most users most of the time, but it seriously breaks things for power users, and the workarounds necessary to make every configuration possible within the confines of persistence are IMO nowhere near worth it.
The forced pairing between volumes and volumeMounts is fundamentally incompatible with most workarounds that could be made. Separating the concept of a mount and a volume is required for a functional solution, but would be unnecessarily confusing for most users.
IMO best solution is just re-adding a way to specify volumes and volumemounts in standard yaml as is common in many other charts.
Anything else you would like to add:
persistence cannot properly define complex CSI volumes, nor even handle mounting one volume multiple times.
I can whip up a PR if this is received well.
Helm chart name:
Common
Describe the solution you'd like:
Currently the common chart is only able to reference a single service and port per ingress; you cannot define multiple paths pointing at different service ports within one ingress.
Ideally, this syntax would be desirable:
ingress:
  enabled: true
  hosts:
    - host: bw.domain.tdl
      paths:
        - path: /
          pathType: Prefix
        - path: /notifications/hub
          pathType: Prefix
          servicePort: 3012
with each path defaulting to service.port.port unless specified.
Anything else you would like to add:
Additional Information:
Nope, the underlying sprig library doesn't differentiate between an empty string and nil (see also Masterminds/sprig#53). Guess we're stuck with "-" for now.
Originally posted by @bjw-s in #14 (comment)
In the same sprig issue, a proper fix was defined:
Found a solution
{{ kindIs "invalid" $value }}
Originally posted by @jkroepke in Masterminds/sprig#53 (comment)
By changing anything that was "-" to a nil value, either implicitly or explicitly, you can test for it:
implicit:
explicit1: nil
explicit2: ~
This could not only fix the verbosity concern of "-" but additionally allow the common charts that have commented-out defaults to become uncommented with a nil value, to test for existence and optionally requirement.
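A minimal template sketch of the kindIs test described above (the value path is illustrative only, not a confirmed chart key):

```yaml
{{- /* nil / unset values report kind "invalid", unlike empty strings */ -}}
{{- if kindIs "invalid" .Values.service.main.externalTrafficPolicy }}
  {{- /* value was left nil: fall back to the chart default */ -}}
{{- end }}
```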
Helm chart name and version:
The PVC won't be created because of a bug in the chart. ArgoCD renders the JSON wrong because of a fault in the _pvc.yaml.
What steps did you take and what happened:
This is the output:
{
"---kind": "PersistentVolumeClaim",
"apiVersion": "v1",
"metadata": {
"labels": {
"app.kubernetes.io/instance": "testapp",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/name": "testapp",
"app.kubernetes.io/version": "1.0.0",
"helm.sh/chart": "testapp-0.0.1"
},
"name": "testapp-config"
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "1Gi"
}
}
}
}
As you can see, the '---' is in front of kind.
Be aware that this is not a bug in ArgoCD: if I use the Bitnami postgresql chart with persistent storage, it generates the correct YAML files for the PVC.
Describe the solution you'd like:
Add support to label and or annotate PVCs, e.g.
persistence:
  config:
    metadata:
      labels:
        foo: bar
      annotations:
        bar: foo
An annoying copy/paste mistake crept into the Wireguard image repository value.
The docker pull prefix needs to be removed, of course.
Helm chart name: chartlib
Describe the solution you'd like: Support for CronJob template
Anything else you would like to add: I think NetworkPolicy should be treated as a first-class citizen, not just as part of addons/vpn.
Additional Information:
Currently, the initContainers key is a list of dicts.
This could turn into an issue when merges are done from library, add-on, chart and user values, since merging lists in Helm is not fun and often leads to things being unintentionally overwritten.
Potentially affects:
initContainers functionality

When I fill this structure for service:
service:
  main:
    ports:
      appservice:
        port: 9993
      webhook:
        port: 9000
      metrics:
        port: 9002
as a result, the common library renders only a single empty port in the Service, instead of a list of all the described ports:
spec:
  type: ClusterIP
  ports:
    - port:
      targetPort: http
      protocol: TCP
      name: http
And to get the desired port configuration I need to explicitly disable the default http port and enable all the others, like this:
service:
  main:
    ports:
      http:
        enabled: false
      appservice:
        enabled: true
        port: 9993
      webhook:
        enabled: true
        port: 9000
      metrics:
        enabled: true
        port: 9002
Helm chart name: common
Describe the solution you'd like: I suggest treating custom service ports as enabled by default, so "enabled: true" does not have to be written for each new port, and disabling the default http port if no value is specified.
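A sketch of the suggested default-enabled check (hypothetical template logic, not the chart's current implementation):

```yaml
{{- range $name, $port := .Values.service.main.ports }}
  {{- /* treat a port as enabled unless it is explicitly disabled */ -}}
  {{- if ne $port.enabled false }}
- name: {{ $name }}
  port: {{ $port.port }}
  {{- end }}
{{- end }}
```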
Putting this to keep track of stuff I still want to fix before v3.
- mountPath and hostPath: you should be able to set one or the other and have it populate the empty one if not set

Helm chart name: common
Describe the solution you'd like:
By default, template variables like {{ .Release.Name }} and {{ .Chart.Name }} do not work in values.yaml files (here are some issues about this: helm/helm#3558, helm/helm#2492), and that's sad...
But it seems we can get rid of this limitation by implementing a parser for each value in the common library!
Here is an example of adding support for template conversion of the manually defined key persistence.config.existingClaim in a values.yaml file, described in templates/common.yaml:
{{- include "common.values.setup" . }}
{{-
$_ := set .Values.persistence.public
"existingClaim"
(print (tpl .Values.persistence.public.existingClaim .))
-}}
{{ include "common.all" . }}
Having this, we can use templates inside the value of that key directly in the values.yaml file, like this:
persistence:
  public:
    enabled: true
    type: pvc
    existingClaim: "{{ .Release.Name }}-{{ .Chart.Name }}-my-config"
And in the same way we can loop through each of the keys in the passed values.yaml file and apply the template conversion to each value (or only defined ones)!
So we can add this as opt-in feature that should be enabled via some key, for example like this - global switcher for all values:
parseTemplatesInValues: true
and only for selected values:
parseTemplatesInValues:
  - envFrom[0].secretRef.name
  - persistence.config.existingClaim
What do you think about this idea?
Describe the solution you'd like:
I'd like to append some labels to all created resources.
I have a branch prepped. See related PR.
Helm chart name:
common
Describe the solution you'd like:
Allow an empty value for secretName so the ingress can use the default certificate. I have a default (wildcard) Let's Encrypt SSL certificate installed. I configured my ingress-nginx deployment to use it if the secret name is not specified in the ingress resource. However, I must enter a value now when using the common chart. I use {} now, but cert-manager (jetstack.io) then gives me an error like:
Certificate.cert-manager.io "map[]" is invalid: metadata.name: Invalid value: "map[]": a DNS-1123 subdomain must consist of lower case alphanumeric characters
Anything else you would like to add:
The fix is very simple. In classes/_ingress.tpl line 46 (v2.2.0), replace:
{{- else }}
secretName: {{ .secretName }}
with:
{{- else if .secretName }}
secretName: {{ .secretName }}
and it works.
Helm chart name:
common
Describe the solution you'd like:
Currently the VPN add-on requires an inline configuration. In order to leverage other tools better (for example to do secrets management, or variable substitution in Flux) we should support referencing an existing configuration from a Secret.
Tasks:
- Update addons/vpn/_configmap.tpl to use a Secret for configFile instead of a configMap
- Update addons/vpn/_volume.tpl to reference the Secret from step 1 or a provided existing Secret
Anything else you would like to add:
Additional Information:
Helm chart name and version:
3.1.0
What steps did you take and what happened:
Used addon with multiple ingress entries
addons:
  codeserver:
    enabled: true
    volumeMounts:
      - name: config
        mountPath: /config
    ingress:
      enabled: true
      hosts:
        - host: ha-editor.pub.${CLUSTER_DOMAIN}
          paths:
            - path: /
              pathType: Prefix
        - host: ha-editor.home.${CLUSTER_DOMAIN}
          paths:
            - path: /
              pathType: Prefix
      tls:
        - hosts:
            - ha-editor.pub.${CLUSTER_DOMAIN}
            - ha-editor.home.${CLUSTER_DOMAIN}
Relevant Helm values:
What did you expect to happen:
Both ingress entries pointing to vscode
Anything else you would like to add:
The problem comes from
{{- $_ := set (index (index $ingressValues.hosts 0).paths 0) "service" (dict "name" $svcName "port" .Values.addons.codeserver.service.ports.codeserver.port) -}}
It should be a range statement so that not only the first entry is processed.
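A hedged sketch of the range-based version (following the variable names in the quoted line; this is not the actual patch):

```yaml
{{- range $host := $ingressValues.hosts }}
  {{- range $path := $host.paths }}
    {{- $_ := set $path "service" (dict "name" $svcName "port" $.Values.addons.codeserver.service.ports.codeserver.port) -}}
  {{- end }}
{{- end }}
```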
Additional Information:
Easy to fix; I can do a PR. Capturing it as an issue for now so it's not forgotten.
Helm chart name:
common chart
Describe the solution you'd like:
Currently there's support for OpenVPN and Wireguard VPNs in the common chart. A nice new option would be to deploy a tailscale sidecar. Tailscale uses Wireguard but it manages linking it to your mesh and allows for easy sharing of nodes with others.
Describe the solution you'd like:
The skipuninstall key name can be confusing; I suggest we rename this key to retain.
Anything else you would like to add:
This would be a breaking change, or maybe we can keep skipuninstall and phase it out over time.
After checking the code I saw that many of the properties that secrets have in persistence are missing for configMaps, whereas the docs say they are there.
Chart - Common
Version: 4.0.0
What steps did you take and what happened:
Add labels in ingress.
ingress:
  main:
    enabled: true
    labels:
      foo: "var"
    nameOverride: local
    hosts:
      - host: example.k8
        paths:
          - path: /
            pathType: Prefix
Relevant Helm values:
kind: Ingress
metadata:
  name: {{ $ingressName }}
  labels:
    {{- include "common.labels" . | nindent 4 }}
  {{- with $values.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
What did you expect to happen:
Labels added to the ingress after the template is compiled
Additional Information:
No labels in yaml
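A hedged sketch of a possible fix: merge user-supplied labels after the common labels in the quoted template (assuming $values.labels carries the labels from the values above):

```yaml
  labels:
    {{- include "common.labels" . | nindent 4 }}
    {{- with $values.labels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
```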
Helm chart name and version:
common
What steps did you take and what happened:
Tried to configure it, and couldn't.
Relevant Helm values:
N/A
What did you expect to happen:
To be able to configure it.
Anything else you would like to add:
N/A
Additional Information:
library-charts/charts/stable/common/templates/classes/_ingress.tpl
Lines 62 to 64 in a062712
L63 should be:
pathType: {{ default "Prefix" .pathType }}
Helm chart name and version:
common 3.0.1
bookstack 2.0.0
What steps did you take and what happened:
When deploying the new version of bookstack with enabled mariadb the release name of the mariadb statefulset is no longer "bookstack-mariadb" but simply "mariadb".
helm template --name-template bookstack .

# Source: bookstack/charts/mariadb/templates/primary/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb
Relevant Helm values:
# Enabled mariadb
# ... for more options see https://github.com/bitnami/charts/tree/master/bitnami/mariadb
mariadb:
enabled: true
What did you expect to happen:
# Source: bookstack/charts/mariadb/templates/primary/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: bookstack-mariadb
Anything else you would like to add:
The MariaDB chart also uses a "common.names.fullname" define. The new common-3.x define seems to conflict with it somehow.
Additional Information:
Helm v3.5.4 used.
Helm chart name: radarr, sonarr, lidarr, nzbget
Describe the solution you'd like: For use of Remote Path Mappings, another persistent volume claim is needed. Currently the only option I see is to use the shared volume for it. It would be nice to add a default-disabled generic volume in the commons.yaml to make it more flexible to mount other volumes into helm charts.
StatefulSet should support podManagementPolicy. Valid values are Parallel and OrderedReady (default).
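A sketch of how this could surface in values (the key placement under the controller options is an assumption):

```yaml
controller:
  type: statefulset
  podManagementPolicy: Parallel # hypothetical key; OrderedReady is the Kubernetes default
```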
Helm chart name:
common
Describe the solution you'd like:
Currently the code-server add-on does not support loading up a Git / SSH identity from the values. We should make it so we can provide a Git / SSH identity from the add-on configuration.
Tasks:
Anything else you would like to add:
Additional Information:
Describe the solution you'd like:
An addon for the common library to support shipping logs written to disk to Loki.
Ideally the addon config would ask for:
- releasename-logs
- config
- /config/path/to/logs/*.log
Might be cool to allow users to pass in multiple logs.
Anything else you would like to add:
We once had this in the Plex chart:
https://github.com/k8s-at-home/charts/blob/5f5b815ccacce8c592cdb015b596b4b42fa7c33b/charts/stable/plex/values.yaml#L308
https://github.com/k8s-at-home/charts/blob/5f5b815ccacce8c592cdb015b596b4b42fa7c33b/charts/stable/plex/templates/promtail-configmap.yaml
https://github.com/k8s-at-home/charts/blob/5f5b815ccacce8c592cdb015b596b4b42fa7c33b/charts/stable/plex/templates/deployment.yaml#L65
Additional Information:
promtail is now on 2.2.0 but I do not think anything has changed much.
Lots of configuration is documented here:
https://grafana.com/docs/loki/latest/clients/promtail/configuration/#example-static-config
Helm chart name and version:
When trying to migrate powerdns to common, I believe the following error will occur with any chart when setting lifecycle.
What steps did you take and what happened:
- Added lifecycle to values.yaml (full input listed below)
- Ran helm install powerdns . --dry-run
Relevant Helm values:
# values.yaml
...
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "let a=0; while [ $a -lt 200 ]; do sleep 5; let a=a+1; echo 'Attempt: '$a; if nc -vz powerdns-postgresql 5432; then pdnsutil list-zone mydomain.local 2>/dev/null && break; pdnsutil create-zone mydomain.local; fi; done"]
...
I get the following output.
$ helm install powerdns . --dry-run
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.containers[0]): unknown field "postStart" in io.k8s.api.core.v1.Container
What did you expect to happen:
Generate the YAML output without any errors.
Anything else you would like to add:
Perhaps we need to change the indentation from 2 to 4? I haven't tested this by updating the library.
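A hedged sketch of how the container template could render the value so the keys nest under lifecycle (the surrounding template context and indentation are assumptions, not the chart's actual code):

```yaml
{{- with .Values.lifecycle }}
lifecycle:
  {{- toYaml . | nindent 2 }}
{{- end }}
```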
Additional Information:
When using the following values, I get valid output, but lifecycle shows up twice.
# values.yaml
lifecycle:
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "let a=0; while [ $a -lt 200 ]; do sleep 5; let a=a+1; echo 'Attempt: '$a; if nc -vz powerdns-postgresql 5432; then pdnsutil list-zone mydomain.local 2>/dev/null && break; pdnsutil create-zone mydomain.local; fi; done"]
$ helm install powerdns . --dry-run
...
lifecycle:
  lifecycle:
    postStart:
      exec:
        command:
          - /bin/sh
          - -c
          - 'let a=0; while [ $a -lt 200 ]; do sleep 5; let a=a+1; echo ''Attempt: ''$a; if nc -vz powerdns-postgresql 5432; then pdnsutil list-zone mydomain.local 2>/dev/null && break; pdnsutil create-zone mydomain.local; fi
...
This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
.github/workflows/charts-lint-test.yaml
actions/checkout v3
azure/setup-helm v3
actions/setup-python v4
helm/chart-testing-action v2.2.1
actions/checkout v3
azure/setup-helm v3
actions/checkout v3
azure/setup-helm v3
actions/setup-python v4
helm/chart-testing-action v2.2.1
nolar/setup-k3d-k3s v1
.github/workflows/charts-release.yaml
getsentry/action-github-app-token v1
actions/checkout v3
dorny/paths-filter v2
azure/setup-helm v3
actions/setup-python v4
stefanzweifel/git-auto-commit-action v4
getsentry/action-github-app-token v1
actions/checkout v3
azure/setup-helm v3
helm/chart-releaser-action v1.4.0
.github/workflows/metadata-label-commenter.yaml
getsentry/action-github-app-token v1
actions/checkout v3
peaceiris/actions-label-commenter v1
.github/workflows/metadata-label-issues-prs.yaml
getsentry/action-github-app-token v1
Videndum/label-mastermind 2.1.3
.github/workflows/metadata-label-pr-ci-status.yaml
getsentry/action-github-app-token v1
potiuk/get-workflow-origin v1_3
andymckay/labeler 1.0.4
andymckay/labeler 1.0.4
getsentry/action-github-app-token v1
potiuk/get-workflow-origin v1_3
actions/github-script v6
andymckay/labeler 1.0.4
andymckay/labeler 1.0.4
andymckay/labeler 1.0.4
andymckay/labeler 1.0.4
andymckay/labeler 1.0.4
andymckay/labeler 1.0.4
.github/workflows/pre-commit-check.yaml
actions/checkout v3
dorny/paths-filter v2
pre-commit/action v3.0.0
pre-commit/action v3.0.0
helper-charts/common-test/Chart.yaml