fluxcd / kustomize-controller


The GitOps Toolkit Kustomize reconciler

Home Page: https://fluxcd.io

License: Apache License 2.0


kustomize-controller's Introduction

kustomize-controller


The kustomize-controller is a Flux component, specialized in running continuous delivery pipelines for infrastructure and workloads defined with Kubernetes manifests and assembled with Kustomize.

The cluster desired state is described through a Kubernetes Custom Resource named Kustomization. Based on the creation, mutation or removal of a Kustomization resource in the cluster, the controller performs actions to reconcile the cluster current state with the desired state.

(overview diagram)

Features

  • watches for Kustomization objects
  • fetches artifacts produced by source-controller from Source objects
  • watches Source objects for revision changes
  • generates the kustomization.yaml file if needed
  • generates Kubernetes manifests with Kustomize SDK
  • decrypts Kubernetes secrets with Mozilla SOPS and KMS
  • validates the generated manifests with Kubernetes server-side apply dry-run
  • detects drift between the desired state and the cluster state
  • corrects drift by patching objects with Kubernetes server-side apply
  • prunes the Kubernetes objects removed from source
  • checks the health of the deployed workloads
  • runs Kustomizations in a specific order, taking into account the depends-on relationship
  • notifies whenever a Kustomization status changes
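
For orientation, a minimal Kustomization resource that exercises several of the features above might look like the sketch below; the name and path are illustrative, not taken from this repository.

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: podinfo                  # illustrative name
  namespace: flux-system
spec:
  interval: 10m0s                # how often to reconcile
  path: ./deploy/overlays/prod   # illustrative path inside the source artifact
  prune: true                    # garbage-collect objects removed from source
  sourceRef:
    kind: GitRepository          # artifact produced by source-controller
    name: flux-system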

Specifications

Guides

Roadmap

The roadmap for the Flux family of projects can be found at https://fluxcd.io/roadmap/.

Contributing

This project is Apache 2.0 licensed and accepts contributions via GitHub pull requests. To start contributing please see the development guide.

kustomize-controller's People

Contributors

apeschel, aryan9600, asloan7, darkowlzz, dependabot[bot], glebiller, hiddeco, jalseth, janeliul, jasonbirchall, jodok, klausenbusk, laszlocph, makkes, matheuscscp, michalschott, mvoitko, nalum, oliverbaehler, ordovicia, phillebaba, relu, seh, somtochiama, souleb, squaremo, stealthybox, stefanprodan, superbrothers, suryapandian


kustomize-controller's Issues

Potential memory leak

Looking at the memory consumption of kustomize-controller (Docker image ghcr.io/fluxcd/kustomize-controller-arm64:v0.1.0), I see it gradually grow over time:

(memory usage graph)

I wonder whether this is expected and the usage will drop at some point, or whether it's a memory leak.

Ability to control verbosity of events

The event output of the kustomize-controller quickly gets too long on medium-sized clusters. This then causes the output in the Slack provider from the notification-controller to exceed the Slack message limit, truncating the event. This is a pretty easy limit to hit; I'm hitting it with ~200 resources. With any errors located at the end, the most useful information is lost from the message. This relates to an issue filed on the notification-controller but probably belongs here, as it's caused by the payload of the event sent by the kustomize-controller.

I originally thought it might be a good idea to configure the verbosity of the underlying kubectl apply that feeds into the info/error events, but that still doesn't prevent the unchanged resources from flooding the output.

Would it be appropriate to filter out lines marked unchanged? There are probably better solutions, like some kind of "smarter" message as mentioned by @phillebaba here, but this seems the simplest to me, albeit dependent on the output of kubectl apply.

Feature Request: Allow disabling of prune on certain resources

Our EKS clusters are created by Terraform. Terraform then installs Flux, which takes over from there, but one of the repos that Flux points at has manifests for the kube-system, kube-public, default, etc. namespaces (they add a couple of labels). As soon as it applies those manifests, Flux also becomes the "owner" of the namespaces, so if we delete the Kustomization object from flux-system we run the risk of Flux trying to delete kube-system and ending up in a state where Kubernetes says "don't you be touching my damn namespaces".

Options at the moment:

  1. We could split those specific manifests into their own folder and disable prune on a new Flux Kustomization
  2. We could disable prune altogether for that repo
  3. We could add the label using the Kubernetes provider in Terraform

For now we're gonna go with option 3 but ideally we'd like to be able to add an annotation to a resource (kustomize.toolkit.fluxcd.io/prune: false or something) to tell Flux not to touch it. What do you think?
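
If such an annotation were adopted, usage might look roughly like the sketch below; note that the annotation key is the one proposed above and did not exist at the time of this issue.

apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
  labels:
    team: platform                             # illustrative label applied via Flux
  annotations:
    kustomize.toolkit.fluxcd.io/prune: "false" # proposed, hypothetical annotation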

Option for using `kubectl replace --force` when fail to apply

When we want to change immutable fields (e.g. the label selector in a Deployment), currently we need to take two steps:

  1. Enable garbage collection for a resource and delete the resource from a repository
  2. Add the resource again to the repository with a new definition

It would be great if kustomize-controller did the above operation in one reconciliation by using the kubectl replace --force command.
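
For context, a Deployment's spec.selector is immutable after creation, so a change like the one sketched below is rejected by a normal apply; this is exactly the situation where a replace --force (delete and recreate) would help. Names and image are illustrative.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  selector:
    matchLabels:
      app: podinfo-v2    # changing this value on an existing Deployment fails on apply
  template:
    metadata:
      labels:
        app: podinfo-v2
    spec:
      containers:
        - name: podinfo
          image: ghcr.io/stefanprodan/podinfo:5.0.0   # illustrative image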

Allow kustomize image override in Kustomization CRD

In a case where the person with control over the syncing (the Kustomization CRD) does not have control over the config repository where the kustomization.yml resides and can only refer to it, it would be great to allow making changes (overriding the images) in the Kustomization CRD.

Link to discussion: https://github.com/fluxcd/flux2/discussions/135
In particular, this comment:

In the first, the person with control over the syncing does not have control over the config repository -- they can only refer to it. So they must resort to putting changes into the sync.
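
For reference, the request essentially asks to surface kustomize's own images transformer (shown below in a kustomization.yaml) as a field on the Flux Kustomization CRD, so the override can live with the sync definition instead of the config repository. Image name and tag are illustrative.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
images:
  - name: ghcr.io/stefanprodan/podinfo   # image name as it appears in the manifests
    newTag: 5.0.3                        # tag to substitute at build time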

Enable pruning for remote clusters

When targeting a remote cluster with spec.kubeConfig, pruning is skipped. To implement garbage collection, we need to initialise the controller-runtime client using the kubeconfig file.
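
For context, targeting a remote cluster is done by pointing spec.kubeConfig at a secret that holds the kubeconfig, roughly as sketched below (the secret name is illustrative); it is this code path where pruning is currently skipped.

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: remote-apps
  namespace: flux-system
spec:
  interval: 5m
  path: ./apps
  prune: true                           # currently skipped for remote clusters
  kubeConfig:
    secretRef:
      name: remote-cluster-kubeconfig   # secret containing the kubeconfig for the remote cluster
  sourceRef:
    kind: GitRepository
    name: flux-system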

Panic on slow network (introduced in api/v0.6.7)

I'm getting a panic in the kustomize-controller. It looks like it was introduced in api/v0.6.7.

panic: send on closed channel

goroutine 537 [running]:
sigs.k8s.io/kustomize/api/loader.getRemoteTarget.func1(0x4000704210, 0x4002ecfe60, 0x40028a0150)
	/go/pkg/mod/sigs.k8s.io/kustomize/[email protected]/loader/getter.go:104 +0x64
created by sigs.k8s.io/kustomize/api/loader.getRemoteTarget
	/go/pkg/mod/sigs.k8s.io/kustomize/[email protected]/loader/getter.go:102 +0x3c4

Here is an issue tracking this: kubernetes-sigs/kustomize#3362

I thought I would post it here to keep track of the fix/PR. Please close it if you think it's inappropriate.

Support SOPS decryption of ConfigMaps and values other than "(data|stringData)"

While porting some standard config files which contain some secrets and some common information (traditionally the secrets were templated in through Puppet's Hieradata), I've wished that I could SOPS-encrypt part of a ConfigMap.

This would allow adding normal YAML configs in mostly plain text, such that one can easily see the non-secret values and developers can easily change them without re-encrypting the entire config as a Secret.

My current solution is to keep ConfigMaps plain text and add support in the application to override the secret field with an environment variable that is placed in a Secret.

I personally find great value in being able to push changes to configs without manually invoking SOPS encryption action, so this would be a welcome enhancement.
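
As a point of reference, SOPS itself can already limit encryption to selected keys via encrypted_regex in .sops.yaml, as sketched below with illustrative key names and paths; the ask here is for the controller to also decrypt such partially encrypted ConfigMaps.

creation_rules:
  - path_regex: \.configmap\.yaml$          # illustrative path rule
    encrypted_regex: ^(password|apiKey)$    # only these keys get encrypted, the rest stays plain text
    kms: projects/example/locations/global/keyRings/example/cryptoKeys/example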

Allow kustomize plugins to be optionally enabled

It would be very useful to allow the new-ish kustomize plugin functionality to be optionally enabled. I understand that there are potential security issues around plugins and that it's an alpha-level feature, so I wouldn't enable it by default, but in a tightly controlled environment plugins can be extremely useful.

Is it me or does this use a different version of kubectl than Flux v1.x? (Forces bases instead of resources)

Hi, in Flux v1.x I had no issue with my Kustomization YAML files. However, after upgrading to Flux2 with this kustomize-controller, it seems that my files are not valid anymore. Basically, if I keep the content only in resources it fails silently and flux just deletes itself after installing. However, if I use bases instead of resources it works as expected.

In the documentation of Kustomize (https://kubectl.docs.kubernetes.io/references/kustomize/bases/) we can read the following:

The bases field was deprecated in v2.1.0

My files were like this (fails in Flux2, works in Flux v1):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- flux-system
- some-folder

And I had to change them to:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
- flux-system
- some-folder

System information:
K8s Version: 1.18.10
Kustomize-controller: 0.5.4

(If more details are needed, I will provide them.)

Separate image name for arm

Since you have separate images for arm, it is not possible to transparently switch architectures or operate on a mixed-arch cluster.
Why don't you use multi-arch images? You already use buildx; all it takes is merging your two separate build workflow steps into one that specifies all architectures.
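
A hedged sketch of what a merged, multi-arch workflow step could look like with docker/build-push-action (assuming QEMU and Buildx are already set up; the tag is illustrative):

- name: Build and push multi-arch image
  uses: docker/build-push-action@v2
  with:
    push: true
    platforms: linux/amd64,linux/arm64,linux/arm/v7   # one image, several architectures
    tags: ghcr.io/fluxcd/kustomize-controller:latest  # illustrative tag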

Ignore manifests using annotation

One feature I miss from flux is annotating resources on k8s directly using fluxcd.io/ignore to ignore them temporarily.
This doesn't seem possible in gotk; you need to suspend the Kustomization pipeline, which also stops syncing the entire source rather than just a single manifest.

I used fluxcd.io/ignore quite often to debug things.

Also, an example in gotk where suspend & debug does not work: if you have the Flux components in the same Kustomization pipeline as other infra things and want to change, for example, the kustomize-controller's log level to debug, it won't be possible since the change gets overwritten again, but you can't suspend since you want to debug the kustomize-controller itself. One would need to change this in the source, which is not a nice workflow for debugging.

Can we add an annotation to ignore resources? This would help a lot to debug things.
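
For reference, the Flux v1 behaviour being asked for looked like the sketch below: an annotation placed directly on the live resource that the v1 daemon honoured, and which gotk does not recognise.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kustomize-controller
  annotations:
    fluxcd.io/ignore: "true"   # Flux v1 skipped resources carrying this annotation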

Garbage collection step is skipped if there are no manifests in the repo

Noticed in version ghcr.io/fluxcd/kustomize-controller:v0.2.2

Steps to reproduce

  • Added a deployment in a repository which has a Kustomization object with prune: true.
    The deployment is scheduled.
  • Removed the manifest from the repository, but the pod wasn't removed; the kustomize-controller logs:
... validation failed: error: no objects passed to apply

In another test, with two deployment manifests and then deleting one of them, the pod was removed fine:

{"level":"info","ts":"2020-11-27T12:43:41.447Z","logger":"controllers.Kustomization","msg":"Kustomization applied in 344.740099ms","kustomization":"test-2","output":{"deployment.apps/podinfo2":"configured"}}
{"level":"info","ts":"2020-11-27T12:43:41.456Z","logger":"controllers.Kustomization","msg":"garbage collection completed: Deployment/test3/podinfo deleted\n","kustomization":"test-2"}
{"level":"info","ts":"2020-11-27T12:43:41.472Z","logger":"controllers.Kustomization","msg":"Reconciliation finished in 785.636958ms, next run in ..

Kustomization creation fails when dir contains non-k8 yaml files

First of all, thanks for the amazing work on flux v2!

When testing migration from v1, I noticed an issue with kustomization generation: given a directory that does not contain a kustomization.yaml but does contain other YAML files not related to Kubernetes objects, kustomization creation fails with:

kustomize create failed: failed to decode Kubernetes YAML from /tmp/flux-system630708581/.sops.yaml: error unmarshaling JSON: while decoding JSON: Object 'Kind' is missing in '{"creation_rules":[{"XXX":"projects/XXX/locations/global/keyRings/XXX/cryptoKeys/XXX","path_regex":"\\.secret\\.yaml$"}]}'

This can be triggered by having YAML files like .sops.yaml or CI workflow definitions (e.g. .gitlab-ci.yml) in the root directory and not having a kustomization.yaml.

Flux v1 ignored non-Kubernetes YAML files thanks to this func, whereas v2 and the kustomization flow require all YAML files to be Kubernetes manifests, using SliceForBytes for validation.

Possible workaround: simply add a kustomization.yaml in the affected directories, but this requires manual action and makes migration a bit more demanding for new users.

I'd be happy to contribute a fix or to document the workaround in case this is considered working as intended.
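
For completeness, the workaround boils down to committing an explicit kustomization.yaml that lists only the Kubernetes manifests, so files like .sops.yaml or .gitlab-ci.yml are never considered (file names are illustrative):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml   # only the listed files are built
  - service.yaml      # .sops.yaml and .gitlab-ci.yml are simply not referenced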

Check pruning objects with finalizers behaves reasonably

During pruning, an object with remaining finalizers can cause deletes to hang, especially when the controllers for those objects are uninstalled first or if they are down/disabled.

We should check what happens in our apply controllers. Some good behavior could look like:

  • object is not deleted
  • does not affect creates/updates of other objects
  • does not affect deletes of other objects
  • create/update/delete of other objects is timely or done concurrently
  • an object specific error about finalizers is returned for PruneFailedReason
  • multiple errors are aggregated

Alternatively, if pruning is done first, maybe it's best not to continue on with anything if it fails.

Inspired by this user-reported issue in Flux v1,
figured we should make sure we do it right:

I figured it out, after way more time than I hoped for. Git was all OK, since fluxctl release --dry-run knew exactly what needed to be done (it was able to do the diff with git); it just couldn't execute the change. The reason is that one of the kube resources related to some recently removed resource could not be deleted. Because of that, flux seems to abort everything and just time out, not just in that namespace but for ANYTHING, including changing the tags on images. It just becomes paralyzed for all subsequent gitops work. I wish it:
  • let me know what resource or action it timed out on
  • didn't shut down all other unrelated syncs
For the record, resources can become undeletable if they have finalizers on them (like the new istio-operator does). Then you have to patch the resource to remove the finalizer, and then remove it.
https://cloud-native.slack.com/archives/CLAJ40HV3/p1599242087142400?thread_ts=1599175627.127600&cid=CLAJ40HV3

Add kubectl error code to error messages

It happens that the controller logs an error with an empty message, e.g.:

{"level":"error","ts":"2020-11-09T13:55:26.336Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"aks20-euw-infra-main-m
onitoring","namespace":"flux","error":"apply failed: "}

In that case the error message should mention that kubectl did not write anything on stderr and print the error code.

kustomize build failed: Found multiple kustomization files under

Hi.
Following the multi-tenant flux repo guide, I end up with the kustomize error:
kustomize build failed: Found multiple kustomization files under: /tmp/tenants576654270/tenants/staging

My sample repo is here: https://github.com/ndemeshchenko/fleet-infra.

Latest log from kustomize controller

{"level":"error","ts":"2020-12-29T11:39:21.069Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"tenants","namespace":"flux-system","error":"kustomize build failed: Found multiple kustomization files under: /tmp/tenants457654148/tenants/staging\n"}

GC seems to abandon objects

Looking at

err := kgc.List(ctx, ulist, client.InNamespace(ns), kgc.matchingLabels(name, namespace, kgc.snapshot.Checksum))
it seems that the GC logic only searches for objects with a known prior checksum, but this seems to leave some objects behind. I unfortunately don't know how it happened, but I definitely have objects that match the kustomize.toolkit.fluxcd.io/name and kustomize.toolkit.fluxcd.io/namespace labels but do not have the correct kustomize.toolkit.fluxcd.io/checksum. I noticed this when we attempted to clean up some previously deployed resources, but they weren't removed.

$ kubectl get statefulset -l kustomize.toolkit.fluxcd.io/namespace=flux-system,kustomize.toolkit.fluxcd.io/name=deploy-ns-kafka -L kustomize.toolkit.fluxcd.io/checksum -A
NAMESPACE   NAME                 READY   AGE   CHECKSUM
kafka       connect              3/3     63d   626af1c4a8d9c2ac041f8e6db6a56d53d936c94a
kafka       connect-mongo-test   0/0     40d   e70b56648e95ab4befdf1c82f49ec2c4ff7c53fc

(The connect-mongo-test was scaled to 0 long ago, and was only recently cleaned up)

The relevant Kustomization:

kind: Kustomization
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"kustomize.toolkit.fluxcd.io/v1beta1","kind":"Kustomization","metadata":{"annotations":{},"labels":{"managed-namespace":"kafka","shell-operator":"006-flux-provision"},"name":"deploy-ns-kafka","namespace":"flux-system","ownerReferences":[{"apiVersion":"v1","kind":"Namespace","name":"kafka","uid":"fc483992-68fe-4aa5-8c8d-440607a0fcc6"}]},"spec":{"dependsOn":[{"name":"cluster"}],"interval":"1m","path":"$REMOVED","prune":true,"serviceAccountName":"deploy-ns-kafka","sourceRef":{"kind":"GitRepository","name":"k8s"},"suspend":false}}
  creationTimestamp: "2020-12-14T18:57:02Z"
  finalizers:
  - finalizers.fluxcd.io
  generation: 4
  labels:
    managed-namespace: kafka
    shell-operator: 006-flux-provision
  name: deploy-ns-kafka
  namespace: flux-system
  ownerReferences:
  - apiVersion: v1
    kind: Namespace
    name: kafka
    uid: fc483992-68fe-4aa5-8c8d-440607a0fcc6
  resourceVersion: "400510691"
  selfLink: /apis/kustomize.toolkit.fluxcd.io/v1beta1/namespaces/flux-system/kustomizations/deploy-ns-kafka
  uid: 19150bd0-3724-4e82-88eb-682466bc0509
spec:
  dependsOn:
  - name: cluster
  interval: 1m
  path: $REMOVED
  prune: true
  serviceAccountName: deploy-ns-kafka
  sourceRef:
    kind: GitRepository
    name: k8s
  suspend: false
status:
  conditions:
  - lastTransitionTime: "2020-12-15T19:22:07Z"
    message: 'Applied revision: master/fff779623a7025bd01f345c4536b7e367f664fa0'
    reason: ReconciliationSucceeded
    status: "True"
    type: Ready
  lastAppliedRevision: master/fff779623a7025bd01f345c4536b7e367f664fa0
  lastAttemptedRevision: master/fff779623a7025bd01f345c4536b7e367f664fa0
  observedGeneration: 4
  snapshot:
    checksum: 626af1c4a8d9c2ac041f8e6db6a56d53d936c94a
    entries:
    - kinds:
        /v1, Kind=Service: Service
        /v1, Kind=ServiceAccount: ServiceAccount
        apps/v1, Kind=Deployment: Deployment
        apps/v1, Kind=StatefulSet: StatefulSet
        autoscaling.k8s.io/v1, Kind=VerticalPodAutoscaler: VerticalPodAutoscaler
        bitnami.com/v1alpha1, Kind=SealedSecret: SealedSecret
        networking.istio.io/v1beta1, Kind=DestinationRule: DestinationRule
        networking.istio.io/v1beta1, Kind=ServiceEntry: ServiceEntry
        networking.istio.io/v1beta1, Kind=Sidecar: Sidecar
        policy/v1beta1, Kind=PodDisruptionBudget: PodDisruptionBudget
        security.istio.io/v1beta1, Kind=PeerAuthentication: PeerAuthentication
      namespace: kafka

Should the GC code search for everything with the kustomize.toolkit.fluxcd.io/name and kustomize.toolkit.fluxcd.io/namespace labels and remove the things with the wrong checksum? That might be a more resilient approach in the face of unknown failures/bugs.

Generate API reference documentation

Besides having a written spec, it would help users if we also generate a reference from our API code. This can be done using "refdocs", as done by the Helm Operator.

  • Create a make api-docs target that generates the reference document at docs/api/kustomize.md
  • Run make api-docs in CI so that we ensure the docs are always up-to-date with code changes

Prune fails when CRD is removed

I have a problem with prune failing after deleting a CRD from a Kustomization record, together with a custom resource from a record that depends on the record the CRD is in. The CRD is deleted, but when the controller tries to prune the record that defines the custom resource of that type, it can't delete it.

Now that Kustomization record is stuck in a prune-failed state.

kubectl -n gitops-system describe deployments.apps kustomize-controller
Name:                   kustomize-controller
Namespace:              gitops-system
CreationTimestamp:      Mon, 03 Aug 2020 15:29:11 -0400
Labels:                 app.kubernetes.io/instance=gitops-system
                        app.kubernetes.io/version=latest
                        control-plane=controller
                        kustomization/name=gitops-gitops-system
                        kustomization/revision=fea286da70b54619fa9fe7624f53a9494e38a948
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=kustomize-controller
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:       app=kustomize-controller
  Annotations:  prometheus.io/port: 8080
                prometheus.io/scrape: true
  Containers:
   manager:
    Image:      fcr-nonprod.fmr.com/fmr-pr172922/fluxcd/kustomize-controller:v0.0.5
    Port:       8080/TCP
    Host Port:  0/TCP
    Args:
      --events-addr=
      --enable-leader-election
      --log-level=debug
      --log-json
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:     100m
      memory:  64Mi
    Liveness:  http-get http://:http-prom/metrics delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      RUNTIME_NAMESPACE:   (v1:metadata.namespace)
    Mounts:
      /tmp from temp (rw)
  Volumes:
   temp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   kustomize-controller-597fc85d74 (1/1 replicas created)
Events:          <none>
{
  "level": "error",
  "ts": "2020-08-26T08:03:29.185Z",
  "logger": "controllers.Kustomization",
  "msg": "Garbage collection for non-namespaced objects failed",
  "kustomization": "gitops-system/kraan-config",
  "error": " error: the server doesn't have a resource type \"LayerMgr\"\n",
  "stacktrace": "github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/fluxcd/kustomize-controller/controllers.prune\n\t/workspace/controllers/kustomization_controller.go:799\ngithub.com/fluxcd/kustomize-controller/controllers.(*KustomizationReconciler).prune\n\t/workspace/controllers/kustomization_controller.go:611\ngithub.com/fluxcd/kustomize-controller/controllers.(*KustomizationReconciler).reconcile\n\t/workspace/controllers/kustomization_controller.go:325\ngithub.com/fluxcd/kustomize-controller/controllers.(*KustomizationReconciler).Reconcile\n\t/workspace/controllers/kustomization_controller.go:187\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:233\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:209\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:188\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"
}
{
  "level": "debug",
  "ts": "2020-08-26T08:03:29.187Z",
  "logger": "controller-runtime.manager.events",
  "msg": "Normal",
  "object": {
    "kind": "Kustomization",
    "namespace": "gitops-system",
    "name": "kraan-config",
    "uid": "68fdd0d6-3e27-408a-a041-6be20a62f936",
    "apiVersion": "kustomize.fluxcd.io/v1alpha1",
    "resourceVersion": "8608197"
  },
  "reason": "error",
  "message": "pruning failed"
}
{
  "level": "info",
  "ts": "2020-08-26T08:03:29.198Z",
  "logger": "controllers.Kustomization",
  "msg": "Reconciliation finished in 426.467759ms, next run in 1m0s",
  "controller": "kustomization",
  "request": "gitops-system/kraan-config",
  "revision": "1.16-fideks-0.0.78/e41a0656be2cc7c47b7b9bb6107941949d31156a"
}
{
  "level": "error",
  "ts": "2020-08-26T08:03:29.198Z",
  "logger": "controller-runtime.controller",
  "msg": "Reconciler error",
  "controller": "kustomization",
  "name": "kraan-config",
  "namespace": "gitops-system",
  "error": "pruning failed",
  "stacktrace": "github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:209\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:188\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"
}

Kustomizations...

kubectl -n gitops-system describe kustomizations.kustomize.fluxcd.io
Name:         addons-config
Namespace:    gitops-system
Labels:       <none>
Annotations:  fluxcd.io/reconcileAt: 2020-08-26 07:51:55.38088604 +0000 UTC m=+1945356.914135879
API Version:  kustomize.fluxcd.io/v1alpha1
Kind:         Kustomization
Metadata:
  Creation Timestamp:  2020-08-03T19:29:14Z
  Finalizers:
    finalizers.fluxcd.io
  Generation:  2
  Managed Fields:
    API Version:  kustomize.fluxcd.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:interval:
        f:path:
        f:prune:
        f:sourceRef:
          .:
          f:kind:
          f:name:
        f:suspend:
    Manager:      kubectl
    Operation:    Update
    Time:         2020-08-06T16:40:49Z
    API Version:  kustomize.fluxcd.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:fluxcd.io/reconcileAt:
        f:finalizers:
          .:
          v:"finalizers.fluxcd.io":
      f:status:
        .:
        f:conditions:
        f:lastAppliedRevision:
        f:lastAttemptedRevision:
        f:snapshot:
          .:
          f:entries:
          f:revision:
    Manager:         kustomize-controller
    Operation:       Update
    Time:            2020-08-26T07:51:55Z
  Resource Version:  8605049
  Self Link:         /apis/kustomize.fluxcd.io/v1alpha1/namespaces/gitops-system/kustomizations/addons-config
  UID:               2b9d04d0-9b84-42c1-85f5-b117dac0da70
Spec:
  Interval:  1m0s
  Path:      ./addons/addons-config
  Prune:     true
  Source Ref:
    Kind:   GitRepository
    Name:   global-config
  Suspend:  true
Status:
  Conditions:
    Last Transition Time:   2020-08-26T07:51:55Z
    Message:                Kustomization is suspended, skipping reconciliation
    Reason:                 Suspended
    Status:                 False
    Type:                   Ready
  Last Applied Revision:    1.16-fideks-0.0.66/d9306f706705bca3772edf1b0c5a704fc5295337
  Last Attempted Revision:  1.16-fideks-0.0.66/d9306f706705bca3772edf1b0c5a704fc5295337
  Snapshot:
    Entries:
      Kinds:
        Kustomization:  kustomize.fluxcd.io/v1alpha1
      Namespace:        gitops-system
    Revision:           1.16-fideks-0.0.66/d9306f706705bca3772edf1b0c5a704fc5295337
Events:                 <none>


Name:         cluster-config
Namespace:    gitops-system
Labels:       <none>
Annotations:  fluxcd.io/reconcileAt: 2020-08-26 07:51:54.514531285 +0000 UTC m=+1945356.047781125
API Version:  kustomize.fluxcd.io/v1alpha1
Kind:         Kustomization
Metadata:
  Creation Timestamp:  2020-08-03T19:29:15Z
  Finalizers:
    finalizers.fluxcd.io
  Generation:  1
  Managed Fields:
    API Version:  kustomize.fluxcd.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:interval:
        f:path:
        f:prune:
        f:sourceRef:
          .:
          f:kind:
          f:name:
    Manager:      kubectl
    Operation:    Update
    Time:         2020-08-03T19:29:15Z
    API Version:  kustomize.fluxcd.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:fluxcd.io/reconcileAt:
        f:finalizers:
          .:
          v:"finalizers.fluxcd.io":
      f:status:
        .:
        f:conditions:
        f:lastAppliedRevision:
        f:lastAttemptedRevision:
        f:snapshot:
          .:
          f:entries:
          f:revision:
    Manager:         kustomize-controller
    Operation:       Update
    Time:            2020-08-26T08:23:15Z
  Resource Version:  8613397
  Self Link:         /apis/kustomize.fluxcd.io/v1alpha1/namespaces/gitops-system/kustomizations/cluster-config
  UID:               18a544f0-a767-4292-83c8-b0d1783507f4
Spec:
  Interval:  1m0s
  Path:      ./addons-config
  Prune:     true
  Source Ref:
    Kind:  GitRepository
    Name:  cluster-config
Status:
  Conditions:
    Last Transition Time:   2020-08-26T08:23:15Z
    Message:                Applied revision: testing/eb159097aaf3e1494af2edc833175998e03ee4b0
    Reason:                 ApplySucceed
    Status:                 True
    Type:                   Ready
  Last Applied Revision:    testing/eb159097aaf3e1494af2edc833175998e03ee4b0
  Last Attempted Revision:  testing/eb159097aaf3e1494af2edc833175998e03ee4b0
  Snapshot:
    Entries:
      Kinds:
        Git Repository:  source.fluxcd.io/v1alpha1
      Namespace:         gitops-system
    Revision:            testing/eb159097aaf3e1494af2edc833175998e03ee4b0
Events:
  Type    Reason  Age                 From                  Message
  ----    ------  ----                ----                  -------
  Normal  info    31m (x21 over 22d)  kustomize-controller  gitrepository.source.fluxcd.io/global-config configured


Name:         gitops
Namespace:    gitops-system
Labels:       <none>
Annotations:  fluxcd.io/reconcileAt: 2020-08-26 07:51:55.276976849 +0000 UTC m=+1945356.810226696
API Version:  kustomize.fluxcd.io/v1alpha1
Kind:         Kustomization
Metadata:
  Creation Timestamp:  2020-08-03T19:29:14Z
  Finalizers:
    finalizers.fluxcd.io
  Generation:  1
  Managed Fields:
    API Version:  kustomize.fluxcd.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:healthChecks:
        f:interval:
        f:path:
        f:prune:
        f:sourceRef:
          .:
          f:kind:
          f:name:
    Manager:      kubectl
    Operation:    Update
    Time:         2020-08-06T16:40:49Z
    API Version:  kustomize.fluxcd.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:fluxcd.io/reconcileAt:
        f:finalizers:
          .:
          v:"finalizers.fluxcd.io":
      f:status:
        .:
        f:conditions:
        f:lastAppliedRevision:
        f:lastAttemptedRevision:
        f:snapshot:
          .:
          f:entries:
          f:revision:
    Manager:         kustomize-controller
    Operation:       Update
    Time:            2020-08-26T08:23:44Z
  Resource Version:  8613521
  Self Link:         /apis/kustomize.fluxcd.io/v1alpha1/namespaces/gitops-system/kustomizations/gitops
  UID:               76c30a70-5deb-4731-b1eb-3ddb800757f1
Spec:
  Health Checks:
    Kind:       Deployment
    Name:       source-controller
    Namespace:  gitops-system
    Kind:       Deployment
    Name:       kustomize-controller
    Namespace:  gitops-system
  Interval:     1m0s
  Path:         ./addons/gitops
  Prune:        true
  Source Ref:
    Kind:  GitRepository
    Name:  global-config
Status:
  Conditions:
    Last Transition Time:   2020-08-26T08:23:44Z
    Message:                Applied revision: 1.16-fideks-0.0.78/e41a0656be2cc7c47b7b9bb6107941949d31156a
    Reason:                 ApplySucceed
    Status:                 True
    Type:                   Ready
  Last Applied Revision:    1.16-fideks-0.0.78/e41a0656be2cc7c47b7b9bb6107941949d31156a
  Last Attempted Revision:  1.16-fideks-0.0.78/e41a0656be2cc7c47b7b9bb6107941949d31156a
  Snapshot:
    Entries:
      Kinds:
        Cluster Role Binding:        rbac.authorization.k8s.io/v1
        Custom Resource Definition:  apiextensions.k8s.io/v1
        Namespace:                   v1
      Namespace:
      Kinds:
        Deployment:      apps/v1
        Network Policy:  networking.k8s.io/v1
        Role:            rbac.authorization.k8s.io/v1
        Role Binding:    rbac.authorization.k8s.io/v1
        Service:         v1
      Namespace:         gitops-system
    Revision:            1.16-fideks-0.0.78/e41a0656be2cc7c47b7b9bb6107941949d31156a
Events:
  Type    Reason  Age   From                  Message
  ----    ------  ----  ----                  -------
  Normal  info    36m   kustomize-controller  customresourcedefinition.apiextensions.k8s.io/helmcharts.source.fluxcd.io configured
customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.fluxcd.io configured
service/source-controller configured
deployment.apps/source-controller configured
networkpolicy.networking.k8s.io/deny-ingress configured
namespace/gitops-system configured
customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.fluxcd.io configured
customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.fluxcd.io configured
role.rbac.authorization.k8s.io/crd-controller-gitops-system configured
rolebinding.rbac.authorization.k8s.io/crd-controller-gitops-system configured
clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-gitops-system configured
deployment.apps/kustomize-controller configured
  Normal  info  31m  kustomize-controller  customresourcedefinition.apiextensions.k8s.io/helmcharts.source.fluxcd.io configured
customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.fluxcd.io configured
customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.fluxcd.io configured
role.rbac.authorization.k8s.io/crd-controller-gitops-system configured
deployment.apps/kustomize-controller configured
namespace/gitops-system configured
rolebinding.rbac.authorization.k8s.io/crd-controller-gitops-system configured
clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-gitops-system configured
service/source-controller configured
deployment.apps/source-controller configured
networkpolicy.networking.k8s.io/deny-ingress configured
customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.fluxcd.io configured
  Normal  info  31m (x33 over 22d)  kustomize-controller  Health check passed for Deployment 'gitops-system/source-controller'
Health check passed for Deployment 'gitops-system/kustomize-controller'


Name:         kraan-config
Namespace:    gitops-system
Labels:       <none>
Annotations:  fluxcd.io/reconcileAt: 2020-08-26 07:51:55.294977464 +0000 UTC m=+1945356.828227304
API Version:  kustomize.fluxcd.io/v1alpha1
Kind:         Kustomization
Metadata:
  Creation Timestamp:  2020-08-03T19:29:14Z
  Finalizers:
    finalizers.fluxcd.io
  Generation:  1
  Managed Fields:
    API Version:  kustomize.fluxcd.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:dependsOn:
        f:interval:
        f:path:
        f:prune:
        f:sourceRef:
          .:
          f:kind:
          f:name:
    Manager:      kubectl
    Operation:    Update
    Time:         2020-08-06T16:40:49Z
    API Version:  kustomize.fluxcd.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:fluxcd.io/reconcileAt:
        f:finalizers:
          .:
          v:"finalizers.fluxcd.io":
      f:status:
        .:
        f:conditions:
        f:lastAppliedRevision:
        f:lastAttemptedRevision:
        f:snapshot:
          .:
          f:entries:
          f:revision:
    Manager:         kustomize-controller
    Operation:       Update
    Time:            2020-08-26T08:14:24Z
  Resource Version:  8611071
  Self Link:         /apis/kustomize.fluxcd.io/v1alpha1/namespaces/gitops-system/kustomizations/kraan-config
  UID:               68fdd0d6-3e27-408a-a041-6be20a62f936
Spec:
  Depends On:
    kraan-crd
  Interval:  1m0s
  Path:      ./addons/addons-config/kraan
  Prune:     true
  Source Ref:
    Kind:  GitRepository
    Name:  global-config
Status:
  Conditions:
    Last Transition Time:   2020-08-26T08:14:24Z
    Message:                pruning failed
    Reason:                 PruneFailed
    Status:                 False
    Type:                   Ready
  Last Applied Revision:    1.16-fideks-0.0.76/f6de156a98688631c902e43a55793d7c58b826ab
  Last Attempted Revision:  1.16-fideks-0.0.78/e41a0656be2cc7c47b7b9bb6107941949d31156a
  Snapshot:
    Entries:
      Kinds:
        Addons Layer:  kraan.io/v1alpha1
        Layer Mgr:     kraan.io/v1alpha1
      Namespace:
    Revision:          1.16-fideks-0.0.76/f6de156a98688631c902e43a55793d7c58b826ab
Events:
  Type    Reason  Age                   From                  Message
  ----    ------  ----                  ----                  -------
  Normal  error   36m                   kustomize-controller  kustomize build failed: Error: accumulating resources: accumulateFile "accumulating resources from 'mgr.yaml': evalsymlink failure on '/tmp/kraan-config748511563/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config748511563/addons/addons-config/kraan/mgr.yaml: no such file or directory", loader.New "Error loading mgr.yaml with git: url lacks orgRepo: mgr.yaml, dir: evalsymlink failure on '/tmp/kraan-config748511563/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config748511563/addons/addons-config/kraan/mgr.yaml: no such file or directory, get: invalid source string: mgr.yaml"
  Normal  error   36m                   kustomize-controller  kustomize build failed: Error: accumulating resources: accumulateFile "accumulating resources from 'mgr.yaml': evalsymlink failure on '/tmp/kraan-config437913525/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config437913525/addons/addons-config/kraan/mgr.yaml: no such file or directory", loader.New "Error loading mgr.yaml with git: url lacks orgRepo: mgr.yaml, dir: evalsymlink failure on '/tmp/kraan-config437913525/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config437913525/addons/addons-config/kraan/mgr.yaml: no such file or directory, get: invalid source string: mgr.yaml"
  Normal  error   36m                   kustomize-controller  kustomize build failed: Error: accumulating resources: accumulateFile "accumulating resources from 'mgr.yaml': evalsymlink failure on '/tmp/kraan-config152966032/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config152966032/addons/addons-config/kraan/mgr.yaml: no such file or directory", loader.New "Error loading mgr.yaml with git: url lacks orgRepo: mgr.yaml, dir: evalsymlink failure on '/tmp/kraan-config152966032/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config152966032/addons/addons-config/kraan/mgr.yaml: no such file or directory, get: invalid source string: mgr.yaml"
  Normal  error   36m                   kustomize-controller  kustomize build failed: Error: accumulating resources: accumulateFile "accumulating resources from 'mgr.yaml': evalsymlink failure on '/tmp/kraan-config191173442/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config191173442/addons/addons-config/kraan/mgr.yaml: no such file or directory", loader.New "Error loading mgr.yaml with git: url lacks orgRepo: mgr.yaml, dir: evalsymlink failure on '/tmp/kraan-config191173442/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config191173442/addons/addons-config/kraan/mgr.yaml: no such file or directory, get: invalid source string: mgr.yaml"
  Normal  error   36m                   kustomize-controller  kustomize build failed: Error: accumulating resources: accumulateFile "accumulating resources from 'mgr.yaml': evalsymlink failure on '/tmp/kraan-config903523769/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config903523769/addons/addons-config/kraan/mgr.yaml: no such file or directory", loader.New "Error loading mgr.yaml with git: url lacks orgRepo: mgr.yaml, dir: evalsymlink failure on '/tmp/kraan-config903523769/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config903523769/addons/addons-config/kraan/mgr.yaml: no such file or directory, get: invalid source string: mgr.yaml"
  Normal  error   36m                   kustomize-controller  kustomize build failed: Error: accumulating resources: accumulateFile "accumulating resources from 'mgr.yaml': evalsymlink failure on '/tmp/kraan-config295145391/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config295145391/addons/addons-config/kraan/mgr.yaml: no such file or directory", loader.New "Error loading mgr.yaml with git: url lacks orgRepo: mgr.yaml, dir: evalsymlink failure on '/tmp/kraan-config295145391/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config295145391/addons/addons-config/kraan/mgr.yaml: no such file or directory, get: invalid source string: mgr.yaml"
  Normal  error   36m                   kustomize-controller  kustomize build failed: Error: accumulating resources: accumulateFile "accumulating resources from 'mgr.yaml': evalsymlink failure on '/tmp/kraan-config951230916/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config951230916/addons/addons-config/kraan/mgr.yaml: no such file or directory", loader.New "Error loading mgr.yaml with git: url lacks orgRepo: mgr.yaml, dir: evalsymlink failure on '/tmp/kraan-config951230916/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config951230916/addons/addons-config/kraan/mgr.yaml: no such file or directory, get: invalid source string: mgr.yaml"
  Normal  error   36m                   kustomize-controller  kustomize build failed: Error: accumulating resources: accumulateFile "accumulating resources from 'mgr.yaml': evalsymlink failure on '/tmp/kraan-config579288659/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config579288659/addons/addons-config/kraan/mgr.yaml: no such file or directory", loader.New "Error loading mgr.yaml with git: url lacks orgRepo: mgr.yaml, dir: evalsymlink failure on '/tmp/kraan-config579288659/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config579288659/addons/addons-config/kraan/mgr.yaml: no such file or directory, get: invalid source string: mgr.yaml"
  Normal  error   36m                   kustomize-controller  kustomize build failed: Error: accumulating resources: accumulateFile "accumulating resources from 'mgr.yaml': evalsymlink failure on '/tmp/kraan-config670738326/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config670738326/addons/addons-config/kraan/mgr.yaml: no such file or directory", loader.New "Error loading mgr.yaml with git: url lacks orgRepo: mgr.yaml, dir: evalsymlink failure on '/tmp/kraan-config670738326/addons/addons-config/kraan/mgr.yaml' : lstat /tmp/kraan-config670738326/addons/addons-config/kraan/mgr.yaml: no such file or directory, get: invalid source string: mgr.yaml"
  Normal  info    31m (x1377 over 22d)  kustomize-controller  Dependencies do not meet ready condition, retrying in 30s
  Normal  info    31m (x3 over 22d)     kustomize-controller  addonslayer.kraan.io/apps created
addonslayer.kraan.io/base created
addonslayer.kraan.io/bootstrap created
addonslayer.kraan.io/mgmt created
  Normal  error  31m (x8 over 36m)     kustomize-controller  (combined from similar events): pruning failed
  Normal  error  9m22s (x18 over 31m)  kustomize-controller  pruning failed


Name:         kraan-crd
Namespace:    gitops-system
Labels:       <none>
Annotations:  fluxcd.io/reconcileAt: 2020-08-26 07:51:55.284647353 +0000 UTC m=+1945356.817897195
API Version:  kustomize.fluxcd.io/v1alpha1
Kind:         Kustomization
Metadata:
  Creation Timestamp:  2020-08-03T19:29:14Z
  Finalizers:
    finalizers.fluxcd.io
  Generation:  1
  Managed Fields:
    API Version:  kustomize.fluxcd.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:dependsOn:
        f:interval:
        f:path:
        f:prune:
        f:sourceRef:
          .:
          f:kind:
          f:name:
    Manager:      kubectl
    Operation:    Update
    Time:         2020-08-06T16:40:49Z
    API Version:  kustomize.fluxcd.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:fluxcd.io/reconcileAt:
        f:finalizers:
          .:
          v:"finalizers.fluxcd.io":
      f:status:
        .:
        f:conditions:
        f:lastAppliedRevision:
        f:lastAttemptedRevision:
        f:snapshot:
          .:
          f:entries:
          f:revision:
    Manager:         kustomize-controller
    Operation:       Update
    Time:            2020-08-26T08:23:01Z
  Resource Version:  8613335
  Self Link:         /apis/kustomize.fluxcd.io/v1alpha1/namespaces/gitops-system/kustomizations/kraan-crd
  UID:               7e18eaf6-55bc-4b94-afcb-23045743e15e
Spec:
  Depends On:
    gitops
  Interval:  1m0s
  Path:      ./addons/kraan/crd
  Prune:     true
  Source Ref:
    Kind:  GitRepository
    Name:  global-config
Status:
  Conditions:
    Last Transition Time:   2020-08-26T08:23:01Z
    Message:                Applied revision: 1.16-fideks-0.0.78/e41a0656be2cc7c47b7b9bb6107941949d31156a
    Reason:                 ApplySucceed
    Status:                 True
    Type:                   Ready
  Last Applied Revision:    1.16-fideks-0.0.78/e41a0656be2cc7c47b7b9bb6107941949d31156a
  Last Attempted Revision:  1.16-fideks-0.0.78/e41a0656be2cc7c47b7b9bb6107941949d31156a
  Snapshot:
    Entries:
      Kinds:
        Custom Resource Definition:  apiextensions.k8s.io/v1beta1
      Namespace:
    Revision:                        1.16-fideks-0.0.78/e41a0656be2cc7c47b7b9bb6107941949d31156a
Events:
  Type    Reason  Age                  From                  Message
  ----    ------  ----                 ----                  -------
  Normal  info    36m                  kustomize-controller  customresourcedefinition.apiextensions.k8s.io "layermgrs.kraan.io" deleted
  Normal  info    31m (x13 over 22d)   kustomize-controller  customresourcedefinition.apiextensions.k8s.io/addonslayers.kraan.io configured
  Normal  info    15m (x884 over 22d)  kustomize-controller  Dependencies do not meet ready condition, retrying in 30s


Name:         kraan-mgr
Namespace:    gitops-system
Labels:       <none>
Annotations:  fluxcd.io/reconcileAt: 2020-08-26 07:51:55.369557372 +0000 UTC m=+1945356.902807212
API Version:  kustomize.fluxcd.io/v1alpha1
Kind:         Kustomization
Metadata:
  Creation Timestamp:  2020-08-03T19:29:14Z
  Finalizers:
    finalizers.fluxcd.io
  Generation:  2
  Managed Fields:
    API Version:  kustomize.fluxcd.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:dependsOn:
        f:healthChecks:
        f:interval:
        f:path:
        f:prune:
        f:sourceRef:
          .:
          f:kind:
          f:name:
        f:suspend:
    Manager:      kubectl
    Operation:    Update
    Time:         2020-08-06T16:40:49Z
    API Version:  kustomize.fluxcd.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:fluxcd.io/reconcileAt:
        f:finalizers:
          .:
          v:"finalizers.fluxcd.io":
      f:status:
        .:
        f:conditions:
        f:lastAppliedRevision:
        f:lastAttemptedRevision:
        f:snapshot:
          .:
          f:entries:
          f:revision:
    Manager:         kustomize-controller
    Operation:       Update
    Time:            2020-08-26T07:51:55Z
  Resource Version:  8605046
  Self Link:         /apis/kustomize.fluxcd.io/v1alpha1/namespaces/gitops-system/kustomizations/kraan-mgr
  UID:               8df1ec69-6bda-4ae5-95ec-d0572d343e3e
Spec:
  Depends On:
    kraan-rbac
  Health Checks:
    Kind:       Deployment
    Name:       kraan-controller
    Namespace:  addons-config
  Interval:     1m0s
  Path:         ./addons/kraan/manager
  Prune:        true
  Source Ref:
    Kind:   GitRepository
    Name:   global-config
  Suspend:  true
Status:
  Conditions:
    Last Transition Time:   2020-08-26T07:51:55Z
    Message:                Kustomization is suspended, skipping reconciliation
    Reason:                 Suspended
    Status:                 False
    Type:                   Ready
  Last Applied Revision:    1.16-fideks-0.0.65/5647f08d2076605967e70e3b93a4617aec302522
  Last Attempted Revision:  1.16-fideks-0.0.65/5647f08d2076605967e70e3b93a4617aec302522
  Snapshot:
    Entries:
      Kinds:
        Deployment:  apps/v1
      Namespace:     addons-config
    Revision:        1.16-fideks-0.0.65/5647f08d2076605967e70e3b93a4617aec302522
Events:              <none>


Name:         kraan-rbac
Namespace:    gitops-system
Labels:       <none>
Annotations:  fluxcd.io/reconcileAt: 2020-08-26 07:51:55.356167477 +0000 UTC m=+1945356.889417327
API Version:  kustomize.fluxcd.io/v1alpha1
Kind:         Kustomization
Metadata:
  Creation Timestamp:  2020-08-03T19:29:14Z
  Finalizers:
    finalizers.fluxcd.io
  Generation:  1
  Managed Fields:
    API Version:  kustomize.fluxcd.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:dependsOn:
        f:interval:
        f:path:
        f:prune:
        f:sourceRef:
          .:
          f:kind:
          f:name:
    Manager:      kubectl
    Operation:    Update
    Time:         2020-08-06T16:40:49Z
    API Version:  kustomize.fluxcd.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:fluxcd.io/reconcileAt:
        f:finalizers:
          .:
          v:"finalizers.fluxcd.io":
      f:status:
        .:
        f:conditions:
        f:lastAppliedRevision:
        f:lastAttemptedRevision:
        f:snapshot:
          .:
          f:entries:
          f:revision:
    Manager:         kustomize-controller
    Operation:       Update
    Time:            2020-08-26T08:23:04Z
  Resource Version:  8613349
  Self Link:         /apis/kustomize.fluxcd.io/v1alpha1/namespaces/gitops-system/kustomizations/kraan-rbac
  UID:               d5184326-6e80-4b6f-bed5-00e629391b31
Spec:
  Depends On:
    kraan-crd
  Interval:  1m0s
  Path:      ./addons/kraan/rbac
  Prune:     true
  Source Ref:
    Kind:  GitRepository
    Name:  global-config
Status:
  Conditions:
    Last Transition Time:   2020-08-26T08:23:04Z
    Message:                Applied revision: 1.16-fideks-0.0.78/e41a0656be2cc7c47b7b9bb6107941949d31156a
    Reason:                 ApplySucceed
    Status:                 True
    Type:                   Ready
  Last Applied Revision:    1.16-fideks-0.0.78/e41a0656be2cc7c47b7b9bb6107941949d31156a
  Last Attempted Revision:  1.16-fideks-0.0.78/e41a0656be2cc7c47b7b9bb6107941949d31156a
  Snapshot:
    Entries:
      Kinds:
        Role:             rbac.authorization.k8s.io/v1
        Role Binding:     rbac.authorization.k8s.io/v1
        Service Account:  v1
      Namespace:          addons-config
      Kinds:
        Cluster Role:          rbac.authorization.k8s.io/v1
        Cluster Role Binding:  rbac.authorization.k8s.io/v1
      Namespace:
    Revision:                  1.16-fideks-0.0.78/e41a0656be2cc7c47b7b9bb6107941949d31156a
Events:
  Type    Reason  Age                  From                  Message
  ----    ------  ----                 ----                  -------
  Normal  error   55m (x2 over 5d14h)  kustomize-controller  faild to untar artifact, error: tar error: unexpected EOF
  Normal  info    36m                  kustomize-controller  clusterrole.rbac.authorization.k8s.io/gitops-source configured
rolebinding.rbac.authorization.k8s.io/leader-election-rolebinding configured
clusterrolebinding.rbac.authorization.k8s.io/gitops-source configured
serviceaccount/deployer configured
role.rbac.authorization.k8s.io/manager-role configured
clusterrole.rbac.authorization.k8s.io/helm-release configured
rolebinding.rbac.authorization.k8s.io/manager-role-binding configured
clusterrolebinding.rbac.authorization.k8s.io/addons-config-admin-rolebinding configured
clusterrolebinding.rbac.authorization.k8s.io/helm-release configured
role.rbac.authorization.k8s.io/leader-election-role configured
  Normal  info  31m  kustomize-controller  serviceaccount/deployer configured
role.rbac.authorization.k8s.io/manager-role configured
rolebinding.rbac.authorization.k8s.io/leader-election-rolebinding configured
rolebinding.rbac.authorization.k8s.io/manager-role-binding configured
clusterrolebinding.rbac.authorization.k8s.io/gitops-source configured
clusterrolebinding.rbac.authorization.k8s.io/helm-release configured
role.rbac.authorization.k8s.io/leader-election-role configured
clusterrole.rbac.authorization.k8s.io/gitops-source configured
clusterrole.rbac.authorization.k8s.io/helm-release configured
clusterrolebinding.rbac.authorization.k8s.io/addons-config-admin-rolebinding configured
  Normal  info  15m (x1359 over 22d)  kustomize-controller  Dependencies do not meet ready condition, retrying in 30s

Health assessment for all resources of a given kind.

The current health assessment mechanism requires users to provide a white list of resource references in a Kustomization resource, which introduces a tight coupling between the Kustomization resource itself and the contents retrieved from its source git repository. This is not ideal in certain use cases, for example, if you add a new deployment or remove a deployment in the git repository, you have to modify the Kustomization resource in the cluster for health assessment to work.

Ideally, it would be nice to have a mechanism for the kustomize controller to automatically detect what resources should be health-checked, but this is hard to do because not all resources are kstatus-compliant, especially for custom resources.

That said, a possible improvement is to allow health assessment to be applied to a white list of resource kinds provided by users. Although there is still a coupling between the Kustomization resource and the git repository contents, it is much looser, because adding or removing resource kinds happens far less often than changing resource names.

Would it be a good idea to allow empty names in a health check entry, to indicate that the health check applies to all resources of the same group/version/kind submitted by this Kustomization (see the sketch below)?

Additionally, how about listing all resources to which the health checks apply, along with the health results, in the Kustomization status for better observability?
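
For illustration only, a hypothetical sketch of what such a kind-scoped entry could look like if empty names were allowed; neither the empty name nor its semantics are supported today:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  path: ./apps
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  healthChecks:
    # hypothetical: an empty name would mean "all Deployments applied by this Kustomization"
    - apiVersion: apps/v1
      kind: Deployment
      name: ""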

Force Kustomization targetNamespace default value

While experimenting with Flux2 for Kubernetes continuous deployment, I encountered a corner case without a clear solution.

Context

The goal of the experiment was to create a multi-cluster/multi-namespace/multi-application folder layout:

.
├── apps
│   ├── collections
│   │   ├── app-collection-example
│   │   │   ├── app1.yaml
│   │   │   └── app2.yaml
│   └── standalone
│       ├── app1
│       │   ├── helm-release.yaml
│       │   └── kustomization.yaml
│       ├── app2
│       │   ├── helm-release.yaml
│       │   └── kustomization.yaml
├── clusters
│   ├── minikube
│   │   ├── flux-system
│   │   │   ├── gotk-components.yaml
│   │   │   ├── gotk-sync.yaml
│   │   │   └── kustomization.yaml
│   │   ├── namespaces
│   │   │   ├── namespace-A
│   │   │   │   ├── app-collection-kustomization.yaml
│   │   │   │   └── namespace.yaml
│   │   └── sources
│   │       └── sources.yaml
└── sources
    ├── git-repositories
    │   └── git-repo.yaml
    └── helm-repositories
        ├── helm-incubator.yaml
        └── helm-stable.yaml

The app-collection-kustomization.yaml is a Kustomization with a well-known targetNamespace:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: app-collection-kustomization
  namespace: flux-system
spec:
  interval: 10m0s
  dependsOn:
    - name: sources
  path: ./fluxcd/apps/collections/app-collection-example
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  validation: client
  targetNamespace: namespace-A

The files app1.yaml and app2.yaml are meant to be Kustomizations whose namespace will be overridden by the app-collection-kustomization:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: app1
  namespace: flux-system
spec:
  interval: 10m0s
  dependsOn:
    - name: sources
  path: ./fluxcd/apps/standalone/app1
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  validation: client

Problem

All the resources managed by app1.yaml and app2.yaml are no longer constrained to the targetNamespace.

Expectation

Allow setting a field in the files app1.yaml and app2.yaml that keeps the parent targetNamespace as it was defined in the app-collection-kustomization:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: app1
  namespace: flux-system
spec:
  interval: 10m0s
  dependsOn:
    - name: sources
  path: ./fluxcd/apps/standalone/app1
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  validation: client
  limitToNamespace: true

where limitToNamespace is a boolean that, when true, overrides the targetNamespace value with the namespace of the Kustomization resource itself.

This feature would also bring consistency with the HelmRelease behavior for the same targetNamespace field.

Allow for filtering kubectl apply results based on result (created, updated, no change)

Currently, the notifications sent by the kustomize controller contain all the resources processed by the Kustomization. When there are large numbers of resources, the unchanged resources overwhelm the notification body, even though all we care about are the updated and created resources.

It would be good to be able to filter the notifications based on whether the result of the kubectl apply was a create, an update, or no change.

Unit testing

Currently the kustomize controller testing is done with Kubernetes Kind and a series of end-to-end tests. We should create unit tests and mock the source controller artifacts server.

The artifacts mock server could be built on top of Go httptest with the following features:

  • func new() (port int) creates an httptest server and returns the allocated port
  • func artifactFromYaml(path, yaml string) (url string) creates a tar.gz with the path e.g. dev/manifests.yaml and returns the URL in the format hash(path).tar.gz

A basic unit test could look like:

  • init mock server
  • create artifact from an in-memory namespace yaml
  • create a GitRepository object with the artifact url set to the one received from the mock server
  • create Kustomization object referencing the above source
  • start Kustomize reconciler
  • wait for the namespace to be created
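
A minimal sketch of the mock artifact server described above, built on net/http/httptest; the type and function names (and returning a base URL instead of a bare port) are assumptions for illustration, not existing controller code:

package testserver

import (
	"archive/tar"
	"compress/gzip"
	"crypto/sha256"
	"fmt"
	"net/http"
	"net/http/httptest"
	"os"
	"path/filepath"
)

// ArtifactServer serves tar.gz artifacts over HTTP, mimicking source-controller.
type ArtifactServer struct {
	*httptest.Server
	root string
}

// New creates a temporary artifact directory and starts an httptest server on it.
func New() (*ArtifactServer, error) {
	root, err := os.MkdirTemp("", "artifacts")
	if err != nil {
		return nil, err
	}
	return &ArtifactServer{httptest.NewServer(http.FileServer(http.Dir(root))), root}, nil
}

// ArtifactFromYAML packs the given manifest under path into hash(path).tar.gz
// and returns the URL it can be downloaded from.
func (s *ArtifactServer) ArtifactFromYAML(path, manifest string) (string, error) {
	name := fmt.Sprintf("%x.tar.gz", sha256.Sum256([]byte(path)))
	f, err := os.Create(filepath.Join(s.root, name))
	if err != nil {
		return "", err
	}
	defer f.Close()

	gw := gzip.NewWriter(f)
	tw := tar.NewWriter(gw)
	if err := tw.WriteHeader(&tar.Header{Name: path, Mode: 0o600, Size: int64(len(manifest))}); err != nil {
		return "", err
	}
	if _, err := tw.Write([]byte(manifest)); err != nil {
		return "", err
	}
	if err := tw.Close(); err != nil {
		return "", err
	}
	if err := gw.Close(); err != nil {
		return "", err
	}
	return s.URL + "/" + name, nil
}

A test could then point a GitRepository artifact URL at the value returned by ArtifactFromYAML and drive the Kustomization reconciler against it.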

Garbage Collection Deleting "downstream" Resources

I am using KubeDB, with its Helm chart deployed via Flux. However, I am seeing the kustomize controller sometimes delete Service resources related to it. It does not happen every time; I have seen it happen when altering another resource within the same kustomization.yaml, though I cannot tell whether that is always the trigger. Running a reconcile seems to resolve it (or the KubeDB operator eventually fixes it on its own, I am not sure).

  • Kustomize Controller: v0.4.0
  • KubeDB Operator: v0.15.1

Kustomize Controller Log

Here I altered a setting for the Harbor HelmRelease, and as a result the KubeDB services were removed (except the redis service, which remained intact).

Log entries cleaned up a little bit

{
  "level":"info",
  "ts":"2020-12-04T15:16:01.894Z",
  "logger":"controllers.Kustomization",
  "msg":"
    garbage collection completed:
      Service/registry/postgres deleted
      Service/registry/postgres-pods deleted
      Service/registry/postgres-standby deleted
      Service/registry/redis-pods deleted
    ",
  "kustomization":"flux-system/flux-system"
}

Error hidden with info log level and status body too long

This is partially related to #190.

If a repository contains many manifests and one of them has an error, the error is not visible at the info log level because you only end up seeing:

{"level":"error","ts":"2020-12-14T09:52:40.719Z","logger":"controllers.Kustomization","msg":"unable to update status after reconciliation","controller":"kustomization","request":"flux-system/k8s","error":"Kustomization.kustomize.toolkit.fluxcd.io \"k8s\" is invalid: status.conditions.message: Invalid value: \"\": status.conditions.message in body should be at most 32768 chars long"}

However, if the log level is debug, the real error is visible just before that, in the debug message with reason Normal.
The error should be visible at all log levels, despite the body too long error.

Dry-run support

A dry-run feature would be quite useful to have in the Kustomization pipeline. A dry run helps when dealing with big sources, and also when migrating a source with existing manifests.
Basically, if dry-run is enabled, the reconciliation should stop after validation and before the apply.
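
A purely hypothetical sketch of how this could be surfaced on the Kustomization spec; the dryRun field below does not exist in the current API and its name is only an assumption for illustration:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: big-source
  namespace: flux-system
spec:
  interval: 10m
  path: ./manifests
  prune: false
  sourceRef:
    kind: GitRepository
    name: big-source
  validation: server
  # hypothetical field: build and validate the manifests, then stop before the apply step
  dryRun: true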

kustomize-controller CrashLoopBackoff with v0.2.0 release

flux install in a fresh cluster fails to fully initialize the kustomize-controller in the latest v0.2.0 release.

Relevant logs are below:

kustomize-controller-54954bd5dd-m68tt manager {"level":"info","ts":"2020-10-30T01:07:35.657Z","logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
kustomize-controller-54954bd5dd-m68tt manager {"level":"info","ts":"2020-10-30T01:07:35.658Z","logger":"setup","msg":"starting manager"}
kustomize-controller-54954bd5dd-m68tt manager I1030 01:07:35.659177       8 leaderelection.go:242] attempting to acquire leader lease  flux-system/5b6ca942.fluxcd.io...
kustomize-controller-54954bd5dd-m68tt manager I1030 01:07:35.668219       8 leaderelection.go:252] successfully acquired lease flux-system/5b6ca942.fluxcd.io
kustomize-controller-54954bd5dd-m68tt manager {"level":"info","ts":"2020-10-30T01:07:35.760Z","logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
kustomize-controller-54954bd5dd-m68tt manager {"level":"info","ts":"2020-10-30T01:07:35.760Z","logger":"controller","msg":"Starting EventSource","reconcilerGroup":"helm.toolkit.fluxcd.io","reconcilerKind":"HelmRelease","controller":"helmrelease","source":"kind source: /, Kind="}
kustomize-controller-54954bd5dd-m68tt manager {"level":"info","ts":"2020-10-30T01:07:35.760Z","logger":"controller","msg":"Starting EventSource","reconcilerGroup":"helm.toolkit.fluxcd.io","reconcilerKind":"HelmRelease","controller":"helmrelease","source":"kind source: /, Kind="}
kustomize-controller-54954bd5dd-m68tt manager {"level":"info","ts":"2020-10-30T01:07:35.861Z","logger":"controller","msg":"Starting Controller","reconcilerGroup":"helm.toolkit.fluxcd.io","reconcilerKind":"HelmRelease","controller":"helmrelease"}
kustomize-controller-54954bd5dd-m68tt manager {"level":"info","ts":"2020-10-30T01:07:35.861Z","logger":"controller","msg":"Starting workers","reconcilerGroup":"helm.toolkit.fluxcd.io","reconcilerKind":"HelmRelease","controller":"helmrelease","worker count":4}
kustomize-controller-54954bd5dd-m68tt manager {"level":"info","ts":"2020-10-30T01:07:56.731Z","logger":"controller","msg":"Stopping workers","reconcilerGroup":"helm.toolkit.fluxcd.io","reconcilerKind":"HelmRelease","controller":"helmrelease"}

failed to download artifact error in GKE private cluster

Hi, I got errors like the following on a GKE private cluster:

flux-system.flux-system
faild to download artifact from http://source-controller.flux-system/gitrepository/flux-system/flux-system/7914abbcca1376a18c4b07bc14aff3e5b2067867.tar.gz, status: 404 Not Found
revision
master/7914abbcca1376a18c4b07bc14aff3e5b2067867
flux-system.flux-system
faild to download artifact from http://source-controller.flux-system/gitrepository/flux-system/flux-system/7914abbcca1376a18c4b07bc14aff3e5b2067867.tar.gz, status: 404 Not Found
revision
master/7914abbcca1376a18c4b07bc14aff3e5b2067867

I think this is related to DNS issues, but I have no idea what causes this error.

If the domain were source-controller.flux-system.svc.cluster.local I guess it would work, but I have no way to tune the hostname. Is there any idea about this problem? I haven't tested on a GKE public cluster, but I have seen source-controller.flux-system resolve as a valid URL in other environments.

Reconciler error : "kustomization path not found" when removing namespace

Hello,

We have a git repository flux-workspaces that is organized in the following manner:

flux-workspaces
	<mycluster01>
		<namespace01>
			<tool01>
			<tool02>
			kustomization.yaml
			other_yaml_files.yaml
		<namespace02>
		<namespace03>
		...
	<mycluster02>
		...

The flux namespace is a bit different, as it has a Kustomization resource for each namespace, which looks like this:

$ cat mycluster01/flux/fluxkustomization-mynamespace01.yaml
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: mycluster01-mynamespace01
spec:
  interval: 10m
  path: ./mycluster01/mynamespace01/
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-workspaces
  targetNamespace: mynamespace01

Each of these files is then referenced in the kustomization.yaml file of the flux namespace:

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- fluxkustomization-mynamespace01.yaml
...

So, to create a new namespace, we do the following:

  • add a Kustomization for the syncing of the new namespace in the flux namespace: mycluster01/flux/fluxkustomization-mynamespace01.yaml
  • reference that new file in mycluster01/flux/kustomization.yaml
  • create the new namespace folder mycluster01/mynamespace01/ with the following items:
    • namespace.yaml: definition of the namespace
    • serviceaccount-default.yml: default service account
    • serviceaccount-jenkins.yml: jenkins service account
    • rolebinding-admin.yaml: bind the jenkins service account to an admin ClusterRole
    • kustomization.yaml: references the above files in its resources

After pushing these changes to the repo, everything is OK: the namespace and all its objects are created, and the logs show no errors.

Now, when we delete the namespace, the following is done:

  • remove the namespace folder mycluster01/mynamespace01/
  • remove the namespace Kustomization mycluster01/flux/fluxkustomization-mynamespace01.yaml
  • remove the namespace Kustomization reference from the mycluster01/flux/kustomization.yaml resources

When these changes are committed and pushed to the repo:

  • the namespace and all its objects are correctly removed from the cluster
  • we get a whole bunch of errors in the kustomize controller logs:
{"level":"info","ts":"2021-01-08T08:22:16.756Z","logger":"controllers.Kustomization","msg":"Reconciliation finished in 999.976256ms, next run in 10m0s","controller":"kustomization","request":"flux/mycluster01-mynamespace01","revision":"master/28ff8ce47754604a8eeb9480a74b4568f78cc181"}
{"level":"error","ts":"2021-01-08T08:22:16.756Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"mycluster01-mynamespace01","namespace":"flux","error":"kustomization path not found: stat /tmp/mycluster01-mynamespace01140601269/mycluster01/mynamespace01: no such file or directory"}
{"level":"info","ts":"2021-01-08T08:22:31.456Z","logger":"controllers.Kustomization","msg":"Reconciliation finished in 800.520349ms, next run in 10m0s","controller":"kustomization","request":"flux/mycluster01-mynamespace01","revision":"master/28ff8ce47754604a8eeb9480a74b4568f78cc181"}
{"level":"error","ts":"2021-01-08T08:22:31.456Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"mycluster01-mynamespace01","namespace":"flux","error":"kustomization path not found: stat /tmp/mycluster01-mynamespace01296703260/mycluster01/mynamespace01: no such file or directory"}
{"level":"info","ts":"2021-01-08T08:22:32.164Z","logger":"controllers.Kustomization","msg":"Reconciliation finished in 696.992415ms, next run in 10m0s","controller":"kustomization","request":"flux/mycluster01-mynamespace01","revision":"master/28ff8ce47754604a8eeb9480a74b4568f78cc181"}
{"level":"error","ts":"2021-01-08T08:22:32.164Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"mycluster01-mynamespace01","namespace":"flux","error":"kustomization path not found: stat /tmp/mycluster01-mynamespace01006052921/mycluster01/mynamespace01: no such file or directory"}
{"level":"info","ts":"2021-01-08T08:22:32.884Z","logger":"controllers.Kustomization","msg":"Reconciliation finished in 427.775447ms, next run in 10m0s","controller":"kustomization","request":"flux/mycluster01-mynamespace01","revision":"master/28ff8ce47754604a8eeb9480a74b4568f78cc181"}
{"level":"error","ts":"2021-01-08T08:22:32.884Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"mycluster01-mynamespace01","namespace":"flux","error":"kustomization path not found: stat /tmp/mycluster01-mynamespace01924921258/mycluster01/mynamespace01: no such file or directory"}
{"level":"info","ts":"2021-01-08T08:22:33.386Z","logger":"controllers.Kustomization","msg":"Reconciliation finished in 221.770087ms, next run in 10m0s","controller":"kustomization","request":"flux/mycluster01-mynamespace01","revision":"master/28ff8ce47754604a8eeb9480a74b4568f78cc181"}
{"level":"error","ts":"2021-01-08T08:22:33.386Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"mycluster01-mynamespace01","namespace":"flux","error":"kustomization path not found: stat /tmp/mycluster01-mynamespace01608322305/mycluster01/mynamespace01: no such file or directory"}
{"level":"info","ts":"2021-01-08T08:22:34.085Z","logger":"controllers.Kustomization","msg":"Reconciliation finished in 200.448415ms, next run in 10m0s","controller":"kustomization","request":"flux/mycluster01-mynamespace01","revision":"master/28ff8ce47754604a8eeb9480a74b4568f78cc181"}
{"level":"error","ts":"2021-01-08T08:22:34.085Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"mycluster01-mynamespace01","namespace":"flux","error":"kustomization path not found: stat /tmp/mycluster01-mynamespace01093831276/mycluster01/mynamespace01: no such file or directory"}
{"level":"info","ts":"2021-01-08T08:22:34.603Z","logger":"controllers.Kustomization","msg":"Reconciliation finished in 216.664845ms, next run in 10m0s","controller":"kustomization","request":"flux/mycluster01-mynamespace01","revision":"master/28ff8ce47754604a8eeb9480a74b4568f78cc181"}
{"level":"error","ts":"2021-01-08T08:22:34.603Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"mycluster01-mynamespace01","namespace":"flux","error":"kustomization path not found: stat /tmp/mycluster01-mynamespace01327836379/mycluster01/mynamespace01: no such file or directory"}
{"level":"info","ts":"2021-01-08T08:22:35.671Z","logger":"controllers.Kustomization","msg":"Reconciliation finished in 585.87802ms, next run in 10m0s","controller":"kustomization","request":"flux/mycluster01-mynamespace01","revision":"master/28ff8ce47754604a8eeb9480a74b4568f78cc181"}
{"level":"error","ts":"2021-01-08T08:22:35.671Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"mycluster01-mynamespace01","namespace":"flux","error":"kustomization path not found: stat /tmp/mycluster01-mynamespace01296367742/mycluster01/mynamespace01: no such file or directory"}
{"level":"info","ts":"2021-01-08T08:22:36.579Z","logger":"controllers.Kustomization","msg":"Reconciliation finished in 267.677557ms, next run in 10m0s","controller":"kustomization","request":"flux/mycluster01-mynamespace01","revision":"master/28ff8ce47754604a8eeb9480a74b4568f78cc181"}
{"level":"error","ts":"2021-01-08T08:22:36.579Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"mycluster01-mynamespace01","namespace":"flux","error":"kustomization path not found: stat /tmp/mycluster01-mynamespace01010473413/mycluster01/mynamespace01: no such file or directory"}
...
{"level":"info","ts":"2021-01-08T08:22:36.893Z","logger":"controllers.Kustomization","msg":"garbage collection completed: Kustomization/flux/mycluster01-mynamespace01 marked for deletion\n","kustomization":"flux/mycluster01-flux"}
{"level":"info","ts":"2021-01-08T08:22:36.941Z","logger":"controllers.Kustomization","msg":"Reconciliation finished in 16.388887738s, next run in 10m0s","controller":"kustomization","request":"flux/mycluster01-flux","revision":"master/28ff8ce47754604a8eeb9480a74b4568f78cc181"}
{"level":"info","ts":"2021-01-08T08:22:36.980Z","logger":"controllers.Kustomization","msg":"garbage collection completed: LimitRange/mynamespace01/mynamespace01-limit-range deleted\nServiceAccount/mynamespace01/default deleted\nServiceAccount/mynamespace01/jenkins deleted\nRoleBinding/mynamespace01/mynamespace01-admin deleted\nNamespace/mynamespace01 deleted\n","kustomization":"flux/mycluster01-mynamespace01"}

Any idea why we get these errors?

We are using Kustomize Controller v0.5.3.

Throttle reconciliation in case of error

If there is an error, it looks like the configured interval is not taken into account.

I have a 10m interval configured and one error in a manifest, and I see a reconciliation every ~10s, which also leads to a Slack alert for each attempt. (The Slack alerts are also useless because of #190; I don't see the actual error in the notification, nor in the log because of #202.)

{"level":"error","ts":"2020-12-14T10:23:53.606Z","logger":"controllers.Kustomization","msg":"unable to update status after reconciliation","controller":"kustomization","request":"flux-system/devops-k8s","error":"Kustomization.kustomize.toolkit.fluxcd.io \"devops-k8s\" is invalid: status.conditions.message: Invalid value: \"\": status.conditions.message in body should be at most 32768 chars long"}
{"level":"error","ts":"2020-12-14T10:23:53.606Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"devops-k8s","namespace":"flux-system","error":"Kustomization.kustomize.toolkit.fluxcd.io \"devops-k8s\" is invalid: status.conditions.message: Invalid value: \"\": status.conditions.message in body should be at most 32768 chars long"}
{"level":"error","ts":"2020-12-14T10:24:21.658Z","logger":"controllers.Kustomization","msg":"unable to update status after reconciliation","controller":"kustomization","request":"flux-system/devops-k8s","error":"Kustomization.kustomize.toolkit.fluxcd.io \"devops-k8s\" is invalid: status.conditions.message: Invalid value: \"\": status.conditions.message in body should be at most 32768 chars long"}
{"level":"error","ts":"2020-12-14T10:24:21.658Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"devops-k8s","namespace":"flux-system","error":"Kustomization.kustomize.toolkit.fluxcd.io \"devops-k8s\" is invalid: status.conditions.message: Invalid value: \"\": status.conditions.message in body should be at most 32768 chars long"}
{"level":"error","ts":"2020-12-14T10:24:48.822Z","logger":"controllers.Kustomization","msg":"unable to update status after reconciliation","controller":"kustomization","request":"flux-system/devops-k8s","error":"Kustomization.kustomize.toolkit.fluxcd.io \"devops-k8s\" is invalid: status.conditions.message: Invalid value: \"\": status.conditions.message in body should be at most 32768 chars long"}
{"level":"error","ts":"2020-12-14T10:24:48.822Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"devops-k8s","namespace":"flux-system","error":"Kustomization.kustomize.toolkit.fluxcd.io \"devops-k8s\" is invalid: status.conditions.message: Invalid value: \"\": status.conditions.message in body should be at most 32768 chars long"}
{"level":"error","ts":"2020-12-14T10:25:16.280Z","logger":"controllers.Kustomization","msg":"unable to update status after reconciliation","controller":"kustomization","request":"flux-system/devops-k8s","error":"Kustomization.kustomize.toolkit.fluxcd.io \"devops-k8s\" is invalid: status.conditions.message: Invalid value: \"\": status.conditions.message in body should be at most 32768 chars long"}
{"level":"error","ts":"2020-12-14T10:25:16.280Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"devops-k8s","namespace":"flux-system","error":"Kustomization.kustomize.toolkit.fluxcd.io \"devops-k8s\" is invalid: status.conditions.message: Invalid value: \"\": status.conditions.message in body should be at most 32768 chars long"}
{"level":"error","ts":"2020-12-14T10:25:41.754Z","logger":"controllers.Kustomization","msg":"unable to update status after reconciliation","controller":"kustomization","request":"flux-system/devops-k8s","error":"Kustomization.kustomize.toolkit.fluxcd.io \"devops-k8s\" is invalid: status.conditions.message: Invalid value: \"\": status.conditions.message in body should be at most 32768 chars long"}
{"level":"error","ts":"2020-12-14T10:25:41.754Z","logger":"controller","msg":"Reconciler error","reconcilerGroup":"kustomize.toolkit.fluxcd.io","reconcilerKind":"Kustomization","controller":"kustomization","name":"devops-k8s","namespace":"flux-system","error":"Kustomization.kustomize.toolkit.fluxcd.io \"devops-k8s\" is invalid: status.conditions.message: Invalid value: \"\": status.conditions.message in body should be at most 32768 chars long"}

Health check event is emitted in every reconciliation cycle

Hi,

With the merge of #219, the "health check passed" event/notification is emitted in every reconciliation cycle, for all Kustomizations.

ready := readiness != nil && readiness.Status == metav1.ConditionTrue

ready will always be false since readiness.Status will be Unknown at the stage of checkHealth.

.
.
Status:
  Conditions:
    Last Transition Time:   2021-01-13T08:38:10Z
    Message:                reconciliation in progress
    Reason:                 Progressing
    Status:                 Unknown
    Type:                   Ready
.
.

Status is updated later in the reconciliation cycle.

return kustomizev1.KustomizationReady(

Am I missing something?

Example of kind: Kustomization used:

---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 15m
  path: ./mgmt/namespaces/my-namespace/my-app
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  healthChecks:
    - apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      name: my-app
      namespace: my-namespace
  timeout: 5m
  validation: client

HelmRelease, Kustomization DependsOn

Currently a HelmRelease is only allowed to depend on other HelmReleases, and a Kustomization is only allowed to depend on other Kustomizations.

Would it be a better solution to remove this type restriction?

Don't send reconcile notification when no object has changed

When committing to a source repository in a directory not covered by any Kustomization, an event and the corresponding notification are still sent. This gets rather spammy if you only watch part of a repository containing Kubernetes config and a lot of commits are made to the rest of the repo.

Incorrect state in health check event

When deploying a Kustomization with a health check on a Deployment that has an incorrect image tag, the generated event is expected to be an error. The image can't be pulled, which means that the Deployment fails immediately and never reaches a healthy state. The correct error event is received if it is the initial deployment, or if the previous commit also caused the health check to fail.

If a Deployment is healthy but a new commit then changes the tag to something that can't be fetched, the health check will transition from healthy in one commit to unhealthy in the next. The first event will have the wrong severity but the correct reason. After the next reconcile loop runs for the Kustomization, the health check is evaluated again and the correct severity is sent. I have only observed this issue with the git commit status notifiers, and it may only affect those notifiers, as they do not ignore update events sent by the kustomize controller.

Below are the status messages sent for a commit to GitHub:

[
  {
    "url": "",
    "avatar_url": "",
    "id": 0,
    "node_id": "",
    "state": "failure",
    "description": "health check failed",
    "target_url": null,
    "context": "kustomization/apps",
    "created_at": "2020-12-04T11:18:19Z",
    "updated_at": "2020-12-04T11:18:19Z",
    "creator": {}
  },
  {
    "url": "",
    "avatar_url": "",
    "id": 0,
    "node_id": "",
    "state": "success",
    "description": "health check failed",
    "target_url": null,
    "context": "kustomization/apps",
    "created_at": "2020-12-04T11:17:18Z",
    "updated_at": "2020-12-04T11:17:18Z",
    "creator": {}
  }
]

Thank you @chlunde for initially reporting this bug and helping out with digging into it!

Malformed YAMLs are silently removed

Malformed YAMLs are ignored when the controller generates a kustomization.yaml. This leads to objects being pruned from the cluster when an existing object has been modified in Git and its YAML is invalid. We should validate that the YAML files are in fact Kubernetes objects.
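
A minimal sketch of the kind of validation being suggested, assuming k8s.io/apimachinery and sigs.k8s.io/yaml; this is an illustrative helper (a real implementation would also split multi-document files first), not the controller's current code:

package generator

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/yaml"
)

// validateKubernetesObject returns an error if the YAML document cannot be decoded,
// or if it lacks the apiVersion/kind fields every Kubernetes object must have.
// Files failing this check should surface an error instead of being silently dropped.
func validateKubernetesObject(doc []byte) error {
	var obj unstructured.Unstructured
	if err := yaml.Unmarshal(doc, &obj.Object); err != nil {
		return fmt.Errorf("invalid YAML: %w", err)
	}
	if obj.GetAPIVersion() == "" || obj.GetKind() == "" {
		return fmt.Errorf("not a Kubernetes object: missing apiVersion or kind")
	}
	return nil
}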

Directory called Kustomization incorrectly implies kustomization

Hi there! 👋

It seems that I have hit a corner case in Kustomization detection.

Summary

Current behavior: kustomization-controller fails if it encounters a directory called Kustomization in the repo.
Expected behavior:

  1. reconciliation not failing
  2. flux not refusing to descend into Kustomization/'s parent directory

I believe that this happens because the condition of this if should only fire for non-directories.

Reproduction

Suppose that I have source-controller and kustomization-controller running in my cluster and the following manifests applied:

apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: cluster-contents
  namespace: something
spec:
  url: https://github.com/some-org/some-repo.git
  interval: 30s
  secretRef:
    name: some-https-basic-auth-secret
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: cluster-contents
  namespace: something
spec:
  interval: 30s
  path: "./"
  prune: true
  sourceRef:
    kind: GitRepository
    name: cluster-contents

The repo contains only plain Kubernetes manifests (i.e. no Kustomizations, just plain k8s resources). That repo follows a naming convention for resource filenames: /<metadata.namespace>/<kind>/<metadata.name>.yaml

This works flawlessly until I add a resource of Kustomization kind (the use case is that I want to manage my flux2 Kustomizations from this repo). Per my naming convention, it lands in a file called along the lines of /michals-namespace/Kustomization/michals-kustomization.yaml. Once a directory called Kustomization gets pushed to the repository, reconciliation of the Kustomization resource cluster-contents begins to fail:

Status:
  Conditions:
    Last Transition Time:  2020-12-23T21:04:46Z
    Message:               kustomize build failed: accumulating resources: 2 errors occurred:
                           * accumulateFile error: "accumulating resources from './gitops': read /tmp/cluster-contents240645327/gitops: is a directory"
                           * accumulateDirector error: "couldn't make target for path '/tmp/cluster-contents240645327/gitops': unable to find one of 'kustomization.yaml', 'kustomization.yml' or 'Kustomization' in directory '/tmp/cluster-contents240645327/gitops'"


    Reason:                 BuildFailed
    Status:                 False
    Type:                   Ready
  Last Applied Revision:    (redacted)
  Last Attempted Revision:  (redacted)
  Observed Generation:      1
  Snapshot:
    Checksum:  (redacted)
    Entries:
      Kinds:
        /v1, Kind=Namespace:                                          Namespace
        apiextensions.k8s.io/v1, Kind=CustomResourceDefinition:       CustomResourceDefinition
        apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition:  CustomResourceDefinition
        rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding:        ClusterRoleBinding
        rbac.authorization.k8s.io/v1beta1, Kind=ClusterRole:          ClusterRole
        rbac.authorization.k8s.io/v1beta1, Kind=ClusterRoleBinding:   ClusterRoleBinding
      Namespace:                                                      
      Kinds:
        /v1, Kind=Service:         Service
        /v1, Kind=ServiceAccount:  ServiceAccount
        apps/v1, Kind=Deployment:  Deployment
      Namespace:                   (redacted)
      Kinds:
        /v1, Kind=Service:                               Service
        apps/v1, Kind=Deployment:                        Deployment
        networking.k8s.io/v1, Kind=NetworkPolicy:        NetworkPolicy
        rbac.authorization.k8s.io/v1, Kind=Role:         Role
        rbac.authorization.k8s.io/v1, Kind=RoleBinding:  RoleBinding
      Namespace:                                         flux-system
      Kinds:
        apps/v1, Kind=Deployment:  Deployment
      Namespace:                   (redacted)
Events:
  Type    Reason  Age   From                  Message
  ----    ------  ----  ----                  -------
  Normal  error   49m   kustomize-controller  kustomize build failed: accumulating resources: 2 errors occurred:
          * accumulateFile error: "accumulating resources from './michals-namespace': read /tmp/cluster-contents051013649/michals-namespace: is a directory"
          * accumulateDirector error: "couldn't make target for path '/tmp/cluster-contents051013649/michals-namespace': unable to find one of 'kustomization.yaml', 'kustomization.yml' or 'Kustomization' in directory '/tmp/cluster-contents051013649/michals-namespace'"

Proposed fix

This line in kustomization_generator.go

					if fs.Exists(filepath.Join(path, kfilename)) {

should become something equivalent to

					if kpath := filepath.Join(path, kfilename); fs.Exists(kpath) && !fs.IsDir(kpath) {

If I get a green light from maintainers I'm happy to send a PR.

Extend kustomize controller health assessment to support HelmReleases

If the kustomize controller health assessment were extended to support HelmReleases, the kustomize-controller could be used to deploy HelmRelease custom resources and detect the execution outcome based on the HelmRelease status. This would enable users to define Kustomization custom resources linked by the dependsOn field and sequence the deployment of HelmReleases with other object creations.
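
For illustration, a sketch of the kind of setup this would enable, assuming HelmRelease health checks were supported; the resource names and paths are hypothetical:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: infra-releases
  namespace: flux-system
spec:
  interval: 10m
  path: ./infra/releases
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  healthChecks:
    # proposed: gate readiness on the HelmRelease status reported by the helm-controller
    - apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      name: ingress-nginx
      namespace: ingress
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  path: ./apps
  prune: true
  dependsOn:
    - name: infra-releases
  sourceRef:
    kind: GitRepository
    name: flux-system

Here the apps Kustomization would only be applied once infra-releases reports Ready, which in turn would only happen once the ingress-nginx HelmRelease is reported healthy.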
