Comments (11)

gclawes commented on August 26, 2024

Tried 2.0.0, works perfectly! Thanks!

gclawes commented on August 26, 2024

Yep, thanks!

stevehipwell commented on August 26, 2024

@gclawes thanks for opening this issue.

Firstly, as is alluded to in the Helm docs, you shouldn't be using Helm to manage CRDs; they're in the chart by convention and to make automated testing easier, but I'd expect people to update the CRDs manually and then use the --skip-crds flag when upgrading/installing a chart.
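
As a rough sketch of that workflow (the release name, namespace and chart reference below are assumptions, not values documented by this chart):

# apply the CRDs yourself, e.g. from a checkout of the chart's crds/ directory
# (server-side apply avoids the last-applied-configuration size limit that very large CRDs can hit)
$ kubectl apply --server-side -f crds/
# then install/upgrade the chart without letting Helm touch the CRDs
$ helm upgrade --install tigera-operator <repo>/tigera-operator \
    --namespace tigera-operator --create-namespace --skip-crds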

Secondly, I only use the operator myself to run network policy in coordination with the AWS VPC CNI, so I've not hit this issue. To build this chart I've taken the CRDs from the operator project, but as they don't have a reference installation manifest that I can find, I've had to use the chart in the Calico project to try to figure out the context I'm missing. If you can point me in the direction of better docs or manifests I'd really appreciate it; otherwise I'll look at your logs and see if I can figure out the issue.

stevehipwell commented on August 26, 2024

@gclawes I've updated the role in #362 based on some changes that I hadn't seen, but I don't think this will fix your issue. Could you manually patch your role based on the logs to come up with what I need to add to the chart?
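
As a rough sketch, one way to do that is a JSON patch against the ClusterRole; the role name below is an assumption, so check kubectl get clusterrole for the one your release actually created:

# find the ClusterRole created for the operator release
$ kubectl get clusterrole | grep tigera
# append a rule the operator logs report as missing (batch/jobs is just an example)
$ kubectl patch clusterrole tigera-operator --type=json \
    -p='[{"op":"add","path":"/rules/-","value":{"apiGroups":["batch"],"resources":["jobs"],"verbs":["get","list","watch","create","update","delete"]}}]'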

stevehipwell commented on August 26, 2024

@gclawes the last two chart releases support adding custom rules to the ClusterRole to cover this until I can find the correct RBAC documented. I'm going to close this, but feel free to re-open if you think I've missed something.

gclawes commented on August 26, 2024

Hey, sorry for the late reply, I've been busy the last few weeks. I may get some more time to dive deeper into this later this week. If I recall correctly, it may have been related to the fact that there are two sets of CRDs in use by tigera/projectcalico: the ones from projectcalico/calico and the ones from tigera/operator. If I recall from my investigation several weeks ago, they differ slightly.

I believe the former (projectcalico/calico) is where the official helm chart's CRDs are sourced from (but I may be mistaken on that for more recent versions).

stevehipwell commented on August 26, 2024

@gclawes there are indeed two sets of CRDs, the Tigera Operator ones and the Calico ones, but this issue is due to the Calico spec only using a subset of the Tigera Operator CRDs. This Helm chart works fine out of the box if you only install the CRDs required for Calico, and with the last couple of releases it will also work correctly with all the Tigera Operator CRDs if you provide custom rules.

Moving forward I'll clarify my position on using --skip-crds by default in all my charts, and add some specific instructions for installing the Tigera Operator just to manage Calico. I'm also going to try to get the full ruleset required for all the Tigera Operator CRDs, but I'm not sure whether anyone is using this pattern.

gclawes commented on August 26, 2024

@stevehipwell I finally got around to testing this, adding these rules fixed the RBAC issues:

rbac:
  customRules:
  - apiGroups:
      - batch
    resources:
      - jobs
    verbs:
      - create
      - get
      - list
      - update
      - delete
      - watch
  - apiGroups:
      - storage.k8s.io
    resources:
      - storageclasses
    verbs:
      - get
      - list
      - watch
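
For reference, a sketch of how these values can be applied; the release name, namespace and chart reference here are assumptions:

# pass the custom RBAC rules at install/upgrade time (add --skip-crds if you manage the CRDs manually)
$ helm upgrade --install tigera-operator <repo>/tigera-operator \
    --namespace tigera-operator -f values.yaml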

Unfortunately, the operator now crashes with an error about a missing Elasticsearch CRD.

{"level":"error","ts":1641873611.100888,"logger":"controller-runtime.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"Elasticsearch.elasticsearch.k8s.elastic.co","error":"no matches for kind \"Elasticsearch\" in version \"elasticsearch.k8s.elastic.co/v1\"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/source/source.go:117\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:159\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:205\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:691"}
{"level":"info","ts":1641873611.1018019,"logger":"controller-runtime.manager.controller.monitor-controller","msg":"Starting workers","worker count":1}
{"level":"info","ts":1641873611.101933,"logger":"controller-runtime.manager.controller.monitor-controller","msg":"Stopping workers"}
{"level":"error","ts":1641873611.1019604,"logger":"controller-runtime.manager.controller.amazoncloudintegration-controller","msg":"Could not wait for Cache to sync","error":"failed to wait for amazoncloudintegration-controller caches to sync: cache did not sync","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:176\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:205\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:691"}
{"level":"error","ts":1641873611.1026955,"logger":"controller-runtime.manager.controller.clusterconnection-controller","msg":"Could not wait for Cache to sync","error":"failed to wait for clusterconnection-controller caches to sync: cache did not sync","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:176\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:205\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:691"}
{"level":"error","ts":1641873611.1028874,"logger":"controller-runtime.manager.controller.apiserver-controller","msg":"Could not wait for Cache to sync","error":"failed to wait for apiserver-controller caches to sync: cache did not sync","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:176\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:205\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:691"}
{"level":"error","ts":1641873611.103022,"logger":"controller-runtime.manager.controller.compliance-controller","msg":"Could not wait for Cache to sync","error":"failed to wait for compliance-controller caches to sync: cache did not sync","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:176\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:205\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:691"}
{"level":"error","ts":1641873611.1031818,"logger":"controller-runtime.manager.controller.intrusiondetection-controller","msg":"Could not wait for Cache to sync","error":"failed to wait for intrusiondetection-controller caches to sync: cache did not sync","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:176\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:205\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:691"}
{"level":"info","ts":1641873611.104602,"logger":"controller-runtime.manager.controller.logcollector-controller","msg":"Stopping workers"}
{"level":"error","ts":1641873611.1047306,"logger":"controller-runtime.manager.controller.cmanager-controller","msg":"Could not wait for Cache to sync","error":"failed to wait for cmanager-controller caches to sync: cache did not sync","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:176\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:205\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:691"}
{"level":"error","ts":1641873611.1057136,"logger":"controller-runtime.manager.controller.authentication-controller","msg":"Could not wait for Cache to sync","error":"failed to wait for authentication-controller caches to sync: cache did not sync","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:176\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:205\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:691"}
{"level":"error","ts":1641873611.105318,"logger":"controller-runtime.manager","msg":"error received after stop sequence was engaged","error":"failed to wait for amazoncloudintegration-controller caches to sync: cache did not sync","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:547"}
{"level":"error","ts":1641873611.109239,"logger":"controller-runtime.manager","msg":"error received after stop sequence was engaged","error":"failed to wait for clusterconnection-controller caches to sync: cache did not sync","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:547"}
{"level":"error","ts":1641873611.1092997,"logger":"controller-runtime.manager","msg":"error received after stop sequence was engaged","error":"failed to wait for apiserver-controller caches to sync: cache did not sync","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:547"}
{"level":"error","ts":1641873611.1093373,"logger":"controller-runtime.manager","msg":"error received after stop sequence was engaged","error":"failed to wait for compliance-controller caches to sync: cache did not sync","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:547"}
{"level":"error","ts":1641873611.109372,"logger":"controller-runtime.manager","msg":"error received after stop sequence was engaged","error":"failed to wait for intrusiondetection-controller caches to sync: cache did not sync","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:547"}
{"level":"error","ts":1641873611.109406,"logger":"controller-runtime.manager","msg":"error received after stop sequence was engaged","error":"Timeout: failed waiting for *v1.ClusterRoleBinding Informer to sync","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:547"}
{"level":"error","ts":1641873611.109454,"logger":"controller-runtime.manager","msg":"error received after stop sequence was engaged","error":"failed to wait for cmanager-controller caches to sync: cache did not sync","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:547"}
{"level":"error","ts":1641873611.10949,"logger":"controller-runtime.manager","msg":"error received after stop sequence was engaged","error":"failed to wait for authentication-controller caches to sync: cache did not sync","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/internal.go:547"}
{"level":"error","ts":1641873611.1095903,"logger":"setup","msg":"problem running manager","error":"no matches for kind \"Elasticsearch\" in version \"elasticsearch.k8s.elastic.co/v1\"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nmain.main\n\t/go/src/github.com/tigera/operator/main.go:279\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:225"}

I suspect the operator is looking for Elasticsearch based on the presence of the LogStorages and LogCollectors CRDs, given that their descriptions mention Elasticsearch.

$ grep -rI Elasticsearch .
./crds/logstorages.operator.tigera.io.yaml:            must be named "tigera-secure". When created, this installs an Elasticsearch
./crds/logstorages.operator.tigera.io.yaml:                    that Elasticsearch will run on. The contents of DataNodeSelector
./crds/logstorages.operator.tigera.io.yaml:                    will be added to the PodSpec of the Elasticsearch nodes. For the
./crds/logstorages.operator.tigera.io.yaml:                    Elasticsearch cluster.
./crds/logstorages.operator.tigera.io.yaml:                    Elasticsearch cluster nodes, each of type master, data, and ingest.
./crds/logstorages.operator.tigera.io.yaml:                        Count defines the number of nodes in the Elasticsearch
./crds/logstorages.operator.tigera.io.yaml:                        NodeSets defines configuration specific to each Elasticsearch
./crds/logstorages.operator.tigera.io.yaml:                          Elasticsearch Node Set
./crds/logstorages.operator.tigera.io.yaml:                              and Elasticsearch cluster awareness attributes for the
./crds/logstorages.operator.tigera.io.yaml:                              Elasticsearch nodes. The list of SelectionAttributes are
./crds/logstorages.operator.tigera.io.yaml:                              configuration in the running Elasticsearch instance.
./crds/logstorages.operator.tigera.io.yaml:                                "attribute" the Elasticsearch nodes should be aware
./crds/logstorages.operator.tigera.io.yaml:                                the "awareness" attributes in Elasticsearch, while the
./crds/logstorages.operator.tigera.io.yaml:                                Node Affinity for the Pods created for the Elasticsearch
./crds/logstorages.operator.tigera.io.yaml:                        and requirements for the Elasticsearch cluster.
./crds/logstorages.operator.tigera.io.yaml:                    Retention defines how long data is retained in the Elasticsearch
./crds/logstorages.operator.tigera.io.yaml:                    that is used to provision disks to the Tigera Elasticsearch cluster.
./crds/logstorages.operator.tigera.io.yaml:                    ElasticsearchHash represents the current revision and
./crds/logstorages.operator.tigera.io.yaml:                    configuration of the installed Elasticsearch cluster. This is an
./crds/logstorages.operator.tigera.io.yaml:                    when Elasticsearch is modified.
./crds/logcollectors.operator.tigera.io.yaml:            Elasticsearch cluster as well as any additionally configured destinations.
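
As a quick diagnostic sketch to confirm which CRDs are actually installed, and that the ECK Elasticsearch CRD the operator is asking for is absent:

# list the Tigera Operator CRDs present in the cluster
$ kubectl get crds -o name | grep operator.tigera.io
# check for the Elasticsearch CRD named in the error (elasticsearch.k8s.elastic.co group)
$ kubectl get crd elasticsearches.elasticsearch.k8s.elastic.co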

I think the problems center around the CRD set your chart is using: the ones from the tigera/operator repo don't really seem like a finished release, and I suspect switching to the other set (from projectcalico/calico) would resolve this. I believe the places to get them are here:

The calico directory in the 1st link (within the tigera-operator chart) symlinks to the crds directory in the 2nd link (calico chart).

As far as I know, projectcalico builds the public tigera-operator chart (including CRDs) used in their installation instructions from the tigera-operator chart in that repo, and they only use the calico chart to generate the static manifests for kubectl apply -f installs (I have no idea why they don't just release that chart as well).

I think the CRDs in the tigera/operator repository are for some other purpose, likely Tigera Enterprise installs that make assumptions about the presence of Elasticsearch.

stevehipwell commented on August 26, 2024

@gclawes I'm planning to remove the CRDs not required by a basic Calico install from the chart. As Helm shouldn't be used to install CRDs, I've not prioritised this.

I'm waiting on the Calico version of the operator to catch up with the minor version in my chart, at which point I'll update the patch version, remove the CRDs, and update the docs to say it's optimised for Calico. Basically the chart will target a Calico installation, but if you add the extra CRDs and rules (thank you for your work on these) you can use alternative modes.

gclawes commented on August 26, 2024

Cool! Looking forward to it, thanks for the quick reply! Your chart is definitely much higher quality than the upstream tigera one!

stevehipwell commented on August 26, 2024

Thanks @gclawes, I meant to comment here when I released. Can this issue be closed now?
