cscetbon / casskop

This Kubernetes operator automates Cassandra operations such as deploying rack-aware clusters, scaling up and down, configuring C* and its JVM, upgrading the JVM and C*, backups/restores, and many more...

Home Page: https://cscetbon.github.io/casskop/

License: Apache License 2.0


casskop's Introduction


CassKop - Cassandra Kubernetes operator

Overview

CassKop, the Cassandra Kubernetes operator, makes it easy to run Apache Cassandra on Kubernetes. Apache Cassandra is a popular, free and open-source, distributed, wide-column NoSQL database management system. The operator lets you easily create and manage rack- and data-center-aware Cassandra clusters.

CassKop is based on CoreOS operator-sdk tools and APIs.

CassKop creates, configures, and manages Cassandra clusters atop Kubernetes. It is namespace-scoped by default, which means that:

  • CassKop can manage several Cassandra clusters within a single Kubernetes namespace.
  • You need one CassKop instance per namespace: managing Cassandra clusters spread across X namespaces requires X instances of CassKop.

This improves isolation and security between namespaces and reduces the workload of each operator instance.
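For illustration, namespace scoping in an operator like this is usually wired through a WATCH_NAMESPACE environment variable injected by the Deployment (the operator Deployment shown further down this page does exactly that via the Downward API) and passed to the controller-runtime manager. A minimal sketch, assuming a pre-v0.15 controller-runtime where the manager Options still exposes a Namespace field; this is not CassKop's actual code:

package main

import (
    "os"

    ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
    // The Deployment injects WATCH_NAMESPACE from metadata.namespace (Downward API).
    watchNamespace := os.Getenv("WATCH_NAMESPACE")

    // Restrict the manager's cache and watches to that single namespace;
    // an empty value would mean cluster scope instead.
    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
        Namespace: watchNamespace,
    })
    if err != nil {
        os.Exit(1)
    }

    _ = mgr // controllers would be registered here before calling mgr.Start(...)
}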

Installation

For details, see the installation instructions.

Documentation

The documentation of the Casskop operator project is available at the Casskop Documentation Page.

Cassandra operator

The CassKop image is automatically built and published to GitHub Packages.

CassKop uses the standard Cassandra image (tested with versions 3.11 and 4.0).

Operator SDK

CassKop is built using the Operator SDK.

Build pipelines

We use GitHub Actions as our CI tool to build and test the operator.

Build image

To speed up build phases, we have created a custom build image used by the CI pipeline:

https://github.com/cscetbon/casskop/actions/workflows/ci-image.yml

You can find more information in the developer section.

Contributing

See CONTRIBUTING for details on submitting patches and the contribution workflow.

For developers

The Operator SDK is part of the Operator Framework provided by Red Hat and CoreOS. Its goal is to provide high-level abstractions that simplify creating Kubernetes operators.

The quick start guide walks through the process of building the Cassandra operator using the SDK CLI, setting up the RBAC, deploying the operator and creating a Cassandra cluster.

You can find this in the Developer section.

Contacts

You can contact the team on our Slack at https://casskop.slack.com (access requests go through that mailing list).

License

CassKop is released under the Apache 2.0 license. See the LICENSE file for details.

casskop's People

Contributors

cscetbon, fdehay, jsanda, orange-cscetbon, akamyshnikova, erdrix, dependabot[bot], peres-richard, mackwong, ahmedjami, pchmieli, aignatov, kri5, jal06, ibumarskov, snyk-bot, rocket357, srteam2020, toffer, yuriheupa, gamer22026, dharmjit, hkroger, armingerten, keepitsimplestupid, ajoskowski, przysiadzesztanga

Stargazers

Johann Gnaucke, NatalKaplia, ilyas ahsan, Joseph Hermis, Charles Dunda, Tomasz Gajger, Paradoxe Ng, Lucas Nickel, headless, GaoZizhong

Watchers

James Cloos, Kostas Georgiou

casskop's Issues

No matches for kind "MultiCasskop" in version "multicasskops.db.orange.com/v2"

Bug Report

What did you do?
helm install multi-casskop oci://ghcr.io/cscetbon/multi-casskop-helm

What did you expect to see?
multi-casskop up and running

What did you see instead? Under which circumstances?
CrashLoopBackOff with the error:
creating Cassandra Multi Cluster controller: setting up MultiCasskop watch in Cluster dc1 Cluster: no matches for kind "MultiCasskop" in version "multicasskops.db.orange.com/v2"

Environment

  • casskop version:

2.1.17

  • Kubernetes version information:

v1.27.3

k api-versions | grep orange

db.orange.com/v1
db.orange.com/v1alpha1
db.orange.com/v2

k api-resources | grep orange

cassandrabackups                               db.orange.com/v2                       true         CassandraBackup
cassandraclusters                              db.orange.com/v2                       true         CassandraCluster
cassandrarestores                              db.orange.com/v2                       true         CassandraRestore
multicasskops                                  db.orange.com/v2                       true         MultiCasskop

k get crd multicasskops.db.orange.com -o yaml

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  creationTimestamp: "2023-08-02T13:48:08Z"
  generation: 1
  name: multicasskops.db.orange.com
  resourceVersion: "14831"
  uid: 9f43d154-3e16-47c7-91c7-d1bba77968c1
spec:
  conversion:
    strategy: None
  group: db.orange.com
  names:
    kind: MultiCasskop
    listKind: MultiCasskopList
    plural: multicasskops
    singular: multicasskop
  scope: Namespaced
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        properties:
          apiVersion:
            type: string
          kind:
            type: string
          metadata:
            type: object
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true
          status:
            type: object
            x-kubernetes-preserve-unknown-fields: true
        required:
        - metadata
        - spec
        type: object
    served: true
    storage: false
  - name: v2
    schema:
      openAPIV3Schema:
        properties:
          apiVersion:
            type: string
          kind:
            type: string
          metadata:
            type: object
          spec:
            type: object
            x-kubernetes-preserve-unknown-fields: true
          status:
            type: object
            x-kubernetes-preserve-unknown-fields: true
        required:
        - metadata
        - spec
        type: object
    served: true
    storage: true
status:
  acceptedNames:
    kind: MultiCasskop
    listKind: MultiCasskopList
    plural: multicasskops
    singular: multicasskop
  conditions:
  - lastTransitionTime: "2023-08-02T13:48:08Z"
    message: no conflicts found
    reason: NoConflicts
    status: "True"
    type: NamesAccepted
  - lastTransitionTime: "2023-08-02T13:48:08Z"
    message: the initial names have been accepted
    reason: InitialNamesAccepted
    status: "True"
    type: Established
  storedVersions:
  - v2
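For reference, the CRD above serves MultiCasskop under group db.orange.com and version v2, whereas the error message renders the version as "multicasskops.db.orange.com/v2", i.e. the plural resource name shows up where the group should be. A small sketch of the two GroupVersionKind values, only to illustrate why the RESTMapper answers "no matches for kind"; this is an observation about the log output, not a confirmed root cause:

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
    // GVK that matches the CRD shown above (group db.orange.com, served version v2).
    correct := schema.GroupVersionKind{Group: "db.orange.com", Version: "v2", Kind: "MultiCasskop"}

    // A GVK whose group was built from the plural resource name renders exactly
    // like the string in the error, and the RESTMapper cannot resolve it.
    suspect := schema.GroupVersionKind{Group: "multicasskops.db.orange.com", Version: "v2", Kind: "MultiCasskop"}

    fmt.Println(correct.GroupVersion().String()) // db.orange.com/v2
    fmt.Println(suspect.GroupVersion().String()) // multicasskops.db.orange.com/v2
}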

Casskop spawning a bunch of logs; Multicasskop 'stop working' log

Bug Report

What did you do?

  • deployed casskop:2.2.2
  • deployed multi-casskop:2.2.2
  • created multi-casskop resource
apiVersion: db.orange.com/v2
kind: MultiCasskop
metadata:
  name: test
  namespace: ice-cassandra
spec:
  base:
    apiVersion: db.orange.com/v2
    kind: CassandraCluster
    metadata:
      creationTimestamp: null
      labels:
        cluster: test
      name: test
      namespace: test-cassandra
    spec:
      autoPilot: true
      backRestSidecar:
        image: ghcr.io/cscetbon/instaclustr-icarus:2.0.4
        imagePullPolicy: IfNotPresent
      bootstrapImage: ghcr.io/cscetbon/casskop-bootstrap:0.1.14
      cassandraImage: cassandra:4.1.4
      configBuilderImage: datastax/cass-config-builder:1.0.8
      # configMapName: cassandra-configmap
      dataCapacity: 10Gi
      imagePullSecret:
        name: ice-cassandra-image-pull-artifactory
      imagepullpolicy: IfNotPresent
      maxPodUnavailable: 1
      noCheckStsAreEqual: true
      nodesPerRacks: 1
      resources:
        limits:
          cpu: "4"
          memory: 12Gi
        requests:
          cpu: "2"
          memory: 4Gi
      runAsUser: 999
    status:
      seedlist:
      - test-dc1-rack1-0.ice.ice-cassandra.svc.cluster.local
  deleteCassandraCluster: true
  override:
    dc1:
      spec:
        topology:
          dc:
          - name: dc1
            rack:
            - name: rack1

What did you expect to see?
The operator should not produce such a volume of logs.

What did you see instead? Under which circumstances?
Lots of strange logs in the casskop operator

  1. Reconciling CassandraCluster / Issue when updating CassandraCluster (hundreds of lines)
    Casskop:
time="2024-03-29T10:37:32Z" level=error msg="Issue when updating CassandraCluster Status" cluster=test err="cassandraclusters.db.orange.com \"test\" not found"
time="2024-03-29T10:37:32Z" level=error msg="Issue when updating CassandraCluster" cluster=test err="Operation cannot be fulfilled on cassandraclusters.db.orange.com \"test\": the object has been modified; please apply your changes to the latest version and try again"
2024-03-29T10:37:32Z    INFO    controller_cassandracluster    Reconciling CassandraCluster    {"Request.Namespace": "test-cassandra", "Request.Name": "test"}
time="2024-03-29T10:37:38Z" level=error msg="Issue when updating CassandraCluster Status" cluster=test err="cassandraclusters.db.orange.com \"test\" not found"
2024-03-29T10:37:38Z    INFO    controller_cassandracluster    Reconciling CassandraCluster    {"Request.Namespace": "test-cassandra", "Request.Name": "test"}

Multi-casskop:
The log "Could not reconcile Request. Stop working." is alarming:

controller.go:222: Operation cannot be fulfilled on multicasskops.db.orange.com "test": StorageError: invalid object, Code: 4, Key: /registry/db.orange.com/multicasskops/ice-cassandra/test, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 8d2aa011-8f37-4094-94a6-f59cd94558fe, UID in object meta: 
controller.go:223: Could not reconcile Request. Stop working.

Environment

  • casskop version: v2.2.2

  • go version:

  • Kubernetes version information: v1.29.2

  • Kubernetes cluster kind: k3s v1.29.2

  • Cassandra version: 4.1.4

Don't exhaust compaction thread pool with cleanup tasks

Feature Request

Is your feature request related to a problem? Please describe.
It looks like CassKop uses the deprecated Cassandra StorageService.forceKeyspaceCleanup(String, String...) MBean call, which internally uses an unlimited number (jobs = 0) of threads from the compaction thread pool. This negatively affects Cassandra's ongoing automatic minor compactions: the node may see a huge increase in SSTables on disk, which degrades query response times, and with potentially all compaction threads running at full speed, it may also max out the available CPU cycles on a node.

Describe the solution you'd like to see
CassKop should use the non-deprecated MBean call StorageService.forceKeyspaceCleanup(int jobs, String, String...), which accepts the number of jobs (threads). Either expose it to the user or, if not specified from the outside, default to 2, because this is also the default used by Cassandra's command-line nodetool cleanup when the -j option is not given explicitly.
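CassKop drives Cassandra through Jolokia (port 8778, as the logs elsewhere on this page show), so the jobs-aware overload would be invoked as a JMX exec request against the StorageService MBean. A minimal sketch of such a call, assuming the standard Jolokia exec payload; the exact signature string for the overloaded operation is an assumption and may need adjusting:

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
)

// forceKeyspaceCleanup asks a node to clean up one keyspace with a bounded
// number of compaction threads, mirroring nodetool cleanup -j <jobs>.
func forceKeyspaceCleanup(host, keyspace string, jobs int) error {
    payload := map[string]interface{}{
        "type":      "exec",
        "mbean":     "org.apache.cassandra.db:type=StorageService",
        "operation": "forceKeyspaceCleanup(int,java.lang.String,[Ljava.lang.String;)",
        "arguments": []interface{}{jobs, keyspace, []string{}},
    }
    body, err := json.Marshal(payload)
    if err != nil {
        return err
    }
    resp, err := http.Post(fmt.Sprintf("http://%s:8778/jolokia/", host), "application/json", bytes.NewReader(body))
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("jolokia returned %s", resp.Status)
    }
    return nil
}

func main() {
    // Default to 2 jobs, matching nodetool cleanup's default when -j is not given.
    if err := forceKeyspaceCleanup("cassandra-demo-dc1-rack1-0", "my_keyspace", 2); err != nil {
        fmt.Println(err)
    }
}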

Describe alternatives you've considered
There is no real alternative for an end user, other than perhaps running cleanup on a single node at a time so that only one node is affected, which is not a realistic option for larger clusters.

Additional context
CassKop (screenshot)

Cassandra (screenshot)

Casskop doesn't work properly on Kubernetes 1.25.x

Bug Report

We have deployed the casskop operator on Kubernetes 1.24.x.
Now we want to use it on Kubernetes 1.25.x.
We are not able to, because casskop uses "beta" APIs, for example for PodDisruptionBudget.

What did you expect to see?
I would like to be able to use casskop on Kubernetes 1.25.x

What did you see instead? Under which circumstances?

2023-04-05T10:53:05.629Z        INFO    controller_cassandracluster     Reconciling CassandraCluster    {"Request.Namespace": "prod-doaks-cassandra", "Request.Name": "cassandra-cluster"}
time="2023-04-05T10:53:06Z" level=error msg="CreateOrUpdatePodDisruptionBudget Error: no matches for kind \"PodDisruptionBudget\" in version \"policy/v1beta1\""
time="2023-04-05T10:53:06Z" level=error msg="ensureCassandraPodDisruptionBudget Error: no matches for kind \"PodDisruptionBudget\" in version \"policy/v1beta1\"" cluster=cassandra-cluster
time="2023-04-05T10:53:06Z" level=info msg="We will request : cassandra-cluster-dc1-rack1-0.cassandra-cluster to catch hostIdMap" cluster=cassandra-cluster err="<nil>"

Environment

  • casskop version: 2.1.0-release

  • Kubernetes version information: Azure AKS 1.25.4

Possible Solution
Replace the "beta" APIs with stable ones.
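For illustration, building the PodDisruptionBudget against the stable policy/v1 API (GA since Kubernetes 1.21 and the only version served from 1.25 on) could look like the sketch below; names and labels are placeholders, not CassKop's actual ones:

package main

import (
    "fmt"

    policyv1 "k8s.io/api/policy/v1" // stable API replacing policy/v1beta1, which is gone in 1.25
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func newPDB(name, namespace string, maxPodUnavailable int) *policyv1.PodDisruptionBudget {
    mu := intstr.FromInt(maxPodUnavailable)
    return &policyv1.PodDisruptionBudget{
        ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
        Spec: policyv1.PodDisruptionBudgetSpec{
            MaxUnavailable: &mu,
            Selector: &metav1.LabelSelector{
                // Placeholder selector; the operator would target the cluster's pods here.
                MatchLabels: map[string]string{"app": "cassandracluster", "cassandracluster": name},
            },
        },
    }
}

func main() {
    pdb := newPDB("cassandra-cluster", "prod-doaks-cassandra", 1)
    fmt.Println(pdb.Name, pdb.Spec.MaxUnavailable.IntValue())
}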

Cassandra pod is not pulling backrest-sidecar container image

I am trying to deploy the demo cassandra cluster, but the cassandra-demo-dc1-rack1-0 pod fails to run its backrest-sidecar container.
It cannot pull the image from the Google container registry; the pod events are as follows:

Events:
  Type     Reason     Age              From               Message
  ----     ------     ----             ----               -------
  Normal   Scheduled  17s              default-scheduler  Successfully assigned cassandra/cassandra-demo-dc1-rack1-0 to vm-k8s-master.vmnet.local
  Normal   Pulled     16s              kubelet            Container image "cassandra:4.0" already present on machine
  Normal   Created    16s              kubelet            Created container base-config-builder
  Normal   Started    16s              kubelet            Started container base-config-builder
  Normal   Pulled     15s              kubelet            Container image "datastax/cass-config-builder:1.0.4" already present on machine
  Normal   Created    15s              kubelet            Created container config-builder
  Normal   Started    15s              kubelet            Started container config-builder
  Normal   Created    8s               kubelet            Created container bootstrap
  Normal   Pulled     8s               kubelet            Container image "ghcr.io/cscetbon/casskop-bootstrap:0.1.10" already present on machine
  Normal   Started    8s               kubelet            Started container bootstrap
  Normal   Pulling    7s               kubelet            Pulling image "gcr.io/cassandra-operator/instaclustr-icarus:1.1.0"
  Warning  Failed     7s               kubelet            Failed to pull image "gcr.io/cassandra-operator/instaclustr-icarus:1.1.0": rpc error: code = Unknown desc = Error response from daemon: Head "https://gcr.io/v2/cassandra-operator/instaclustr-icarus/manifests/1.1.0": denied: Project cassandra-operator has been deleted.
  Warning  Failed     7s               kubelet            Error: ErrImagePull
  Normal   Pulled     6s (x2 over 7s)  kubelet            Container image "cassandra:4.0" already present on machine
  Normal   Created    6s (x2 over 7s)  kubelet            Created container cassandra
  Normal   Started    6s (x2 over 7s)  kubelet            Started container cassandra
  Normal   BackOff    5s (x2 over 6s)  kubelet            Back-off pulling image "gcr.io/cassandra-operator/instaclustr-icarus:1.1.0"
  Warning  Failed     5s (x2 over 6s)  kubelet            Error: ImagePullBackOff
  Warning  BackOff    5s               kubelet            Back-off restarting failed container

I also tried to pull the docker image manually, but I got a similar response:

# docker pull gcr.io/cassandra-operator/instaclustr-icarus:1.1.0
Error response from daemon: Head "https://gcr.io/v2/cassandra-operator/instaclustr-icarus/manifests/1.1.0": denied: Project cassandra-operator has been deleted.

To me it looks like the instaclustr-icarus image is not available anymore.
I am not sure whether I missed some configuration or the icarus docker image was removed from the repository.

Only first statefulset created when spinning up new cluster

Type of question

General Help

Question

What did you do?
Fresh install (via Helm 3) of casskop 2.1.13 in GKE (v1.22.15-gke.100). I see the following logged repeatedly:

time="2022-12-20T20:22:14Z" level=info msg="Error Waiting for sts change" cluster=sre-cassandra statefulset=sre-cassandra-srecassandra100g-uscentral1a
1.6715677348775537e+09  INFO    controller_cassandracluster     Reconciling CassandraCluster    {"Request.Namespace": "development", "Request.Name": "sre-cassandra"}
time="2022-12-20T20:22:14Z" level=info msg="We will request : sre-cassandra-srecassandra100g-uscentral1a-0.sre-cassandra to catch hostIdMap" cluster=sre-cassandra err="<nil>"
time="2022-12-20T20:22:14Z" level=info msg="The Operator Waits 20 seconds for the action to start correctly" cluster=sre-cassandra rack=srecassandra100g-uscentral1a
time="2022-12-20T20:22:16Z" level=info msg="Waiting for new version of statefulset" cluster=sre-cassandra statefulset=sre-cassandra-srecassandra100g-uscentral1a
time="2022-12-20T20:22:17Z" level=info msg="Waiting for new version of statefulset" cluster=sre-cassandra statefulset=sre-cassandra-srecassandra100g-uscentral1a
time="2022-12-20T20:22:18Z" level=info msg="Waiting for new version of statefulset" cluster=sre-cassandra statefulset=sre-cassandra-srecassandra100g-uscentral1a
time="2022-12-20T20:22:19Z" level=info msg="Waiting for new version of statefulset" cluster=sre-cassandra statefulset=sre-cassandra-srecassandra100g-uscentral1a
time="2022-12-20T20:22:20Z" level=info msg="Waiting for new version of statefulset" cluster=sre-cassandra statefulset=sre-cassandra-srecassandra100g-uscentral1a
time="2022-12-20T20:22:20Z" level=info msg="Waiting for new version of statefulset" cluster=sre-cassandra statefulset=sre-cassandra-srecassandra100g-uscentral1a
time="2022-12-20T20:22:20Z" level=info msg="Error Waiting for sts change" cluster=sre-cassandra statefulset=sre-cassandra-srecassandra100g-uscentral1a
time="2022-12-20T20:22:20Z" level=error msg="Issue when updating CassandraCluster" cluster=sre-cassandra err="Operation cannot be fulfilled on cassandraclusters.db.orange.com \"sre-cassandra\": the object has been modified; please apply your changes to the latest version and try again"
1.6715677400486872e+09  INFO    controller_cassandracluster     Reconciling CassandraCluster    {"Request.Namespace": "development", "Request.Name": "sre-cassandra"}
time="2022-12-20T20:22:20Z" level=info msg="We will request : sre-cassandra-srecassandra100g-uscentral1a-0.sre-cassandra to catch hostIdMap" cluster=sre-cassandra err="<nil>"
time="2022-12-20T20:22:20Z" level=info msg="The Operator Waits 20 seconds for the action to start correctly" cluster=sre-cassandra rack=srecassandra100g-uscentral1a

This results in only the first statefulset being created, as the operator hangs indefinitely waiting for the sts change.

What did you expect to see?
I expected to see the cluster spin up completely with all "racks/statefulsets" coming online.

What did you see instead? Under which circumstances?
The operator hangs waiting for an unknown statefulset update to take place, causing only one rack to come online.

Environment

  • casskop version:

2.1.13

  • Kubernetes version information:

kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.12", GitCommit:"696a9fdd2a58340e61e0d815c5769d266fca0802", GitTreeState:"clean", BuildDate:"2022-04-13T19:07:00Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.15-gke.100", GitCommit:"4262ac74f84d8f8ec8f692ea2080483e932554f9", GitTreeState:"clean", BuildDate:"2022-09-22T09:24:03Z", GoVersion:"go1.16.15b7", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes cluster kind:

GKE

  • Cassandra version:

3.11.14

In GKE cassandra operator is missing permissions to list Nodes

Bug Report

What did you do?
I've tried to install the Cassandra operator in GKE.

What did you expect to see?
It should just work across all clouds.

What did you see instead? Under which circumstances?
It broke with the following log:

2022-03-31T04:50:08.967Z	ERROR	leader	Failed to get Node	{"Node.Name": "<redacted>-system-f2e9a7e0-56u7", "error": "nodes \"<redacted>-system-f2e9a7e0-56u7\" is forbidden: User \"system:serviceaccount:<redacted>:cassandra-operator\" cannot get resource \"nodes\" in API group \"\" at the cluster scope"}
github.com/operator-framework/operator-lib/leader.isNotReadyNode
	/casskop/vendor/github.com/operator-framework/operator-lib/leader/leader.go:277
github.com/operator-framework/operator-lib/leader.Become
	/casskop/vendor/github.com/operator-framework/operator-lib/leader/leader.go:182
main.main
	/casskop/main.go:145
runtime.main
	/usr/local/go/src/runtime/proc.go:255

The issue seems to be isolated to GKE; on Azure it runs perfectly as is.
In any case, IMHO, if the operator needs to list nodes and no related permissions are granted for it, it's definitely a bug here, not in the cloud (and plain luck that it works everywhere else).

Environment

  • casskop version:

2.1.0-release

  • Kubernetes version information:

Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.9-gke.1001", GitCommit:"35dafe6010950b2aa1b3733e912f5828b58e8a02", GitTreeState:"clean", BuildDate:"2022-02-18T05:02:26Z", GoVersion:"go1.16.12b7", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes cluster kind: GKE

  • Cassandra version:

2.1.0-release

Possible Solution

Add a ClusterRole and ClusterRoleBinding (I will create a PR for it shortly).
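A sketch of what that fix could look like using the rbac/v1 types: a ClusterRole allowed to get/list nodes (the operator-lib leader election shown in the stack trace inspects node readiness), bound to the operator's ServiceAccount. Object names and the namespace are placeholders, and the actual PR may differ:

package main

import (
    "fmt"

    rbacv1 "k8s.io/api/rbac/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nodeReaderRBAC(namespace string) (*rbacv1.ClusterRole, *rbacv1.ClusterRoleBinding) {
    role := &rbacv1.ClusterRole{
        ObjectMeta: metav1.ObjectMeta{Name: "cassandra-operator-node-reader"},
        Rules: []rbacv1.PolicyRule{{
            APIGroups: []string{""}, // core API group
            Resources: []string{"nodes"},
            Verbs:     []string{"get", "list"},
        }},
    }
    binding := &rbacv1.ClusterRoleBinding{
        ObjectMeta: metav1.ObjectMeta{Name: "cassandra-operator-node-reader"},
        RoleRef: rbacv1.RoleRef{
            APIGroup: "rbac.authorization.k8s.io",
            Kind:     "ClusterRole",
            Name:     role.Name,
        },
        Subjects: []rbacv1.Subject{{
            Kind:      "ServiceAccount",
            Name:      "cassandra-operator",
            Namespace: namespace,
        }},
    }
    return role, binding
}

func main() {
    role, binding := nodeReaderRBAC("default")
    fmt.Println(role.Name, binding.Name)
}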

DNS entries not created on fresh 2.1.14 install

Type of question

General Help

Question

What did you do?
Fresh install of 2.1.14 on GKE (v1.22.15-gke.100), after removing the CRDs from the 2.1.13 install described in issue #83 prior to helm installing 2.1.14. The following is logged repeatedly, although all statefulsets are brought up as expected (note that DNS entries are created as expected under 2.1.13, though there are different issues there per the linked issue):

time="2022-12-20T20:43:22Z" level=info msg="Initializing StatefulSet: Replicas count is not okay" ReadyReplicas=1 RequestedReplicas=2 cluster=sre-cassandra rack=srecassandra100g-uscentral1a
time="2022-12-20T20:43:22Z" level=info msg="Cluster has a disruption, waiting before applying any potential changes to statefulset" cluster=sre-cassandra dc-rack=srecassandra100g-uscentral1a
time="2022-12-20T20:43:22Z" level=info msg="Waiting Rack to be running before continuing, we break ReconcileRack after updated statefulset" cluster=sre-cassandra dc-rack=srecassandra100g-uscentral1a
1.6715690074502008e+09  INFO    controller_cassandracluster     Reconciling CassandraCluster    {"Request.Namespace": "development", "Request.Name": "sre-cassandra"}
time="2022-12-20T20:43:27Z" level=info msg="We will request : sre-cassandra-srecassandra100g-uscentral1a-0.sre-cassandra to catch hostIdMap" cluster=sre-cassandra err="<nil>"
time="2022-12-20T20:43:27Z" level=error msg="Failed to call sre-cassandra-srecassandra100g-uscentral1a-0.sre-cassandra to get hostIdMap" cluster=sre-cassandra err="cannot get host id map: HTTP Request Failed: Post \"http://sre-cassandra-srecassandra100g-uscentral1a-0.sre-cassandra:8778/jolokia/\": dial tcp: lookup sre-cassandra-srecassandra100g-uscentral1a-0.sre-cassandra on 240.100.24.10:53: no such host"
time="2022-12-20T20:43:27Z" level=error msg="CheckPodsState Error: cannot get host id map: HTTP Request Failed: Post \"http://sre-cassandra-srecassandra100g-uscentral1a-0.sre-cassandra:8778/jolokia/\": dial tcp: lookup sre-cassandra-srecassandra100g-uscentral1a-0.sre-cassandra on 240.100.24.10:53: no such host" cluster=sre-cassandra
time="2022-12-20T20:43:27Z" level=info msg="Initializing StatefulSet: Replicas count is not okay" ReadyReplicas=1 RequestedReplicas=2 cluster=sre-cassandra rack=srecassandra100g-uscentral1a
time="2022-12-20T20:43:27Z" level=info msg="Cluster has a disruption, waiting before applying any potential changes to statefulset" cluster=sre-cassandra dc-rack=srecassandra100g-uscentral1a
time="2022-12-20T20:43:27Z" level=info msg="Waiting Rack to be running before continuing, we break ReconcileRack after updated statefulset" cluster=sre-cassandra dc-rack=srecassandra100g-uscentral1a

What did you expect to see?
Expectation: DNS entries are created so the operator can manage the cluster. Oddly enough all of the nodes seem to think they're seed nodes:

$ for pod in $(kubectl get pods -n development -l app=cassandracluster -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}'); do echo $pod; kubectl logs $pod -n development -c bootstrap | grep seed; echo; done
sre-cassandra-srecassandra100g-uscentral1a-0
Check Configured seeds by bootstrap
  - seeds: "sre-cassandra-srecassandra100g-uscentral1a-0.sre-cassandra.development.svc.cluster.local"

sre-cassandra-srecassandra100g-uscentral1a-1
Check Configured seeds by bootstrap
  - seeds: "sre-cassandra-srecassandra100g-uscentral1a-1.sre-cassandra.development.svc.cluster.local"

sre-cassandra-srecassandra100g-uscentral1b-0
Check Configured seeds by bootstrap
  - seeds: "sre-cassandra-srecassandra100g-uscentral1b-0.sre-cassandra.development.svc.cluster.local"

sre-cassandra-srecassandra100g-uscentral1b-1
Check Configured seeds by bootstrap
  - seeds: "sre-cassandra-srecassandra100g-uscentral1b-1.sre-cassandra.development.svc.cluster.local"

sre-cassandra-srecassandra100g-uscentral1f-0
Check Configured seeds by bootstrap
  - seeds: "sre-cassandra-srecassandra100g-uscentral1f-0.sre-cassandra.development.svc.cluster.local"

sre-cassandra-srecassandra100g-uscentral1f-1
Check Configured seeds by bootstrap
  - seeds: "sre-cassandra-srecassandra100g-uscentral1f-1.sre-cassandra.development.svc.cluster.local"

What did you see instead? Under which circumstances?
I would expect the DNS entries to be created (the issue does not present itself in 2.1.13) so the nodes can join the cluster and casskop can manage the cluster.

Environment

  • casskop version:

2.1.14

  • Kubernetes version information:

kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.12", GitCommit:"696a9fdd2a58340e61e0d815c5769d266fca0802", GitTreeState:"clean", BuildDate:"2022-04-13T19:07:00Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.15-gke.100", GitCommit:"4262ac74f84d8f8ec8f692ea2080483e932554f9", GitTreeState:"clean", BuildDate:"2022-09-22T09:24:03Z", GoVersion:"go1.16.15b7", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes cluster kind:
    GKE (v1.22.15-gke.100)

  • Cassandra version:

3.11.14

Cassandra operator doesn't properly work if launched in cluster scope

Bug Report

What did you do?
If the operator is running with cluster scope it can't properly resolve pod hostnames in other namespaces.

What did you expect to see?
Pod hostnames should contain the namespace (e.g. cassandra-<rack>.<cluster-name>.<namespace> instead of cassandra-<rack>.<cluster-name>).

What did you see instead? Under which circumstances?
Pods are running but it seems the operator couldn't fetch information about the clusters/racks:

2022-04-11T19:43:59.156Z	INFO	controller_cassandracluster	Reconciling CassandraCluster	{"Request.Namespace": "therapy", "Request.Name": "cassandra"}
[reconcile.go:774::github.com/Orange-OpenSource/casskop/controllers/cassandracluster.(*CassandraClusterReconciler).ListCassandraClusterPods()] Apr 11 19:43:59.157 [D] [cluster:cassandra] [dc-rack:dc06-sandbox] List available pods
[reconcile.go:726::github.com/Orange-OpenSource/casskop/controllers/cassandracluster.(*CassandraClusterReconciler).CheckPodsState()] Apr 11 19:43:59.157 [D] [cluster:cassandra] [err:<nil>] Get first available pod
[reconcile.go:736::github.com/Orange-OpenSource/casskop/controllers/cassandracluster.(*CassandraClusterReconciler).CheckPodsState()] Apr 11 19:43:59.158 [I] [cluster:cassandra] [err:<nil>] We will request : cassandra-dc06-sandbox-0.cassandra to catch hostIdMap
[node_operations.go:60::github.com/Orange-OpenSource/casskop/controllers/cassandracluster.NewJolokiaClient()] Apr 11 19:43:59.158 [D] [host:cassandra-dc06-sandbox-0.cassandra] [namespace:therapy] [port:8778] [secretRef:{}] Creating Jolokia connection
[reconcile.go:746::github.com/Orange-OpenSource/casskop/controllers/cassandracluster.(*CassandraClusterReconciler).CheckPodsState()] Apr 11 19:43:59.262 [E] [cluster:cassandra] [err:Cannot get host id map: HTTP Request Failed: Post "http://cassandra-dc06-sandbox-0.cassandra:8778/jolokia/": dial tcp: lookup cassandra-dc06-sandbox-0.cassandra on 10.11.0.10:53: no such host] Failed to call cassandra-dc06-sandbox-0.cassandra to get hostIdMap
[cassandracluster_controller.go:122::github.com/Orange-OpenSource/casskop/controllers/cassandracluster.(*CassandraClusterReconciler).Reconcile()] Apr 11 19:43:59.262 [E] [cluster:cassandra] CheckPodsState Error: Cannot get host id map: HTTP Request Failed: Post "http://cassandra-dc06-sandbox-0.cassandra:8778/jolokia/": dial tcp: lookup cassandra-dc06-sandbox-0.cassandra on 10.11.0.10:53: no such host
[reconcile.go:774::github.com/Orange-OpenSource/casskop/controllers/cassandracluster.(*CassandraClusterReconciler).ListCassandraClusterPods()] Apr 11 19:43:59.262 [D] [cluster:cassandra] [dc-rack:dc06-sandbox] List available pods
[node_operations.go:60::github.com/Orange-OpenSource/casskop/controllers/cassandracluster.NewJolokiaClient()] Apr 11 19:43:59.262 [D] [host:cassandra-dc06-sandbox-0.cassandra] [namespace:therapy] [port:8778] [secretRef:{}] Creating Jolokia connection
[reconcile.go:490::github.com/Orange-OpenSource/casskop/controllers/cassandracluster.(*CassandraClusterReconciler).ReconcileRack()] Apr 11 19:43:59.275 [E] [cluster:cassandra] [dc-rack:dc06-sandbox] [err:Cannot check if there are joining nodes: HTTP Request Failed: Post "http://cassandra-dc06-sandbox-0.cassandra:8778/jolokia/": dial tcp: lookup cassandra-dc06-sandbox-0.cassandra on 10.11.0.10:53: no such host] Executing pod operation failed
[reconcile.go:510::github.com/Orange-OpenSource/casskop/controllers/cassandracluster.(*CassandraClusterReconciler).ReconcileRack()] Apr 11 19:43:59.275 [W] [LastActionName:Initializing] [LastActionStatus:Done] [Phase:Running] [cluster:cassandra] [dc-rack:dc06-sandbox] Should Not see this message ;) Waiting Rack to be running before continuing, we loop on Next Rack, maybe we don't want that

Environment

  • casskop version: 2.1.0

  • go version: go1.17.8

  • Kubernetes version information: 1.22.7

  • Kubernetes cluster kind: k3s

  • Cassandra version: 4.0.0

Possible Solution
Here in:

casskop/pkg/k8s/util.go

Lines 210 to 212 in a66f86e

func PodHostname(pod v1.Pod) string {
    return fmt.Sprintf("%s.%s", pod.Spec.Hostname, pod.Spec.Subdomain)
}

pod.Spec.Subdomain does not include the namespace suffix.
Add the ability to handle cluster scope by appending the namespace suffix to the pod hostname when the pod is not running in the operator's namespace.
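A hypothetical namespace-aware variant of that helper, written as suggested above; this is not the actual CassKop code, and the function name is made up for illustration:

package k8s

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
)

// PodHostnameWithNamespace appends the pod's namespace so that a cluster-scoped
// operator can resolve pods living outside its own namespace.
func PodHostnameWithNamespace(pod v1.Pod) string {
    return fmt.Sprintf("%s.%s.%s", pod.Spec.Hostname, pod.Spec.Subdomain, pod.Namespace)
}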

Additional context
We're trying to launch multiple Cassandra clusters, each in its own namespace, but want to avoid the overhead of running an operator in each namespace; moreover, those operators might conflict with each other.

Upgrade of casskop from 2.1.0 to 2.1.16 does not work

Bug Report

Cannot upgrade the casskop operator from version 2.1.0 to 2.1.16.

What did you do?
I had an initial cluster with a cassandracluster custom resource managed by the casskop operator in version 2.1.0, using cassandra image 3.11.14, and it was working correctly.

After upgrading the casskop operator to version 2.1.16, I encountered a problem.

What did you expect to see?
I expect the upgrade to work correctly without problems.

What did you see instead? Under which circumstances?

  • cassandra-operator.deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "9"
    meta.helm.sh/release-name: cassandra
    meta.helm.sh/release-namespace: prod-doaks-cassandra
  labels:
    app: cassandra-operator
    app.kubernetes.io/managed-by: Helm
    chart: cassandra-operator-1.0.0-20230515-065810
    heritage: Helm
    operator: cassandra
    release: cassandra
  name: cassandra-cassandra-operator
  namespace: prod-doaks-cassandra
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      name: cassandra-operator
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        dynatrace.com/inject: "false"
      labels:
        app: cassandra-operator
        name: cassandra-operator
        operator: cassandra
        release: cassandra
    spec:
      affinity:
        nodeAffinity:
          ...
      containers:
      - command:
        - casskop
        env:
        - name: WATCH_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: OPERATOR_NAME
          value: cassandra-operator
        image: <registry.url>/cscetbon/casskop:2.1.16
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 4
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: cassandra-operator
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /readyz
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 4
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 50m
            memory: 64Mi
          requests:
            cpu: 50m
            memory: 64Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        runAsUser: 1000
      serviceAccount: cassandra-operator
      serviceAccountName: cassandra-operator
      terminationGracePeriodSeconds: 30
      tolerations:
      - ...
  • cassandra-cluster.cassandracluster.yaml
apiVersion: db.orange.com/v2
kind: CassandraCluster
metadata:
  annotations:
    meta.helm.sh/release-name: cassandra
    meta.helm.sh/release-namespace: prod-doaks-cassandra
  labels:
    app.kubernetes.io/instance: cassandra
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: Cassandra
    cluster: k8s.kaas
    version: 3.11.10
  name: cassandra-cluster
  namespace: prod-doaks-cassandra
spec:
  autoUpdateSeedList: true
  backRestSidecar:
    image: ghcr.io/cscetbon/instaclustr-icarus:1.1.3
  bootstrapImage: ghcr.io/cscetbon/casskop-bootstrap:0.1.11
  cassandraImage: cassandra:3.11.14
  config:
    cassandra-yaml:
      ....
    logback-xml:
      ....
  configBuilderImage: datastax/cass-config-builder:1.0.4
  configMapName: cassandra-configmap.cm
  dataCapacity: 100Gi
  dataStorageClass: managed-premium
  fsGroup: 1
  hardAntiAffinity: true
  imageJolokiaSecret: {}
  imagePullSecret: {}
  imagepullpolicy: IfNotPresent
  livenessHealthCheckPeriod: 10
  livenessHealthCheckTimeout: 20
  livenessInitialDelaySeconds: 120
  maxPodUnavailable: 3
  nodesPerRacks: 1
  pod:
    tolerations:
    - ....
  readOnlyRootFilesystem: true
  readinessHealthCheckPeriod: 10
  readinessHealthCheckTimeout: 10
  readinessInitialDelaySeconds: 60
  resources:
    limits:
      cpu: "1"
      memory: 4000Mi
    requests:
      cpu: "1"
      memory: 4000Mi
  runAsUser: 999
  serverType: cassandra
  serviceAccountName: cassandra-cluster-node
  sidecarConfigs:
  - ...
  topology:
    dc:
    - name: dc1
      nodesPerRacks: 1
      rack:
      - labels:
          topology.kubernetes.io/zone: region-1
        name: rack1
      - labels:
          topology.kubernetes.io/zone: region-2
        name: rack2
      - labels:
          topology.kubernetes.io/zone: region-3
        name: rack3
      resources: {}
status:
  cassandraNodeStatus:
    cassandra-cluster-dc1-rack1-0:
      hostId: 318e1850-8a6f-4a4b-aa3b-XXXXXXXXXXXX
      nodeIp: X.X.X.X
    cassandra-cluster-dc1-rack2-0:
      hostId: 5ed600e7-91f2-4a3b-a1c4-XXXXXXXXXXXX
      nodeIp: X.X.X.X
    cassandra-cluster-dc1-rack3-0:
      hostId: e7de69c8-e601-41f8-a5b7-XXXXXXXXXXXX
      nodeIp: X.X.X.X
  cassandraRackStatus:
    dc1-rack1:
      cassandraLastAction:
        name: UpdateStatefulSet
        startTime: "2023-05-16T04:49:15Z"
        status: Ongoing
      phase: Running
      podLastOperation: {}
    dc1-rack2:
      cassandraLastAction:
        name: UpdateDockerImage
        status: ToDo
      phase: Running
      podLastOperation: {}
    dc1-rack3:
      cassandraLastAction:
        endTime: "2023-05-15T08:34:33Z"
        name: Initializing
        status: Done
      phase: Running
      podLastOperation: {}
  lastClusterAction: UpdateDockerImage
  lastClusterActionStatus: ToDo
  phase: Pending
  seedlist:
  - cassandra-cluster-dc1-rack1-0.cassandra-cluster.prod-doaks-cassandra
  - cassandra-cluster-dc1-rack2-0.cassandra-cluster.prod-doaks-cassandra
  - cassandra-cluster-dc1-rack3-0.cassandra-cluster.prod-doaks-cassandra
  • cassandra-operator.log
time="2023-05-16T04:48:23Z" level=error msg="Issue when updating CassandraCluster" cluster=cassandra-cluster err="Operation cannot be fulfilled on cassandraclusters.db.orange.com \"cassandra-cluster\": the object has been modified; please apply your changes to the latest version and try again"
2023-05-16T04:48:23Z	INFO	controller_cassandracluster	Reconciling CassandraCluster	{"Request.Namespace": "prod-doaks-cassandra", "Request.Name": "cassandra-cluster"}
time="2023-05-16T04:48:23Z" level=info msg="We will request : cassandra-cluster-dc1-rack1-0.cassandra-cluster to catch hostIdMap" cluster=cassandra-cluster err="<nil>"
time="2023-05-16T04:48:23Z" level=info msg="The Operator Waits 20 seconds for the action to start correctly" cluster=cassandra-cluster rack=dc1-rack1
time="2023-05-16T04:48:25Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:26Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:27Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:28Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:29Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:29Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:29Z" level=info msg="Error Waiting for sts change" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
2023-05-16T04:48:29Z	INFO	controller_cassandracluster	Reconciling CassandraCluster	{"Request.Namespace": "prod-doaks-cassandra", "Request.Name": "cassandra-cluster"}
time="2023-05-16T04:48:29Z" level=info msg="We will request : cassandra-cluster-dc1-rack1-0.cassandra-cluster to catch hostIdMap" cluster=cassandra-cluster err="<nil>"
time="2023-05-16T04:48:29Z" level=info msg="[cassandra-cluster][dc1-rack1]: Update UpdateStatefulSet is Done"
time="2023-05-16T04:48:31Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:32Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:33Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:34Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:35Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:35Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:35Z" level=info msg="Error Waiting for sts change" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
2023-05-16T04:48:35Z	INFO	controller_cassandracluster	Reconciling CassandraCluster	{"Request.Namespace": "prod-doaks-cassandra", "Request.Name": "cassandra-cluster"}
time="2023-05-16T04:48:35Z" level=info msg="We will request : cassandra-cluster-dc1-rack1-0.cassandra-cluster to catch hostIdMap" cluster=cassandra-cluster err="<nil>"
time="2023-05-16T04:48:35Z" level=info msg="The Operator Waits 20 seconds for the action to start correctly" cluster=cassandra-cluster rack=dc1-rack1
time="2023-05-16T04:48:37Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:38Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:39Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:40Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:41Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:41Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:41Z" level=info msg="Error Waiting for sts change" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
2023-05-16T04:48:41Z	INFO	controller_cassandracluster	Reconciling CassandraCluster	{"Request.Namespace": "prod-doaks-cassandra", "Request.Name": "cassandra-cluster"}
time="2023-05-16T04:48:41Z" level=info msg="We will request : cassandra-cluster-dc1-rack1-0.cassandra-cluster to catch hostIdMap" cluster=cassandra-cluster err="<nil>"
time="2023-05-16T04:48:41Z" level=info msg="The Operator Waits 20 seconds for the action to start correctly" cluster=cassandra-cluster rack=dc1-rack1
time="2023-05-16T04:48:43Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:44Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:45Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:46Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:47Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:47Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:47Z" level=info msg="Error Waiting for sts change" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:47Z" level=error msg="Issue when updating CassandraCluster" cluster=cassandra-cluster err="Operation cannot be fulfilled on cassandraclusters.db.orange.com \"cassandra-cluster\": the object has been modified; please apply your changes to the latest version and try again"
2023-05-16T04:48:47Z	INFO	controller_cassandracluster	Reconciling CassandraCluster	{"Request.Namespace": "prod-doaks-cassandra", "Request.Name": "cassandra-cluster"}
time="2023-05-16T04:48:47Z" level=info msg="We will request : cassandra-cluster-dc1-rack1-0.cassandra-cluster to catch hostIdMap" cluster=cassandra-cluster err="<nil>"
time="2023-05-16T04:48:47Z" level=info msg="The Operator Waits 20 seconds for the action to start correctly" cluster=cassandra-cluster rack=dc1-rack1
time="2023-05-16T04:48:49Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
time="2023-05-16T04:48:50Z" level=info msg="Waiting for new version of statefulset" cluster=cassandra-cluster statefulset=cassandra-cluster-dc1-rack1
  • cassandra.log
WARN  [cassandra-exporter-harvester-defer-0] 2023-05-16 04:47:35,734 Harvester.java:188 - Failed to register collector for MBean org.apache.cassandra.metrics:type=Connection,scope=10.244.8.24,name=LargeMessagePendingTasks
java.lang.IllegalStateException: Object NamedObject{name=org.apache.cassandra.metrics:type=Connection,scope=10.244.8.24,name=LargeMessagePendingTasks, object=org.apache.cassandra.metrics.CassandraMetricsRegistry$JmxGauge@5fe6da57} and NamedObject{name=org.apache.cassandra.metrics:type=Connection,scope=10.244.8.24,name=LargeMessagePendingTasks, object=org.apache.cassandra.metrics.CassandraMetricsRegistry$JmxGauge@3db033b} cannot be merged, yet their labels are the same.
	at com.zegelin.cassandra.exporter.collector.dynamic.FunctionalMetricFamilyCollector.lambda$merge$0(FunctionalMetricFamilyCollector.java:73)
	at java.util.HashMap.merge(HashMap.java:1255)
	at com.zegelin.cassandra.exporter.collector.dynamic.FunctionalMetricFamilyCollector.merge(FunctionalMetricFamilyCollector.java:73)
	at java.util.HashMap.merge(HashMap.java:1255)
	at java.util.Collections$SynchronizedMap.merge(Collections.java:2689)
	at com.zegelin.cassandra.exporter.Harvester.lambda$registerMBean$0(Harvester.java:184)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
WARN  [cassandra-exporter-harvester-defer-0] 2023-05-16 04:47:35,735 Harvester.java:188 - Failed to register collector for MBean org.apache.cassandra.metrics:type=Connection,scope=10.244.8.24,name=LargeMessageCompletedTasks
java.lang.IllegalStateException: Object NamedObject{name=org.apache.cassandra.metrics:type=Connection,scope=10.244.8.24,name=LargeMessageCompletedTasks, object=org.apache.cassandra.metrics.CassandraMetricsRegistry$JmxGauge@6e911e2c} and NamedObject{name=org.apache.cassandra.metrics:type=Connection,scope=10.244.8.24,name=LargeMessageCompletedTasks, object=org.apache.cassandra.metrics.CassandraMetricsRegistry$JmxGauge@5ae16c89} cannot be merged, yet their labels are the same.
	at com.zegelin.cassandra.exporter.collector.dynamic.FunctionalMetricFamilyCollector.lambda$merge$0(FunctionalMetricFamilyCollector.java:73)
	at java.util.HashMap.merge(HashMap.java:1255)
	at com.zegelin.cassandra.exporter.collector.dynamic.FunctionalMetricFamilyCollector.merge(FunctionalMetricFamilyCollector.java:73)
	at java.util.HashMap.merge(HashMap.java:1255)
	at java.util.Collections$SynchronizedMap.merge(Collections.java:2689)
	at com.zegelin.cassandra.exporter.Harvester.lambda$registerMBean$0(Harvester.java:184)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

All cassandra pods are running and ready from the Kubernetes point of view, but only the first statefulset was updated and the operator is stuck on it:

❯ kubectl -n prod-doaks-cassandra get pods
NAME                                           READY   STATUS    RESTARTS   AGE
cassandra-cassandra-operator-6d7cdd849-rvrbc   1/1     Running   0          19h
cassandra-cluster-dc1-rack1-0                  4/4     Running   0          15m
cassandra-cluster-dc1-rack2-0                  4/4     Running   0          19h
cassandra-cluster-dc1-rack3-0                  4/4     Running   0          19h

Events for first statefulset do not show any errors:

Events:
  Type    Reason            Age                From                    Message
  ----    ------            ----               ----                    -------
  Normal  SuccessfulDelete  17m (x2 over 25m)  statefulset-controller  delete Pod cassandra-cluster-dc1-rack1-0 in StatefulSet cassandra-cluster-dc1-rack1 successful
  Normal  SuccessfulCreate  16m (x3 over 17h)  statefulset-controller  create Pod cassandra-cluster-dc1-rack1-0 in StatefulSet cassandra-cluster-dc1-rack1 successful

In the cassandracluster custom resource, the status field shows that the rack is not ready yet:

status:
  cassandraRackStatus:
    dc1-rack1:
      cassandraLastAction:
        name: UpdateStatefulSet
        startTime: "2023-05-16T05:03:54Z"
        status: Ongoing
      phase: Running
      podLastOperation: {}
    dc1-rack2:
      cassandraLastAction:
        name: UpdateDockerImage
        status: ToDo
      phase: Running
      podLastOperation: {}
    dc1-rack3:
      cassandraLastAction:
        endTime: "2023-05-15T08:34:33Z"
        name: Initializing
        status: Done
      phase: Running
      podLastOperation: {}

HINT: It looks like you already had a similar issue: #83

Environment

  • casskop version: v2.1.16

  • Kubernetes version information: AKS v1.24.10

  • Cassandra version:
    Current version of cassandra: 3.11.14.
    I also tried with cassandra 3.11.10 and the result was the same.

Backup/Restore issue

Bug Report

What did you do?

Trying to create a backup:

---
apiVersion: db.orange.com/v2
kind: CassandraBackup
metadata:
  name: testname
  namespace: cassandra
  labels:
    app: cassandra
spec:
  cassandraCluster: cluster_name
  datacenter: dc1
  secret: s3-access-secret
  snapshotTag: testtag
  storageLocation: s3://my-bucket

What did you expect to see?
backup is created

What did you see instead? Under which circumstances?
Tested with AWS S3, Minio (oracle protocol), and even file:/// as storageLocation.
The first backup is created fine. All subsequent backups fail with errors like:
"failureCause": [ { "message": "Unable to upload some files successfully: data/dev/service_registry_2-b235ffd0612d11eebd1a892e753c63d8/schema.cql,data/dev/job-b043a5b0612d11eebd1a892e753c63d8/schema.cql,data/dev/person_channel_presence_v2-b68842f0612d11eebd1a892e753c63d8/schema.cql,data/dev/external_server_instance-b81d88f0612d11eebd1a892e753c63d8/schema.cql,data/dev/role-a233a420612d11eebd1a892e753c63d8/1-4225181546/me-1-big-CompressionInfo.db,data/dev/role-a233a420612d11eebd1a892e753c63d8/1-4225181546/me-1-big-Data.db,data/dev/role-a233a420612d11eebd1a892e753c63d8/1-4225181546/me-1-big-Digest.crc32,data/dev/role-a233a420612d11eebd1a892e753c63d8/1-4225181546/me-1-big-Filter.db"

The list of files that couldn't be uploaded may differ.
Sidecar logs (just part of the output, because there are a lot of errors like this). It also happens even if the storageLocation is file://:
k logs -f ice-dc1-rack1-0 -c backrest-sidecar

16:12:04.896 ERROR com.instaclustr.esop.impl.retry.Retrier$DefaultRetrier - This operation will be retried: Error occured while trying to get refresh status on ice/dc1/6184e16c-f53a-4a2f-9dbd-947f86546187/data/dev/socket_session-abb4f5d0612d11eebd1a892e753c63d8/1-1984573430/me-1-big-Statistics.db: s metadata, storage class, website redirect location or encryption attributes.
com.instaclustr.esop.impl.retry.Retrier$RetriableException: Error occured while trying to get refresh status on ice/dc1/6184e16c-f53a-4a2f-9dbd-947f86546187/data/dev/socket_session-abb4f5d0612d11eebd1a892e753c63d8/1-1984573430/me-1-big-Statistics.db: s metadata, storage class, website redirect location or encryption attributes.
	at com.instaclustr.esop.s3.BaseS3Backuper$1.call(BaseS3Backuper.java:93)
	at com.instaclustr.esop.s3.BaseS3Backuper$1.call(BaseS3Backuper.java:61)
	at com.instaclustr.esop.impl.retry.Retrier$DefaultRetrier.submit(Retrier.java:40)
	at com.instaclustr.esop.s3.BaseS3Backuper.freshenRemoteObject(BaseS3Backuper.java:61)
	at com.instaclustr.esop.impl.backup.UploadTracker$UploadUnit.lambda$call$0(UploadTracker.java:117)
	at com.instaclustr.esop.impl.retry.Retrier$DefaultRetrier.submit(Retrier.java:40)
	at com.instaclustr.esop.impl.backup.UploadTracker$UploadUnit.call(UploadTracker.java:117)
	at com.instaclustr.esop.impl.backup.UploadTracker$UploadUnit.call(UploadTracker.java:81)
	at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
	at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
	at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: s metadata, storage class, website redirect location or encryption attributes. (Service: Amazon S3; Status Code: 400; Error Code: InvalidRequest; Request ID: 178A56532CFB0CE4; S3 Extended Request ID: 6786d501-d2eb-43f2-a3e1-3694840618b7; Proxy: null)

Environment

  • casskop version: v2.0.3-release

  • icarus version: 1.1.3

  • go version: go1.21.0 darwin/arm64

  • Kubernetes version information: 1.27

  • Kubernetes cluster kind: EKS, self-hosted

  • Cassandra version: Cassandra 3.11.14

Documentation issue ?

Type of question

Documentation issue

Question

I'm curious why the 'Helm installation' documentation everywhere says to add one repository, while in the next step the Helm chart is installed from another repository.

  1. Adding helm repo
  2. Installing release from OCI registry
helm repo add orange-incubator https://orange-kubernetes-charts-incubator.storage.googleapis.com/
helm install casskop oci://ghcr.io/cscetbon/casskop-helm

Thanks.

e2e tests failing

What did you do?
Currently the e2e test job is failing on the backup/restore test in the master branch.

What did you expect to see?
Tests passing

Environment

  • casskop version: master

Possible Solution
Increase the timeout - for now it looks like there is not enough time for the tests to pass.

During replacement of Cassandra node newly created pod couldn't be added to existing cassandra cluster

Bug Report

What did you do?
During replacement of a Cassandra node, the newly created pod couldn't be added to the existing cassandra cluster:

'/bootstrap/libs/jolokia-agent.jar' -> '/extra-lib/jolokia-agent.jar'
'//bootstrap/tools/curl' -> '/opt/bin/curl'
 == We execute bootstrap script run.sh
CASSANDRA_SEEDS=tf-cassandra-config-dc1-rack1-0.tf-cassandra-config.tf,tf-cassandra-config-dc1-rack1-1.tf-cassandra-config.tf,tf-cassandra-config-dc1-rack1-2.tf-cassandra-config.tf

Try to connect to tf-cassandra-config-dc1-rack1-0.tf-cassandra-config.tf
nc: getaddrinfo for host "tf-cassandra-config-dc1-rack1-0.tf-cassandra-config.tf" port 8778: Temporary failure in name resolution

What did you expect to see?
Bootstrap should exit and retry adding the node in case of DNS unavailability.

What did you see instead? Under which circumstances?
Node started as a new cluster.

Environment

  • casskop version: latest

Possible Solution
Check for DNS error in bootstrap.sh
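The real fix would live in bootstrap.sh, but the retry idea could look like the following sketch; the host name, attempt count, and delay are placeholders:

package main

import (
    "fmt"
    "net"
    "os"
    "time"
)

// waitForSeed blocks until the seed host resolves or the retry budget is spent.
func waitForSeed(host string, attempts int, delay time.Duration) error {
    for i := 0; i < attempts; i++ {
        if _, err := net.LookupHost(host); err == nil {
            return nil
        }
        time.Sleep(delay)
    }
    return fmt.Errorf("seed %s still unresolvable after %d attempts", host, attempts)
}

func main() {
    if err := waitForSeed("tf-cassandra-config-dc1-rack1-0.tf-cassandra-config.tf", 30, 10*time.Second); err != nil {
        fmt.Println(err)
        // Exiting non-zero lets Kubernetes restart the pod instead of letting
        // Cassandra bootstrap itself as a brand-new cluster.
        os.Exit(1)
    }
}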

Update kub libraries to 1.27 versions

Update casskop to use updated Kubernetes libraries:

  • k8s libs to v0.27.5
  • controller-runtime to v0.15.0
  • go 1.20

TODO:
multicasskop depends on the multicluster-controller libs, which have not been updated for 3 years and cannot be updated.

Can't deploy multi-casskop helm chart

Bug Report

What did you do?
helm upgrade -i multi-casskop -n cassandra oci://ghcr.io/cscetbon/multi-casskop-helm

What did you expect to see?
Helm release is installed

What did you see instead? Under which circumstances?
Release "multi-casskop" does not exist. Installing it now. W0811 14:27:06.527598 4079 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition Error: failed to install CRD crds/db.orange.com_multicasskops.yaml: CustomResourceDefinition.apiextensions.k8s.io "multicasskops.db.orange.com" is invalid: spec.version: Invalid value: "v2": must match the first version in spec.versions

Environment

  • casskop version: 2.1.9

  • Kubernetes version information: v1.21.4

Allow to configure resources for backrest-sidecar container

Feature Request

Is your feature request related to a problem? Please describe.
We are using our own backup implementation for Cassandra, and the actual resource usage of the backrest-sidecar container is substantially below the requests/limits currently set. We have confirmed this by inspecting the historical data in our monitoring solution.
Therefore, we'd like to lower the requests/limits for the backrest-sidecar so that we can assign those resources to the primary cassandra container, while keeping the Guaranteed QoS class.
The business justification is to optimize node utilization and consequently lower costs by delaying the need to scale out.

Describe the solution you'd like to see
A possibility to set resources for backrest-sidecar via values.yaml

Describe alternatives you've considered
There is currently no alternative, as the resources are hardcoded in the casskop code.
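For illustration, a minimal Go sketch of what a configurable resource block for the sidecar could look like on the operator side. The override parameter, the container name wiring and the default values shown here are assumptions made for the sketch, not casskop's actual API or hardcoded numbers:

package sidecar

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// backrestSidecarContainer builds the sidecar container, applying the
// user-provided resources when set and falling back to defaults otherwise.
// The default values below are placeholders, not the operator's real ones.
func backrestSidecarContainer(image string, override *corev1.ResourceRequirements) corev1.Container {
	resources := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("1"),
			corev1.ResourceMemory: resource.MustParse("1Gi"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("1"),
			corev1.ResourceMemory: resource.MustParse("1Gi"),
		},
	}
	if override != nil {
		resources = *override
	}
	return corev1.Container{
		Name:      "backrest-sidecar",
		Image:     image,
		Resources: resources,
	}
}

As long as every container in the pod keeps requests equal to limits, the Guaranteed QoS class mentioned above is preserved while the sidecar's share can still be shrunk.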

Operator's logs are not correct

Bug Report

What did you do?
Operator does not print logs properly.

What did you see instead? Under which circumstances?
Expected log messages such as the following are not present in the logs:

INFO controller_cassandracluster Reconciling CassandraCluster

Environment

  • casskop version: casskop:2.1.4

  • go version: 1.17

Possible Solution
It seems that the controller-runtime logging setup prevents logrus logs from being recorded properly.
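A minimal sketch of a setup where both loggers are configured explicitly, so that controller-runtime messages and the operator's own logrus statements both reach stdout. This is illustrative only; the exact wiring in casskop may differ:

package main

import (
	"os"

	"github.com/sirupsen/logrus"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	// controller-runtime components log through logr; give them an explicit sink.
	ctrl.SetLogger(zap.New(zap.UseDevMode(false)))

	// logrus is configured separately: ctrl.SetLogger does not redirect it,
	// so it needs its own output, formatter and level.
	logrus.SetOutput(os.Stdout)
	logrus.SetFormatter(&logrus.JSONFormatter{})
	logrus.SetLevel(logrus.InfoLevel)

	logrus.WithField("controller", "cassandracluster").Info("Reconciling CassandraCluster")
}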

CVE in cassandra-bootstrap and casskop images

Cassandra bootstrap image:

  • libc-bin: CVE-2023-4911 (debian 11.7, 2.31-13+deb11u7, sysdig) | https://nvd.nist.gov/vuln/detail/CVE-2023-4911
  • libc6: CVE-2023-4911 (debian 11.7, 2.31-13+deb11u7, sysdig) | https://nvd.nist.gov/vuln/detail/CVE-2023-4911
  • libtinfo6: CVE-2023-29491 (debian 11.7, 6.2+20201114-2+deb11u2, sysdig) | https://nvd.nist.gov/vuln/detail/CVE-2023-29491
  • ncurses-base: CVE-2023-29491 (debian 11.7, 6.2+20201114-2+deb11u2, sysdig) | https://nvd.nist.gov/vuln/detail/CVE-2023-29491

Casskop image:

  • golang.org/x/net: CVE-2023-44487 (debian 11.7, v0.17.0, sysdig), CVE-2023-39325 (debian 11.7, v0.17.0, sysdig)
  • libc6: CVE-2023-4911 (debian 11.7, 2.31-13+deb11u7, sysdig) | https://nvd.nist.gov/vuln/detail/CVE-2023-4911

Required steps:

  • rebuild bootstrap image
  • update golang.org/x/net version

No versioned tags after 2.1.4

Bug Report

What did you do?
Trying to pull ghcr.io/cscetbon/casskop:2.1.9, I realized that 2.1.4 is the last available tag on the repo. However, latest was correctly pushed.

What did you expect to see?
All releases to be pushed as versioned tags to the docker registry.

Configurable Environment Variables

Feature Request

We need the ability to add custom environment variables to each Cassandra container. They could be provided as a list of key: value pairs.

Example solution in values.yaml:

env_vars:
   cassandra:
       VERSION: 3.11
       CUSTOM_FLAG: test_value
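A minimal Go sketch of how such a map could be turned into container env entries on the operator side; the values.yaml layout above and the spec field that would carry it are assumptions, not the current casskop API:

package envvars

import (
	"sort"

	corev1 "k8s.io/api/core/v1"
)

// customEnvVars converts a user-supplied key/value map into a deterministic
// list of env entries to append to the cassandra container definition.
func customEnvVars(userEnv map[string]string) []corev1.EnvVar {
	keys := make([]string, 0, len(userEnv))
	for k := range userEnv {
		keys = append(keys, k)
	}
	// Sort the keys so the generated statefulset stays stable across reconciliations.
	sort.Strings(keys)

	envs := make([]corev1.EnvVar, 0, len(keys))
	for _, k := range keys {
		envs = append(envs, corev1.EnvVar{Name: k, Value: userEnv[k]})
	}
	return envs
}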

Can't create bundle

@AKamyshnikova I tried the new command added in your last PR and it doesn't work at the moment:

$ docker run --rm -ti -v $PWD:/go/casskop ghcr.io/cscetbon/casskop-build:latest bash 
root@61c08df6ae21:/go/casskop# kustomize build config/manifests | operator-sdk generate bundle -q --overwrite --version 2.1.5
/usr/local/bin/operator-sdk: line 1: Not: command not found
Error: unable to find one of 'kustomization.yaml', 'kustomization.yml' or 'Kustomization' in directory '/go/casskop/config/manifests'

it's the command kustomize build config/manifests that is failing

[Question] HEAP SETTINGS

Type of question

Configuring HEAP SETTING for CassandraCluster resource

Question

Could you tell me which method is the correct one for the heap size configuration?

For example, is it possible to configure -Xms/-Xmx with the CassandraCluster / MultiCasskop resource, like log_gc here?
[screenshot: CassandraCluster spec excerpt showing the log_gc setting]

Embedded Prometheus exporter doesn't work with cassandra:4.1.4

Bug Report

What did you do?

  • deployed casskop:2.2.2
  • deployed multi-casskop:2.2.2
  • created multi-casskop resource
apiVersion: db.orange.com/v2
kind: MultiCasskop
metadata:
  name: test
  namespace: ice-cassandra
spec:
  base:
    apiVersion: db.orange.com/v2
    kind: CassandraCluster
    metadata:
      creationTimestamp: null
      labels:
        cluster: test
      name: test
      namespace: test-cassandra
    spec:
      autoPilot: true
      backRestSidecar:
        image: ghcr.io/cscetbon/instaclustr-icarus:2.0.4
        imagePullPolicy: IfNotPresent
      bootstrapImage: ghcr.io/cscetbon/casskop-bootstrap:0.1.14
      cassandraImage: cassandra:4.1.4
      configBuilderImage: datastax/cass-config-builder:1.0.8
      # configMapName: cassandra-configmap
      dataCapacity: 10Gi
      imagePullSecret:
        name: ice-cassandra-image-pull-artifactory
      imagepullpolicy: IfNotPresent
      maxPodUnavailable: 1
      noCheckStsAreEqual: true
      nodesPerRacks: 1
      resources:
        limits:
          cpu: "4"
          memory: 12Gi
        requests:
          cpu: "2"
          memory: 4Gi
      runAsUser: 999
    status:
      seedlist:
      - test-dc1-rack1-0.ice.ice-cassandra.svc.cluster.local
  deleteCassandraCluster: true
  override:
    dc1:
      spec:
        topology:
          dc:
          - name: dc1
            rack:
            - name: rack1

What did you expect to see?
No "[prometheus-netty-pool-0] java.lang.NoSuchMethodError" exception

What did you see instead? Under which circumstances?

cassandra WARN  [prometheus-netty-pool-0] 2024-03-29 11:17:35,064 DefaultChannelPipeline.java:1152 - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
cassandra java.lang.NoSuchMethodError: 'java.net.InetAddress org.apache.cassandra.utils.FBUtilities.getBroadcastAddress()'
cassandra     at com.zegelin.cassandra.exporter.InternalMetadataFactory.localBroadcastAddress(InternalMetadataFactory.java:87)
cassandra     at com.zegelin.cassandra.exporter.Harvester.globalLabels(Harvester.java:280)
cassandra     at com.zegelin.cassandra.exporter.netty.HttpHandler.sendMetrics(HttpHandler.java:289)
cassandra     at com.zegelin.cassandra.exporter.netty.HttpHandler.channelRead0(HttpHandler.java:91)
cassandra     at com.zegelin.cassandra.exporter.netty.HttpHandler.channelRead0(HttpHandler.java:36)
cassandra     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
cassandra     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
cassandra     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
cassandra     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
cassandra     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
cassandra     at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
cassandra     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
cassandra     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
cassandra     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
cassandra     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
cassandra     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
cassandra     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
cassandra     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
cassandra     at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
cassandra     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
cassandra     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
cassandra     at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
cassandra     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
cassandra     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
cassandra     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
cassandra     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
cassandra     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
cassandra     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
cassandra     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
cassandra     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
cassandra     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
cassandra     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
cassandra     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
cassandra     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
cassandra     at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
cassandra     at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
cassandra     at java.base/java.lang.Thread.run(Unknown Source)

Environment

  • casskop version: v2.2.2

  • go version:

  • Kubernetes version information: v1.29.2

  • Kubernetes cluster kind: k3s v1.29.2

  • Cassandra version: 4.1.4

Services are not headless anymore

From version 2.1.13 to 2.1.14, the Cassandra services created are no longer headless.

I don't know if the change was intended... anyway, it is a big breaking change that should go in a major release.

Add ability to change the operator's log level (casskop & multicasskop)

Could you please add the ability to change the operator's logging level?
The current default level is INFO and it's too talkative...

It seems that casskop:2.1.19 does not take the "LOG_LEVEL" variable into account: I still see INFO messages in the log, whereas ice-casskop:v2.0.3, for example, honors the variable and lets me set the logging level to WARN.

Something like:

{{- /* debug.enabled takes precedence so LOG_LEVEL is only rendered once */}}
{{- if .Values.debug.enabled }}
          - name: LOG_LEVEL
            value: Debug
{{- else if .Values.logLevel }}
          - name: LOG_LEVEL
            value: {{ .Values.logLevel }}
{{- end }}
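On the operator side, a minimal sketch (assuming logrus is the logger in use, which may not match casskop's current code) of honoring that variable:

package main

import (
	"os"

	"github.com/sirupsen/logrus"
)

func main() {
	// Default to INFO, but let LOG_LEVEL (e.g. warn, debug) override it.
	level := logrus.InfoLevel
	if raw := os.Getenv("LOG_LEVEL"); raw != "" {
		if parsed, err := logrus.ParseLevel(raw); err == nil {
			level = parsed
		} else {
			logrus.Warnf("ignoring invalid LOG_LEVEL %q: %v", raw, err)
		}
	}
	logrus.SetLevel(level)
	logrus.Debug("debug logging enabled") // emitted only when LOG_LEVEL resolves to debug or trace
}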

Thanks !

Critical CVE in cassandra bootstrap image v0.1.10

Bug Report

A critical CVE was found by the CVE tracker in the cassandra-bootstrap image.

A rebuild using a fresher bitnami/minideb:bullseye base image is required.

How to debug backup solution with amazon s3

Type of question

how to implement a specific feature

Question

What did you do?
cassandra-backup.yml

apiVersion: db.orange.com/v2
kind: CassandraBackup
metadata:
  name: daily-cassandra-backup
  namespace: cassandra
  labels:
    app: cassandra
spec:
  cassandraCluster: cassandra
  datacenter: datacenter1
  storageLocation: s3://backups-dev
  snapshotTag: daily
  secret: cassandra-dev-s3-backup-secret
  schedule: "@daily"

cassandra-dev-s3-backup-secret.yml:

apiVersion: v1
kind: Secret
metadata:
  name: cassandra-dev-s3-backup-secret
  namespace: cassandra
type: Opaque
stringData:
  awsaccesskeyid: <keyid>
  awssecretaccesskey: <secret>
  awsregion: eu-central-1
  awsendpoint: https://s3.eu-central-1.amazonaws.com

What did you expect to see?
I can see the CassandraBackup resource with all its annotations, but I don't know how to verify whether the backup actually works. No file is created in the AWS bucket.

What did you see instead? Under which circumstances?
No data in AWS and no events on the created CassandraBackup resource.

Environment

  • casskop version: 2.1.17

  • Kubernetes version information: Kustomize Version v5.0.1, Server Version v1.25.9+k3s1

  • Kubernetes cluster kind: k3s - 3 masters, 6 workers

  • Cassandra version: 3.11.3

  • replicas: 3
