
Capi2Argo Cluster Operator (CACO) can be deployed on a CAPI Management cluster and dynamically convert Workload cluster credentials into Argo Cluster definitions.

License: Apache License 2.0

Dockerfile 0.47% Makefile 8.37% Go 83.55% Smarty 7.60%
cluster-api capi kubernetes operator argocd argo clusterapi

capi2argo-cluster-operator's Introduction

Capi2Argo Cluster Operator

CI | Go Report | Go Release | Helm Chart Release | codecov

Capi-2-Argo Cluster Operator (CACO) converts ClusterAPI Cluster credentials into ArgoCD Cluster definitions and keeps them synchronized. It aims to act as an integration bridge, solving an automation gap for users who combine these tools to provision Kubernetes Clusters.

What It Does

If you are reading this, you are probably already aware of ClusterAPI and ArgoCD. If not, here are a few words about these projects and what they offer:

  • ClusterAPI provides declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters. In simple words, users can define all aspects of their Kubernetes setup as CRDs, and the CAPI controller, which follows the operator pattern, is responsible for reconciling them and keeping their desired state.

  • ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes. It automates the deployment of the desired application states in the specified target environments. In simple words, given a Git repository and a target Kubernetes cluster, Argo keeps your packages up, running, and always in sync with your source.

So, we have CAPI, which enables us to define clusters as native k8s objects, and ArgoCD, which can take these objects and deploy them. Let's demonstrate how such a pipeline could look:

flow-without-capi2argo

  1. Git holds multiple Kubernetes Clusters definitions as CRDs
  2. Argo watches these resources from Git
  3. Argo deploys definitions on a Management Cluster
  4. CAPI reconciles on these definitions
  5. CAPI provisions these Clusters on Cloud Provider
  6. CAPI returns provisioned Cluster information as k8s Secrets
  7. ❌ Argo is not aware of the remote clusters and cannot authenticate to them to provision additional resources

OK, all good so far. But bare Kubernetes clusters are not very useful on their own: dozens of utilities and add-ons are usually needed to make a cluster handy (e.g. CSI drivers, Ingress controllers, monitoring).

Argo can also take care of deploying these utilities, but credentials are essential to authenticate against the target clusters. Of course, we can solve that with the following three manual steps:

  • Read CAPI credentials
  • Translate them to Argo types
  • Create new Argo credentials

But how can we automate this? The Capi2Argo Cluster Operator was created to take care of the above actions.

CACO implements them in an automated loop that watches for change events on Secret resources; if a Secret meets the conditions to be CAPI-compliant, it converts and deploys it as an Argo-compatible one. What it actually does under the hood is a dead-simple KRM transformation like this:

Before, we have only the CAPI Cluster Secret:

kind: Secret
apiVersion: v1
type: cluster.x-k8s.io/secret
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: CAPICluster
  name: CAPICluster-kubeconfig
data:
  value: << CAPICluster KUBECONFIG base64-encoded >>

After, we also have the Argo Cluster Secret:

kind: Secret
apiVersion: v1
type: Opaque
metadata:
  labels:
    argocd.argoproj.io/secret-type: cluster
    capi-to-argocd/owned: "true" # Capi2Argo Controller Ownership Label
  name: ArgoCluster
  namespace: argocd
stringData:
  name: CAPICluster
  server: CAPIClusterHost
  config: |
    {
      "tlsClientConfig": {
        "caData": "b64-ca-cert",
        "keyData": "b64-token",
        "certData": "b64-cert"
      }
    }

The use case above can be demonstrated by extending the workflow mentioned earlier with the following automated steps:

  1. CACO watches for CAPI cluster secrets
  2. CACO converts them to Argo Clusters
  3. CACO creates them as Argo Clusters
  4. Argo reads these new Clusters
  5. ✔️ Argo provisions resources to CAPI Workload Clusters

flow-with-capi2argo

Take along labels from cluster resources

Capi-2-Argo Cluster Operator is able to take along labels from a Cluster resource and place them on the Secret resource that is created for the cluster. This is especially useful when using labels to instruct ArgoCD which clusters to sync with certain applications.

To enable this feature, add a label with this format to the Cluster resource: take-along-label.capi-to-argocd.<label-key>: "".

The following example

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata: 
  name: ArgoCluster
  namespace: default
  labels: 
    foo: bar
    my.domain.com/env: stage
    take-along-label.capi-to-argocd.foo: ""
    take-along-label.capi-to-argocd.my.domain.com/env: ""
spec:
  # ...

Results in the following Secret resource:

kind: Secret
apiVersion: v1
type: Opaque
metadata:
  name: ArgoCluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
    capi-to-argocd/owned: "true" 
    foo: bar
    my.domain.com/env: stage
    taken-from-cluster-label.capi-to-argocd.foo: ""
    taken-from-cluster-label.capi-to-argocd.my.domain.com/env: ""
stringData:
  # ...
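The label selection above can be sketched in a few lines of Go. This is a minimal illustration of the take-along semantics described in this section, not the operator's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

const (
	// takeAlongPrefix marks which Cluster labels should be copied
	// to the generated Argo Secret.
	takeAlongPrefix = "take-along-label.capi-to-argocd."
	// takenFromPrefix records on the Secret where a label came from.
	takenFromPrefix = "taken-from-cluster-label.capi-to-argocd."
)

// takeAlongLabels returns the labels to add to the Argo Secret: every
// label whose key is flagged with a take-along marker, plus a marker
// label recording that it was copied from the Cluster resource.
func takeAlongLabels(clusterLabels map[string]string) map[string]string {
	out := map[string]string{}
	for key := range clusterLabels {
		if !strings.HasPrefix(key, takeAlongPrefix) {
			continue
		}
		wanted := strings.TrimPrefix(key, takeAlongPrefix)
		if value, exists := clusterLabels[wanted]; exists {
			out[wanted] = value
			out[takenFromPrefix+wanted] = ""
		}
	}
	return out
}

func main() {
	fmt.Println(takeAlongLabels(map[string]string{
		"foo":               "bar",
		"my.domain.com/env": "stage",
		"take-along-label.capi-to-argocd.foo":               "",
		"take-along-label.capi-to-argocd.my.domain.com/env": "",
	}))
}
```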

Use Cases

  1. Keeping your production pipelines DRY, with everything as testable code
  2. Avoiding manual credential-management steps through the UI, cron scripts, and orphaned YAML files
  3. Writing end-to-end infrastructure tests by bundling all the logic
  4. Enabling truly dynamic environments when using ClusterAPI and ArgoCD

Installation

Helm

$ helm repo add capi2argo https://dntosas.github.io/capi2argo-cluster-operator/
$ helm repo update
$ helm upgrade -i capi2argo capi2argo/capi2argo-cluster-operator

Check the additional values configuration in the chart's README file.

Development

Capi2Argo is built upon the powerful Operator SDK.

Gradually, as free time allows, we will try to adopt all the best practices suggested by the community; you can find more here.

  • make all
  • make ci
  • make run
  • make docker-build

Contributing

TODO. In the meantime, feel free to grab any of the unimplemented bullets from the Roadmap section :).

Roadmap

v0.1.0

  • Core Functionality: Convert CAPI to Argo Clusters
  • Unit Tests
  • Integration Tests
  • Helm Chart Deployment
  • FAQ and Docs

v0.2.0

  • Adopt Operator Best Practices
  • Garbage Collection
  • Quickstart Deployment (Kind Cluster)
  • Support for filtering Namespaces
  • Support for multi-arch Docker images (amd64/arm64)

capi2argo-cluster-operator's People

Contributors

cyvcloud, dependabot[bot], dntosas, siredmar


capi2argo-cluster-operator's Issues

Take along cluster resource labels

Hi!

It would be nice if certain labels of the Cluster resource could be taken along to the ArgoCD secret. This can be useful if you've configured ArgoCD to deploy certain things only on clusters with certain labels, e.g. env: stage or env: prod.

This could be configurable via a label in the Cluster resource like take-along-label.capi-to-argocd.<key>=
One would have to read the Cluster resource, get the labels, and put them on the ArgoCD secret according to the take-along label.

Example for the cluster resource:

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata: 
  labels: 
    cloudprovider.clusters.infrastructure.edgefarm.io/type: hetzner
    clusters.infrastructure.edgefarm.io/type: core
    core.edgefarm.io/name: dog
    env: stage
    deploy: group1
    foo: bar
    take-along-label.capi-to-argocd.env: ""
    take-along-label.capi-to-argocd.deploy: ""
  name: dog
  namespace: default
...

And the resulting secret:

kind: Secret
metadata: 
  labels: 
    argocd.argoproj.io/secret-type: cluster
    capi-to-argocd/cluster-namespace: coreclusters
    capi-to-argocd/cluster-secret-name: dog-kubeconfig
    capi-to-argocd/owned: "true"
    env: stage
    deploy: group1
  name: cluster-dog
  namespace: argocd
...

@dntosas what do you think?

Update: changed take-along-label.capi-to-argocd/<key> to take-along-label.capi-to-argocd-<key>, because Kubernetes prevents label keys from containing multiple /. This is the case when taking along a label such as my.domain.com/subdomain: using / as the delimiter would produce take-along-label.capi-to-argocd/my.domain.com/subdomain, which is invalid.

vCluster support?

Thoughts on direct support for syncing secrets from vCluster instances?

I realize that vCluster can be supported via the vcluster provider for CAPI, but that adds another layer and, in particular, the aforementioned provider is currently in need of maintenance.

Support for filtering namespaces?

I see this listed in the roadmap, but is the feature specified anywhere?

I want to request support for some kind of wildcard.

The simple * globs supported by Kyverno matches would be sufficient for my needs:

allowedNamespaces: cluster-*

As would a full regex:

allowedNamespaces: somecluster|someprefix.*
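Neither matching style exists in the operator yet; as a sketch of how both proposals could be supported in Go (function names are hypothetical), simple globs map onto the standard library's path.Match, and full regexes onto an anchored regexp:

```go
package main

import (
	"fmt"
	"path"
	"regexp"
)

// namespaceAllowedGlob checks an allowed-namespaces setting expressed
// as a simple * glob, e.g. "cluster-*".
func namespaceAllowedGlob(pattern, namespace string) bool {
	ok, err := path.Match(pattern, namespace)
	return err == nil && ok
}

// namespaceAllowedRegexp checks the same setting expressed as a full
// regular expression, e.g. "somecluster|someprefix.*". The pattern is
// anchored so "prefix" does not accidentally match "prefix-extra".
func namespaceAllowedRegexp(pattern, namespace string) bool {
	re, err := regexp.Compile("^(?:" + pattern + ")$")
	if err != nil {
		return false
	}
	return re.MatchString(namespace)
}

func main() {
	fmt.Println(namespaceAllowedGlob("cluster-*", "cluster-dev"))                   // true
	fmt.Println(namespaceAllowedRegexp("somecluster|someprefix.*", "someprefix-a")) // true
}
```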

secret for deleted cluster not removed

While testing the creation and destruction of clusters via ArgoCD, including the management of those clusters with the help of this operator, I noticed that when a CAPI cluster is deleted entirely (the whole namespace with the Cluster resource as well as the CAPI secret is gone), the cluster remains configured in ArgoCD and the secret that was generated by CACO is not removed.

Question: does it do an 'argocd cluster add'?

The description sounds like it syncs the credentials and that is all, but when you do an argocd cluster add, it appears to install a service account as well.

If you sync the secrets only does it automatically install a service account later on when it needs it?

Or do you in fact use 'argocd cluster add' behind the scene?

New Feature Request: Add support for external ArgoCD Instance

Currently the ArgoCD secrets are created in the same cluster, in the argocd namespace.
If ArgoCD is running in a different environment, supporting API calls for cluster registration/deletion would provide much more flexibility. Configuration could be done via a simple ConfigMap for the ArgoCD endpoint and a Secret for the credentials.
