
admiraltyio / multicluster-service-account


Import and Automount Remote Kubernetes Service Accounts

Home Page: https://admiralty.io

License: Apache License 2.0

Go 92.66% Dockerfile 0.10% Shell 7.24%

multicluster-service-account's People

Contributors

adrienjt, asaintsever


multicluster-service-account's Issues

deployment 'multicluster-scheduler-agent' not ready

I am installing multicluster-scheduler v0.3.0 following the user guide. During the installation, "kubemcsa bootstrap member-cluster scheduler-cluster" failed:

namespace "multicluster-service-account" already exists in source cluster "cluster1" cluster role "service-account-import-controller-remote" already exists in source cluster "cluster1" service account "cluster2" already exists in namespace "multicluster-service-account" in source cluster "cluster1" cluster role binding "cluster2" already exists in source cluster "cluster1" waiting until service account "cluster2" in namespace "multicluster-service-account" in source cluster "cluster1" has a token... service account import "cluster1" already exists in namespace "multicluster-service-account" in target cluster "cluster2" created secret "cluster1-token-" in namespace "multicluster-service-account" in target cluster "cluster2" waiting until service account import "cluster1" in namespace "multicluster-service-account" in target cluster "cluster2" adopts token... kubemcsa: error: cannot bootstrap: timeout: timed out waiting for the condition

I tried "kubemcsa bootstrap scheduler-cluster member-cluster" and got "annotated service account import controller in target cluster "cluster1"".

After installing the agent, I checked but found no resources. Then I checked the pods and got the following results.

NAMESPACE                              NAME                                                            READY   STATUS    RESTARTS   AGE
multicluster-scheduler-agent           pod-admission-controller-86c8f659d9-dzx5c                       1/1     Running   1          5m19s
multicluster-scheduler                 multicluster-scheduler-6546b48794-j7n9b                         1/1     Running   0          49m
multicluster-service-account-webhook   service-account-import-admission-controller-76f76946d9-s8qww   1/1     Running   1          46m
multicluster-service-account           service-account-import-controller-7b8586f8df-x6xbt              1/1     Running   0          8m25s

And the deployments:

NAMESPACE                              NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
multicluster-scheduler-agent           multicluster-scheduler-agent                  0/1     0            0           8m5s
multicluster-scheduler-agent           pod-admission-controller                      1/1     1            1           8m5s
multicluster-scheduler                 multicluster-scheduler                        1/1     1            1           52m
multicluster-service-account-webhook   service-account-import-admission-controller   1/1     1            1           48m
multicluster-service-account           service-account-import-controller             1/1     1            1           48m

The multicluster-scheduler-agent deployment isn't ready. I re-applied the YAML file, but it remains unchanged.

Also, the service-account-import-admission-controller pod in each member cluster is pending.

Client.Get returns errors.IsNotFound when accessing a resource from "distant" cluster

Hi,

I'm using kubemcsa to create a secret that the multi-casskop operator uses to interact with multiple Kubernetes clusters (you talked to Sebastien in the past). The issue in my local setup is that every Get returns a NotFound error, which is confusing: https://github.com/Orange-OpenSource/casskop/blob/rolling-restart/multi-casskop/pkg/controller/multi-casskop/cmc_utils.go#L31. However, it can create objects without any problem, and no permission error is returned during a Get, since the error is NotFound: https://github.com/Orange-OpenSource/casskop/blob/rolling-restart/multi-casskop/pkg/controller/multi-casskop/cmc_utils.go#L82. In short, our code tries to get the resource but gets a NotFound error, then attempts to create the object and gets an AlreadyExists error 🤷‍♂️. Any idea what's going on?
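
For reference, a minimal sketch of the get-then-create pattern in question, using controller-runtime's client (the Secret type and key are placeholders, not multi-casskop's actual code):

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// getOrCreate sketches the get-then-create pattern described above.
func getOrCreate(ctx context.Context, c client.Client, key client.ObjectKey) error {
    var obj corev1.Secret
    err := c.Get(ctx, key, &obj)
    if apierrors.IsNotFound(err) {
        obj.Namespace, obj.Name = key.Namespace, key.Name
        // In the issue, this Create fails with AlreadyExists, which
        // contradicts the NotFound returned by the Get just above.
        return c.Create(ctx, &obj)
    }
    return err
}

func main() {}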

I'm using the same cluster for both local and remote, just with different namespaces, to be sure objects with the same names don't conflict. Here is the cluster role I use to access resources in the distant context, which, like I said, is simply another namespace:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: inter-ns
rules:
- apiGroups:
  - db.orange.com
  resources:
  - cassandraclusters
  verbs:
  - "*"
- apiGroups:
  - db.orange.com
  resources:
  - cassandraclusters/status
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: inter-ns-cl1
  namespace: cluster2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: inter-ns
subjects:
- kind: ServiceAccount
  name: cassandra-operator
  namespace: cluster1

Use kubeconfig format for mounted (and imported) secrets, to decouple client code

As of now, service account import secrets are copies of remote service account secrets, with an additional "server" field. As such, their format is neither the same as service account secrets, nor the same as kubeconfig files. They are mounted as-is, under /var/run/secrets/admiralty.io/serviceaccountimports/, and the custom pkg/config library is required to create Kubernetes configs from them.

If service account import secrets used the kubeconfig format, client-go itself could create configs from them. Existing Kubernetes clients could use multicluster-service-account with zero code change.

Note that multiple service account imports can be mounted in a single pod. When a single service account import is mounted, we would mount it in the default kubeconfig location by default, and when multiple service account imports are mounted, we would merge them before mounting them as a single kubeconfig file. client-go already has code to specify which context/cluster should be used from a kubeconfig file. [EDIT: We can't modify the content of mounted secrets at admission, so let's just mount them as usual. The user can use the --kubeconfig option or KUBECONFIG environment variable to point to the mounted kubeconfig, or use a path override in code.]
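
To illustrate the benefit, a minimal sketch, assuming a hypothetical mount path for a kubeconfig-format import: stock client-go builds a config directly, with no need for the custom pkg/config library.

package main

import (
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Hypothetical mount path for a kubeconfig-format service account import.
    path := "/var/run/secrets/admiralty.io/serviceaccountimports/cluster2/kubeconfig"

    // With the kubeconfig format, client-go builds a rest.Config directly.
    cfg, err := clientcmd.BuildConfigFromFlags("", path)
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    fmt.Printf("connected to %s\n", cfg.Host)
    _ = clientset
}

Alternatively, as the edit above notes, the user can point the --kubeconfig option or the KUBECONFIG environment variable at the mounted file.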

Can't reproduce the example

Hello,

I've tried to reproduce the example from the README.md, without success.

Environment:

$ kubemcsa --version
0.5.1
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"archive", BuildDate:"2019-08-29T18:43:18Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
$ kind version
v0.5.1

Reproduce:

$ kind create cluster --name my-cluster-cluster1
$ kind create cluster --name my-cluster-cluster2
# Some extra kubeconfig stuff is done to make it work properly. See https://github.com/kubernetes-sigs/kind/issues/1034
$ export KUBECONFIG="$(for c in $(kind get clusters); do echo -n $(kind get kubeconfig-path --name="$c"):; done)"
$ export CLUSTER1=my-cluster-cluster1
$ export CLUSTER2=my-cluster-cluster2
$ kubectl config get-contexts
CURRENT   NAME                  CLUSTER    AUTHINFO                    NAMESPACE
*         my-cluster-cluster1   cluster1   kubernetes-admin-cluster1   
          my-cluster-cluster2   cluster2   kubernetes-admin-cluster2
$ RELEASE_URL=https://github.com/admiraltyio/multicluster-service-account/releases/download/v0.5.1
$ MANIFEST_URL=$RELEASE_URL/install.yaml
$ kubectl apply -f $MANIFEST_URL --context $CLUSTER1
customresourcedefinition.apiextensions.k8s.io/serviceaccountimports.multicluster.admiralty.io created
namespace/multicluster-service-account-webhook created
deployment.apps/service-account-import-admission-controller created
secret/service-account-import-admission-controller-cert created
serviceaccount/service-account-import-admission-controller created
clusterrole.rbac.authorization.k8s.io/service-account-import-admission-controller created
clusterrolebinding.rbac.authorization.k8s.io/service-account-import-admission-controller created
mutatingwebhookconfiguration.admissionregistration.k8s.io/service-account-import-admission-controller created
namespace/multicluster-service-account created
deployment.apps/service-account-import-controller created
serviceaccount/service-account-import-controller created
clusterrole.rbac.authorization.k8s.io/service-account-import-controller created
clusterrolebinding.rbac.authorization.k8s.io/service-account-import-controller created

$ kubemcsa bootstrap --target-context $CLUSTER1 --source-context $CLUSTER2
created namespace "multicluster-service-account" in source cluster "cluster2"
created cluster role "service-account-import-controller-remote" in source cluster "cluster2"
created service account "cluster1" in namespace "multicluster-service-account" in source cluster "cluster2"
created cluster role binding "cluster1" in source cluster "cluster2"
waiting until service account "cluster1" in namespace "multicluster-service-account" in source cluster "cluster2" has a token...
created service account import "cluster2" in namespace "multicluster-service-account" in target cluster "cluster1"
created secret "cluster2-token-" in namespace "multicluster-service-account" in target cluster "cluster1"
waiting until service account import "cluster2" in namespace "multicluster-service-account" in target cluster "cluster1" adopts token...
annotated service account import controller in target cluster "cluster1"

$ kubectl config use-context $CLUSTER2
Switched to context "my-cluster-cluster2".

$ kubectl create serviceaccount pod-lister
serviceaccount/pod-lister created

$ kubectl create role pod-lister --verb=list --resource=pods
role.rbac.authorization.k8s.io/pod-lister created

$ kubectl create rolebinding pod-lister --role=pod-lister --serviceaccount=default:pod-lister
rolebinding.rbac.authorization.k8s.io/pod-lister created

$ kubectl run nginx --image nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created

$ kubectl config use-context $CLUSTER1
Switched to context "my-cluster-cluster1".

$ kubectl label namespace default multicluster-service-account=enabled
namespace/default labeled

$ cat <<EOF | kubectl create -f -
apiVersion: multicluster.admiralty.io/v1alpha1
kind: ServiceAccountImport
metadata:
  name: $CLUSTER2-default-pod-lister
spec:
  clusterName: $CLUSTER2
  namespace: default
  name: pod-lister
---
apiVersion: batch/v1
kind: Job
metadata:
  name: multicluster-client
spec:
  template:
    metadata:
      annotations:
        multicluster.admiralty.io/service-account-import.name: $CLUSTER2-default-pod-lister
    spec:
      restartPolicy: Never
      containers:
      - name: multicluster-client
        image: multicluster-service-account-example-multicluster-client:latest
EOF
serviceaccountimport.multicluster.admiralty.io/my-cluster-cluster2-default-pod-lister created
job.batch/multicluster-client created



$ kubectl config use-context $CLUSTER1

$ kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-nw88m   kubernetes.io/service-account-token   3      9m22s

$ kubectl get namespaces 
NAME                                   STATUS   AGE
default                                Active   35m
kube-node-lease                        Active   35m
kube-public                            Active   35m
kube-system                            Active   35m
multicluster-service-account           Active   31m
multicluster-service-account-webhook   Active   31m

$ kubectl get pods --namespace=multicluster-service-account
NAME                                                READY   STATUS             RESTARTS   AGE
service-account-import-controller-769c6bd86-w8gdx   0/1     CrashLoopBackOff   10         30m

$ kubectl logs --namespace=multicluster-service-account service-account-import-controller-769c6bd86-w8gdx
2019/11/04 13:28:37 Get https://127.0.0.1:36207/api?timeout=32s: dial tcp 127.0.0.1:36207: connect: connection refused

$ curl -k -Ss https://127.0.0.1:36207/api?timeout=32s
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/api\"",
  "reason": "Forbidden",
  "details": {
    
  },
  "code": 403
}

$ kubectl get secret --namespace=multicluster-service-account
NAME                                            TYPE                                  DATA   AGE
cluster2-token-62v9m                            Opaque                                5      39m
default-token-bcqh8                             kubernetes.io/service-account-token   3      40m
service-account-import-controller-token-tjvwn   kubernetes.io/service-account-token   3      40m

Any idea what is happening? Thanks in advance.

Use in Kubernetes with no cluster-wide actions available

Hello,

I would like to be able to install multicluster-service-account within a shared (mutualized) Kubernetes cluster where I only have rights on my namespace, not cluster-wide.

There should be an option to separate the installations and lower the requirements: using Roles instead of ClusterRoles.

There should also be the ability to install the webhook as a separate part, as it needs higher, cluster-level privileges (installed by an admin).

Thanks

Race condition between service-account-import-admission-controller installation and kubemcsa bootstrap

If kubemcsa bootstrap is run right after multicluster-service-account is installed, service-account-import-admission-controller may not be ready by the time the service-account-import-controller deployment is patched with the bootstrap service account import.

In that case, a new service-account-import-controller pod is created, but the bootstrap service account import secret is not mounted into it at admission. service-account-import-controller is therefore unable to import other service accounts from the remote cluster.

Workarounds

  1. Reactively, kill the service-account-import-controller pod when service-account-import-admission-controller is ready, so a new pod is created with the secret mounted. Note, however, that service-account-import-admission-controller doesn't have a readiness probe (kubebuilder doesn't provide one and multicluster-service-account doesn't add a custom one), so there's no way to know for sure when it's ready.
  2. OR, preventively, sleep a few seconds (or more to be certain...) before running kubemcsa bootstrap.

Solution

Adding a proper readiness probe would help for sure. kubemcsa bootstrap could wait for that condition to be met. However, if service-account-import-admission-controller is unavailable later on, some other pods may be created without the service account import secrets that they request.

A better solution is to switch service-account-import-admission-controller's MutatingWebhookConfiguration failure policy to Fail. So, if it's not ready, pod creations will fail (and be retried by their controllers).

Beware! The service-account-import-admission-controller pod itself must not be subject to that policy, so we also need a namespace selector, and to move service-account-import-admission-controller to a different namespace than service-account-import-controller.
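
A sketch of the proposed settings, expressed with the admissionregistration API types (the webhook name is an assumption; the selector reuses the multicluster-service-account=enabled opt-in label from the example earlier on this page):

package main

import (
    admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// proposedWebhook sketches the fix: failurePolicy Fail, plus a namespace
// selector so the webhook only applies to opted-in namespaces and never to
// its own namespace (which must stay unlabeled).
func proposedWebhook() admissionregistrationv1.MutatingWebhook {
    fail := admissionregistrationv1.Fail
    return admissionregistrationv1.MutatingWebhook{
        Name:          "service-account-import-admission-controller.multicluster.admiralty.io",
        FailurePolicy: &fail,
        NamespaceSelector: &metav1.LabelSelector{
            // Opt-in label from the README example; the webhook's own
            // namespace must not carry this label.
            MatchLabels: map[string]string{
                "multicluster-service-account": "enabled",
            },
        },
    }
}

func main() { _ = proposedWebhook() }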

How to submit PR

Hello,

I have a PR to submit to solve a use case we faced in my company, but I have no permission to do it (I tried to push my branch but got permission denied).

How can we participate in this cool project?
Thanks!

Still maintained?

I have a particular use case where this would be helpful, but I haven't seen any updates to this in a long time. Is this project archived?

Can't bootstrap GKE clusters (invalid name)

CONSOLE_CLUSTER=gke_project-id-123456_europe-west1-d_excitingdev
WORKER_CLUSTER=gke_project-id-123456_europe-west1-d_throwawaydev
kubemcsa bootstrap --target-context $CONSOLE_CLUSTER --source-context $WORKER_CLUSTER
created namespace "multicluster-service-account" in source cluster "gke_project-id-123456_europe-west1-d_throwawaydev"
created cluster role "service-account-import-controller-remote" in source cluster "gke_project-id-123456_europe-west1-d_throwawaydev"
kubemcsa: error: cannot bootstrap: ServiceAccount "gke_project-id-123456_europe-west1-d_excitingdev" is invalid: metadata.name: Invalid value: "gke_project-id-123456_europe-west1-d_excitingdev": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

By default, GKE kubeconfig context names include _. This makes them invalid as ObjectMeta names, so a ServiceAccount or other object can't just use the cluster name.
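
One possible workaround (an assumption, not the project's actual fix) is to sanitize context names into DNS-1123-compatible object names:

package main

import (
    "fmt"
    "strings"
)

// sanitizeContextName derives a DNS-1123-compatible object name from a
// kubeconfig context name by lowercasing and replacing underscores.
// Note this doesn't handle length limits or potential collisions.
func sanitizeContextName(name string) string {
    s := strings.ToLower(name)
    s = strings.ReplaceAll(s, "_", "-")
    return strings.Trim(s, "-.")
}

func main() {
    fmt.Println(sanitizeContextName("gke_project-id-123456_europe-west1-d_excitingdev"))
    // prints: gke-project-id-123456-europe-west1-d-excitingdev
}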

Cannot execute binary file on macOS

Hello,

When I tried to use this, I got the error below:
"bash: /usr/local/bin/kubemcsa: cannot execute binary file"
The problem is running it on macOS. Please let me know, do you have a macOS version of this?

Regards,
Kyaw

Error initing kubemcsa

(⎈ |gke_prp-k8s_us-central1-a_test2:default)➜  ~ ./kubemcsa bootstrap nautilus gke_prp-k8s_us-central1-a_test2
namespace "multicluster-service-account" already exists in source cluster "gke_prp-k8s_us-central1-a_test2"
created cluster role "service-account-import-controller-remote" in source cluster "gke_prp-k8s_us-central1-a_test2"
created service account "nautilus" in namespace "multicluster-service-account" in source cluster "gke_prp-k8s_us-central1-a_test2"
created cluster role binding "nautilus" in source cluster "gke_prp-k8s_us-central1-a_test2"
waiting until service account "nautilus" in namespace "multicluster-service-account" in source cluster "gke_prp-k8s_us-central1-a_test2" has a token...
kubemcsa: error: cannot bootstrap: no matches for kind "ServiceAccountImport" in version "multicluster.admiralty.io/v1alpha1"

Any more yamls to tweak?

Different internal/external certificate authorities

Sometimes, the server CA certificate used to call the Kubernetes API from inside the cluster (e.g., with a service account) is different from the one used from outside the cluster (e.g., a developer's kubeconfig). Currently, service account imports use the ca.crt field of the imported service account tokens. This works when there's just one CA for the cluster, but when the importing identity (e.g., the kubeconfig for kubemcsa bootstrap/export, or, subsequently, the importing service account) uses a different CA, we should usually put that CA in the service account import secret instead, because we use the server address from the importing identity.
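
For illustration, roughly how the import config is assembled today per this description (the helper shape is assumed): the server comes from the importing identity while ca.crt comes from the imported token, so the two can disagree.

package main

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/rest"
)

// configFromImport mixes the two sources described above: the server address
// from the importing identity, the token and ca.crt from the imported
// service account secret. If the importing identity reaches the server
// through a different CA, TLS verification against this CAData fails.
func configFromImport(serverFromImportingIdentity string, imported *corev1.Secret) *rest.Config {
    return &rest.Config{
        Host:        serverFromImportingIdentity,
        BearerToken: string(imported.Data["token"]),
        TLSClientConfig: rest.TLSClientConfig{
            CAData: imported.Data["ca.crt"],
        },
    }
}

func main() {}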

cc @allamand

kubemcsa can only bootstrap to one cluster

kubemcsa replaces the importer's service account import name annotation, so it can only import from the last bootstrapped cluster. This wasn't a problem for agent architectures, where agents only import from one central cluster, but breaks central push/pull architectures. We need to read the annotation and append to it.
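
A minimal sketch of the proposed fix (the annotation key is the one used in the job example earlier on this page; the comma-separated format is an assumption):

package main

import "fmt"

// appendImportName appends to the importer's service account import
// annotation instead of replacing it, so multiple bootstrapped clusters
// can coexist.
func appendImportName(annotations map[string]string, name string) map[string]string {
    const key = "multicluster.admiralty.io/service-account-import.name"
    if annotations == nil {
        annotations = map[string]string{}
    }
    if existing := annotations[key]; existing != "" && existing != name {
        annotations[key] = existing + "," + name
    } else {
        annotations[key] = name
    }
    return annotations
}

func main() {
    a := appendImportName(nil, "cluster1")
    a = appendImportName(a, "cluster2")
    fmt.Println(a["multicluster.admiralty.io/service-account-import.name"])
    // prints: cluster1,cluster2
}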
