
Comments (10)

awprice commented on July 17, 2024

Thanks for giving Escalator a go @FilipSwiatczak!

Based on the following error:

AccessDenied: User: arn:aws:sts::XXX:assumed-role/eksctl-bitbucketpipelines-nodegro-NodeInstanceRole-XXX is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::XXX:role/bitbucket-pipelines-escalator-role\n\tstatus code: 403

I'd say the trust relationship isn't setup correctly between the two roles to allow eksctl-bitbucketpipelines-nodegro-NodeInstanceRole-XXX to assume bitbucket-pipelines-escalator-role.

Have a look at this page on how to allow a role to assume another role - https://nelson.cloud/aws-iam-allowing-a-role-to-assume-another-role/, it has instructions on how to allow assuming a role either in the same account or in a different account.
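A quick way to see what the current trust relationship actually looks like is to read it back from IAM. This is a sketch assuming the AWS CLI is configured against the right account; the role name is taken from the error message above:

```shell
# Print the trust (assume-role) policy attached to the target role,
# so it can be compared against the node instance role in the error.
aws iam get-role \
  --role-name bitbucket-pipelines-escalator-role \
  --query 'Role.AssumeRolePolicyDocument' \
  --output json
```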

from escalator.

awprice commented on July 17, 2024

I'd also like to mention that instructions on how to configure one role to assume another are going to be missing from our documentation, as the setup depends on the configuration of the end user's cluster/AWS accounts and we can't cater for all scenarios.


FilipSwiatczak commented on July 17, 2024

Thanks @awprice, it worked with these three changes:

  1. Run eksctl to create an OIDC provider for the cluster:
    eksctl utils associate-iam-oidc-provider --cluster <cluster-name> --approve --region <your-region>

  2. Modify the trust relationship on your AWS role by adding the statement below. The principal is the exact STS role the pod starts under; right now it can be gleaned from the initial error log on the pod:

        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:sts::ACCOUNT:assumed-role/eksctl-CLUSTER_NAME-nodegro-NodeInstanceRole-RANDOM_VALUE_PER_CLUSTER"
            },
            "Action": "sts:AssumeRole"
        }

  3. Modify the policy that the role references with:

        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::ACCOUNT:role/eksctl-CLUSTER_NAME-nodegro-NodeInstanceRole-*"
        }

So while this works, it's not fully automated as I can't find a way to fetch the sts role the pod starts under from the cluster.
If you know that or how to structure that better, please share :)

I've mostly raised this question to save other people time, to have a copy paste solution that would be as easy as the rest of instructions in the project Readme!
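For managed node groups, one possible way to fetch the instance role without reading it from error logs is to ask EKS directly. A sketch, assuming the AWS CLI and placeholder names for your cluster and node group:

```shell
# Look up the IAM role ARN that the node group's instances run under.
# <cluster-name> and <nodegroup-name> are placeholders for your setup.
aws eks describe-nodegroup \
  --cluster-name <cluster-name> \
  --nodegroup-name <nodegroup-name> \
  --query 'nodegroup.nodeRole' \
  --output text
```

Note this returns the IAM role ARN, not the assumed-role STS ARN seen in the logs, but the role name portion is the same.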


FilipSwiatczak commented on July 17, 2024

Also @awprice, if Escalator runs in the same node group that it controls, how can it prevent tainting its own node and forcing an Escalator re-deployment? I really can't find an answer in the docs!
On scale down, using the oldest-first approach, my setup taints the original node on which the Escalator pod runs first:

time="2023-10-20T16:10:42Z" level=info msg="Sent delete request to 1 nodes" nodegroup=bitbucketpipelines-on-demand-escalator
time="2023-10-20T16:10:42Z" level=info msg="Reaper: There were -1 empty nodes deleted this round"
time="2023-10-20T16:10:42Z" level=info msg="untainted nodes close to minimum (1). Adjusting taint amount to (0)"
time="2023-10-20T16:10:42Z" level=info msg="Scaling Down: tainting 0 nodes" nodegroup=bitbucketpipelines-on-demand-escalator
time="2023-10-20T16:10:42Z" level=info msg="Tainted a total of 0 nodes" nodegroup=bitbucketpipelines-on-demand-escalator
time="2023-10-20T16:11:08Z" level=info msg="Signal received: terminated"
time="2023-10-20T16:11:08Z" level=info msg="Stopping autoscaler gracefully"
time="2023-10-20T16:11:08Z" level=info msg="Stop signal received. Stopping cache watchers"
time="2023-10-20T16:11:08Z" level=fatal msg="main loop stopped"
rpc error: code = NotFound desc = an error occurred when try to find container "50d71de1cd6378c134bcc3870d3c378860855a379a40d3a7163cf4a913733a6a": not found

I apologise if those are noobish questions, I'm not a kubernetes expert! (yet!)


FilipSwiatczak commented on July 17, 2024

Using instance protection like:

# protect instance on which escalator is running from termination
aws autoscaling set-instance-protection --instance-ids XXX --auto-scaling-group-name eks-bitbucketpipelines-ng-on-demand-XXX --protected-from-scale-in --region eu-west-1

also does not work, and the node is terminated after being tainted. Though even if it did work, it would probably leave Escalator stuck trying to remove the node over and over.


awprice commented on July 17, 2024

@FilipSwiatczak No problem!

So while this works, it's not fully automated as I can't find a way to fetch the sts role the pod starts under from the cluster.
If you know that or how to structure that better, please share :)

We tend to use IAM roles for service accounts on EKS, as this will prevent the need to deal with node instance roles. This documentation from AWS gives a good introduction and steps to use them: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
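The eksctl route described in those docs can be sketched as follows; the service account name, namespace, and policy ARN here are assumptions for this setup, not values from the Escalator docs:

```shell
# Create a Kubernetes service account backed by an IAM role (IRSA),
# so the pod gets AWS credentials without relying on the node instance role.
eksctl create iamserviceaccount \
  --cluster <cluster-name> \
  --namespace kube-system \
  --name escalator \
  --attach-policy-arn arn:aws:iam::<ACCOUNT>:policy/<escalator-policy> \
  --approve
```

The Escalator deployment then needs `serviceAccountName: escalator` to pick up the role.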

if escalator runs in the same node group that it controls, how can it prevent tainting it's own node and forcing escalator re-deployment? Really can't find an answer in the docs! On scale down, using the Oldest-first approach, my setup taints the original node on which escalator pod runs first:

We avoid this by running multiple node groups in our clusters and running Escalator on a node group that it doesn't scale, so Escalator can never terminate the node it is itself running on.

Escalator is primarily designed for scaling node groups that run job-based workloads, i.e. workloads that will end. Escalator itself could be considered a service-based workload, meaning it will run forever, so it isn't really the sort of thing that should run on the node groups that Escalator is scaling.


FilipSwiatczak commented on July 17, 2024

We tend to use IAM roles for service accounts on EKS, as this will prevent the need to deal with node instance roles. This documentation from AWS gives a good introduction and steps to use them: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html

Thank you @awprice! I've followed the link above, and at the very end (the pod checks) I realised the Escalator pod does not have AWS_WEB_IDENTITY_TOKEN_FILE set.
Those docs suggest amazon-eks-pod-identity-webhook is required to inject the token, but I suspect you are using kube2iam instead, right? Thanks again for your patience.
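For anyone else checking this, the pod's environment can be inspected directly; the deployment name and namespace below are assumptions for this setup:

```shell
# Check whether the IRSA env vars (AWS_ROLE_ARN, AWS_WEB_IDENTITY_TOKEN_FILE)
# were injected into the running Escalator pod.
kubectl -n kube-system exec deploy/escalator -- env | grep AWS_
```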


FilipSwiatczak commented on July 17, 2024

It appears that when Escalator is deployed in a separate node group, with the custom label escalator: worker at both node and pod level, Escalator doesn't see any CPU or memory utilisation (0). It only works for me when it's in the same node group.

apiVersion: v1
kind: ConfigMap
metadata:
  name: escalator-config
  namespace: kube-system
data:
  nodegroups_config.yaml: |
    node_groups:
      - name: "bitbucketpipelines-ng-spot"
        label_key: "escalator"
        label_value: "worker"

With this and the IAM injection issue I'm a bit stuck. Are there any more complete deployment examples in existence please?
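One thing worth ruling out is a label mismatch between the config and the actual objects. A quick sanity check, assuming the worker pods run in the default namespace (adjust as needed):

```shell
# Confirm the target node group's nodes carry the configured label...
kubectl get nodes -l escalator=worker

# ...and that the job pods Escalator should be watching carry it too.
kubectl get pods -l escalator=worker --show-labels
```

If either list comes back empty, Escalator will see zero utilisation for the node group.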


FilipSwiatczak commented on July 17, 2024

When Escalator attempts to scale a node group different from the one it's deployed in, it throws:

time="2023-10-24T10:46:18Z" level=info msg="Node IP.eu-west-1.compute.internal, aws:///eu-west-1c/ID ready to be deleted" drymode=false nodegroup=bitbucketpipelines-ng-spot
time="2023-10-24T10:46:18Z" level=error msg="failed to terminate node in cloud provider IP.eu-west-1.compute.internal, aws:///eu-west-1c/ID" error="node ip.eu-west-1.compute.internal, aws:///eu-west-1c/id belongs in a different node group than eks-bitbucketpipelines-ng-spot-id"
time="2023-10-24T10:46:18Z" level=fatal msg="node ip.eu-west-1.compute.internal, aws:///eu-west-1c/id belongs in a different node group than eks-bitbucketpipelines-ng-spot-id"


awprice commented on July 17, 2024

@FilipSwiatczak Some answers to your questions:

  • Escalator definitely works with IAM roles for service accounts; we have it working that way at the moment and are not using kube2iam. You can either add that environment variable to the Escalator deployment manually yourself, or rely on something like the pod identity webhook to add it for you. Up to you, but adding it manually is a lot simpler.
  • In terms of running Escalator in a different node group: this is definitely possible, as we have it running this way internally. It's hard to say what the exact issue is without access to your cluster, but I would check that the labels on the nodes, the nodeSelectors and nodeAffinities on the pods, and the IAM permissions are all correct. The values for all of these depend on your environment, so I can't say what they should be set to. I'd also recommend having a read of https://github.com/atlassian/escalator/blob/master/docs/pod-node-selectors.md, which explains how Escalator selects pods/nodes.
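For the manual route in the first point, adding the environment variable by hand also requires mounting a projected service account token, following the pattern the pod identity webhook would otherwise inject. A sketch of the relevant Deployment fragment; the role ARN is a placeholder for your account:

```yaml
# Fragment of the Escalator container/pod spec (not a complete Deployment).
env:
  - name: AWS_ROLE_ARN
    value: arn:aws:iam::<ACCOUNT>:role/<escalator-role>
  - name: AWS_WEB_IDENTITY_TOKEN_FILE
    value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
volumeMounts:
  - name: aws-iam-token
    mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
    readOnly: true
volumes:
  - name: aws-iam-token
    projected:
      sources:
        - serviceAccountToken:
            audience: sts.amazonaws.com
            expirationSeconds: 86400
            path: token
```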

