
EMR on EKS: High performance autoscaling with Karpenter

This repository provides the source code for the Karpenter workshop with EMR on EKS. For the purposes of the workshop, we run Karpenter in AZ-a and Cluster Autoscaler in AZ-b, and submit each job twice, i.e., once per AZ. Note that this is not a recommended design for real-world workloads.
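For context, pinning Karpenter to a single AZ is typically done with a zone requirement on its Provisioner. Below is a minimal sketch only, assuming the v1alpha5 API in use at the time of this workshop; the provisioner name, zone, and discovery tags are placeholders, not this repository's actual configuration:

# illustrative only; names, zone, and tags are placeholder assumptions
cat <<'EOF' | kubectl apply -f -
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: workshop-demo                      # hypothetical name
spec:
  requirements:
    - key: topology.kubernetes.io/zone
      operator: In
      values: ["us-east-1a"]               # pin Karpenter nodes to AZ-a
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["on-demand"]
  ttlSecondsAfterEmpty: 30                 # scale empty nodes back down
  provider:
    subnetSelector:
      karpenter.sh/discovery: my-eks-cluster        # placeholder tag
    securityGroupSelector:
      karpenter.sh/discovery: my-eks-cluster        # placeholder tag
EOF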

See the reference architecture diagram in the repository.

1. Infrastructure setup

Run the following scripts in AWS CloudShell. The default region is us-east-1. Change it on your console if needed.

# download the project
git clone https://github.com/melodyyangaws/karpenter-emr-on-eks.git
cd karpenter-emr-on-eks
echo $AWS_REGION

The scripts install CLI tools, create a new EKS cluster, enable EMR on EKS, and install Karpenter.

./install_cli.sh
./provision/create-workshop-env.sh $AWS_REGION
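Once provisioning completes, a few optional sanity checks can confirm the environment is healthy. These assume the defaults such a setup usually creates (a karpenter namespace for the controller and a running EMR on EKS virtual cluster):

# optional sanity checks; namespace and cluster names are assumptions
kubectl get nodes -o wide
kubectl get pods -n karpenter
aws emr-containers list-virtual-clusters --query "virtualClusters[?state=='RUNNING'].name"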

2. Build a custom docker image

While the environment setup is still running, let's build a custom EMR on EKS Docker image via the "workshop-ide" in AWS Cloud9.

export ACCOUNTID=$(aws sts get-caller-identity --query Account --output text)
export AWS_REGION=$(curl http://169.254.169.254/latest/meta-data/placement/region)
export ECR_URL="$ACCOUNTID.dkr.ecr.$AWS_REGION.amazonaws.com"
# remove existing images to save disk space
docker rmi -f $(docker images -aq)
# create ECR repo
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_URL
aws ecr create-repository --repository-name eks-spark-benchmark --image-scanning-configuration scanOnPush=true
# get image
docker pull public.ecr.aws/myang-poc/benchmark:6.5
# tag image
docker tag public.ecr.aws/myang-poc/benchmark:6.5 $ECR_URL/eks-spark-benchmark:emr6.5 
# push
docker push $ECR_URL/eks-spark-benchmark:emr6.5
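Optionally, verify that the tagged image arrived in the new repository:

# confirm the image and its tag exist in ECR
aws ecr describe-images --repository-name eks-spark-benchmark --query 'imageDetails[].imageTags'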

3. Test with sample Spark jobs

To analyse the autoscaling performance, we use Amazon Managed Service for Prometheus (AMP) to ingest Spark metrics and an Amazon Managed Grafana dashboard to visualize them.

Download the Grafana setup instructions to get your dashboard ready. Make sure to fetch the file via the Download button on GitHub.

To monitor the autoscaling status in real time, go to AWS CloudShell and open two additional tabs via the "Actions" button -> "New tab". Note: the default region is us-east-1; change it in the CloudShell console if necessary.

Watch the job pods' autoscaling status in one tab (nothing is returned until a job starts):

watch -n1 "kubectl get pod -n emr"

Observe the EC2/node autoscaling status in the second tab. By design, Cluster Autoscaler schedules nodes in zone "b", and Karpenter schedules nodes in zone "a".

watch -n1 "kubectl get node --label-columns=node.kubernetes.io/instance-type,karpenter.sh/capacity-type,eks.amazonaws.com/capacityType,topology.kubernetes.io/zone,app"

Submit jobs in the third tab. The suffix 'ca' in a script name stands for Cluster Autoscaler. There are two types of Spark test jobs; start with the small sample jobs:

cd karpenter-emr-on-eks
./install_cli.sh
# run a job on existing nodes. No autoscaling is triggered at the node level.
./example/sample-job-ca.sh
./example/sample-job-karpenter.sh
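Under the hood, scripts like these submit Spark jobs through the EMR on EKS StartJobRun API. A rough sketch of such a submission follows; every ID, ARN, and path is a placeholder assumption, not the workshop's actual values:

# hypothetical StartJobRun call; the workshop scripts supply the real values
aws emr-containers start-job-run \
  --virtual-cluster-id <virtual-cluster-id> \
  --name sample-spark-job \
  --execution-role-arn arn:aws:iam::<ACCOUNTID>:role/<emr-execution-role> \
  --release-label emr-6.5.0-latest \
  --job-driver '{
    "sparkSubmitJobDriver": {
      "entryPoint": "local:///usr/lib/spark/examples/src/main/python/pi.py",
      "sparkSubmitParameters": "--conf spark.executor.instances=2"
    }
  }'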

Monitor the job progress in your pod's autoscaling status window.

Once the previous job has completed, submit a medium-sized job (47 executors):

./example/emr6.5-benchmark-ca.sh
./example/emr6.5-benchmark-karpenter.sh

Observe the autoscaling status in real time in the other CloudShell tabs. The scope of the workshop is to compare autoscaling performance, not to benchmark Spark jobs, so move on to the next section without waiting for the job to finish.

4. Observe in Grafana Dashboard

Go to the Amazon Managed Grafana console and open the EMR on EKS dashboard created earlier. Metrics may take about a minute to appear after a job is submitted.

Expand the first graph, Pod State Timelines, and choose different IDs (EMR on EKS job IDs) from the Job ID dropdown list to observe job spin-up time and node autoscaling performance. You can find the job IDs in your EMR console.

To learn how to read the graph, check out the Appendix section at the end of the Grafana setup instructions.

5. Clean up

# run in AWS CloudShell
./install_cli.sh
./clean-up.sh

Go to "workshop-ide" AWS Cloud9, delete the ECR:

# re-export in case the shell session was reset
export ACCOUNTID=$(aws sts get-caller-identity --query Account --output text)
export ECR_URL="$ACCOUNTID.dkr.ecr.$AWS_REGION.amazonaws.com"
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ECR_URL
aws ecr delete-repository --repository-name eks-spark-benchmark --force
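Optionally, verify that nothing was left behind. Assuming eksctl was installed by install_cli.sh, both calls below should return empty results or a not-found error after a successful clean-up:

# confirm the ECR repository and EKS cluster are gone
aws ecr describe-repositories --repository-names eks-spark-benchmark
eksctl get cluster --region $AWS_REGION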
