
One Click AKS: Simplest way to deploy Complex AKS Cluster

Introduction

Deploying an Azure Kubernetes Service (AKS) cluster is really easy. You can create a production-grade cluster with a couple of CLI commands. So what does this project bring to the table? There are hundreds of ways an AKS cluster can be deployed, and thousands more ways to configure it to meet your unique requirements. If you have to deploy AKS with different configurations over and over again, it is no longer an easy task. That is where this project comes in. It runs locally on your computer and deploys AKS clusters in many different ways (though not all).

Getting Started

To get started, head over to the ACT Labs Start page and follow the simple setup wizard.

This setup wizard will help you with the following:

  • Running the server on your computer.
  • Selecting your Azure subscription.
  • Authenticating the Azure CLI.
  • Creating a storage account.
    • The storage account gets a randomly generated name.
    • You can see this storage account name in Settings.
    • It is created in a resource group named 'repro-project' in your selected subscription.
    • Two containers are created in this storage account:
      • tfstate: Terraform state files are stored in this container.
      • labs: the labs that you save are stored in this container.

Important points to note

  • All your data is stored in a storage account in the 'repro-project' resource group of your subscription. If you delete this storage account, all data will be lost. We don't keep a copy of your data.
  • Make sure there is exactly one storage account in the 'repro-project' resource group. If you create additional storage accounts in this resource group, you will see unexpected behavior.

If this is the first time you are setting up, it will take some time to download the image. After it's cached, it should be faster every other time.

Server

The server you started in Docker is an Ubuntu-based container with the following components installed on it.

  • Azure CLI
  • Terraform
  • Go
  • Helm
  • Git
  • kubectl
  • OpenShift CLI
  • Redis

Builder

Builder is used to build a lab.

Lab

In the simplest terms, a lab is a scenario that you want to create. For example, you may want to create an AKS cluster with the following specifications:

  • Create a VNET.
  • Create an Azure Firewall.
  • Add all required egress rules to the Azure Firewall.
  • Create a private AKS cluster with UDR.
  • Create a jump server in the VNET with a public IP to SSH into and hop from to access your private cluster.

or maybe,

  • Create a VNET.
  • Create a private AKS cluster with a Standard LB.
  • Deploy the Ingress-NGINX controller and a dummy app on this cluster.

You can use this tool to create and deploy labs. Labs can be saved for future re-use, exported and shared with others, and imported back into the tool.

To create, deploy, import, or export a lab, use Builder.

This is what a lab object looks like.

{
 "id": "",
 "name": "",
 "description": "",
 "tags": [],
 "template": {
  "resourceGroup": {
   "location": "East US"
  },
  "virtualNetworks": [],
  "subnets": [],
  "jumpservers": [],
  "networkSecurityGroups": [],
  "kubernetesClusters": [
   {
   "kubernetesVersion": "1.24.9",
   "networkPlugin": "kubenet",
   "networkPolicy": "null",
   "networkPluginMode": "null",
   "outboundType": "loadBalancer",
   "privateClusterEnabled": "false",
   "addons": {
    "appGateway": false,
    "microsoftDefender": false
   },
   "defaultNodePool": {
    "enableAutoScaling": false,
    "minCount": 1,
    "maxCount": 1
   }
   }
  ],
  "firewalls": [],
  "containerRegistries": [],
  "appGateways": []
 },
 "extendScript": "removed for simplicity of docs",
 "message": "",
 "type": "template",
 "createdBy": "",
 "createdOn": "",
 "updatedBy": "",
 "updatedOn": ""
}

A lab consists of two important parts.

Template

A template is a collection of objects and is part of the lab object. For example, in the object shared above, the following is the template.

"template": {
  "resourceGroup": {
   "location": "East US"
  },
  "virtualNetworks": [],
  "subnets": [],
  "jumpservers": [],
  "networkSecurityGroups": [],
  "kubernetesClusters": [
   {
   "kubernetesVersion": "1.24.9",
   "networkPlugin": "kubenet",
   "networkPolicy": "null",
   "networkPluginMode": "null",
   "outboundType": "loadBalancer",
   "privateClusterEnabled": "false",
   "addons": {
    "appGateway": false,
    "microsoftDefender": false
   },
   "defaultNodePool": {
    "enableAutoScaling": false,
    "minCount": 1,
    "maxCount": 1
   }
   }
  ],
  "firewalls": [],
  "containerRegistries": [],
  "appGateways": []
 }

The Go server running in the Docker container translates this template into TF_VAR environment variables, which are then used by the Terraform code to deploy resources as desired. We use Builder to modify the template, which in turn shapes what the targeted deployment will look like.
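As a rough illustration of that translation: Terraform automatically reads any environment variable prefixed with TF_VAR_ as the value of the matching input variable. A minimal sketch follows; the exact variable names the server emits are assumptions.

```shell
#!/bin/bash
# Hypothetical sketch of the template-to-TF_VAR translation.
# The actual variable names the Go server emits may differ.

KUBERNETES_VERSION="1.24.9"   # from template.kubernetesClusters[0].kubernetesVersion
NETWORK_PLUGIN="kubenet"      # from template.kubernetesClusters[0].networkPlugin

# Terraform picks up any environment variable prefixed with TF_VAR_
# as the value of the matching input variable.
export TF_VAR_kubernetes_version="${KUBERNETES_VERSION}"
export TF_VAR_network_plugin="${NETWORK_PLUGIN}"

echo "terraform will see var.kubernetes_version = ${TF_VAR_kubernetes_version}"
```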

The template already provides great flexibility in achieving complex scenarios with ease. But it doesn't do everything, and probably never will. For more flexibility, we have the Extension Script.

Extension Script

The extension script gives you the ability to go beyond what this tool can do out of the box and get really creative. You can use it to do anything that can be done with the Azure CLI. Some example use cases are:

  • Pulling an image from Docker Hub into your ACR.
  • Deploying an application to a Kubernetes cluster.
  • Adding additional node pools to your cluster.
  • Ordering food online for free. Well, not that, but you get the idea.

How does this work?

This script runs in two primary modes.

  • Deploy
  • Destroy

Deploy (Extend) Mode

When you click the 'Deploy' button, the base infrastructure is deployed using Terraform code. After that completes successfully, the extension script is executed. Both steps happen automatically, in order. Since the extension script runs after terraform apply finishes, it has access to the Terraform output. When running in deploy (extend) mode, the 'extend' function is called.

function extend() {
 # Add your code here to be executed after apply
 ok "nothing to extend"
}
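As a concrete (hypothetical) example, a filled-in extend() that uses the injected environment variables documented below might look like this. The helper and variable values are stand-ins, and the cluster commands are commented out so the sketch has no side effects.

```shell
#!/bin/bash
# Hypothetical filled-in extend(); CLUSTER_NAME and AKS_LOGIN are normally
# injected by the server from terraform output. Stand-in values for the sketch.
CLUSTER_NAME="demo-aks"

ok() { echo "INFO - $*"; }   # simplified stand-in for the shared ok() helper

function extend() {
  ok "extending cluster ${CLUSTER_NAME}"
  # eval "${AKS_LOGIN}"            # log in to the cluster (command comes from terraform output)
  # kubectl apply -f my-app.yaml   # then deploy whatever the lab needs
}

extend
```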

See deployment flow

Destroy Mode

When you click the 'Destroy' button, the extension script first runs in destroy mode, letting you delete the resources that were created in deploy mode, or perform any other activity that must happen gracefully before resources are destroyed. When running in destroy mode, the 'destroy' function is called.

function destroy() {
 # Add your code here to be executed before destruction
 ok "nothing to destroy"
}

See destroy flow

Environment Variables

The following environment variables are available for the script to use. There may be other variables not in this list: any Terraform output is automatically added as an environment variable for the extension script. For example, the Terraform output "resource_group" is automatically added as the env variable "RESOURCE_GROUP". You can see the entire Terraform output in the deployment logs.

Variable Description
RESOURCE_GROUP Name of the resource group in Azure. This is where all resources will be deployed. Please note that if you create additional resource groups using the extension script, you need to manage deleting them in the destroy function.
ACR_NAME Name of the ACR, if deployed.
AKS_LOGIN Command to log in to the AKS cluster, if deployed.
CLUSTER_NAME Name of the AKS cluster, if deployed.
CLUSTER_VERSION Version of the AKS cluster, if deployed.
FIREWALL_PRIVATE_IP Private IP address of the firewall.
NSG_NAME Name of the NSG associated with the subnet where the AKS cluster is deployed. You can use this to add/remove rules using extension scripts.
LOCATION The Azure region where the resources are deployed. None of the resources are given a region explicitly; they all inherit it from the resource group.
VNET_NAME Name of the virtual network.
CLUSTER_MSI_ID Cluster's managed identity ID.
KUBELET_MSI_ID Kubelet's managed identity ID.
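To illustrate how these variables might be combined in an extension script, the sketch below builds an az command that adds an NSG rule, but only prints it. The rule name, priority, and variable values are made up for the example.

```shell
#!/bin/bash
# Hypothetical use of the injected variables. The az command is printed
# rather than executed, so the sketch has no side effects.
RESOURCE_GROUP="repro-rg"   # normally set by the server from terraform output
NSG_NAME="aks-nsg"          # normally set by the server from terraform output

cmd="az network nsg rule create \
  --resource-group ${RESOURCE_GROUP} \
  --nsg-name ${NSG_NAME} \
  --name AllowHttpsInbound \
  --priority 200 \
  --access Allow --protocol Tcp --destination-port-ranges 443"

echo "would run: ${cmd}"
```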

Shared Functions

There are a few things that almost all scripts will do. We are aware of these and have added them as shared functions, which are available to the script and ready for use.

  • Logging function log() Args: "string" Example: log "this statement will be logged"
log() {
  echo -e "[$(date +'%Y-%m-%dT%H:%M:%S%z')]: INFO - $*" >&1
}

log "this statement will be logged in normal font"
  • Green (OK) Logging function ok() Args: "string" Example: ok "this statement will be logged as INFO log in green color"
ok() {
  echo -e "${GREEN}[$(date +'%Y-%m-%dT%H:%M:%S%z')]: INFO - $* ${NC}" >&1
}

ok "this statement will be logged in green color"
  • Error logging function err() Args: "string" Example: err "this error occurred"
err() {
  echo -e "${RED}[$(date +'%Y-%m-%dT%H:%M:%S%z')]: ERROR - $* ${NC}" >&1
}

err "this statement will be logged in red color"

In addition to these, we figured there are a few things that we will be doing over and over again in extension scripts. The ultimate goal is to add them as flags (switch buttons) and make them part of Terraform, but as an interim solution they are provided as shared functions.

  • Deploy ARO Cluster
function deployAROCluster() {

    # Set the cluster name, and network name variables
    ARO_CLUSTER_NAME="${PREFIX}-aro"

    az group show --name ${RESOURCE_GROUP} --output none > /dev/null 2>&1
    if [ $? -ne 0 ]; then
        err "Resource Group not found. Skipped creating cluster."
        return 1
    fi

    # Deploy the cluster
    log "deploying aro cluster"
    az aro create \
    --resource-group ${RESOURCE_GROUP} \
    --name ${ARO_CLUSTER_NAME} \
    --location ${LOCATION} \
    --vnet ${VNET_NAME} \
    --master-subnet AROMasterSubnet \
    --worker-subnet AROWorkerSubnet \
    --no-wait
    if [ $? -ne 0 ]; then
        err "Command to create ARO Cluster failed."
        return 1
    fi

    # Wait for the cluster to be ready
    counter=0
    ok "waiting for cluster to be created. this can take several minutes, script will wait for an hour."
    while true; do
        status=$(az aro show --resource-group ${RESOURCE_GROUP} --name ${ARO_CLUSTER_NAME} --query provisioningState -o tsv)
        
        if [[ ${status} == "Succeeded" ]]; then
            ok "cluster created."
            break
        fi
        
        if [[ ${counter} -eq 3600 ]]; then
            err "enough wait.. the cluster is not ready yet. please check from portal"
            break
        fi
        
        
        counter=$((${counter}+30))
        sleep 30

        if [[ ${status} == "Creating" ]]; then
            log "cluster state is still 'Creating'. Sleeping for 30 seconds. $((${counter}/60)) minutes passed."
        else
            log "Wait time didn't finish and cluster state isn't 'Creating' anymore. $((${counter}/60)) minutes passed."
        fi
    done

    # Get the cluster credentials
    log "cluster credentials"
    az aro list-credentials --resource-group ${RESOURCE_GROUP} --name ${ARO_CLUSTER_NAME}

    pass=$(az aro list-credentials -g ${RESOURCE_GROUP} -n ${ARO_CLUSTER_NAME} --query kubeadminPassword -o tsv)
    apiServer=$(az aro show -g ${RESOURCE_GROUP} -n ${ARO_CLUSTER_NAME} --query apiserverProfile.url -o tsv)
    apiServerIp=$(az aro show -g ${RESOURCE_GROUP} -n ${ARO_CLUSTER_NAME} --query apiserverProfile.ip -o tsv)

    ok "Login command -> oc login $apiServer -u kubeadmin -p $pass --insecure-skip-tls-verify"
}
  • Delete ARO Cluster
function deleteAROCluster() {

    # Set the cluster name, and network name variables
    ARO_CLUSTER_NAME="${PREFIX}-aro"

    # Deploy the cluster
    log "deleting aro cluster"
    az aro delete \
    --resource-group ${RESOURCE_GROUP} \
    --name ${ARO_CLUSTER_NAME} \
    --yes \
    --no-wait
    if [ $? -ne 0 ]; then
        err "Command to delete ARO Cluster failed."
        return 1
    fi

    # Wait for the cluster to be ready
    counter=0
    ok "waiting for cluster to be deleted. this can take several minutes, script will wait for an hour."
    while true; do
        status=$(az aro show --resource-group ${RESOURCE_GROUP} --name ${ARO_CLUSTER_NAME} --query provisioningState -o tsv)
        
        if [[ ${status} != "Deleting" ]]; then
            ok "cluster deleted."
            break
        fi
        
        if [[ ${counter} -eq 3600 ]]; then
            err "enough wait.. the cluster is not deleted yet. please investigate"
            break
        fi
        
        
        counter=$((${counter}+30))
        sleep 30

        if [[ ${status} == "Deleting" ]]; then
            log "cluster state is still 'Deleting'. Sleeping for 30 seconds. $((${counter}/60)) minutes passed."
        else
            log "Wait time didn't finish and cluster state isn't 'Deleting' anymore. $((${counter}/60)) minutes passed."
        fi
    done
}
  • Deploy Ingress Nginx Controller.
function deployIngressNginxController() {
    # Deploy Ingress Controller.
    log "Deploying Ingress Controller"
    NAMESPACE=ingress-basic

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update

    helm install ingress-nginx ingress-nginx/ingress-nginx \
    --create-namespace \
    --namespace $NAMESPACE

    # This loop is to wait for 5 minutes to ensure the external ip was allocated to the service.
    for i in {1..11}; do
        log "Checking external ip - Attempt $i"
        if [[ $i -eq 11 ]]; then
            err "Not able to secure external ip"
            exit 1
        fi
        EXTERNAL_IP=$(kubectl get svc/ingress-nginx-controller -n ingress-basic -o json | jq -r .status.loadBalancer.ingress[0].ip)
        if [[ "$EXTERNAL_IP" != "" ]]; then
            ok "External IP : $EXTERNAL_IP"
            break
        fi
        sleep 30s
    done
}
  • Deploy Dummy App (HTTPBIN)
function deployHttpbin() {
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      role: frontend
  template:
    metadata:
      labels:
        app: httpbin
        role: frontend
    spec:
      containers:
        - name: httpbin
          image: kennethreitz/httpbin
          resources:
            requests:
              cpu: 500m
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  selector:
    app: httpbin
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
  name: httpbin-ingress-agic
  namespace: default
spec:
  rules:
  - host: httpbin-agic.evaverma.com
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: httpbin
              port:
                number: 80
EOF
}

Lab Lifecycle

sequenceDiagram
App ->> Server : Build
App ->> Storage Account : Save
Storage Account ->> App: Load To Builder
App ->> Server: Plan
App ->> Server: Deploy
App ->> Server: Destroy

Build

A lab is built using Builder. Flags in Builder can be used to build a template and, if needed, an extension script can be used to extend it even further. A lab is not saved by default.

Plan

You can run a terraform plan using the 'Plan' button in Builder. This generates a Terraform plan and lets you see the output.

I highly recommend running a plan before deployment, just to be sure you don't accidentally delete things you don't intend to.

Note: The extension script is not tested/executed in plan mode.

Plan Flow
sequenceDiagram
App ->> Server : Terraform Plan
Server ->> App: Success

Deploy

In a nutshell this will deploy the lab. This is a two step process.

  • terraform apply - when you hit the 'Deploy' button, the Terraform part of the lab is deployed first. The lab object contains a 'template' object; these are the values that the server translates into Terraform variables and sets as environment variables on the server. After terraform apply succeeds, the extension script is executed.
Deployment Flow
sequenceDiagram
App ->> Server : Deploy Request
Server ->> Azure : Terraform Apply
Azure ->> Server: Success
Server ->> App: Success
App ->> Server: Extension Script (Deploy)
Azure ->> Server: Pull Terraform Output
Server ->> Azure: Extension Script (Deploy)
Azure ->> Server: Success
Server ->> App: Success
  • extension script - the extension script is a big topic; it's covered in its own section.
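Put together, the deploy orchestration described above can be sketched (purely illustratively, with each step echoed) as:

```shell
#!/bin/bash
# Illustrative sketch of the deploy flow; each step is just echoed.
deployLab() {
  echo "step 1: terraform apply"               # deploy the template
  echo "step 2: export terraform outputs"      # expose outputs as env variables
  echo "step 3: extension script - extend()"   # run the extend function
}

deployLab
```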

Note: It's important that you Plan before deployment to avoid accidentally deleting things you don't want to.

Destroy

You can destroy the resources created with this tool using the 'Destroy' button. It executes the extension script in destroy mode and then runs terraform destroy.

Destroy Flow
sequenceDiagram
App ->> Server : Destroy Request
Azure ->> Server : Pull Terraform Output
Server ->> Azure : Extension Script (Destroy Mode)
Azure ->> Server: Success
Server ->> App: Success
App ->> Server: Terraform Destroy
Server ->> Azure: Terraform Destroy
Azure ->> Server: Success
Server ->> App: Success

Saving your lab

You should be able to recreate simple scenarios easily. But for complex scenarios, especially when you end up using the Extension Script, it becomes absolutely necessary to save your work. You can use the 'Save' button in Builder to do so. You will be presented with a form requesting the following information.

  • Name: I know it's hard to name stuff. But try your best to give a one-line introduction of your lab.

  • Description: Add as much information as humanly possible. It's important that you get the idea of what this lab does when you come back a month later, without having to read the extension script. Trust me, it's important.

  • Tags: The plan is to add a search feature later that will help you find labs based on tags, similar to tags on Stack Overflow.

  • Template: This is auto populated.

  • Extension Script: This is auto populated.

  • Update: This will update the existing lab.

  • Save as New: This will save as a new lab. Use this to make a copy of your existing lab.

Sharing Lab

  • Export - You can use the 'Export' button in Builder to export a lab to a file, which can then be shared with anyone, who can import and use it.
  • Import - You can use the 'Import' button in Builder to import a lab from a file. You can then save it in your templates.
  • Shared Templates - There are some pre-built labs that you can use to get a head start.
  • Contributing to shared templates. - Coming soon

Learning

If you are going through L100 or other internal trainings, labs will be assigned to you by your mentor. Once labs are assigned, you should be able to see them here. If you don't see any labs, none are assigned to you.

After a lab is assigned, you can:

  • Deploy
  • Troubleshoot and fix the problem.
  • Use 'Validate' button to check if the fix you applied is valid.
  • After successful validation, get in touch with your mentor.
  • Destroy

If you are planning to work on multiple labs at once, you need to use a different workspace for each lab. Check Terraform > Workspaces.

Labs

This part of the tool is only accessible to the ACT Readiness team. If you are not able to access or view this but should be able to, please reach out to the ACT Readiness team.

How to create a lab exercise?

Labs for readiness training can be built using Builder. When saving a lab, select 'labexercise' as the type of lab. One additional requirement readiness labs have is the validation script, which is part of the extension script. You can write validation code in the function validate() section of the script; it runs when the user hits the 'Validate' button on the assigned lab in the Learning section. For any additional questions, please reach out to the readiness team.

How to assign lab exercise?

A lab can only be assigned to an engineer by a privileged user. To assign a lab:

  • Navigate to Labs
  • Find the lab you want to assign.
  • Enter the user's alias and hit the 'Assign' button.
  • You will see confirmation of the assignment, or a failure if any. If you get a failure, please ensure the user's alias is correct. If the issue persists, please reach out to the ACT Readiness team.
  • After assignment is done, you will be able to manage assignments here.

Mock Cases

This part of the tool is only accessible to the ACT Readiness team. If you are not able to access or view this but should be able to, please reach out to the ACT Readiness team.

How to create a mock case?

Mock cases can be built using Builder. When saving a lab, select 'mockcase' as the type of lab. For any additional questions, please reach out to the readiness team.

How to deploy a mock case?

  • It's not required, but recommended, that you create mock cases in isolated workspaces.
  • Add a new workspace or ensure that the correct workspace is selected.
  • Navigate to Mock Cases.
  • Use the 'Deploy' button to deploy the mock case you want.
  • Go to the Azure portal and create the case BAU.


one-click-aks's Issues

use web sockets for logs. Check out socket.io

Something like this would be needed on the server side:

package main

import (
	"fmt"
	"net/http"

	"github.com/gin-gonic/gin"
	"github.com/googollee/go-socket.io"
)

func main() {
	// Start an HTTP server using gin
	r := gin.Default()
	r.GET("/ping", func(c *gin.Context) {
		c.JSON(200, gin.H{
			"message": "pong",
		})
	})
	go func() {
		if err := r.Run(":8080"); err != nil {
			fmt.Println(err)
		}
	}()

	// Start a socket.io server
	server, err := socketio.NewServer(nil)
	if err != nil {
		fmt.Println(err)
	}
	server.OnConnect("/", func(s socketio.Conn) error {
		s.SetContext("")
		fmt.Println("connected:", s.ID())
		return nil
	})
	server.OnEvent("/", "notice", func(s socketio.Conn, msg string) {
		fmt.Println("notice:", msg)
		s.Emit("reply", "have "+msg)
	})
	server.OnEvent("/chat", "msg", func(s socketio.Conn, msg string) string {
		s.SetContext(msg)
		return "recv " + msg
	})
	server.OnEvent("/", "bye", func(s socketio.Conn) string {
		last := s.Context().(string)
		s.Emit("bye", last)
		s.Close()
		return last
	})
	server.OnError("/", func(s socketio.Conn, e error) {
		fmt.Println("meet error:", e)
	})
	server.OnDisconnect("/", func(s socketio.Conn, reason string) {
		fmt.Println("closed", reason)
	})

	go func() {
		http.Handle("/socket.io/", server)
		if err := http.ListenAndServe(":8000", nil); err != nil {
			fmt.Println(err)
		}
	}()

	// Wait indefinitely
	select {}
}

Manage Defaults

I've created a TypeScript type called TfvarAddonsType, and I want to use default values for this type at multiple places throughout my code. Here's how I'm going to do it:

  1. Defining Default Values:

    First, I'll define default values for the TfvarAddonsType. I'll create a constant object with these default values:

    const defaultTfvarAddons: TfvarAddonsType = {
      appGateway: false,
      microsoftDefender: false,
      virtualNode: false,
      httpApplicationRouting: false,
    };
  2. Using Default Values:

    I can now use the defaultTfvarAddons object as a starting point wherever I need to create an instance of TfvarAddonsType with default values. For example:

    const someComponentConfig: TfvarAddonsType = { ...defaultTfvarAddons };
    
    // If needed, I can also modify specific properties
    someComponentConfig.appGateway = true;
  3. Reusing Default Values:

    To maintain consistency and make it easy to update defaults in the future, I'll reuse the defaultTfvarAddons object throughout my code whenever I need to initialize an object of type TfvarAddonsType with default values.

Here's the complete example:

const defaultTfvarAddons: TfvarAddonsType = {
  appGateway: false,
  microsoftDefender: false,
  virtualNode: false,
  httpApplicationRouting: false,
};

// Example usage
const someComponentConfig: TfvarAddonsType = { ...defaultTfvarAddons };
// I can modify specific properties as needed
someComponentConfig.appGateway = true;

// Reusing default values in other parts of my code
const anotherComponentConfig: TfvarAddonsType = { ...defaultTfvarAddons };

This approach allows me to maintain and apply default values consistently throughout my code.

Websocket broken connection error

Sometimes the WebSocket connection breaks. Make sure the user is shown an appropriate message and steps to reconnect. (repro steps needed)

Break Terraform State Lock

If you know Terraform, you know that it often crashes. And when it does, the lock on the state file is often not removed. It's too much work to go to your storage account and break the lock. Add a button on the UI that does just that.

Maybe add another API in the storage account service which does exactly that.
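One way such an API could break the lock is with `az storage blob lease break`. The account and blob names below are placeholders, and the sketch only prints the command rather than running it.

```shell
#!/bin/bash
# Hypothetical sketch of breaking the Terraform state lock (a blob lease).
# Account and blob names are placeholders; the real state files live in the
# 'tfstate' container of the storage account in 'repro-project'.
ACCOUNT_NAME="mystorageaccount"     # placeholder
STATE_BLOB="terraform.tfstate"      # placeholder; depends on the workspace

cmd="az storage blob lease break \
  --account-name ${ACCOUNT_NAME} \
  --container-name tfstate \
  --blob-name ${STATE_BLOB}"

echo "would run: ${cmd}"
```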

Adding a deployment is creating defunct deployments or workspaces.

Right now, we are still using legacy add/delete workspace APIs.

  • UI adds a workspace and, on success, adds a deployment.
  • UI deletes a workspace and, on success, deletes a deployment.

The plan is to move all of this into a single API operation from the UI, handled by the server itself. That will give better control over rollbacks.

Subnets should be part of VirtualNetwork

To be able to provide the feature of adding multiple virtual networks, we need to fix the vnet and subnets relationship. I propose that subnets should be part of a specific vnet. So instead of

    "virtualNetworks": [
        {
            "addressSpace": [
                "10.1.0.0/16"
            ]
        }
    ],
    "subnets": [
        {
            "addressPrefixes": [
                "10.1.1.0/24"
            ],
            "name": "AzureFirewallSubnet"
        },
        {
            "addressPrefixes": [
                "10.1.2.0/24"
            ],
            "name": "JumpServerSubnet"
        },
        {
            "addressPrefixes": [
                "10.1.3.0/24"
            ],
            "name": "KubernetesSubnet"
        },
        {
            "addressPrefixes": [
                "10.1.4.0/24"
            ],
            "name": "AppGatewaySubnet"
        },
        {
            "addressPrefixes": [
                "10.1.5.0/24"
            ],
            "name": "AROMasterSubnet"
        },
        {
            "addressPrefixes": [
                "10.1.6.0/24"
            ],
            "name": "AROWorkerSubnet"
        },
        {
            "addressPrefixes": [
                "10.1.7.0/24"
            ],
            "name": "KubernetesVirtualNodeSubnet"
        }
    ],

it should be

    "virtualNetworks": [
        {
            "addressSpace": [
                "10.1.0.0/16"
            ],
            "subnets": [
                {
                    "addressPrefixes": [
                        "10.1.1.0/24"
                    ],
                    "name": "AzureFirewallSubnet"
                },
                {
                    "addressPrefixes": [
                        "10.1.2.0/24"
                    ],
                    "name": "JumpServerSubnet"
                },
                {
                    "addressPrefixes": [
                        "10.1.3.0/24"
                    ],
                    "name": "KubernetesSubnet"
                },
                {
                    "addressPrefixes": [
                        "10.1.4.0/24"
                    ],
                    "name": "AppGatewaySubnet"
                },
                {
                    "addressPrefixes": [
                        "10.1.5.0/24"
                    ],
                    "name": "AROMasterSubnet"
                },
                {
                    "addressPrefixes": [
                        "10.1.6.0/24"
                    ],
                    "name": "AROWorkerSubnet"
                },
                {
                    "addressPrefixes": [
                        "10.1.7.0/24"
                    ],
                    "name": "KubernetesVirtualNodeSubnet"
                }
            ]
        }
    ]

Enforce Workspace Naming.

Workspace names are inherited by resources, which impacts the naming requirements of those resources. For example, you can add a workspace named My_Loved_Workspace. But when creating a Log Analytics workspace in that workspace, the LAW will be named My_Loved_Workspace-law.

This will fail because Log Analytics workspace names can't contain an underscore ( _ ).

This is also a potential issue with resources like Key Vault, where the length of the name matters too.

A potential fix would be to enforce workspace naming restrictions.
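A possible enforcement sketch follows; the allowed pattern and length limits are assumptions, chosen to satisfy the strictest resources mentioned.

```shell
#!/bin/bash
# Hypothetical workspace-name validation; pattern and length limits are guesses.
isValidWorkspaceName() {
  name="$1"
  # lowercase letters, digits, and hyphens only
  case "$name" in
    *[!a-z0-9-]*) return 1 ;;
  esac
  # 3-16 characters, to leave room for suffixes like '-law'
  [ "${#name}" -ge 3 ] && [ "${#name}" -le 16 ]
}

isValidWorkspaceName "My_Loved_Workspace" || echo "invalid: rename the workspace"
```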

Azure Regions - Populate List Automatically.

Right now, users can go to Settings and update their preferred region. This is the region where they will primarily deploy resources. We can pre-populate this field with all available regions by getting data from this Azure CLI command:

az account list-locations

Immediate user feedback on Action.

Clicking 'Plan' (or any other action button) does not give appropriate user feedback instantly, causing confusion. Because the buttons are disabled only after the server's action status changes, the user can send multiple operations, causing issues.

Ensure that the buttons are disabled immediately (optimistic update) and the user is given feedback.

make vm size editable

There are many ways to achieve this.

  • Ability to edit the VM size.
  • Making it part of preferences.

Application Gateway Ingress Controller addon is not supported with Azure CNI Overlay


│ Error: creating Kubernetes Cluster (Subscription: "da846304-0089-48e0-bfa7-65f68a3eb74f"
│ Resource Group Name: "default-vmej-rg"
│ Kubernetes Cluster Name: "default-vmej-aks"): managedclusters.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="AddonInvalid" Message="Application Gateway Ingress Controller addon is not supported with Azure CNI Overlay" Target="networkProfile.networkPluginMode"

│ with azurerm_kubernetes_cluster.this[0],
│ on kubernetes_main.tf line 34, in resource "azurerm_kubernetes_cluster" "this":
│ 34: resource "azurerm_kubernetes_cluster" "this" {


[2023-11-03T22:03:46+0000]: ERROR - Terraform Apply Failed

