boto3 - aws_get_memcached_filtered by VPC
usage:
vpc_list = ["vpc1","vpc2","vpc3"]
getMemcachedList(vpc_list)
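A minimal sketch of what getMemcachedList might do (the function name comes from the usage above; the helper filter_clusters_by_vpc and the use of describe_cache_subnet_groups to map each cache subnet group to its VPC are assumptions, since ElastiCache clusters do not carry a VPC ID directly):

```python
def filter_clusters_by_vpc(clusters, subnet_group_to_vpc, vpc_list):
    """Pure helper (assumed): keep clusters whose subnet group belongs to one of the VPCs."""
    return [
        c["CacheClusterId"]
        for c in clusters
        if subnet_group_to_vpc.get(c.get("CacheSubnetGroupName")) in vpc_list
    ]

def getMemcachedList(vpc_list):
    """Assumed implementation: list memcached cluster IDs in the given VPCs via boto3."""
    import boto3  # deferred so the pure helper above is usable without AWS access
    ec = boto3.client("elasticache")
    # Cache subnet groups carry the VpcId, so build a subnet-group -> VPC map first.
    groups = ec.describe_cache_subnet_groups()["CacheSubnetGroups"]
    group_to_vpc = {g["CacheSubnetGroupName"]: g["VpcId"] for g in groups}
    clusters = [
        c for c in ec.describe_cache_clusters()["CacheClusters"]
        if c["Engine"] == "memcached"
    ]
    return filter_clusters_by_vpc(clusters, group_to_vpc, vpc_list)
```

With valid AWS credentials, `getMemcachedList(["vpc1", "vpc2", "vpc3"])` would return the matching cluster IDs, as in the usage above.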
boto3 - auto_scaling_group_filtered by VPC
usage:
asg_list = get_asg(vpcid)
# This writes the ASG counts to a file, which can be read back later to restore the MinSize and MaxSize of the ASGs automatically.
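The save/restore cycle described above could be sketched like this (the helper names, the JSON file format, and the restore loop are assumptions, not the script's actual code; only update_auto_scaling_group is a real boto3 call):

```python
import json

def save_asg_sizes(asgs, path):
    """Write each ASG's MinSize/MaxSize to a JSON file so they can be restored later."""
    sizes = {a["AutoScalingGroupName"]: {"MinSize": a["MinSize"], "MaxSize": a["MaxSize"]}
             for a in asgs}
    with open(path, "w") as f:
        json.dump(sizes, f)
    return sizes

def load_asg_sizes(path):
    """Read the previously saved sizes back from the JSON file."""
    with open(path) as f:
        return json.load(f)

def restore_asg_sizes(path):
    """Assumed restore step: apply the saved sizes back to each ASG via boto3."""
    import boto3  # deferred so the save/load helpers work without AWS access
    asg = boto3.client("autoscaling")
    for name, s in load_asg_sizes(path).items():
        asg.update_auto_scaling_group(
            AutoScalingGroupName=name, MinSize=s["MinSize"], MaxSize=s["MaxSize"])
```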
Controlling your AWS costs by deleting unused Amazon EBS volumes
get_volumes_status.py
Description:
- Get output with Volume ID/Creation Date/Attachments/Event name/Event Date
- Filter based on the requirement.
- Unused volumes can be filtered for deletion to save cost.
USAGE:
- Set the AWS credentials
- python get_volumes_status.py
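The core of such a script could look like the sketch below (the helper name unused_volume_ids and the print layout are illustrative; the event name/date fields mentioned in the description would come from a separate source such as CloudTrail and are not covered here):

```python
def unused_volume_ids(volumes):
    """Pure helper (assumed): volumes with no attachments ('available') are deletion candidates."""
    return [v["VolumeId"] for v in volumes
            if v.get("State") == "available" and not v.get("Attachments")]

def get_volumes_status():
    """Assumed sketch: report Volume ID / Creation Date / State / Attachments via boto3."""
    import boto3  # deferred so the pure helper above is usable without AWS access
    ec2 = boto3.client("ec2")
    vols = []
    # describe_volumes is paginated, so walk every page.
    for page in ec2.get_paginator("describe_volumes").paginate():
        vols.extend(page["Volumes"])
    for v in vols:
        print(v["VolumeId"], v["CreateTime"], v["State"], v.get("Attachments"))
    return unused_volume_ids(vols)
```

Deleting the returned candidates (e.g. with delete_volume) would be a separate, deliberate step after reviewing the output.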
jinja - jinja load and merge yml
usage:
  # usage.sls
  {% from "jinja_load_and_merge_yml.jinja" import config with context %}

  create_values_yaml:
    file.managed:
      - name: /opt/user.yaml
      - user: root
      - group: root
      - mode: 640
      - makedirs: True
      - contents: |
          item: {{ config.item1 }}
          userad: "{{ config.user_addrs }}"
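The merge step that a load-and-merge macro like this typically performs can be sketched in Python (the deep_merge helper and the sample dicts are illustrative, not the macro's actual code; in practice the two dicts would come from yaml.safe_load on the defaults file and the override file):

```python
def deep_merge(base, override):
    """Recursively merge override into base; override wins on scalar conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Stand-ins for yaml.safe_load(...) on a defaults file and an override file.
defaults = {"item1": "default", "user_addrs": ["a@example.com"]}
overrides = {"item1": "custom"}
config = deep_merge(defaults, overrides)
```

Nested keys present only in the base survive the merge, which is what makes layering a small override file on top of shared defaults practical.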
Terraform-example includes
1. Defining required versions of Terraform and the AWS provider
2. Usage of modules for different resources
3. AWS authentication provider
# Terraform first checks the environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY); if a profile is defined, it then looks in the default location ~/.aws/credentials, or in the specified credentials file when one is configured (if not found in the default location)
4. Custom tfvars file (terraform.tfvars is auto-loaded by default), which can be passed to the terraform command using the -var-file option
5. Usage of locals and data sources
kapitan : Generic templated configuration management for Kubernetes, Terraform etc.
spin up a container that has kapitan installed, with the kubernetes example checked out from git
commands to start the container:
a) go to the kapitan directory
b) run docker-compose up -d