cloudavail / aws-missing-tools
Tools for managing AWS resources including EC2, EBS, RDS, IAM, CloudFormation and Route53.
Backing up by selecting volume seems to work:
$ ec2-automate-backup-awscli.sh -r us-west-2 -k 30 -v vol-abcd5678
Snapshots taken by ec2-automate-backup-awscli.sh will be eligible for purging after the following date (the purge after date given in seconds from epoch): 1424711645.
Tagging Snapshot snap-abcd5678 with the following Tags: Key=PurgeAfterFE,Value=1424711645 Key=PurgeAllow,Value=true
Backing up by selecting tags seems to fail (no output or snap created):
$ ec2-automate-backup-awscli.sh -r us-west-2 -k 30 -s tag -t "Backup-Daily=true" -h
Snapshots taken by ec2-automate-backup-awscli.sh will be eligible for purging after the following date (the purge after date given in seconds from epoch): 1424711461.
I think I have the volume tagged correctly:
$ aws ec2 describe-volumes --volume-ids vol-abcd5678
{
    "Volumes": [
        {
            "AvailabilityZone": "us-west-2a",
            "Attachments": [
                {
                    "AttachTime": "2012-12-01T01:20:58.000Z",
                    "InstanceId": "i-abcd5678",
                    "VolumeId": "vol-abcd5678",
                    "State": "attached",
                    "DeleteOnTermination": true,
                    "Device": "/dev/sda1"
                }
            ],
            "Tags": [
                {
                    "Value": "true",
                    "Key": "Backup-Daily"
                },
                {
                    "Value": "a.iam.vpc",
                    "Key": "Name"
                }
            ],
            "Encrypted": false,
            "VolumeType": "standard",
            "VolumeId": "vol-abcd5678",
            "State": "in-use",
            "SnapshotId": "snap-abcd5678",
            "CreateTime": "2012-12-01T01:20:58.000Z",
            "Size": 8
        }
    ]
}
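When tag-based selection produces no snapshots and no error, one way to narrow it down is to run the tag filter by hand, outside the script. A sketch (the filter string is my reconstruction of what "-s tag -t Backup-Daily=true" should translate to in awscli `--filters` syntax; region and tag are from this report):

```shell
# Build the describe-volumes filter that tag selection should be using and
# print the command so it can be run manually; if the printed command
# returns no VOLUMES lines, the script's selection string (not the
# volume's tagging) is the likely culprit.
tag_filter="Name=tag:Backup-Daily,Values=true"
check_cmd="aws ec2 describe-volumes --region us-west-2 --filters ${tag_filter} --output text"
echo "$check_cmd"
```

If the hand-run command does return the volume, the bug is inside the script's tag-to-filter translation rather than in AWS.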
This is actually a bug with AWS but I wanted to open an issue here so that anyone using aws-ha-release.sh might see this.
It seems that when an instance is being spun up, the ELB health checks first fail, then pass (a false positive; they can't possibly pass because Passenger (in my case) is still starting up), then fail, and then pass (a true positive; web requests are actually being processed).
aws-ha-release sees the first pass and thinks the instance is healthy and so moves on before the instance is actually healthy.
I'm checking to see if there's already a bug post with AWS; if not, I'll make one.
We could add some sort of tolerance to aws-ha-release where it requires an instance to be in service for some amount of time before moving forward.
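That tolerance could be sketched as requiring several consecutive InService results before moving on, so the early false positive is ridden out. All names and values here are hypothetical, not the tool's actual code:

```shell
# Treat an instance as healthy only after it reports InService for several
# consecutive polls; any other state resets the streak.
required_consecutive=3   # hypothetical tolerance: consecutive healthy polls
poll_interval=1          # seconds between polls; would be larger in practice
consecutive=0
get_instance_state() {
  # Stub standing in for the real ELB health check (e.g. parsing
  # elb-describe-instance-health output); always healthy for illustration.
  echo "InService"
}
while [ "$consecutive" -lt "$required_consecutive" ]; do
  if [ "$(get_instance_state)" = "InService" ]; then
    consecutive=$((consecutive + 1))
  else
    consecutive=0   # a failed check restarts the required streak
  fi
  sleep "$poll_interval"
done
```

With a real health check substituted in, the first transient pass described above would reset to zero on the following fail and never satisfy the streak.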
Command ec2-automate-backup-awscli.sh -r us-east-1 -s tag -t Backup,Values=true is not working on Debian GNU/Linux 7 (wheezy), and no message is generated.
Strangely, ec2-automate-backup-awscli.sh -v "vol-6d6a0527" works properly.
I probably don't have things set up right, but I'm getting the following error when I try to run a backup.
ubuntu@ip-10-0-0-102:~/aws-missing-tools-master/ec2-automate-backup$ sudo sh /home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh -s tag "Backup-Daily=true"
/home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: 14: /home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: [[: not found
/home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: 14: /home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: [[: not found
/home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: 14: /home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: [[: not found
/home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: 14: /home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: [[: not found
/home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: 14: /home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: [[: not found
/home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: 14: /home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: [[: not found
/home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: 153: /home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: [[: not found
/home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: 31: /home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: [[: not found
/home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: 42: /home/ubuntu/aws-missing-tools-master/ec2-automate-backup/ec2-automate-backup.sh: [[: not found
I have ec2-api-tools installed. Version 1.5.0.0
ubuntu@ip-10-0-0-102:~/aws-missing-tools-master/ec2-automate-backup$ ec2-version
1.5.0.0 2011-11-01
What am I missing or doing wrong? Is there anything special I need to do to set up your scripts? What directory do I put them in?
Thanks!
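The `[[: not found` lines above mean the script is being parsed by `sh` (which is dash on Ubuntu/Debian), and `[[` is a bash builtin that dash doesn't have. A minimal demonstration of the mismatch:

```shell
# [[ is a bash builtin, not POSIX, so dash fails on it exactly as in the
# errors above, while bash parses it fine.
printf 'if [[ -n "x" ]]; then echo bash-ok; fi\n' > /tmp/bashism-check.sh
result=$(bash /tmp/bashism-check.sh)   # -> bash-ok
echo "$result"
# sh /tmp/bashism-check.sh             # on dash this prints "[[: not found"
```

So instead of `sudo sh .../ec2-automate-backup.sh ...`, run the script with `sudo bash .../ec2-automate-backup.sh ...`, or `chmod +x` it and execute it directly so its own shebang picks the interpreter.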
Example Use:
as-ha-release -a
Example Output:
Auto Scaling Group "my-scaling-group" found.
Auto Scaling Group "my-scaling-group" is currently 2 instances.
Auto Scaling Group "my-scaling-group" will be scaled to 3 instances (the "new" desired capacity), at which point the first of the "old" instances will be removed from service when the number of machines is equal to "new" desired capacity.
For each record that points to an obvious AWS resource, find out if that resource exists.
Hi,
I bill my clients for all the resources they use on AWS, including snapshots that I create through ec2-automate-backup (which works really nicely, by the way, thank you!). I use cost allocation, so I have a strategy based on Name and Client tags. However, I have not been able to add custom key => value tags when using ec2-automate-backup.
Did I miss an option?
Thanks !
Currently, the tool simply updates the launch config. With the "-f" flag the tool should remove the instances forcefully.
The tool should allow modification of the volume size of an existing root volume.
Moves a single box into a VPC by snapshotting its EBS volumes and restoring them into the new VPC.
A couple of changes that should be made:
The tool for automated backups should be ported to work with RDS also. This also includes the option of hourly backups.
from email:
Is it possible for you to maybe extend it (ec2-automate-backup) to also include hours in the purge and backup plans? We have a few important servers which we want to do a snapshot backup of every 3 hour etc and have those hourly backups purged after 12 hours.
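The 3-hour cadence itself can already be driven from cron; the missing piece is hour-granular purging, since the purge window is expressed in days. A crontab sketch under that assumption (path and tag name hypothetical):

```shell
# Snapshot every 3 hours; the retention flag is still day-based because the
# script has no hourly purge option yet (the enhancement requested above).
0 */3 * * * /opt/aws-missing-tools/ec2-automate-backup-awscli.sh -r us-east-1 -s tag -t "Backup-3h=true" -k 1 -p
```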
Tool to use AWS resources (S3, Route53 and CloudFront) for a URL shortener. Ruby script that takes the target URL, the destination location, and the desired "stub" file name.
Example:
./s3-cost-calculate
Returns:
Bucket | Items | Cost
cdnback.mycompany.com | 7334 | $2.00
backup.mycompany.com | 7332 | $2.30
web.mycompany.com | 7338 | $2.50
Any plans to publish this gem to RubyGems now that it's been (partially) "gemified"?
Confirm that ELB AZs and ASG AZs are Equal Prior to Release - if the AZs covered by the ELB and the ASG are not equal, it is possible that an instance is scaled into the ASG but never increases the number of healthy ELB instances, causing aws-ha-release to fail.
sent the pull request
AWS seemingly randomly assigns AZs - one customer's us-east-1a might be another customer's us-east-1b - so a tool to determine the actual AZ would be useful, as some AZs allow particular instance types or offer potentially improved performance.
The current awscli.sh script is fetching the incorrect column for the volume id, using the result from:
aws ec2 describe-volumes --output text | grep VOLUMES
cut -f 7 - returns 'in-use'...
cut -f 8 - returns the volume id >> this is what we want
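The underlying fragility is that field positions in `--output text` shift as the API adds columns. A demonstration with a sample line (the column layout here is illustrative, arranged to match the field positions this report describes), plus the positional-free alternative:

```shell
# A VOLUMES line shaped like the output described in this issue; real
# describe-volumes text output has shifted columns between CLI versions,
# which is exactly why a fixed cut field keeps breaking.
volumes_line=$(printf 'VOLUMES\tus-west-2a\t2012-12-01\tfalse\t8\tsnap-abcd5678\tin-use\tvol-abcd5678\tstandard')
echo "$volumes_line" | cut -f 7   # -> in-use (the State column)
echo "$volumes_line" | cut -f 8   # -> vol-abcd5678 (the id we actually want)
# Pinning the attribute by name avoids positional breakage entirely:
# aws ec2 describe-volumes --output text --query 'Volumes[].VolumeId'
```

A JMESPath `--query` names the attribute rather than its position, so added or reordered columns no longer matter.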
from Email:
Hi Colin,
Thanks for the awesome backup script! The tagging feature is exactly what we needed.
I have an issue though where unless I specify the region the script will not work. Is there a way to include all regions?
Thanks,
Ben
./route53-migrate-zone.py
Traceback (most recent call last):
File "./route53-migrate-zone.py", line 42, in
log_level = str.upper(args.loglevel)
TypeError: descriptor 'upper' requires a 'str' object but received a 'NoneType'
Trying this:
./route53-migrate-zone.py --loglevel INFO
works
I noticed that the script was not tagging instances correctly
ran "bash -x /home/appsupp/ec2-automate-backup-awscli.sh -r us-east-1 -s tag -t 'Backuptest,Values=true' -k 31 -n -h -u"
noticed the following:
checked where the script was pulling this value and found
I modified line 210
From:
210 ec2_snapshot_resource_id=$(echo "$ec2_create_snapshot_result" | cut -f 5)
To:
210 ec2_snapshot_resource_id=$(echo "$ec2_create_snapshot_result" | cut -f 4)
Script is now tagging correctly
Update
As of Oct 9th 2014, instances had stopped tagging again. Updated the cut value to 'cut -f 3' and it is tagging correctly again.
Hi, I need to schedule one snapshot a day and keep them for a maximum of 7 days; which parameters should I use? I did some testing with the -k and -p parameters, but every time I launch the script with -k 7, the script deletes the snapshot from the previous day.
I installed and tried running it. I get:
In order to use ec2-automate-backup.sh, the executable "ec2-create-snapshot" must be installed
Where do I download that, and where do I put it? It's not on my EC2 server.
I'm on Amazon API tools version 1.5, Ubuntu 12.04, grep 2.10, date 8.13.
The grep was not matching to pull purge dates. I had to change the match for purgeallowed to be the following:
snapshot_purge_allowed=`echo "$snapshot_tag_list" | grep ".*PurgeAllow\s*true" | cut -f 3`
and change the match for the purgeafter to be
purge_after_date=`echo "$snapshot_tag_list" | grep ".*$snapshot_id_evaluated\s*PurgeAfter.*" | cut -f 5`
Basically, I replaced '\t' with '\s*'.
Also, there was an issue with converting the date to epoch; I was getting an error stating there is no -j option. I had to change the date conversion to:
date_current_epoch=`date +%s -d"$date_current"`
purge_after_date_epoch=`date +%s -d"$purge_after_date"`
With those changes the purge is working ok.
aws-combined-queue --queues "queue-name1, queue-name2" --metric sum
The cut command for getting the snapshot resource id is using the wrong field number (using 3 instead of 4). The command should be
cut -f 4
in an email from a user regarding as-update-launch-config:
Only thing to report: if the user didn't set --key or --group when creating the launch config, then running as-update-launch-config fails with errors like: The Auto Scaling Group "xxxx" uses the security group "(nil)". The Auto Scaling Group "xxxx" uses the key "(nil)".
Tool will be used to do any of the following:
Hi
Thanks for these scripts.
I have an IOPS-configured volume on one of my instances, and I am using ec2-automate-backup-awscli.sh. It works fine
for the standard volumes, automating the snapshot process.
When it comes to the IOPS volumes, it throws the following error:
Snapshots taken by ec2-automate-backup-awscli.sh will be eligible for purging after the following date: 2014-03-27.
An error occurred when running ec2-create-snapshot. The error returned is below:
A client error (InvalidParameterValue) occurred when calling the CreateSnapshot operation: Value (in-use) for parameter volumeId is invalid. Expected: 'vol-...'.
Can someone help with what the issue might be? The same script works fine for the other standard volumes.
I am sure the volume id I pass as the argument is accurate.
When I contacted AWS support, they told me to check the script.
Below is his reply,
"The problem is with these lines in the script:
line 41: ebs_backup_list_complete=`aws ec2 describe-volumes --region $region $ebs_selection_string --output text`
...
line 48: ebs_backup_list=`echo "$ebs_backup_list_complete" | grep ^VOLUMES | cut -f 7`
Line 48 is blindly taking the 7th field from the output of 'aws ec2 describe-volumes'."
Please help me to proceed further.
Thank you
Siva
I'd like the Instance ID, Instance Name, and original volume Name to be included both as tags and in the description. This would help in managing snapshots and finding the snapshot you need quickly.
Something like this would work for tags:
InstanceID
InstanceName
Volume
Additionally, it would be helpful if you also allowed setting additional tags on the command line (to be applied to the snapshot) and inheriting tags from the original volume. For example, today I use ec2-copy-snapshot to replicate certain snapshots, based on their tags, to other regions.
It also might be helpful to have the Instance Name or Volume Name in the name of the snapshot.
From user feedback: ec2-automate-backup's cron-primer.sh file needs to include EC2_PRIVATE_KEY and EC2_CERT.
I downloaded "ec2-automate-backup", uploaded it to my ec2-user folder, and executed it like this:
/home/ec2-user/ec2-automate-backup/ec2-automate-backup.sh -s "volumeid" -v "vol-XXXXXXXX"
now it keeps returning:
"The selection method "volumeid" (which is ec2-automate-backup.sh's default
selection_method of operation or requested by using the -s volumeid parameter)
requires a volumeid (-v volumeid) for operation. Correct usage is as follows:
"-v vol-6d6a0527","-s volumeid -v vol-6d6a0527" or "-v "vol-6d6a0527
vol-636a0112"" if multiple volumes are to be selected."
Is it a bug, or did I do something wrong?
On Particular Systems:
Need to modify:
purge_after_date=`date -v+${purge_after_days}d -u +%Y-%m-%d`
Becomes:
purge_after_date=`date -d "+ ${purge_after_days} days" -u +%Y-%m-%d`
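The two forms differ because GNU date (Linux) takes `-d "+ N days"` while BSD date (macOS and others) takes `-v+Nd`. A portable sketch that probes which flavor is installed before computing the date:

```shell
# Pick whichever relative-date syntax the local date(1) supports.
purge_after_days=7
if date -u -d "+ ${purge_after_days} days" +%Y-%m-%d >/dev/null 2>&1; then
  purge_after_date=$(date -u -d "+ ${purge_after_days} days" +%Y-%m-%d)  # GNU
else
  purge_after_date=$(date -u -v+${purge_after_days}d +%Y-%m-%d)          # BSD
fi
echo "$purge_after_date"
```

The same probe-then-branch approach would cover the `-j` epoch-conversion difference reported earlier in this page.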
To login to an instance:
1. as-describe-auto-scaling-groups --max-records 100 | grep -i asgname (get instance ids)
2. ec2din i-xxxxxxx (get IP address)
3. ssh -i /path/to/key/key.pem user@<instance-ip> (actually log in)
Shorten to:
asgLogin.sh asgname/partialasgname region
Return:
ASG: ASG-Name-1
ASG Member Instances:
1. DOMAIN: ec2-107-22-50-220.compute-1.amazonaws.com | ID: i-6d13f808 | ZONE: us-east-1b | STATUS: running | START TIME: 2012-01-24T14:47:27+0000
2. DOMAIN: ec2-23-20-39-50.compute-1.amazonaws.com | ID: i-7fb9531a | ZONE: us-east-1c | STATUS: running | START TIME: 2012-01-24T16:08:36+0000
3. DOMAIN: ec2-107-20-41-40.compute-1.amazonaws.com | ID: i-ff20ca9a | ZONE: us-east-1d | STATUS: running | START TIME: 2012-01-24T17:54:15+0000
4. DOMAIN: ec2-184-73-151-60.compute-1.amazonaws.com | ID: i-df24ceba | ZONE: us-east-1a | STATUS: running | START TIME: 2012-01-24T17:57:17+0000
ASG: ASG-Name-2
ASG Member Instances:
5. DOMAIN: ec2-23-20-16-102.compute-1.amazonaws.com | ID: i-04372466 | ZONE: us-east-1b | STATUS: running | START TIME: 2012-01-21T20:54:19+0000
Enter instance number into which you want to SSH (or 0 to exit):
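The menu portion of the proposed tool could be sketched as below, with the discovery stubbed out (in the real tool the host list would come from as-describe-auto-scaling-groups plus ec2din, as in the three manual steps above; hostnames here are just the examples from this request):

```shell
# Number the ASG member hosts and print a selectable menu; the ssh step is
# left commented since it depends on a real key path and a chosen host.
hosts="ec2-107-22-50-220.compute-1.amazonaws.com
ec2-23-20-39-50.compute-1.amazonaws.com"
n=0
echo "ASG Member Instances:"
for host in $hosts; do
  n=$((n + 1))
  echo "${n}. DOMAIN: ${host}"
done
# read -r choice
# selected=$(printf '%s\n' $hosts | sed -n "${choice}p")
# ssh -i /path/to/key/key.pem "ubuntu@${selected}"
```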
I got the error message:
root ~ $ ./ec2-automate-backup.sh -r sa-east-1b -v vol-b4ddceb2
An error occurred when running ec2-describe-volumes. The error returned is below:
Unknown problem connecting to host: 'https://ec2.sa-east-1b.amazonaws.com'
Unable to execute HTTP request: ec2.sa-east-1b.amazonaws.com
Help Please?
Desired Enhancement:
for Elastic IP:
1. Instance Comes Up, Runs as-associate -i 12.12.5.27
2. Determines if IP is available.
3. Associates IP to self if available, or forces association if -f argument is given.
for EBS:
1. Instance Comes Up, Runs as-associate -v vol-l0527mjs
2. Determines if volume is attached.
3. Associates volume to self if available, or forces association if -f argument is given.
as-ha-release - Confirm Support for Multiple ELBs - currently may only support a single ELB.
aws-ha-release calls as-suspend-processes and as-resume-processes but doesn't pass the region to them. That means if I'm working with an autoscaling group outside of my default region, it won't suspend processes even if I pass the correct region to aws-ha-release.
The script uses us-east-1 as the default value if there is no region value set on the command line or environment variable. This does not take into account the config file now supported by the AWS CLI commands.
Not sure the best way to handle this. I suggest the following precedence:
Example of Desired Use:
Why:
allows running through cron or as a user that does not have an environment variable defined.
Note:
The choice of "--aws-credential-file" may be a bit odd (as opposed to --credential file, -c), but stays consistent with Amazon's CLI tools.
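One possible resolution order (a sketch only, since the exact precedence is the open question here; variable names are hypothetical): explicit -r flag, then AWS_DEFAULT_REGION, then the AWS CLI config file, then the current hard default.

```shell
# Resolve the region from the first source that provides one:
# command line > environment > AWS CLI config file > us-east-1 fallback.
cli_region=""                                       # would be set by -r
config_region=$(aws configure get region 2>/dev/null || true)
region=${cli_region:-${AWS_DEFAULT_REGION:-${config_region:-us-east-1}}}
echo "$region"
```

`aws configure get region` reads the same config file the newer CLI commands honor, so this keeps the script consistent with user expectations without removing the existing default.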
To test, run the route53-migrate-zone migration tool twice. It will migrate all records. The second run will attempt to send an empty change set and throw the following error:
An error occurred when attempting to commit records to the zone "dest_test.com"
The error message given was: Invalid XML ; cvc-complex-type.2.4.b: The content of element 'Changes' is not complete. One of '{"https://route53.amazonaws.com/doc/2012-02-29/":Change}' is expected..
Here is the info you requested from the user in issue #43.
I am using the latest version of the script modified 8 days ago.
[root@ip-ec2-automate-backup]# ./ec2-automate-backup.sh -v"vol-c12eXXXX" -s "volumeid" -r us-west-2
[root@ip-ec2-automate-backup]# ./ec2-automate-backup-awscli.sh -v"vol-c12eXXXX" -s "volumeid" -r us-west-2
An error occurred when running ec2-create-snapshot. The error returned is below:
A client error (InvalidParameterValue) occurred when calling the CreateSnapshot operation: Value (standard) for parameter volumeId is invalid. Expected: 'vol-...'.
[root@ip-ec2-automate-backup]#
root@ip-ec2-automate-backup]# bash -x ./ec2-automate-backup-awscli.sh -v"vol-c12eXXXX" -s "volumeid" -r us-west-2
++ basename ./ec2-automate-backup-awscli.sh
A client error (InvalidParameterValue) occurred when calling the CreateSnapshot operation: Value (standard) for parameter volumeId is invalid. Expected: 'vol-...'.
It would be nice if you could name the LC yourself
rather than date_1,
because now I can't tell the machine size from the LC name.
What is the format these scripts are looking for?
Hi,
I am using this script in my AWS account.
Taking snapshots works fine, thanks for it.
I am running the below command from the cron and executing the script,
01 16 * * * /home/ubuntu/ec2-automate-backup-awscli.sh -v vol-12ab2345 -r ap-southeast-1 -k 01 -p > /home/ubuntu/snapshot.log 2>&1
And the log is some thing like this
"Snapshots taken by ec2-automate-backup-awscli.sh will be eligible for purging after the following date: 2014-03-27.
Tagging Snapshot None with the following Tags: Key=PurgeAfter,Value=2014-03-27 Key=PurgeAllow,Value=true
Snapshot Purging is Starting Now."
But I realized it's not deleting the snapshots it created.
I give only one day before my snapshot should be deleted, but even after 3 days it is not getting deleted.
Help me,
Thank you,
Siva
The tag syntax isn't correct in the example
ec2-automate-backup-awscli.sh -r us-east-1 -s tag -t 'Name=tag:Backup,Values=True' -k 31 -p -n
Should be
ec2-automate-backup-awscli.sh -r us-east-1 -s tag -t 'Backup,Values=True' -k 31 -p -n
The script prepends the "Name=tag:" implicitly