stafdehat / scripts

Scripts I've written that I never want to lose. Mostly bash.

Shell 78.80% C++ 0.49% PHP 13.56% JavaScript 4.79% Python 2.36%

scripts's People

Contributors

gtmanfred, rsahoward, stafdehat


scripts's Issues

API token rotation breaks cloud-image-transfer.sh

Not 100% certain how I want to work around this yet. I'll have to test for 401s on every curl, then either switch to API-key auth instead of token auth, or prompt for input whenever a new token is needed. I'm not a huge fan of either option: user input means a long, tedious process is suddenly no longer unattended, and API-key auth means that bash shell histories are likely to contain non-expiring admin-level credentials.
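
A minimal sketch of the 401 check, assuming every API call goes through a wrapper function like the hypothetical api_get below (variable names are placeholders, not the script's actual ones):

# Hypothetical wrapper: retry a GET once with a fresh token if the API returns 401.
api_get() {
  local URL="$1" CODE
  CODE=$( curl -s -o /tmp/api-body.$$ -w "%{http_code}" \
               -H "X-Auth-Token: ${AUTHTOKEN}" "${URL}" )
  if [ "${CODE}" = "401" ]; then
    # Token expired mid-run - pause the otherwise-unattended flow for new input.
    read -r -p "API token expired.  Paste a new token: " AUTHTOKEN
    CODE=$( curl -s -o /tmp/api-body.$$ -w "%{http_code}" \
                 -H "X-Auth-Token: ${AUTHTOKEN}" "${URL}" )
  fi
  cat /tmp/api-body.$$; rm -f /tmp/api-body.$$
  [[ "${CODE}" =~ ^2 ]]   # succeed only on a 2xx response
}

Prompting still breaks the unattended case, of course - this only confines the damage to one place in the script.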

Consider removing 'nc' dependency

For cloud-image-transfer.sh.

We'd use /dev/tcp instead. I'm not 100% sure about compatibility with non-bash shells and non-Linux unix systems, though.

# Bash-only port check: redirecting to bash's /dev/tcp/<host>/<port> attempts a TCP connection.
if ( echo > /dev/tcp/google.com/80 ) &>/dev/null; then
  echo open
else
  echo closed
fi

Line 87 typos

Hey Andrew, just wanted to let you know that line 87 has a typo and is missing a closing quote.
This is for cloud-image-transfer.sh

Catch download failures in cloud-image-transfer.sh

Jose ran into an issue where an attempt to download a 128M segment created an empty file, and the script retried indefinitely. If there was an error from the API, I didn't catch it appropriately. Even if a completely unexpected issue occurred, we should at least bail after 10 tries instead of retrying indefinitely.
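
A rough sketch of a bounded retry, assuming the segment download is a single curl to a local file (variable names here are illustrative):

# Bail out after 10 failed attempts instead of looping forever.
TRIES=0
while true; do
  TRIES=$(( TRIES + 1 ))
  CODE=$( curl -s -o "${SEGFILE}" -w "%{http_code}" \
               -H "X-Auth-Token: ${AUTHTOKEN}" "${SEGURL}" )
  # Success means a 2xx response AND a non-empty file on disk.
  [[ "${CODE}" =~ ^2 && -s "${SEGFILE}" ]] && break
  if [ "${TRIES}" -ge 10 ]; then
    echo "Error: Failed to download segment after 10 attempts - giving up." >&2
    exit 1
  fi
  sleep 30
done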

Add -n to override DST image name

Add an optional command-line argument, "-n", so the user can give the image at the destination a different name than the original image.
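
A minimal getopts sketch of how the flag might slot in (the other option letters are copied from the usage example further down this page, purely as placeholders):

# Default: keep the original image name unless -n overrides it.
DSTIMGNAME=""
while getopts ":a:t:r:R:i:n:1" ARG; do
  case "${ARG}" in
    n) DSTIMGNAME="${OPTARG}" ;;
    # ...existing options handled here...
  esac
done
# Later, when creating the image at the destination:
# [ -n "${DSTIMGNAME}" ] || DSTIMGNAME="${SRCIMGNAME}"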

Consider migrating your scripts to SparrowHub

Hi @StafDehat! I have checked your repo and would suggest migrating your scripts to https://sparrowhub.org - an automation-scripts repository. Benefits you would gain:

  • scripts can be packaged, versioned, and distributed the same way many conventional package managers work - apt, yum, cpan, ruby gems, etc.
  • script configuration is provided out of the box in well-known formats - yaml, json, Config::General, command-line parameters, etc.
  • script documentation support - markdown, command line (like a Linux man page), a web interface for browsing documentation, links to the github source code, etc.
  • script authority comes from your sparrowhub account, so script users understand clearly who the maintainer is and whom to address issues to.

For examples of existing scripts, take a look at http://sparrowhub.org. Let me know and I'll help with the migration process.

PS. If it doesn't sound interesting, just close the issue.


Thanks. Alexey Melezhik, the author of SparrowHub.

Occasional 409 on container deletion

Occasionally in cloud-image-transfer.sh, when attempting to delete the source container, a 409 (conflict) error is detected and the script bails. I suspect the cause is an incomplete deletion of the objects within that container, which was probably caused by an incomplete container listing returned by the API. Consider using the 3x60-second listing strategy from the transfer step, and/or just reuse the same container listing the transfer produced. Maybe I'm already doing that? Regardless, this needs further investigation.
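
One possible mitigation, sketched on the assumption that the deletion is a single curl DELETE on the container: when a 409 comes back, re-list the container, delete whatever objects remain, and retry the container delete a couple of times before bailing.

# Retry container deletion, cleaning up straggler objects on each 409 (conflict).
for ATTEMPT in 1 2 3; do
  CODE=$( curl -s -o /dev/null -w "%{http_code}" -X DELETE \
               -H "X-Auth-Token: ${AUTHTOKEN}" "${FILESURL}/${CONTAINER}" )
  [[ "${CODE}" =~ ^2 ]] && break
  if [ "${CODE}" = "409" ]; then
    # Container isn't empty yet - re-list it and delete whatever remains.
    curl -s -H "X-Auth-Token: ${AUTHTOKEN}" "${FILESURL}/${CONTAINER}" |
      while read -r OBJECT; do
        curl -s -o /dev/null -X DELETE \
             -H "X-Auth-Token: ${AUTHTOKEN}" "${FILESURL}/${CONTAINER}/${OBJECT}"
      done
    sleep 60
  fi
done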

Use static manifest files

For cloud-image-transfer.sh, stop using a dynamic manifest - it has potential for false-positive name matches. Granted, the odds are astronomically small, but it's possible with dynamic, and not possible at all with static.
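
For reference, a static large object in Cloud Files (OpenStack Swift) is created by PUTting an explicit JSON segment list with ?multipart-manifest=put, instead of the X-Object-Manifest prefix header a dynamic manifest relies on. A rough sketch, with hypothetical names:

# manifest.json lists every segment explicitly - no prefix matching involved:
# [ {"path": "container/segment-000", "etag": "<md5>", "size_bytes": 1073741824}, ... ]
curl -s -H "X-Auth-Token: ${AUTHTOKEN}" \
     -T manifest.json \
     "${FILESURL}/${CONTAINER}/${IMAGEFILE}?multipart-manifest=put"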

cloud-image-transfer.sh detect >40G image

No way to estimate this until after the export task is complete, but once it does finish, check the total size of the Cloud Files container. If it's >40G, the import is going to fail and there's no sense transferring the image to Cloud Files in the destination region. Print a meaningful error message and suggest a workaround (resize + cat /dev/zero).
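
Checking the size should just be a HEAD on the export container; Swift reports the running total in the X-Container-Bytes-Used header. A sketch with placeholder names:

# 40G limit on image imports, in bytes.
MAXBYTES=$(( 40 * 1024 * 1024 * 1024 ))
BYTESUSED=$( curl -sI -H "X-Auth-Token: ${AUTHTOKEN}" "${FILESURL}/${CONTAINER}" |
             tr -d '\r' |
             awk -F': ' 'tolower($1)=="x-container-bytes-used" {print $2}' )
if [ "${BYTESUSED}" -gt "${MAXBYTES}" ]; then
  echo "Error: Exported image is larger than 40G - the import will fail." >&2
  echo "       Workaround: resize the source server down (and cat /dev/zero" >&2
  echo "       to reclaim free space) before taking a new image." >&2
  exit 1
fi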

Fix an error message

In cloud-image-transfer.sh, there's this error message:

Error: You won't be able to import this image at the destination,
because it was taken of a server with >40G OS disk. You'll need
to build a Standard NextGen server from this image at the source
region, resize it to <=2G RAM (<=40G disk), then take a new image
and transfer that new image instead.
Note: In order to resize down, you may need to first manually set
the min_disk and min_ram values on this image to <= 40 disk and
1024 RAM.

The min_disk value is set when the image is taken, based on the total VDI-chain size. Make this message more descriptive, and suggest fixes (resize + cat /dev/zero).
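
Lowering min_disk/min_ram by hand is an Images API v2 PATCH with the JSON-patch media type; roughly like the following (endpoint and variable names are placeholders):

# Drop min_disk to 40G and min_ram to 1024M so the image can land on a smaller flavor.
curl -s -X PATCH \
     -H "X-Auth-Token: ${AUTHTOKEN}" \
     -H "Content-Type: application/openstack-images-v2.1-json-patch" \
     -d '[{"op": "replace", "path": "/min_disk", "value": 40},
          {"op": "replace", "path": "/min_ram",  "value": 1024}]' \
     "${IMAGESURL}/images/${IMAGEID}"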

cloud-image-transfer.sh saves locally

Currently this script downloads a 1G segment, saves it locally, then uploads that 1G to the destination and deletes the local copy. This works, but it might be better to do an I/O stream, passing the curl output directly into the next curl's input. Look into it when you get a chance.
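
Streaming should be possible by piping the download straight into an upload that reads from stdin (curl's -T -); a sketch with hypothetical variable names:

# Stream a segment from the source region to the destination without touching local disk.
curl -s -H "X-Auth-Token: ${SRCAUTHTOKEN}" \
     "${SRCFILESURL}/${CONTAINER}/${SEGMENT}" |
  curl -s -T - \
       -H "X-Auth-Token: ${DSTAUTHTOKEN}" \
       "${DSTFILESURL}/${CONTAINER}/${SEGMENT}"

One trade-off: with a straight pipe there's no local copy to retry from, so a failed upload means re-downloading that segment.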

Quietly absorb 503 errors from API

In cloud-image-transfer.sh, see if we can detect random 503 errors from the API and recover gracefully instead of bailing. As-is, the current behaviour is the following:

2015-01-14 03:43:32 Waiting for completion - will check every 60 seconds.

Error: Unable to query task details - maybe API is unavailable?
       Maybe your API token just expired?
Script will attempt to retry.
Response data from API was as follows:

Response code: 503

Docs say this means the API was "unavailable":
http://docs.rackspace.com/images/api/v2/ci-devguide/content/GET_getTask_tasks__taskID__Image_Task_Calls.html#GET_getTask_tasks__taskID__Image_Task_Calls-Request
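
A sketch of absorbing the 503 and retrying (the surrounding polling loop and variable names are assumed):

# Treat 503 as a transient hiccup: log it, sleep, and poll again.
CODE=$( curl -s -o /tmp/task.$$ -w "%{http_code}" \
             -H "X-Auth-Token: ${AUTHTOKEN}" "${IMAGESURL}/tasks/${TASKID}" )
if [ "${CODE}" = "503" ]; then
  echo "API returned 503 (unavailable) - will retry in 60 seconds."
  sleep 60
  continue   # back to the top of the polling loop
elif [[ ! "${CODE}" =~ ^2 ]]; then
  echo "Error: Unable to query task details (response code ${CODE})." >&2
  exit 1
fi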

cloud-files-createobject.sh vault name detection

Vault names have changed from "MossoCloudFS_xxxx" to just the tenant ID. Somehow I need to detect the correct vault name by region. Sadly, this is probably going to require JSON-awareness within bash, which means I need to roll in my bash-based JSON-to-SNMP-OID compiler, or add a dependency. Hopefully not.
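
One approach that might dodge a JSON parser entirely: the auth response's service catalog already contains the per-region Cloud Files publicURL, and the vault name is its last path component. A deliberately crude grep/sed sketch, assuming the auth response is saved to a file and that the storage hostnames still embed the region code:

# Pull every publicURL out of the saved auth response, keep the Cloud Files one
# for this region, then take the last path component as the vault name.
VAULT=$( grep -o '"publicURL":"[^"]*"' /tmp/auth-response.json |
         grep -i "storage.*${REGION}" | head -n1 |
         sed 's#.*/##; s#"##g' )
echo "Vault name: ${VAULT}"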

Getting 401 when running the script

Came across your script late last night while trying to clone a VM across regions to help with the mass reboots. Kept getting a 401 when I run it and can't seem to figure out why. Here's how I'm running it:

./cloud-image-transfer.sh -a <account_api_key> -t <account_num> -1 -r ord -i <image_uuid> -R dfw

where the <> placeholders are the actual values from my account.

This is the output:

Attempting to authenticate against Identity API with source account info.
Error: Unable to authenticate against API using SRCAUTHTOKEN and SRCTENANTID
provided.  Raw response data from API was the following:

Response code: 401
401
----------------------------------------
Script exited prematurely.
You may need to manually delete the following:
----------------------------------------

I figure the 401 means I wasn't authenticating properly, but I know for sure I'm using the right API key and account number, because I can use them to interact with the API directly via curl. If you could let me know what I'm doing wrong, I'd greatly appreciate it for next time.
