
S3 Dumps

Note: A rewritten fork of s3-backups.

S3 Dumps provides easy scripts that system administrators can use to back up data from programs such as PostgreSQL, Redis, and MySQL.

Documentation: http://s3-dumps.readthedocs.io/en/latest/

Installation

To install s3_dumps:

$ sudo pip install s3_dumps
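
To verify the installation, you can ask pip for the package metadata and check that the console scripts are on your PATH (standard pip and shell commands, nothing specific to s3_dumps):

$ pip show s3_dumps
$ which postgres_to_s3.py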

Usage

For Backup

Using the --backup flag, the script creates a dump and stores it in the bucket as-is, without a year/month/date directory structure.

--backup

For Archive

Using the --archive flag, the script takes all the files in the bucket and archives them into a year/month/date directory structure.

--archive

For Archive and Backup

Using the --backup and --archive flags together, the script archives all existing files in the bucket into a year/month/date directory structure and creates a new dump in the parent directory (inside the bucket).

--backup --archive

To dump to Amazon S3, set --SERVICE_NAME to 'amazon':

--SERVICE_NAME='amazon'

To dump to DigitalOcean Spaces, set --SERVICE_NAME to 'digitalocean':

--SERVICE_NAME='digitalocean'

Setting Up S3 Dumps to Run Automatically Using Cron

PostgreSQL

Add the following to the file /etc/cron.d/postgres_to_s3, then change the command arguments so that the command uses your correct AWS credentials, backup bucket, and base S3 key/base folder.

Amazon Services

0 */1 * * * postgres postgres_to_s3.py --SERVICE_NAME='amazon' --ACCESS_KEY='xxxxxxxxxxxxxxxxxxxx' --SECRET='xxxxxxxxxxxxxxxxxxxx' --REGION='bucket-region' --BUCKET_NAME='my-backup-bucket' --FILE_KEY='postgres/my-awesome-server' --backup

DigitalOcean Spaces

0 */1 * * * postgres postgres_to_s3.py --SERVICE_NAME='digitalocean' --ACCESS_KEY='xxxxxxxxxxxxxxxxxxxx' --SECRET='xxxxxxxxxxxxxxxxxxxx' --REGION='bucket-region' --BUCKET_NAME='my-backup-bucket' --FILE_KEY='postgres/my-awesome-server' --backup

To create a dump of a specific database (my-db-name):

0 */1 * * * postgres postgres_to_s3.py --SERVICE_NAME='amazon' --ACCESS_KEY='xxxxxxxxxxxxxxxxxxxx' --SECRET='xxxxxxxxxxxxxxxxxxxx' --REGION='bucket-region' --BUCKET_NAME='my-backup-bucket' --DB_NAME='my-db-name' --FILE_KEY='postgres/my-awesome-server' --backup

To back up and archive at the same time:

0 */1 * * * postgres postgres_to_s3.py --SERVICE_NAME='amazon' --ACCESS_KEY='xxxxxxxxxxxxxxxxxxxx' --SECRET='xxxxxxxxxxxxxxxxxxxx' --REGION='bucket-region' --BUCKET_NAME='my-backup-bucket' --FILE_KEY='postgres/my-awesome-server' --backup --archive

Redis

Add the following to the file /etc/cron.d/redis_to_s3, then change the command arguments so that the command uses your correct AWS credentials, backup bucket, and base S3 key/base folder.

Amazon Services

0 */1 * * * root redis_to_s3.py --SERVICE_NAME='amazon' --ACCESS_KEY='xxxxxxxxxxxxxxxxxxxx' --SECRET='xxxxxxxxxxxxxxxxxxxx' --REGION='bucket-region' --BUCKET_NAME='my-backup-bucket' --FILE_KEY='redis/my-awesome-server' --backup

DigitalOcean Spaces

0 */1 * * * root redis_to_s3.py --SERVICE_NAME='digitalocean' --ACCESS_KEY='xxxxxxxxxxxxxxxxxxxx' --SECRET='xxxxxxxxxxxxxxxxxxxx' --REGION='bucket-region' --BUCKET_NAME='my-backup-bucket' --FILE_KEY='redis/my-awesome-server' --backup

Optionally, set --REDIS_DUMP_DIR to the Redis dump directory configured on your system. If it is not provided, the default is used.

0 */1 * * * root redis_to_s3.py --SERVICE_NAME='amazon' --ACCESS_KEY='xxxxxxxxxxxxxxxxxxxx' --SECRET='xxxxxxxxxxxxxxxxxxxx' --REGION='bucket-region' --BUCKET_NAME='my-backup-bucket' --FILE_KEY='redis/my-awesome-server' --REDIS_DUMP_DIR='/Your/Redis/Config/Dir' --backup

To back up and archive at the same time:

0 */1 * * * root redis_to_s3.py --SERVICE_NAME='amazon' --ACCESS_KEY='xxxxxxxxxxxxxxxxxxxx' --SECRET='xxxxxxxxxxxxxxxxxxxx' --REGION='bucket-region' --BUCKET_NAME='my-backup-bucket' --FILE_KEY='redis/my-awesome-server' --REDIS_DUMP_DIR='/Your/Redis/Config/Dir' --REDIS_SAVE_CMD='redis-cli save' --backup --archive
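
MySQL

The package also ships a mysql_to_s3.py script (see the manual examples below). A sketch of an equivalent cron entry in /etc/cron.d/mysql_to_s3, assuming mysql_to_s3.py accepts the same arguments as the PostgreSQL and Redis scripts:

0 */1 * * * root mysql_to_s3.py --SERVICE_NAME='amazon' --ACCESS_KEY='xxxxxxxxxxxxxxxxxxxx' --SECRET='xxxxxxxxxxxxxxxxxxxx' --REGION='bucket-region' --BUCKET_NAME='my-backup-bucket' --FILE_KEY='mysql/my-awesome-server' --backup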

Manually Running Dumps and Archiving

When running the archive command, S3 Dumps moves backups into a year/month/date sub-folder (technically an S3 key); an example of the resulting key layout is shown after the list below.

The default archive mode will ...

  • keep all archives for 7 days
  • keep midnight backups for every other day for 30 days
  • keep the first day of the month forever
  • remove all other files that aren't scheduled to be kept
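
For illustration (a sketch only; assuming the year/month/date folders are created under the base FILE_KEY, and the exact file name depends on the dump timestamp), a backup archived on 2018-06-15 with --FILE_KEY='postgres/my-awesome-server' would end up under a key of the form:

postgres/my-awesome-server/2018/06/15/<dump-file>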

To back up PostgreSQL, run the following:

$ postgres_to_s3.py \
--SERVICE_NAME='amazon' \
--ACCESS_KEY='xxxxxxxxxxxxxxxxxxxx' \
--SECRET='xxxxxxxxxxxxxxxxxxxx' \
--REGION='bucket-region' \
--BUCKET_NAME='my-backup-bucket' \
--FILE_KEY='postgres/my-awesome-server' \
--backup

To archive PostgreSQL backups, run the following:

$ postgres_to_s3.py \
--SERVICE_NAME='amazon' \
--ACCESS_KEY='xxxxxxxxxxxxxxxxxxxx' \
--SECRET='xxxxxxxxxxxxxxxxxxxx' \
--REGION='bucket-region' \
--BUCKET_NAME='my-backup-bucket' \
--FILE_KEY='postgres/my-awesome-server' \
--archive

To back up Redis, run the following:

$ redis_to_s3.py \
--SERVICE_NAME='amazon' \
--ACCESS_KEY='xxxxxxxxxxxxxxxxxxxx' \
--SECRET='xxxxxxxxxxxxxxxxxxxx' \
--REGION='bucket-region' \
--BUCKET_NAME='my-backup-bucket' \
--FILE_KEY='redis/my-awesome-server' \
--REDIS_DUMP_DIR='/Your/Redis/Config/Dir' \
--REDIS_SAVE_CMD='redis-cli save' \
--backup

To archive Redis, run the following:

$ redis_to_s3.py \
--SERVICE_NAME='amazon' \
--ACCESS_KEY='xxxxxxxxxxxxxxxxxxxx' \
--SECRET='xxxxxxxxxxxxxxxxxxxx' \
--REGION='bucket-region' \
--BUCKET_NAME='my-backup-bucket' \
--FILE_KEY='redis/my-awesome-server' \
--REDIS_DUMP_DIR='/Your/Redis/Config/Dir' \
--REDIS_SAVE_CMD='redis-cli save' \
--archive

To back up MySQL, run the following:

$ mysql_to_s3.py \
--SERVICE_NAME='amazon' \
--ACCESS_KEY='xxxxxxxxxxxxxxxxxxxx' \
--SECRET='xxxxxxxxxxxxxxxxxxxx' \
--REGION='bucket-region' \
--BUCKET_NAME='my-backup-bucket' \
--FILE_KEY='mysql/my-awesome-server' \
--backup

To archive MySQL, run the following:

$ mysql_to_s3.py \
--SERVICE_NAME='amazon' \
--ACCESS_KEY='xxxxxxxxxxxxxxxxxxxx' \
--SECRET='xxxxxxxxxxxxxxxxxxxx' \
--REGION='bucket-region' \
--BUCKET_NAME='my-backup-bucket' \
--FILE_KEY='mysql/my-awesome-server' \
--archive

To Do's

  1. Add tests

Contributors

  1. Brent O'Connor
  2. Rakesh Gunduka
  3. Shekhar Tiwatne

