
node-s3-cli's Introduction

s3 cli

Command line utility frontend to node-s3-client. Inspired by s3cmd and attempts to be a drop-in replacement.

Features

  • Compatible with s3cmd's config file
  • Supports a subset of s3cmd's commands and parameters
    • including put, get, del, ls, sync, cp, mv
  • When syncing directories, it uploads many files in parallel instead of one at a time, making much better use of available bandwidth.
  • Uses multipart uploads for large files and uploads each part in parallel.
  • Retries on failure.

Install

sudo npm install -g s3-cli

Configuration

s3-cli is compatible with s3cmd's config file, so if you already have that configured, you're all set. Otherwise you can put this in ~/.s3cfg:

[default]
access_key = foo
secret_key = bar

You can also point it to another config file with e.g. $ s3-cli --config /path/to/s3cmd.conf.

Documentation

put

Uploads a file to S3. If no target filename is specified, the source filename is used.

Example:

s3-cli put /path/to/file s3://bucket/key/on/s3
s3-cli put /path/to/source-file s3://bucket/target-file

Options:

  • --acl-public or -P - Store objects with ACL allowing read for anyone.
  • --default-mime-type - Default MIME-type for stored objects. Application default is binary/octet-stream.
  • --no-guess-mime-type - Don't guess MIME-type and use the default type instead.
  • --add-header=NAME:VALUE - Add a given HTTP header to the upload request. Can be used multiple times. For instance, set the 'Expires' or 'Cache-Control' headers (or both) using this option if you like (see the example after this list).
  • --region=REGION-NAME - Specify the region (defaults to us-east-1)
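
For example, to upload a file with a public ACL and a Cache-Control header (the bucket, key, and header value below are placeholders, only meant to illustrate combining the options):

s3-cli put -P --add-header=Cache-Control:max-age=86400 /path/to/file s3://bucket/key/on/s3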

get

Downloads a file from S3.

Example:

s3-cli get s3://bucket/key/on/s3 /path/to/file

del

Deletes an object or a directory on S3.

Example:

s3-cli del [--recursive] s3://bucket/key/on/s3/

ls

Lists S3 objects.

Example:

s3-cli ls [--recursive] s3://mybucketname/this/is/the/key/

sync

Sync a local directory to S3

Example:

s3-cli sync [--delete-removed] /path/to/folder/ s3://bucket/key/on/s3/

Supports the same options as put.
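
For instance, to sync a folder with public ACLs in a specific region (the paths and bucket below are placeholders):

s3-cli sync -P --region=eu-west-1 --delete-removed /path/to/folder/ s3://bucket/key/on/s3/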

Sync a directory on S3 to disk

Example:

s3-cli sync [--delete-removed] s3://bucket/key/on/s3/ /path/to/folder/

cp

Copy an object which is already on S3.

Example:

s3-cli cp s3://sourcebucket/source/key s3://destbucket/dest/key

mv

Move an object which is already on S3.

Example:

s3-cli mv s3://sourcebucket/source/key s3://destbucket/dest/key

node-s3-cli's People

Contributors

andrewrk, jareware, mloar

node-s3-cli's Issues

Defaults to environment variables not in documentation

I spent a while wondering why it wasn't reading my config file. After looking at the code I found that it checks your environment variables first. I work in multiple AWS accounts and my environment variables were set up for a different account than the one I was accessing, but all I got was 'Error: http status code 403'. Perhaps the documentation should say it looks at your environment variables first, even if you specify a config file using --config. Thanks!
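
A possible workaround until the docs are updated, assuming the credentials being picked up are the standard AWS SDK environment variables, is to clear them for the invocation:

unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
s3-cli --config /path/to/s3cmd.conf ls s3://bucket/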

node-s3-cli with Digital Ocean Spaces

After searching around and following this comment, I managed to make node-s3-cli work with Digital Ocean Spaces (DO Spaces).
In the config file ~/.s3cfg just put the default configuration:

[default]
access_key = foo
secret_key = bar

Of course you have to change foo and bar to the credentials you receive from the DO API page.

Then in the file cli.js, usually found at:

/usr/local/lib/node_modules/s3-cli

you change the following:

  client = s3.createClient({
    s3Options: {
      accessKeyId: accessKeyId,
      secretAccessKey: secretAccessKey,
      sslEnabled: !args.insecure,
      region: args.region,
      endpoint: 'ams3.digitaloceanspaces.com',
    },
  });

Basically you add the endpoint line (endpoint: 'ams3.digitaloceanspaces.com'), changing it according to the location of your Space.

If I don't hardcode the endpoint I get this error message:

Error: The AWS Access Key Id you provided does not exist in our records.

Thank you!

Project specific config

I see that it looks for a config file in the user's home folder. What about project-specific configs? Or maybe passing the credentials on the command line. Is that possible?

Sync complaining one target must be local for relative paths

Does s3-cli not like relative paths?

s3-cli sync --delete-removed ./dist/staging s3://staging.example.org
one target must be from S3, the other must be from local file system.

s3-cli sync --delete-removed dist/staging s3://staging.example.org
one target must be from S3, the other must be from local file system.

I'd like to use this tool in my build process, and it's crucial that I can use relative paths (to CWD).
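
One possible (untested) workaround is to expand the path to an absolute one in the shell, which may satisfy the local-vs-S3 detection:

s3-cli sync --delete-removed "$(pwd)/dist/staging" s3://staging.example.org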

Out of memory on big directory sync

Hi,

I am trying to back up a 300 GB directory containing lots of files to an S3 Ceph radosgw. The source directory is on a Lustre file system. I replaced the findit library with findit2 and updated the endpoint of the S3 client to point at my Ceph S3 rados gateway. s3-cli starts fine and tries to list all the files to back up.

An option to skip this stage and start uploading directly would be good.
For example: when an S3 upload worker is free, it asks for the next file to upload, and a file system walker resumes from where it left off to find that file.
Another solution: a new findit3 that can work in parts. Every 100,000 file events, it pauses and waits for a call to its startNextPart() function. When an S3 worker finishes, it calls startNextPart(). When the end event is received, the S3 worker doesn't call startNextPart() and stops instead.

Out-of-memory error with the current s3-cli:

ROOT:/s3-cli > node cli.js --config s3.conf sync /my-big-dir s3://MY_BUCKET/
1301046 files
<--- Last few GCs --->

[10845:0x21afd30]   851433 ms: Mark-sweep 1385.4 (1448.3) -> 1385.3 (1450.8) MB, 1849.4 / 3.2 ms  allocation failure GC in old space requested
[10845:0x21afd30]   853877 ms: Mark-sweep 1385.3 (1450.8) -> 1385.3 (1413.3) MB, 2421.9 / 1.8 ms  last resort GC in old space requested
[10845:0x21afd30]   855909 ms: Mark-sweep 1385.3 (1413.3) -> 1385.3 (1412.3) MB, 2031.5 / 2.6 ms  last resort GC in old space requested


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0xe386a725739 <JSObject>
    1: /* anonymous */ [/s3-cli/node_modules/s3/lib/index.js:~1149] [pc=0xa91f3224238](this=0x21132216e859 <EventEmitter map = 0xeee3e8b4559>,file=0x2edeaec78c99 <String[92]: /path_to_a_file_to_backup>,stat=0xdc985fbbf61 <Stats map = 0xeee3e8bf609>)
    2: arguments adaptor frame: 3->2
    3: emit [...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: node::Abort() [node]
 2: 0x11ef43c [node]
 3: v8::Utils::ReportOOMFailure(char const*, bool) [node]
 4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [node]
 5: v8::internal::Factory::NewUninitializedFixedArray(int) [node]
 6: 0xdf0023 [node]
 7: v8::internal::Runtime_GrowArrayElements(int, v8::internal::Object**, v8::internal::Isolate*) [node]
 8: 0xa91f30842fd
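
As a stopgap while the listing stage still holds everything in memory, raising the V8 heap limit may get a large sync through; --max-old-space-size is a standard Node flag, and 4096 (MB) below is just an example value:

node --max-old-space-size=4096 cli.js --config s3.conf sync /my-big-dir s3://MY_BUCKET/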

Supply credentials from env vars

I'm looking at a way to deploy code through CI, but for that there needs to be an alternative way to supply creds. I currently get the error:

This utility needs a config file formatted the same as for s3cmd

What do you think about adding something that would allow:

AWS_ACCESS_KEY_ID=x AWS_SECRET_KEY=x s3cli

Great project by the way. The API is very clean.
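
A minimal sketch of what such an environment-variable fallback might look like in cli.js, assuming the variable names proposed above and the existing setup(secretAccessKey, accessKeyId) entry point (readConfigAndSetup below is a hypothetical stand-in for the current config-file path, not an actual function in cli.js):

var envAccessKey = process.env.AWS_ACCESS_KEY_ID;
var envSecretKey = process.env.AWS_SECRET_KEY || process.env.AWS_SECRET_ACCESS_KEY;
if (envAccessKey && envSecretKey) {
  // credentials supplied via the environment: skip the ~/.s3cfg requirement
  setup(envSecretKey, envAccessKey);
} else {
  // otherwise keep today's behaviour and parse the s3cmd-style config file
  readConfigAndSetup(); // hypothetical name for the existing config-file code path
}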

Not a valid S3 URL

No matter what I put in for a URL, the cli returns Not a valid S3 URL

How to set Expires HTTP Header? (blank spaces)

How do I set the params if there are spaces in the VALUE param?

Docs:
--add-header=NAME:VALUE

I'm trying this (doesn't work):
s3-cli sync -P --add-header=Cache-Control:public,max-age=31536000 --add-header=Expires:'Fri, 18 Nov 2016 11:00:00 GMT' static/ s3://my-bucket/static/

Error: Unexpected key 'Expires:Fri, 18 Nov 2016 11:00' found in params

not compatible with s3cmd

If the goal is to be a faster s3cmd with compatibility, there appears to be some way to go?

Unrecognized option(s): v, access_key, secret_key, host, host-bucket, check-md5, preserve, skip-existing

progress output makes no sense

Here's an example of some output:

191.5MB/1.7GB MD5, 114.8MB/125.5MB 91% done, 731.1KB/s

Later the output looks like this:

268.4MB/1.7GB MD5, 191.4MB/201.0MB 95% done, 906.5KB/s

So the first number is going up steadily, the second number is constant. Then there's the phrase "MD5", which I don't understand. I think that was supposed to be for back when it was checking md5sums, and it appears to be left over. Then we see a fraction where the numerator increases while the upload is going on, so it probably represents the number of bytes uploaded. The denominator keeps increasing in big chunks, probably representing new files being considered. The percent seems to be this fraction, so it keeps jittering around erratically in the 80%-90% range. Wouldn't a percent of the first fraction be better, which would better represent how done we are? Then there's the word "done" even though it's not done yet. Then a network speed, which makes sense.

sync seems to freeze up sometimes

I'm trying to upload 1.7GB to s3, and I keep having to restart the sync, because it seems the client freezes. For example, this is the output I see now:

436.7MB/1.7GB MD5, 371.0MB/371.0MB 100% done, 625.8KB/s

The speed on the right is steadily dropping, as though a sliding window is measuring more and more of a period of nothing happening.

The above is the 4th time this has happened during this synchronization attempt. I Ctrl+C and Up Enter, and it gets further each time.

Cannot use an S3 compatible Server like Ceph Radosgw

Consider using the host_base parameter of the s3cfg file when initializing s3.Client.

I hardcoded this to suit my case:

function setup(secretAccessKey, accessKeyId) {
  var maxSockets = parseInt(args['max-sockets'], 10);
  http.globalAgent.maxSockets = maxSockets;
  https.globalAgent.maxSockets = maxSockets;
  client = s3.createClient({
    s3Options: {
      accessKeyId: accessKeyId,
      secretAccessKey: secretAccessKey,
      sslEnabled: !args.insecure,
      endpoint: 'cephgw.my.com',
      region: args.region,
    },
  });
  var cmd = args._.shift();
  var fn = fns[cmd];
  if (!fn) fn = cmdHelp;
  fn();
}
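
A less invasive variant, sketched here and untested, would read the endpoint from the host_base key of the parsed s3cmd config instead of hardcoding it (the config variable name below is assumed, not taken from the actual cli.js):

  client = s3.createClient({
    s3Options: {
      accessKeyId: accessKeyId,
      secretAccessKey: secretAccessKey,
      sslEnabled: !args.insecure,
      region: args.region,
      // assumed: `config` holds the parsed .s3cfg; an undefined endpoint falls back to plain AWS
      endpoint: config.host_base,
    },
  });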

Special characters in the secret key cause failures

(This is not necessarily a bug in S3 CLI, but the bug does affect it.)

If the AWS secret key contains a special character (in my case, a [), the s3-cli commands do not work. Instead, the commands yield the error, Error: The request signature we calculated does not match the signature you provided. Check your key and signing method.

To work around the problem, issue yourself a new access key/secret key pair, where the latter does not have any special characters.

nodejs-legacy

Please consider mentioning in the Install section of README.md that one needs to run
sudo apt install nodejs-legacy
otherwise s3-cli fails with the error
/usr/bin/env: 'node': No such file or directory
Took me a while to figure out; this happens at least here, running Ubuntu 16.04.
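
On Debian/Ubuntu systems where nodejs-legacy isn't an option, an alternative that typically fixes the same /usr/bin/env error is to symlink the distro's nodejs binary to node (assuming the default /usr/bin/nodejs path):

sudo ln -s /usr/bin/nodejs /usr/local/bin/node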

Region

Could you update the readme to include the --region flag?

And is it possible to read the default region from .s3cfg rather than having to specify it on the command line every time? I tried to include it there but it doesn't seem to work.

Unrecognised options

$ s3-cli --region=eu-west-1 --access_key=${ACCESS_KEY} --secret_key="${SECRET_KEY}" sync s3://bucket/ ./
Usage: s3-cli (command) (command arguments)
Commands: sync ls help del put get cp mv
Unrecognized option(s): access_key, secret_key
