
knox's Introduction

knox

Node Amazon S3 Client.

Features

  • Familiar API (client.get(), client.put(), etc.)
  • Very Node-like low-level request capabilities via http.Client
  • Higher-level API with client.putStream(), client.getFile(), etc.
  • Copying and multi-file delete support
  • Streaming file upload and direct stream-piping support

Examples

The following examples demonstrate some capabilities of knox and the S3 REST API. First things first, create an S3 client:

var knox = require('knox');
var client = knox.createClient({
    key: '<api-key-here>'
  , secret: '<secret-here>'
  , bucket: 'learnboost'
});

More options are documented below for features like other endpoints or regions.

PUT

If you want to directly upload some strings to S3, you can use the Client#put method with a string or buffer, just like you would for any http.Client request. You pass in the filename as the first parameter and some headers as the second, then listen for a 'response' event on the request and send the request using req.end(). If we get a 200 response, great!

If you send a string, set Content-Length to the byte length of the string (e.g. via Buffer.byteLength), not to the string's character length, since multi-byte characters occupy more than one byte.

var object = { foo: "bar" };
var string = JSON.stringify(object);
var req = client.put('/test/obj.json', {
    'Content-Length': Buffer.byteLength(string)
  , 'Content-Type': 'application/json'
});
req.on('response', function(res){
  if (200 == res.statusCode) {
    console.log('saved to %s', req.url);
  }
});
req.end(string);

By default the x-amz-acl header is set to private. To alter this, simply pass the desired x-amz-acl header to the client request method:

client.put('/test/obj.json', { 'x-amz-acl': 'public-read' });
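
For instance, here is the earlier PUT example again with a public-read ACL added; this is just a sketch combining the pieces already shown above:

var string = JSON.stringify({ foo: "bar" });
var req = client.put('/test/obj.json', {
    'Content-Length': Buffer.byteLength(string)
  , 'Content-Type': 'application/json'
  , 'x-amz-acl': 'public-read'
});
req.on('response', function(res){
  if (200 == res.statusCode) {
    console.log('saved to %s', req.url);
  }
});
req.end(string);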

Each HTTP verb has an alternate method with the "File" suffix; for example, put() has a higher-level counterpart named putFile(), which accepts a source filename and performs the dirty work shown above for you. Here is an example usage:

client.putFile('my.json', '/user.json', function(err, res){
  // Always either do something with `res` or at least call `res.resume()`.
});

Another alternative is to stream via Client#putStream(), for example:

http.get('http://google.com/doodle.png', function(res){
  var headers = {
      'Content-Length': res.headers['content-length']
    , 'Content-Type': res.headers['content-type']
  };
  client.putStream(res, '/doodle.png', headers, function(err, res){
    // check `err`, then do `res.pipe(..)` or `res.resume()` or whatever.
  });
});

You can also use your stream's pipe method to pipe to the PUT request, but you'll still have to set the 'Content-Length' header. For example:

fs.stat('./Readme.md', function(err, stat){
  // Be sure to handle `err`.

  var req = client.put('/Readme.md', {
      'Content-Length': stat.size
    , 'Content-Type': 'text/plain'
  });

  fs.createReadStream('./Readme.md').pipe(req);

  req.on('response', function(res){
    // ...
  });
});

Finally, if you want a nice interface for putting a buffer or a string of data, use Client#putBuffer():

var buffer = new Buffer('a string of data');
var headers = {
  'Content-Type': 'text/plain'
};
client.putBuffer(buffer, '/string.txt', headers, function(err, res){
  // ...
});

Note that both putFile and putStream will stream to S3 instead of reading into memory, which is great. And they return objects that emit 'progress' events too, so you can monitor how the streaming goes! The progress events have fields written, total, and percent.
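
For example, a minimal sketch of watching those progress events on a putFile upload:

var upload = client.putFile('my.json', '/user.json', function(err, res){
  if (err) throw err;
  console.log('done, status %d', res.statusCode);
  res.resume();
});
upload.on('progress', function(progress){
  // `written`, `total` and `percent` are the fields described above
  console.log('%d%% (%d of %d bytes)', progress.percent, progress.written, progress.total);
});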

GET

Below is an example GET request on the file we just shoved at S3. It simply outputs the response status code, headers, and body.

client.get('/test/Readme.md').on('response', function(res){
  console.log(res.statusCode);
  console.log(res.headers);
  res.setEncoding('utf8');
  res.on('data', function(chunk){
    console.log(chunk);
  });
}).end();

There is also Client#getFile() which uses a callback pattern instead of giving you the raw request:

client.getFile('/test/Readme.md', function(err, res){
  // check `err`, then do `res.pipe(..)` or `res.resume()` or whatever.
});
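
For instance, a small sketch that saves the fetched object to a local file:

var fs = require('fs');

client.getFile('/test/Readme.md', function(err, res){
  if (err) throw err;
  if (res.statusCode !== 200) {
    console.error('unexpected status %d', res.statusCode);
    return res.resume();
  }
  res.pipe(fs.createWriteStream('./Readme-copy.md'));
});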

DELETE

Delete our file:

client.del('/test/Readme.md').on('response', function(res){
  console.log(res.statusCode);
  console.log(res.headers);
}).end();

Likewise we also have Client#deleteFile() as a more concise (yet less flexible) solution:

client.deleteFile('/test/Readme.md', function(err, res){
  // check `err`, then do `res.pipe(..)` or `res.resume()` or whatever.
});

HEAD

As you might expect we have Client#head and Client#headFile, following the same pattern as above.
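
For example, a sketch of both forms, following the same patterns as the GET and DELETE examples above (a HEAD response carries headers but no body):

client.head('/test/Readme.md').on('response', function(res){
  console.log(res.statusCode);
  console.log(res.headers);
}).end();

// or, callback style:
client.headFile('/test/Readme.md', function(err, res){
  // check `err`, then inspect `res.headers`; remember to call `res.resume()`.
});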

Advanced Operations

Knox supports a few advanced operations, like copying files:

client.copy('/test/source.txt', '/test/dest.txt').on('response', function(res){
  console.log(res.statusCode);
  console.log(res.headers);
}).end();

// or

client.copyFile('/source.txt', '/dest.txt', function(err, res){
  // ...
});

even between buckets:

client.copyTo('/source.txt', 'dest-bucket', '/dest.txt').on('response', function(res){
  // ...
}).end();

and even between buckets in different regions:

var destOptions = { region: 'us-west-2', bucket: 'dest-bucket' };
client.copyTo('/source.txt', destOptions, '/dest.txt').on('response', function(res){
  // ...
}).end();

or deleting multiple files at once:

client.deleteMultiple(['/test/Readme.md', '/test/Readme.markdown'], function(err, res){
  // ...
});

or listing all the files in your bucket:

client.list({ prefix: 'my-prefix' }, function(err, data){
  /* `data` will look roughly like:

  {
    Prefix: 'my-prefix',
    IsTruncated: true,
    MaxKeys: 1000,
    Contents: [
      {
        Key: 'whatever',
        LastModified: new Date(2012, 11, 25, 0, 0, 0),
        ETag: 'whatever',
        Size: 123,
        Owner: 'you',
        StorageClass: 'whatever'
      },
      ⋮
    ]
  }

  */
});

And you can always issue ad-hoc requests, e.g. the following to get an object's ACL:

client.request('GET', '/test/Readme.md?acl').on('response', function(res){
  // Read and parse the XML response.
  // Everyone loves XML parsing.
}).end();

Finally, you can construct HTTP or HTTPS URLs for a file like so:

var readmeUrl = client.http('/test/Readme.md');
var userDataUrl = client.https('/user.json');

Client Creation Options

Besides the required key, secret, and bucket options, you can supply any of the following:

endpoint

By default knox will send all requests to the global endpoint (s3.amazonaws.com). This works regardless of the region where the bucket is. But if you want to manually set the endpoint, e.g. for performance or testing reasons, or because you are using an S3-compatible service that isn't hosted by Amazon, you can do it with the endpoint option.
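
For example, a sketch pointing the client at the EU (Ireland) endpoint:

var client = knox.createClient({
    key: '<api-key-here>'
  , secret: '<secret-here>'
  , bucket: 'learnboost'
  , endpoint: 's3-eu-west-1.amazonaws.com'
});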

region

For your convenience when using buckets not in the US Standard region, you can specify the region option. When you do so, the endpoint is automatically assembled.

As of this writing, valid values for the region option are:

  • US Standard (default): us-standard
  • US West (Oregon): us-west-2
  • US West (Northern California): us-west-1
  • EU (Ireland): eu-west-1
  • Asia Pacific (Singapore): ap-southeast-1
  • Asia Pacific (Tokyo): ap-northeast-1
  • South America (Sao Paulo): sa-east-1

If new regions are added later, their subdomain names will also work when passed as the region option. See the AWS endpoint documentation for the latest list.

Convenience APIs such as putFile and putStream currently do not work as expected with buckets in regions other than US Standard unless you explicitly specify the region option. This will eventually be addressed by resolving issue #66; however, for performance reasons, it is always best to specify the region option anyway.
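
For example, a client for a bucket in US West (Oregon):

var client = knox.createClient({
    key: '<api-key-here>'
  , secret: '<secret-here>'
  , bucket: 'learnboost'
  , region: 'us-west-2'
});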

secure and port

By default, knox uses HTTPS to connect to S3 on port 443. You can override either of these with the secure and port options. Note that if you specify a custom port option, the default for secure switches to false, although you can override it manually if you want to run HTTPS against a specific port.
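
For example, a sketch for talking to a local S3-compatible test server over plain HTTP; the host and port here are made up for illustration:

var client = knox.createClient({
    key: '<api-key-here>'
  , secret: '<secret-here>'
  , bucket: 'learnboost'
  , endpoint: 'localhost'   // hypothetical local S3-compatible server
  , port: 10001             // hypothetical port; a custom port already implies secure: false
});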

token

If you are using the AWS Security Token Service APIs, you can construct the client with a token parameter containing the temporary security credentials token. This simply sets the x-amz-security-token header on every request made by the client.
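
For example, with temporary credentials obtained from STS (values are placeholders):

var client = knox.createClient({
    key: '<temporary-key-here>'
  , secret: '<temporary-secret-here>'
  , token: '<session-token-here>'
  , bucket: 'learnboost'
});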

style

By default, knox tries to use the "virtual hosted style" URLs for accessing S3, e.g. bucket.s3.amazonaws.com. If you pass in "path" as the style option, or pass in a bucket value that cannot be used with virtual hosted style URLs, knox will use "path style" URLs, e.g. s3.amazonaws.com/bucket. There are tradeoffs you should be aware of:

  • Virtual hosted style URLs can work with any region, without requiring it to be explicitly specified; path style URLs cannot.
  • You can access programmatically-created buckets only by using virtual hosted style URLs; path style URLs will not work.
  • You can access buckets with periods in their names over SSL using path style URLs; virtual host style URLs will not work unless you turn off certificate validation.
  • You can access buckets with mixed-case names only using path style URLs; virtual host style URLs will not work.
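
For example, to force path style URLs:

var client = knox.createClient({
    key: '<api-key-here>'
  , secret: '<secret-here>'
  , bucket: 'learnboost'
  , style: 'path'
});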

For more information on the differences between these two types of URLs, and the limitations related to them, see the S3 documentation on virtual hosting of buckets.

agent

Knox disables the default HTTP agent, because it leads to lots of "socket hang up" errors when doing more than 5 requests at once. See #116 for details. If you want to get the default agent back, you can specify agent: require("https").globalAgent, or use your own.
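
For example, to restore the default global agent:

var client = knox.createClient({
    key: '<api-key-here>'
  , secret: '<secret-here>'
  , bucket: 'learnboost'
  , agent: require('https').globalAgent
});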

Beyond Knox

Multipart Upload

S3's multipart upload is their rather-complicated way of uploading large files. In particular, it is the only way of streaming files without knowing their Content-Length ahead of time.

Adding the complexity of multipart upload directly to knox is not a great idea. For example, it requires buffering at least 5 MiB of data at a time in memory, which you want to avoid if possible. Fortunately, @nathanoehlman has created the excellent knox-mpu package to let you use multipart upload with knox if you need it!

Easy Download/Upload

@superjoe30 has created a nice library, called simply s3, that makes it very easy to upload local files directly to S3, and download them back to your filesystem. For simple cases this is often exactly what you want!

Uploading With Retries and Exponential Backoff

@jergason created intimidate, a library wrapping Knox to automatically retry failed uploads with exponential backoff. This helps your app deal with intermittent connectivity to S3 without bringing it to a grinding halt.

Listing and Copying Large Buckets

@goodeggs created knox-copy to easily copy and stream keys of buckets beyond Amazon's 1000 key page size limit.

@segmentio created s3-lister to stream a list of bucket keys using the new streams2 interface.

@drob created s3-deleter, a writable stream that batch-deletes bucket keys.

Running Tests

To run the test suite you must first have an S3 account. Then create a file named ./test/auth.json, which contains your credentials as JSON, for example:

{
  "key": "<api-key-here>",
  "secret": "<secret-here>",
  "bucket": "<your-bucket-name>",
  "bucket2": "<another-bucket-name>",
  "bucketUsWest2": "<bucket-in-us-west-2-region-here>"
}

Then install the dev dependencies and execute the test suite:

$ npm install
$ npm test

knox's Issues

Client Request with Hostname/IP doesn't match certificate's altnames error

Hi.

I was using Knox 0.9 on Node.js v0.6.14 in production and it worked fine (though with some memory leak issues). So I decided to update Node.js to v0.8.4 and Knox before trying to solve those issues. When I updated just Node I got some "socket hangup" errors, so I saw there was an update to Knox (version 0.11). I updated it, but now nothing works. My code gets the UploadId from Amazon before sending the parts to it, using this line:

var request = client.request('POST', dest + '?uploads', { 'x-amz-acl' : 'public-read' });
Where "dest" is the file name.
But I'm always getting a status 400 from Amazon, and when I look at the response I see "Hostname/IP doesn't match certificate's altnames". Does anyone know what this is?

403 on client.get

I don't have a good idea of what the problem is, but I'm getting a 403 on all objects in my bucket. Here's what I know:

  • Everything works on 0.0.3
  • 2f28de9 broke the signature signing so that AWS would tell me the signature does not match the request
  • a2303ef seems to fix the signature problem
  • a2303ef brought up a new problem where I get 403 for all objects when trying to download them one by one w/ client.get
  • I'm not specifying an endpoint when instantiating the client
  • The bucket is private
  • The bucket is in the US Standard zone
  • There are no objects in the root of the bucket, just folders, within which there are objects

Are there other details I could provide to narrow this down? Thanks.

Write JSON output to S3 without fs.readFile

Hi TJ,
Don't know the right place to ask howto questions for Knox [couldn't find a wiki/doc beyond the Readme.md...]
Simple question:
I want to write the output of a db query directly to S3, to use CloudFront as my database cache. This does not involve fs.readFile; rather, I have a db.find() query whose result I want to put to S3 as a JSON object.
Is this possible? Or do I need to write it as a temporary file on my fs and then read/put it to S3?
Thanks.
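
(This is possible with the plain PUT shown in the README above, no temporary file needed; a minimal sketch, with the query result name made up for illustration:)

var string = JSON.stringify(queryResult); // `queryResult` is whatever your db.find() returned
var req = client.put('/cache/query.json', {
    'Content-Length': Buffer.byteLength(string)
  , 'Content-Type': 'application/json'
});
req.on('response', function(res){
  if (200 == res.statusCode) {
    console.log('cached at %s', req.url);
  }
});
req.end(string);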

SignatureDoesNotMatch Issue

I've worked through numerous examples on this, but still receiving a SignatureDoesNotMatch error when I try to 'put()' a jpeg into S3.

I've used the amazon s3 signature testing tool to troubleshoot and identified a difference between the request I am sending and what amazon expects.

What the amazon test tool expects when I enter the hexcode returned in the StringToSign section,

PUT\n\nimage/jpeg\nThu, 29 Dec 2011 06:32:18 GMT\nx-amz-acl:public-read\n/girlsafaritest2\28.jpg

What I am seeing in console.log in the request object I send out to amazon via the put() call in Knox,

_header: 'PUT \28.jpg HTTP/1.1\r\nExpect: 100-continue\r\nx-amz-acl: public-read\r\nContent-Length: 45800\r\nContent-Type: image/jpeg\r\nDate: Thu, 29 Dec 2011 06:32:18 GMT\r\nHost: girlsafaritest2.s3.amazonaws.com\r\nAuthorization: AWS AKIAJ3S7Y6CDASJHLUWA:Tm+8/ljOevwZnoV9sEzduo4rYhw=\r\nConnection: keep-alive\r\n\r\n'

Almost everything matches, but I noticed two differences: 1) the \28.jpg (is the \ causing a problem?) and 2) Amazon expects \n while the request uses \r\n (is Windows causing an issue? I am running nodejs/knox from Win7).

In case anyone wants to see the code I am using, here it is as well:

fs.readFile('./public/images/mockdata/28.jpg', function(err, buf){
  console.log('readfile completed, length is: ' + buf.length);
  var req = client.put('28.jpg', {
      'Content-Length': buf.length
    , 'Content-Type': 'image/jpeg'
  });

  console.log(req);

  req.on('response', function(res){
    console.log('[Response]:\n\r res.statusCode: ' + res.statusCode);
    console.log('res.headers: ' + res.headers);
    res.on('data', function(chunk){
      console.log('chunk string: ' + chunk.toString());
    });
    if (200 == res.statusCode) {
      console.log('saved to %s', req.url);
    }
  });
  req.end(buf);

});

Appreciate any help anyone can provide!

putStream timeout

Hi.
I was using the "pipe" method and saw that you guys implemented this in the 0.3 release. Trying to use putStream, I can only upload files up to about 1 MB. Files larger than that give me a 400 status code from Amazon with a socket timeout error. Does anyone know what's happening?

Broken pipe when uploading file

When I use the example code:

  var filename = 'image.jpg';
  fs.readFile(filename, function(err, buf){
    var req = client.put(filename, {
        'Content-Length': buf.length
      , 'Content-Type': 'text/plain'
    });
    req.on('response', function(res){
      if (200 == res.statusCode) {
        console.log('saved to %s', req.url);
      }
    });
    req.end(buf);
  });

And I get this error:

node.js:116
        throw e; // process.nextTick error, or 'error' event on first tick
        ^
Error: EPIPE, Broken pipe
    at Client._writeImpl (net.js:138:14)
    at Client._writeOut (net.js:427:25)
    at Client.flush (net.js:506:24)
    at Client._onWritable (net.js:584:12)
    at IOWatcher.onWritable [as callback] (net.js:167:12)

Is that because I'm using the new node v0.4.0?
Anyone else having that problem?

Improved performance by tweaking agent.maxSockets

This is more of a question than an issue. Will Knox be able to handle massive concurrency putting files on S3 w/o tweaking agent.maxSockets to be something greater than 5? If so, any suggestions/best practices for setting this number and/or other options for getting better perf. under heavy load?

putFile response url parameter is empty

While inspecting the response from putFile, I see the newly generated URL in some nested properties. The outer-most url property of the response is blank, though. Should this contain that URL, as well?

AwsJobHandler.prototype.putPsd = function (psd, callback) {

    //  the local temp path to the psd to put on aws
    var localPsdPath = psd.temporaryPath;

    // the base url address for this psd on aws
    var basePsdUrl = '/' + psd.user + '/psd/' + psd._id + '.psd';

    var req = this.client.putFile(localPsdPath, basePsdUrl, function (err, res) {

        if (err || res.statusCode != 200)
            return callback(new Error('aws upload failed'));

        // this is blank
        console.log(res.url);

        // update the psd doc with new url
        models.psd.update({ _id: psd._id }, { '$set': { url: basePsdUrl }});

        return callback(basePsdUrl);

    });

};

Authentication Error with custom endpoint

first of all thanks for all your great node-modules :)

Unfortunately I'm having trouble with a custom endpoint set within the knox client.
The response from AWS says:

<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>

var client = knox.createClient({
    key: *****
  , secret: ****
  , bucket: ****
  , endpoint: 's3-eu-west-1.amazonaws.com'
});

client.get('/').on('response', function(res){
  res.setEncoding('utf8');
  res.on('data', function(chunk){
    console.log(chunk);
  });
}).end();

Is there anything wrong with my usage of knox? The access key & token work fine when no custom endpoint is set.

Uppercase bucket name breaks API

If you use a bucket name containing uppercase characters, the exports.stringToSign(options) call in auth.js will return the resource name with uppercase characters; however, Amazon expects this in strictly lowercase format in the signature.

Example:

Init knox with bucket name testUpperCase

StringToSign in knox:

GET


Tue, 01 Nov 2011 06:51:56 GMT
/testUpperCase/pic.jpg

Amazon response HTTP 403:

<Error><Code>SignatureDoesNotMatch</Code> [...]
<StringToSign>GET


Tue, 01 Nov 2011 06:51:56 GMT
/testuppercase/pic.jpg</StringToSign> [...]

501 Transfer-Encoding not implemented on a PUT. Wtf?

I'm trying to use knox to put some data on S3 but it always fails with a 501. Here's my code. Ignore the extra returns -- it's generated from coffee-script.

  knox = require('knox');
  s3 = knox.createClient({####SECRET####})
  test_s3 = function() {
    var buffer, request;
    buffer = new Buffer(6);
    buffer.write("Hello");
    request = s3.put('test-key');
    return request.on('response', function(response) {
      console.log(response.statusCode);
      return response.on('data', function(data) {
        return console.log(data.toString('ascii'));
      });
    });
  };
  test_s3();

What I get on the console is this:

501
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NotImplemented</Code><Message>A header you provided implies functionality that is not implemented</Message><Header>Transfer-Encoding</Header><RequestId>67F400ED6D76C07A</RequestId><HostId>oyEU9C4gk/Kf6PQta2roQ2q0Pe1cfvi1BeoVUhsEPNV5qf/39ZmOVXCn5HqdCZtq</HostId></Error>

This doesn't make sense to me. Googling around, I found that S3 usually returns this response when you try and do some operation that shouldn't send any content, but you sent content anyway. But this is a put. It's supposed to have content. Looking through the request object, it appears that content length is set and the body is being written as it should be. So what is going wrong?

Can't get simple putStream() script to work...

var s3 = require('s3') // s3 is an initialized knox S3 Client instance
var fs = require('fs')
var file = '/Users/nrajlich/Pictures/avatar.jpg'; // This is a valid JPG file that does exist

var headers = {};
headers['Content-Type'] = 'image/jpeg';

var res = fs.createReadStream(file);
var s3req = s3.putStream(res, '/nate-avatar.jpg', headers, function(err, s3res){
  if (err) throw err;
  console.log(s3res.statusCode);
  console.log(s3res.headers);
  s3res.pipe(process.stdout, {end: false});
  console.error('s3 callback', s3req.url);
});
s3req.on('progress', console.log);

Using knox v0.3.0, this script makes S3 return a 501 error code, with the following XML body:

<Error>
  <Code>NotImplemented</Code>
  <Message>A header you provided implies functionality that is not implemented</Message>
  <Header>Transfer-Encoding</Header><RequestId>7A05E23CB3DBBD1E</RequestId>
  <HostId>Dylx7eIcmXceZCEOZKmMxpGclu2i5bsdlV6pdyH8uAzNm+DAXPE/vdrFkFwFaHht</HostId>
</Error>

Not sure if this is misuse on my part or a problem with knox, but it's kinda holding me up at the moment :)

Multipart upload

Consider adding support for the multipart upload API. See #17 and #50 for stale pull requests.

We still need to determine what the API would be, how much it would change or add to Knox, and whether it's in scope for Knox or should perhaps be a different package.

Streaming an octet stream from request to S3

I'm trying to stream an octet-stream straight to S3. The octet-stream is an XHR file upload from the browser. I assumed that I could just stream the request into putStream and everything would just work, but alas no.

Here's my code:

client = knox.createClient({ ... });

if (req.headers['content-type'].match(/application\/octet-stream/i)) {

  var filename = '/'+req.headers['x-file-name'];

  client.putStream(req, filename, function(err, res){
    // TODO: Catch errors
    body = '{"success":"true"}'
    res.writeHead(200, 
      { 'Content-Type':'text/html'
      , 'Content-Length':body.length
      })
    res.end(body)
  });

}

And the error I receive:

TypeError: Bad argument
    at Object.stat (fs.js:354:11)
    at Client.putStream (knox/client.js:181:6)

Connection reset by peer on put.

Howdy guys. We are using knox to store virtual machine snapshots (200mb - 2gb) on Amazon s3, but running into a problem:

Error: ECONNRESET, Connection reset by peer
    at Socket._writeImpl (net.js:159:14)
    at Socket._writeOut (net.js:450:25)
    at Socket.flush (net.js:529:24)
    at Socket._onWritable (net.js:609:12)
    at IOWatcher.onWritable [as callback] (net.js:188:12)

This is probably occurring because we are maxing out our connection, pushing close to 100Mbit a second for a few seconds before connection reset. Any idea how to throttle the upload to say 50Mbit/Sec?

Silent error with [email protected] and node 0.4.1

Results from make test:

..........
uncaught: AssertionError: 403 == 200
at /Users/thegoleffect/Documents/Projects/Spoondate/Spoondate.Website/node_modules/knox/test/knox.test.js:79:14
at ClientRequest.<anonymous> (/Users/thegoleffect/Documents/Projects/Spoondate/Spoondate.Website/node_modules/knox/lib/knox/client.js:199:7)
at ClientRequest.emit (events.js:42:17)
at HTTPParser.onIncoming (http.js:1299:9)
at HTTPParser.onHeadersComplete (http.js:87:31)
at Socket.ondata (http.js:1183:22)
at Socket._onReadable (net.js:654:27)
at IOWatcher.onReadable [as callback]

^C
Failures: 1

make: *** [test] Error 1

If I switch to 0.0.5, it works just fine with the same auth file.

Support parameters in get request

Listing buckets is relatively simple to do (client.get(''...)), but it's currently impossible to add important parameters, like ?max-keys=n. Being able to do this within Client.prototype.get etc would probably be best.

Dev Instructions + Expresso Test Error

When running

make test

We get the error 'expresso' not found -- so we should add instructions in the Readme.md:

git submodule init
git submodule update

Then, when you do run make test, the following error occurs:

laptop@laptop ~/Public/knox $ make test
The "sys" module is now called "util". It should have a similar interface.

node.js:201
        throw e; // process.nextTick error, or 'error' event on first tick
              ^
Error: require.paths is removed. Use node_modules folders, or the NODE_PATH environment variable instead.
    at Function.<anonymous> (module.js:376:11)
    at Object.<anonymous> (/home/laptop/Public/knox/support/expresso/bin/expresso:127:24)
    at Module._compile (module.js:432:26)
    at Object..js (module.js:450:10)
    at Module.load (module.js:351:31)
    at Function._load (module.js:310:12)
    at Array.0 (module.js:470:10)
    at EventEmitter._tickCallback (node.js:192:40)

Resolve if you could 👍

Documentation should mention that specifying the endpoint is not really optional

I'm finding that putFile consistently reports a false success with a 307 redirect if the endpoint option is not specific to the region. Someone has already reported the issue with 307 being treated as success, but it should also be documented that you're not going to get very far without specifying the precise endpoint for your region rather than the generic one.

Assertion Error happens eventually and crashes the server

Hi.
Thanks for the module, it helped me a lot in a feature that I'm developing in my product.
One thing I've been noticing is that sometimes, when the response.statusCode isn't 200, Node.js crashes with an exception:

assert.js:93
  throw new assert.AssertionError({
        ^

AssertionError: true == false
    at IncomingMessage.<anonymous> (http.js:1341:9)
    at IncomingMessage.emit (events.js:61:17)
    at HTTPParser.onMessageComplete (http.js:133:23)
    at Socket.ondata (http.js:1231:22)
    at Socket._onReadable (net.js:683:27)
    at IOWatcher.onReadable [as callback] (net.js:177:10)

Does anyone know why this happens? Is there a way to catch that exception so it doesn't crash the server?

Thanks a lot

Thiago

Knox not avail from npm

Hello

I know, it's a bug in a blog article. But The Changelog is stating knox is in npm. Appears to be otherwise:

npm info it worked if it ends with ok
npm info using [email protected]
npm info using [email protected]
npm ERR! Error: 404 Not Found: knox
npm ERR! at IncomingMessage.<anonymous> (/usr/local/lib/node/.npm/npm/0.2.12-1/package/lib/utils/registry/request.js:136:16)
npm ERR! at IncomingMessage.emit (events:41:20)
npm ERR! at HTTPParser.onMessageComplete (http:107:23)
npm ERR! at Client.onData as ondata
npm ERR! at IOWatcher.callback (net:494:29)
npm ERR! at node.js:773:9
npm ERR! 404
npm ERR! 404 Looks like 'knox' is not in the npm registry.
npm ERR! 404 You should bug the author to publish it.
npm ERR! 404 Note that you can also install from a tarball or local folder.
npm ERR! 404

tests fail on master

Here's what I got after a fresh clone and doing git submodule update --init (could be a useful hint in the README?):

tom% make test
.................
   uncaught: AssertionError: 404 == 100
    at /Users/tom/Documents/Code/knox/test/ns3.test.js:70:18
    at ClientRequest.<anonymous> (lib/knox/client.js:160:7)
    at ClientRequest.emit (events:27:15)
    at HTTPParser.onIncoming (http:959:9)
    at HTTPParser.onHeadersComplete (http:87:31)
    at Client.onData [as ondata] (http:848:27)
    at IOWatcher.callback (net:494:29)
    at node.js:772:9


   uncaught: Error: 'test .putFile()' timed out
    at Timer.callback (/Users/tom/Documents/Code/knox/support/expresso/bin/expresso:746:43)
    at node.js:772:9


   Failures: 2

I'm using node v0.2.4 on Mac OS X.

knox broken in node 0.4.1

Upgrading from node 0.4.0 to 0.4.1, there has been a change in fs.open that breaks knox.

fs.js:195
binding.open(path, stringToFlags(flags), mode, callback);
^
TypeError: Bad argument
at Object.open (fs.js:195:11)
at new (fs.js:786:6)
at Object.createReadStream (fs.js:741:10)
at Object.readFile (fs.js:49:23)
at Client.putFile (/usr/local/lib/node/.npm/knox/0.0.2/package/lib/knox/client.js:153:6)

I had a look at the line myself and admit I can't see what's up with it, but I wrote a quick separate-file test and it worked fine, so I can only assume it's coming from somewhere in knox.

Progress support

Hello,

is there any straightforward way to get progress support while uploading a file to S3?

PUT buffer

lots of canvas related interactions are just with a Buffer, resizing etc so it would be sweet to just PUT the buf

Uses HTTP, not HTTPS

Client uses http://bucket.s3.amazonaws.com/ instead of https://s3.amazonaws.com/bucket/ :-(

InvalidURI in request

Hi, I'm trying to use knox to upload an image file to S3; I followed the tutorial and received a 400 error with InvalidURI.

Here's the code:

var fs = require('fs');
var knox = require('knox');
var client = knox.createClient({
    key: 'accessKeyGoesHere'
  , secret: 'secretyKeyGoesHere'
  , bucket: 'GSTest'
  , endpoint: 'GSTest.s3-website-us-east-1.amazonaws.com'
});

fs.readFile('C:\Dev\Java\workspace\mobileservice\src\public\images\mockdata\425268bb7a292ce6af493680fe641ed0.jpg', function(err, buf){
  console.log('File has been read, buffer length of file is:' + buf.length);
  var req = client.put('/image.jpg', {
      'Content-Length': buf.length
    , 'Content-Type': 'text/plain'
  });
  req.on('response', function(res){
    console.log('res.statusCode: ' + res.statusCode);
    console.log('res.headers: ' + res.headers);
    res.on('data', function(chunk){
      console.log('chunk string: ' + chunk.toString());
    });
    if (200 == res.statusCode) {
      console.log('saved to %s', req.url);
    }
  });

  req.end(buf);
  console.log('req.end finished');
  res.send([{result: 'success'}]);
});

Any thoughts? When I output the req.url I get http://GirlSafariTest.s3-website-us-east-1.amazonaws.com\image.jpg with an opposite slash instead of http://GirlSafariTest.s3-website-us-east-1.amazonaws.com/image.jpg. Perhaps this is the problem... not sure what the error is.

I also tried an alternative method where I do not specify the endpoint, and ran into a signature match error.

client.signedUrl does not produce a working URL

get() works fine for me, but the URL produced from client.signedUrl gives me an error:

<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>
    The request signature we calculated does not match the signature you provided. Check your key and signing method.
  </Message>
  <StringToSignBytes>
    47 45 54 0a 0a 0a 31 33 31 30 32 38 32 33 37 31 0a 2f 70 68 6f 74 6f 2e 77 65 73 74 2e 73 70 79 2e 6e 65 74 2f 6f 72 69 67 69 6e 61 6c 2f 30 35 2f 30 35 38 65 33 36 62 62 64 61 31 65 36 34 63 65 62 30 39 63 66 31 37 64 63 63 35 61 61 62 61 31 2e 6a 70 67
  </StringToSignBytes>
  <RequestId>742F2DC7510AEB55</RequestId>
  <HostId>
    WiEAw4VFA57d/SEGbiGlmKccSvBUv/JnED0YHqTesizK79/GMgHavQCWFgnkXblj
  </HostId>
  <SignatureProvided>fjsaPEGJcL4Kr+pIGz5txlDrOH0=</SignatureProvided>
  <StringToSign>
    GET 1310282371 /photo.west.spy.net/original/03/038e36b3da1e34ceb09cf17dcf5aaba1.jpg
  </StringToSign>
  <AWSAccessKeyId>0F4GCGJRS56V9XG6XJR2</AWSAccessKeyId>
</Error>

Move bucket name to hostname for regions other than Virginia

I haven't been able to get knox to work with buckets created outside of the US Virginia region. Knox puts the bucket name in the path, not in the hostname, which doesn't seem to work for other regions.

Take a bucket in Ireland, for example. Knox tries this:

http://s3.amazonaws.com/ireland-region-test

But S3 expects this:

http://ireland-region-test.s3.amazonaws.com/

But for buckets in Virginia, both schemes work. This works:

http://s3.amazonaws.com/virginia-region-test

So does this:

http://virginia-region-test.s3.amazonaws.com/

I believe adjusting knox to always put the bucket name in the hostname would fix this problem.

Upload success but empty file

Everything uploaded as planned and it showed up in the S3 AWS console. I made sure permissions are set. But when I double-click the file, it takes me to nothing. Not sure what I am doing wrong here.

The local upload works.

    if req.method == 'POST'
        Knox = require 'knox'
        S3 = Knox.createClient
            key: CONFIG.amazon.key
            secret: CONFIG.amazon.secret
            bucket: CONFIG.amazon.s3
        path = req.body.path || 'etc'
        path = '/'+req.subdomain+'/'+path
        files = []
        rFiles = []

        console.log req.files

        _.each req.files, (file, k) -> files.push file
        async.forEachLimit files, 6,
            ((file, cb) -> # Upload and move the files
                if acceptableFile.indexOf(file.type) != -1
                    fullpath = BASEPATH+'/static/uploads'+path
                    filename = (file.path.split('/')[2]+'.'+file.name.substr(file.name.lastIndexOf('.')+1)).toLowerCase()

                    ### Local copy
                    mkdir fullpath, (err) -> # recursively create dir
                        fs.link file.path, fullpath+'/'+filename, (err) ->
                            if err then console.log err
                            return
                        rFiles.push path+'/'+filename
                        fs.unlink file.path
                        cb()
                        return # mkdir
                    ###

                    console.log file

                    S3.putFile file.path, path+'/'+file.name, (err, r) ->
                        if err
                            log.error "c/user.upload - Error uploading #{file.name} -> #{path+'/'+filename} to S3 - #{err.message}"
                            console.error err
                            rFiles.push null
                            fs.unlink file.path
                            cb err
                            return

                        if r.statusCode == 200
                            rPath = "https://#{CONFIG.amazon.s3}.s3.amazonaws.com#{path}/#{filename}"
                            log.info "c/user.upload - #{rPath} uploaded"
                            rFiles.push rPath
                            fs.unlink file.path
                            cb()
                        else
                            log.error "c/user.upload - Error uploading #{file.name} -> #{path+'/'+filename} to S3 - #{r.statusCode} error"
                            rFiles.push null
                            fs.unlink file.path
                            cb new Error r.statusCode
                        return
                return),
            ((err) ->
                if err
                    res.send 'Error uploading file'
                    return
                res.send 'File uploaded'
                return)
    else
        res.render BASEPATH+'/v/public/user/upload',
            layout: null
            path: req.query.path || 'etc'
        next()

mocking

I'm having difficulties incorporating this library into my app as it's a bit tricky to set up a mocked version. Any tips?

node app crashes when amazon returns an error

How to reproduce: Change your key or secret so you will get an authorization error.

node 0.4.11, knox 0.0.9

The following error is output when the Amazon response is received:


assert.js:93
  throw new assert.AssertionError({
        ^

AssertionError: true == false
    at IncomingMessage.<anonymous> (http.js:1341:9)
    at IncomingMessage.emit (events.js:61:17)
    at HTTPParser.onMessageComplete (http.js:133:23)
    at Socket.ondata (http.js:1231:22)
    at Socket._onReadable (net.js:683:27)
    at IOWatcher.onReadable [as callback] (net.js:177:10)

Expiring Links

In the AWS PHP SDK, you can temporarily grant access to your ACL_PRIVATE files by generating links like:

$url = $s3->get_object_url('aws-php-sdk-test', $filename, '45 minutes');

Is it possible to generate these kinds of links using knox?
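
(Knox does expose a Client#signedUrl helper, referenced in another issue above; a minimal sketch, assuming it takes a filename and an expiration Date:)

var expires = new Date(Date.now() + 45 * 60 * 1000); // 45 minutes from now
var url = client.signedUrl('/private/photo.jpg', expires);
console.log(url);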

Make putStream work with any stream

Also, reimplement putFile in terms of it.

All the existing pull requests go about this in a strange way. It should be as simple as

Client.prototype.putStream = function (stream, filename, headers, fn) {
  var req = this.put(filename, headers);
  stream.pipe(req);
  // a bunch of code to hook up to the appropriate listeners and then call `fn` appropriately
};

Then we can reimplement putFile to do a fs.stat to get Content-Type and Content-Length headers before just proxying to putStream.
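
A rough sketch of that reimplementation, building on the putStream skeleton above; the mime lookup for Content-Type is an assumption, not part of the original note:

var fs = require('fs');
var mime = require('mime'); // assumed helper for guessing Content-Type

Client.prototype.putFile = function (src, filename, headers, fn) {
  // headers assumed to always be provided, for brevity
  var self = this;
  fs.stat(src, function (err, stat) {
    if (err) return fn(err);
    headers['Content-Length'] = stat.size;
    headers['Content-Type'] = mime.lookup(src);
    self.putStream(fs.createReadStream(src), filename, headers, fn);
  });
};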

Possible to list contents of a bucket?

Wasn't sure if this was possible. I played around with some GET requests, but it hasn't been immediately obvious to me where the data is hiding. Is this even possible?
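
(Current knox versions cover this with Client#list, documented in the README above; a minimal sketch:)

client.list({ prefix: 'photos/' }, function(err, data){
  if (err) throw err;
  data.Contents.forEach(function(item){
    console.log(item.Key, item.Size);
  });
});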

ACL Support

Are there any plans to support inspecting whether objects are public or private? Obviously this requires XML parsing, which is a Node.js wasteland, but it's sadly necessary in quite a few cases.

Chunked upload

Silly question:
Is it possible to upload by chunked content? I'm using knox with formidable and it would be great to just push the chunks as they are parsed.

Add example for moving files

It's relatively simple to move files with knox, but it requires you to set Content-Length: 0 so that nodejs doesn't send a Transfer-Encoding header to S3, which will make S3 respond with a 501.

Adding some documentation for this might save people some time:

client.put('0/0/0.png', {
    'Content-Type': 'image/jpg',
    'Content-Length': '0',
    'x-amz-copy-source': '/test-tiles/0/0/0.png',
    'x-amz-metadata-directive': 'REPLACE'
}).on('response', function(res) {

}).end();

putFile/putStream API should handle HTTP 307 redirects from AWS S3

I totally get that the "low level" knox API should leave this kind of handling to the client. But I wanted to suggest that the "high level" knox S3 API handle the cases where AWS S3 returns an HTTP 307 status code.

The Amazon's S3 doc says:

"If you create a bucket using <CreateBucketConfiguration>, applications that access your
bucket must be able to handle 307 redirects."

This happened to me when I used #53 a few minutes after creating a new bucket in AWS to use. Once DNS is sync'd with S3, the case is likely more rare.

Obviously there are "workarounds" for this, but wanted to bring it up since knox is working pretty well for me otherwise! Thanks!
