
keystone-storage-adapter-s3's Introduction

⚠️ Archived

This repository is archived and is no longer maintained.

For the latest Keystone release please visit the Keystone website.



🚨 Deprecated 🚨

This adapter only works with Keystone Classic (Keystone v4 and below). If you're using Keystone v5 or higher, please use the S3 File Adapter instead.

S3-based storage adapter for KeystoneJS


This adapter is designed to replace the existing S3File field in KeystoneJS using the new storage API.

Usage

Configure the storage adapter:

var storage = new keystone.Storage({
  adapter: require('keystone-storage-adapter-s3'),
  s3: {
    key: 's3-key', // required; defaults to process.env.S3_KEY
    secret: 'secret', // required; defaults to process.env.S3_SECRET
    bucket: 'mybucket', // required; defaults to process.env.S3_BUCKET
    region: 'ap-southeast-2', // optional; defaults to process.env.S3_REGION, or if that's not specified, us-east-1
    path: '/profilepics', // optional; defaults to "/"
    publicUrl: "https://xxxxxx.cloudfront.net", // optional; sets a custom domain for public urls - see below for details
    uploadParams: { // optional; add S3 upload params; see below for details
      ACL: 'public-read',
    },
  },
  schema: {
    bucket: true, // optional; store the bucket the file was uploaded to in your db
    etag: true, // optional; store the etag for the resource
    path: true, // optional; store the path of the file in your db
    url: true, // optional; generate & store a public URL
  },
});

Then use it as the storage provider for a File field:

File.add({
  name: { type: String },
  file: { type: Types.File, storage: storage },
});

Options:

The adapter requires an additional s3 field added to the storage options. It accepts the following values:

  • key: (required) AWS access key. Configure your AWS credentials in the IAM console.

  • secret: (required) AWS access secret.

  • bucket: (required) S3 bucket to upload files to. The bucket must be created before it can be used; configure it through the AWS console.

  • region: AWS region to connect to. Bucket names are global, but using the bucket's local region will let you upload and download files faster. Defaults to 'us-east-1'. Eg, 'us-west-2'.

  • path: Storage path inside the bucket. By default uploaded files will be stored in the root of the bucket. You can override this by specifying a base path here. Base path must be absolute, for example '/images/profilepics'.

  • uploadParams: Default params to pass to the AWS S3 client when uploading files. You can use these params to configure many additional properties and store (small) extra data about the files in S3 itself. See the AWS documentation for the available options. Example: { ACL: 'public-read' } overrides the bucket ACL and makes all uploaded files globally readable.

  • publicUrl: Provide a custom domain to serve your S3 files from. This is useful if you are storing files in S3 but reading them through a CDN like CloudFront. Provide either the domain as a string, eg. publicUrl: "https://xxxxxx.cloudfront.net", or a function which takes a single parameter file and returns the full public URL to the file.

Example with function:

publicUrl: (file) => `https://xxxxxx.cloudfront.net${file.path}/${file.filename}`;

  • generateFilename: A function that accepts the file, an attempt number, and a callback, and generates the filename to store. By default a strong pseudo-random 16 byte filename is generated; see the sketch below for a custom variant.

generateFilename: (file, attempt, cb) => { cb(null, file.filename); }
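
A hedged sketch of a custom generateFilename (not part of this package) that keeps the original extension while still randomising the name. It assumes file.originalname is populated on the file object, as the issues further down this page suggest:

var crypto = require('crypto');
var pathlib = require('path');

// Hypothetical helper: strong pseudo-random 16 byte hex name, original
// extension preserved. Assumes file.originalname exists.
function randomNameKeepExt (file, attempt, callback) {
  crypto.randomBytes(16, function (err, buf) {
    if (err) return callback(err);
    callback(null, buf.toString('hex') + pathlib.extname(file.originalname));
  });
}

// usage (assumed placement, per the options list above):
// s3: { ..., generateFilename: randomNameKeepExt }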

Schema

The S3 adapter supports all the standard Keystone file schema fields. It also supports storing the following values per-file:

  • bucket: The bucket the file was uploaded to, stored in your database. If this is present when reading or deleting files, it will be used instead of the adapter configuration. The effect of this is that you can have some (eg, old) files in your collection stored in different buckets.

  • path: The path within the bucket. If this is present when reading or deleting files, it will be used instead of looking at the adapter configuration. The effect of this is that you can have some (eg, old) files in your collection stored in different paths inside your bucket.

The main use for both of these values is to allow slow data migrations. If you don't store these values you can arguably migrate your data more easily - just move it all, then reconfigure and restart your server.

  • etag: The etag of the stored item. For simple (non-multipart) uploads this is equal to the MD5 sum of the file content.

  • url: The absolute URL of the file on S3.
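
Put together, with all four schema options enabled a stored document might look something like this (a sketch: the filename and etag are borrowed from an issue further down this page, and the URL assumes the default virtual-hosted-style S3 domain):

{
  file: {
    mimetype: 'application/pdf',
    size: 111495,
    filename: 'nJmJPmxQdzsjygqw.pdf',
    etag: '"6584aab099ef04293e204adb3b935bd0"', // schema.etag
    path: '/profilepics', // schema.path
    bucket: 'mybucket', // schema.bucket
    url: 'https://mybucket.s3.amazonaws.com/profilepics/nJmJPmxQdzsjygqw.pdf', // schema.url
  }
}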

Change Log

v2.0.0

Overview

The Knox library which this package was previously based on has gone unmaintained for some time and is now failing in many scenarios. This version replaces knox with the official AWS Javascript SDK.

Breaking changes

The option headers has been replaced with uploadParams. If you were setting custom headers with a previous version of the S3 Storage Adapter, you will need to change these to use the appropriate params as defined in the AWS documentation.

For example, { headers: { 'x-amz-acl': 'public-read' } } should now be { uploadParams: { ACL: 'public-read' } }.
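
In the context of the storage options, the migration looks like this:

// v1.x (knox-style raw headers):
s3: {
  headers: { 'x-amz-acl': 'public-read' },
},

// v2.x (aws-sdk upload params):
s3: {
  uploadParams: { ACL: 'public-read' },
},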

Additions

  • publicUrl: You can now customise the public URL by passing either a domain name as a string (eg. { publicUrl: "https://xxxxxx.cloudfront.net" }) or by passing a function which takes the file object and returns the URL as a string.
{ publicUrl: file => `https://xxxxxx.cloudfront.net${file.path}/${file.filename}` }

Other

  • path: The requirement for path to have a leading slash has been removed. The previous implementation failed to catch this misconfiguration, and Knox helpfully made the file uploads work anyway. This has led to a situation where it is possible, even likely, that existing installations have a misconfigured path stored in the database. To avoid breaking these installs we now add or remove the leading slash as required; a sketch of that normalization follows.
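
A minimal sketch of that normalization (illustrative only, not the package's actual code):

// Accept both 'profilepics' and '/profilepics' as equivalent config values.
function ensureLeadingSlash (path) {
  return path.charAt(0) === '/' ? path : '/' + path;
}

ensureLeadingSlash('profilepics');  // => '/profilepics'
ensureLeadingSlash('/profilepics'); // => '/profilepics'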

License

Licensed under the standard MIT license. See LICENSE.


keystone-storage-adapter-s3's Issues

"Field errors"

I'm currently failing to make S3 work at all...

I'll paste my entire model below.

Firstly, I can't always create a new item, which only requires name to be entered in the initial form. All I get then is "Unknown Error". It seems to occur when creating the first item. A page refresh fixes this.

After this, when trying to upload a file, I almost always get a flash message saying "Field errors", and nothing more. There is nothing logged anywhere, and my pre-save hook doesn't seem to be reached.

I'm not really sure how to get at more detailed errors either... help!

var keystone = require('keystone');
var Types = keystone.Field.Types;

/**
 * File Model
 * ==========
 */

var File = new keystone.List('File', {
    autokey: { path: 'slug', from: 'name', unique: true },
    track: true,
});

var storage = new keystone.Storage({
    adapter: require('keystone-storage-adapter-s3'),
    s3: {
        path: process.env.S3_BUCKET,
        region: process.env.S3_REGION,
        headers: {
            'x-amz-acl': 'public',
        },
    },
    schema: {
        bucket: true, // optional; store the bucket the file was uploaded to in your db
        etag: true, // optional; store the etag for the resource
        path: true, // optional; store the path of the file in your db
        url: true, // optional; generate & store a public URL
    },
});

File.add({
    name: { type: String, required: true },
    file: {
        type: Types.File,
        storage: storage,
        filename: function (item, file) {
            return encodeURI(item._id + '-' + item.name);
        },
        format: function (item, file) {
            return '<pre>' + JSON.stringify(file, false, 2) + '</pre>'
                + '<img src="' + file.url + '" style="max-width: 300px">';
        },
    },
    link: { type: Types.Url, note: 'This will be automatically populated once you\'ve uploaded a file.' },
});

File.schema.pre('save', function (next) {
    console.log(this);
    if (this.file && this.file.url) {
        this.link = this.file.url;
    }
    next();
});

File.defaultColumns = 'name';
File.register();

Digital Ocean Spaces support?

Hi there,

I'm trying to use the keystone-storage-adapter-s3 for my File field with my S3 Digital Ocean Spaces, but no matter what I try, I keep on getting Field errors when trying to save my model...

import keystone from 'keystone';
const Types = keystone.Field.Types;

const s3Storage = new keystone.Storage({
	adapter: require('keystone-storage-adapter-s3'),
	s3: {
		// endpoint    : 'ams3.digitaloceanspaces.com',
		key         : process.env.S3_KEY,
		secret      : process.env.S3_SECRET,
		bucket      : process.env.S3_BUCKET,
		region      : process.env.S3_REGION, // -> ams3
		path        : 'uploads',
		uploadParams: {
			ACL     : 'public-read',
		}
	},
	schema: {
		bucket: true,
		etag  : true,
		path  : true,
		url   : true
	}
});

const MyModel = new keystone.List('MyModel ', {
	map: {
		name: 'title'
	}
});

MyModel.add({
	fileUpload: {
		type        : Types.File,
		label       : 'Upload file',
		storage     : s3Storage,
		initial     : true,
		required    : true
	}
});

Is this adapter not compatible with endpoints other than AWS?
I found this post saying that I have to set the endpoint:
https://www.digitalocean.com/community/questions/how-to-use-digitalocean-spaces-with-the-aws-s3-sdks

But I can't find any reference to that in the adapter's code anywhere?
Thanks!
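
For reference, the official aws-sdk (which v2 of this adapter is built on) does accept a custom endpoint on the client itself; whether the adapter exposes a way to pass one through is the open question here. A minimal sketch of the SDK-level option:

var AWS = require('aws-sdk');

// SDK-level client config. The adapter would need to forward an option like
// this for Spaces to work; it does not document one (hypothetical sketch).
var client = new AWS.S3({
  endpoint: 'https://ams3.digitaloceanspaces.com',
  accessKeyId: process.env.S3_KEY,
  secretAccessKey: process.env.S3_SECRET,
});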

Multiple images?

Not an issue as such, but can this be used or extended for multiple images?
For the front-end I use:

<input id="upload" type='file' name="File-image-1001" accept=".png,.jpg,.jpeg" multiple />
<input type="hidden" name="image" value="upload:File-image-1001">

on model:

	var s3Storage = new keystone.Storage({
	      adapter: require('keystone-storage-adapter-s3'),
	      s3: {
	        path: 'some_path',
	        region: process.env.S3_REGION,
	        headers: {
	          'x-amz-acl': 'public-read',
	        },
	      },
				schema: {
					filename: true,
					size: true,
					mimetype: true,
					path: true,
					originalname: true,
					url: true
				}
		});

MyModel.add({ image: { type: Types.File, storage: s3Storage } });

then on server side:

keystone.list('MyModel').updateItem(model, req.body, { files: req.files }, function (err) {
    if (err) return error(err, 'Some Error');
});

`encodeSpecialCharacters()` behaviour

If I upload Book(2013).pdf, I see the following in the logs

Uploading file "/resources/Book%282013%29.pdf" to "a.bucket.com.au" bucket with mimetype "application/pdf"
file upload successful /resources/Book%282013%29.pdf

The file is uploaded with the key /resources/Book%282013%29.pdf

But, S3 encodes the key in the Object URL with %25 for the % symbol, which becomes /resources/Book%25282013%2529.pdf

When I try to download that file, it can't be found because S3 encodes the key a second time.

function encodeSpecialCharacters (filename) {
  // Note: these characters are valid in URIs, but S3 does not like them for
  // some reason.
  return encodeURI(filename).replace(/[!'()#*+? ]/g, function (char) {
    return '%' + char.charCodeAt(0).toString(16);
  });
}
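
For illustration, running the function above on the example filename shows where the first encoding happens; S3's Object URL then encodes the resulting '%' signs a second time:

encodeSpecialCharacters('/resources/Book(2013).pdf');
// => '/resources/Book%282013%29.pdf' (the key that gets uploaded)
// S3 then encodes '%' as '%25' in the Object URL:
// => '/resources/Book%25282013%2529.pdf' (the key that gets requested)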

If I remove the encodeSpecialCharacters() function and upload the same file again, I see

Uploading file "/resources/Book(2013).pdf" to "a.bucket.com.au" bucket with mimetype "application/pdf"
file upload successful /resources/Book(2013).pdf

Which uploads as /resources/Book(2013).pdf, and the Object URL that is encoded by S3 becomes /resources/Book%282013%29.pdf, which I can download

The comment says that

S3 does not like them for some reason.

But I'm not sure what the reason is. I looked in PR #35 but there wasn't any mention of the change. I've also looked at the relevant AWS documentation, but it says those characters "are generally safe for use in key names".

I'd be happy to put together a PR for the change. If the current behaviour isn't working, I can't imagine changing it will impact anyone?

Integrate into keystone-test-project

Per the conversation with @JedWatson on Slack (#dev):

To get the new S3 storage adapter that josephg contributed integrated into keystone-test-project:

Steps are [basically]:

  • add the keystone-storage-adapter-s3 dependency using the git url to the test project’s package.json
  • if (and only if) S3 environment variables are present, create an adapter in the Files list and add a field that uses it
  • edit admin/client/utils/List.js and remove both instances of /legacy (this will switch you over to the new api)
  • load the Admin UI, make sure the new field works as expected (upload / remove files, check they exist in S3 with the correct headers, etc)

@josephg , I can take this on if you are not already working on it?

failure to prefix path with `/` results in leaking local directory info into s3 url

When specifying the [s3 config].path property without a / prefix, the resulting filename at s3 includes the current running directory of the application. I believe this is less than desirable, and possibly a bug.

I believe this is due to the _resolveFilename step here:

var destpath = self._resolveFilename(file);

which in turn calls:

// Get the full, absolute path name for the specified file.
S3Adapter.prototype._resolveFilename = function (file) {
  // Just like the bucket, the schema can store the path for files. If the path
  // isn't stored we'll assume all the files are in the path specified in the
  // s3.path option. If that doesn't exist we'll assume the file is in the root
  // of the bucket. (Whew!)
  var path = file.path || this.options.path || '/';
  return pathlib.posix.resolve(path, file.filename);
};
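
For illustration (assuming the process was started from /Users/john/app), path.posix.resolve falls back to the current working directory when its first argument is relative, which is exactly the leak described:

var pathlib = require('path');

pathlib.posix.resolve('dev', 'a.png');
// => '/Users/john/app/dev/a.png' (cwd leaks into the S3 key)

pathlib.posix.resolve('/dev', 'a.png');
// => '/dev/a.png' (absolute path: no leak)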

Code to reproduce:

const keystone = require('keystone');

const s3 = {
  bucket: 'foobar',
  key: '...',
  secret: '...',
  path: 'dev',
  headers: { 'x-amz-acl': 'public-read' }
}
const storage = new keystone.Storage({
  adapter: require('keystone-storage-adapter-s3'),
  s3,
  schema: {
    bucket: true, // optional; store the bucket the file was uploaded to in your db
    etag: true, // optional; store the etag for the resource
    path: true, // optional; store the path of the file in your db
    url: true, // optional; generate & store a public URL
  },
});

const Types = keystone.Field.Types;
const Client = new keystone.List('Client', {
  autokey: { from: 'name', path: 'key', unique: true }
});

Client.add({
  name: { type: String, required: true },
  image: { type: Types.File, storage },
});
Client.register();

When uploading the image to the Client through the admin, the resulting URL is
https://foobar.s3.amazonaws.com/Users/john/mysrc/[redacted]/[redacted]/app/dev/a5735c91051859368da65af294f11892.png
when it probably should be:
http://foobar.s3.amazonaws.com/dev/a5735c91051859368da65af294f11892.png

Even though the docs specify a / as the prefix, I myself failed to enter it and it took quite a while to understand what was going on.

No option to use original filename

It looks like the generateFilename function is set to nameFunctions.randomFilename - meaning any upload using this storage adapter will always have a random name.

Suggest adding an option to allow the original name to be used instead.

Concept:

    if(options.s3.originalname) {
        this.options.generateFilename = function(file, attempt, callback) {
            return callback(null, file.originalname);
        }
    } else {
        this.options.generateFilename = ensureCallback(this.options.generateFilename);
    }

Or should this be handled somewhere higher up in the keystone Types? Or be changed to support a filename function in the Field definition (e.g., filename: (file) => file.originalName)?

pre:upload option is gone?

Hello, before the S3 storage adapter it was possible to use a pre:upload hook to do stuff with the file before uploading. I'm just wondering if that's still a possibility. Thanks!

Code Review

General

  • Should be keystone-storage-adapter-s3, it's specifically a storage adapter and not a generic s3 lib
  • Needs to follow keystone package standards:
    • MIT License
    • index.js as entry point
    • use keystone-config-eslint
    • add npm run lint / npm run test-unit tasks
    • npm test should run linter and unit tests

👍 for comments. Would be helpful to see examples of config, resulting schema and stored data from the File field in the Readme for clarity.

debug

Uploads can run in parallel asynchronously; tie debug messages together somehow (filename?). This is lossy: https://github.com/keystonejs/keystone-s3/blob/master/s3adapter.js#L115

options example

This mix of comments in a comment and invalid syntax in a comment is weird: https://github.com/keystonejs/keystone-s3/blob/master/s3adapter.js#L18

pathlib

In keystone we refer to the path lib as path. For consistency, I'd suggest doing that here as well. Name other variables more explicitly instead:

https://github.com/keystonejs/keystone-s3/blob/master/s3adapter.js#L36

var s3path = options.s3.path;

callback with errors

/*93 */  return callback(new Error('Amazon returned status code: ' + res.statusCode));
// vs.
/*138*/ return callback('Amazon returned status code ' + res.statusCode);

These should be consistent; errors passed to callbacks should always be Error objects, not bare strings.

tone

This isn't helpful, comments should be informative and not convey attitude https://github.com/keystonejs/keystone-s3/blob/master/s3adapter.js#L4

suggested change: note support for node versions in Readme, replace comment with

// support for node 0.12

No support for IAM roles

Knox doesn't support IAM roles, which are becoming more and more the de facto authentication method inside AWS, instead of keys and secrets. See: Automattic/knox#262
I suggest using the native aws-sdk package instead.
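
For reference, the official SDK resolves credentials through its default provider chain (environment variables, shared config files, EC2/ECS instance roles) when none are passed explicitly; a minimal sketch:

var AWS = require('aws-sdk');

// No explicit key/secret: the SDK's default credential provider chain
// supplies them, including IAM instance roles.
var client = new AWS.S3({ region: 'us-east-1' });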

S3 Multiple Uploads

Like with the CloudinaryImages will there be an option for S3 or is there a current workaround for uploading multiple files.

Original Filename

Previously with Types.S3File the database also stored originalname; however, this doesn't seem to be possible anymore. It would be nice to be able to continue doing this.

It's documented with the keystone File field type, but the docs there say that the adapter can override these fields, so I assume this adapter is what's breaking it? If not, happy to close this and re-open against keystone itself.

uploads added to /tmp on server

Hi there,

Please excuse me if I'm missing something simple or misunderstanding. When running our app on the server, we found that before uploads reach the S3 bucket they are also stored on the server in the /tmp folder. It's beginning to fill up the server, so I'm wondering: is there a way to specify the directory these uploads go to? Failing that, I'm planning to add a post-save hook to delete the file in /tmp when the upload is complete (not sure if this will work yet). Any ideas or other suggestions would be greatly appreciated.

thanks!

ACL: "public-read"

ACL in uploadParams doesn't seem to work.
I've tried with keystone-storage-adapter-s3 v2.0.0.

Cloudfront support

It would be nice if we could have a separate CloudFront / CDN link and pull the files from there.

Problem with updates and deletes

Hi,
I am a newbie with nodejs, so I don't understand my problem.
When I create a new document that contains an S3 file everything is OK, but if I modify the document (replacing the file) or delete the document, the associated file doesn't disappear from S3.

What's happening? A configuration problem? Any suggestions?

Excuse my bad English. Thanks in advance.

Public URL is incorrectly generated?

Hello,

Very grateful for this adapter, did notice a small problem with the public url:

return 'https://' + bucket + '.s3.amazonaws.com' + absolutePath; on line 210

all of my public urls are

'https://s3.amazonaws.com/' + bucket + '/' + absolutePath;

Am I the only one that has this problem?

I could easily submit a PR for this.
Thanks

Note field option

It appears that using this storage adapter in conjunction with the field option 'note' leads to a blank page for all pages in the back-office. Am I doing something wrong?

var storage = new keystone.Storage({
  adapter: require('keystone-storage-adapter-s3'),
  s3: {
    key: process.env.S3_KEY,
    secret: process.env.S3_SECRET,
    bucket: process.env.S3_BUCKET,
    region: process.env.S3_REGION,
    path: '/images/myModel/',
    headers: {
      'x-amz-acl': 'public-read'
    }
  },
  schema: {
    bucket: true,
    etag: false,
    path: true,
    url: true
  }
})

MyModel.add({
  image: {
    type: Types.File,
    note: '324 x 324 px (retina)',
    storage: storage,
    mimetype: ['image/png', 'image/jpeg']
  }
})

Thanks in advance

Dynamic path

I'm trying to assign unique paths for each object that's saved. Is there a way we can assign a dynamic path (perhaps based on the id of the object via callback) and then save that?

cover_image: {
      label: "Cover Image",
      type: Types.File,
      storage: s3Storage(returnsId),
      filename: function (item, file) {
          return encodeURI(item._id + '-' + item.name);
      }
}

Support environment variables?

Keystone currently supports detecting S3 options in process.env - see https://github.com/keystonejs/keystone/blob/master/index.js#L75-L77

I think it would be worth continuing to support these, like this: https://github.com/keystonejs/keystone-email/blob/master/lib/transports/mailgun/getSendOptions.js#L7-L12

We may remove support in Keystone for the s3 config option, though? It would probably be awkward to support, and it breaks the separation of concerns between the packages.
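
A minimal sketch of the fallback pattern being suggested (option names taken from this adapter's docs; 'us-east-1' as the default region, per the Readme above):

function resolveS3Options (options) {
  return {
    key: options.key || process.env.S3_KEY,
    secret: options.secret || process.env.S3_SECRET,
    bucket: options.bucket || process.env.S3_BUCKET,
    region: options.region || process.env.S3_REGION || 'us-east-1',
  };
}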

S3 filenames not working correctly?

I've just started trying to migrate an app to keystone v4.0.0-beta. I currently have a working S3 setup. However, now that I've changed to using keystone-storage-adapter-s3, the uploaded files on S3 have strange filenames, and these file names do not match what is stored in the database.

The following is going to be a wall of code, but I figured I should provide all possible info.

My model is as follows:

var keystone = require('keystone');
var Types = keystone.Field.Types;

/**
 * File Model
 * ==========
 */

var File = new keystone.List('File', {
    autokey: { path: 'slug', from: 'name', unique: true },
    track: true,
});

var storage = new keystone.Storage({
    adapter: require('keystone-storage-adapter-s3'),
    s3: {
        path: process.env.S3_BUCKET,
        region: process.env.S3_REGION,
        headers: {
            'x-amz-acl': 'private',
        },
    },
    schema: {
        bucket: true, // optional; store the bucket the file was uploaded to in your db
        etag: true, // optional; store the etag for the resource
        path: true, // optional; store the path of the file in your db
        url: false, // optional; generate & store a public URL
    },
});

File.add({
    name: { type: String, required: true },
    file: {
        type: Types.File,
        storage: storage,
        filename: function (item, file) {
            return encodeURI(item._id + item.name);
        },
        format: function (item, file) {
            return '<pre>' + JSON.stringify(file, false, 2) + '</pre>'
                    + '<img src="' + file.url + '" style="max-width: 300px">';
        },
    },
    folder: { type: Types.Relationship, ref: 'Folder', many: false },
    mimetype: { type: String, hidden: true },
});

File.schema.pre('save', function (next) {
    if (this.file.filetype !== undefined) {
        this.mimetype = convert(this.file.filetype);
    }
    next();
});

File.defaultColumns = 'name, folder, author, lastModifiedDate';
File.register();

I uploaded a file, which I called testing, however this is what it looks like on S3:

Bucket: jstockwin-development
Name:   D:\Documents\Development\GitHub\students4students\jstockwin-development\nJmJPmxQdzsjygqw.pdf
Link:   <HIDDEN>
Size:   111495
Last Modified:  Sun Aug 28 11:42:30 GMT+100 2016
Owner:  jstockwin2
ETag:   6584aab099ef04293e204adb3b935bd0

Note that D:\Documents\Development\GitHub\students4students\ is where my keystone app is stored locally, and not where the file is being uploaded from....

However, on keystone, the file looks like this:

{ _id: 57c2c00c148fbb401c470eac,
  updatedAt: 2016-08-28T10:42:29.086Z,
  createdAt: 2016-08-28T10:42:20.028Z,
  slug: 'testing',
  name: 'testing',
  __v: 0,
  file: 
   { mimetype: 'application/pdf',
     size: 111495,
     filename: 'nJmJPmxQdzsjygqw.pdf',
     etag: '"6584aab099ef04293e204adb3b935bd0"',
     bucket: 'jstockwin-development' } }

So when I try to get the file, which I do using aws-sdk, I'm essentially getting a 404 error. (The specified key does not exist).

Sorry if it's me doing something wrong :)
