keystonejs / keystone-storage-adapter-s3
⚠️ Archived - Legacy S3 Storage Adapter for KeystoneJS
License: MIT License
Per the conversation with @JedWatson on Slack (#dev):
To get the new S3 storage adapter that josephg contributed integrated into keystone-test-project:
- add a keystone-storage-adapter-s3 dependency to the test project's package.json, using the git url
- open the Files list and add a field that uses it
- in admin/client/utils/List.js, remove both instances of /legacy (this will switch you over to the new api)
@josephg, I can take this on if you are not already working on it?
Review notes:
- the package is named keystone-storage-adapter-s3; it's specifically a storage adapter and not a generic s3 lib
- index.js as the entry point
- use keystone-config-eslint
- add npm run lint / npm run test-unit tasks; npm test should run the linter and unit tests
👍 for comments. Would be helpful to see examples of config, resulting schema and stored data from the File field in the Readme for clarity.
Uploads can run in parallel asynchronously, tie debug messages together somehow (filename?). This is lossy: https://github.com/keystonejs/keystone-s3/blob/master/s3adapter.js#L115
This mix of comments in a comment and invalid syntax in a comment is weird: https://github.com/keystonejs/keystone-s3/blob/master/s3adapter.js#L18
In keystone we refer to the path lib as path. For consistency, I'd suggest doing that here as well, and naming other variables more explicitly instead:
https://github.com/keystonejs/keystone-s3/blob/master/s3adapter.js#L36
var s3path = options.s3.path;
/*93 */ return callback(new Error('Amazon returned status code: ' + res.statusCode));
// vs.
/*138*/ return callback('Amazon returned status code ' + res.statusCode);
This isn't helpful, comments should be informative and not convey attitude https://github.com/keystonejs/keystone-s3/blob/master/s3adapter.js#L4
suggested change: note support for node versions in Readme, replace comment with
// support for node 0.12
It appears that using this storage adapter in conjunction with the field option 'note' leads to a blank page for all pages in the back office. Am I doing something wrong?
var storage = new keystone.Storage({
adapter: require('keystone-storage-adapter-s3'),
s3: {
key: process.env.S3_KEY,
secret: process.env.S3_SECRET,
bucket: process.env.S3_BUCKET,
region: process.env.S3_REGION,
path: '/images/myModel/',
headers: {
'x-amz-acl': 'public-read'
}
},
schema: {
bucket: true,
etag: false,
path: true,
url: true
}
})
MyModel.add({
image: {
type: Types.File,
note: '324 x 324 px (retina)',
storage: storage,
mimetype: ['image/png', 'image/jpeg']
}
})
Thanks in advance.
Hi there,
I'm trying to use keystone-storage-adapter-s3 for my File field with my Digital Ocean Spaces bucket (S3-compatible), but no matter what I try, I keep getting "Field errors" when trying to save my model...
import keystone from 'keystone';
const Types = keystone.Field.Types;
const s3Storage = new keystone.Storage({
adapter: require('keystone-storage-adapter-s3'),
s3: {
// endpoint : 'ams3.digitaloceanspaces.com',
key : process.env.S3_KEY,
secret : process.env.S3_SECRET,
bucket : process.env.S3_BUCKET,
region : process.env.S3_REGION, // -> ams3
path : 'uploads',
uploadParams: {
ACL : 'public-read',
}
},
schema: {
bucket: true,
etag : true,
path : true,
url : true
}
});
const MyModel = new keystone.List('MyModel', {
map: {
name: 'title'
}
});
MyModel.add({
fileUpload: {
type : Types.File,
label : 'Upload file',
storage : s3Storage,
initial : true,
required : true
}
});
Is this adapter not compatible with endpoints other than AWS?
I found this post that I have to set the endpoint:
https://www.digitalocean.com/community/questions/how-to-use-digitalocean-spaces-with-the-aws-s3-sdks
But I can't find any reference to that in the adapter's code anywhere.
Thanks!
Previously with Types.S3File the database also stored originalname; however, this doesn't seem to be possible anymore. It would be nice to be able to continue doing this.
It's documented with the keystone File field type here, but there it says that the adapter can override these fields, so I assume it's this breaking it? If not, happy to close this and re-open against keystone itself.
More of an enhancement request, but are there plans to allow saving image dimensions in the schema?
schema: { dimensions: true, ... }
It would be nice if we could add a separate cloudfront / cdn link and pull the files from there.
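A rough sketch of what this could look like. Note that `publicUrl` and the `cdnHost` parameter are assumed names for illustration, not part of the adapter's actual API:

```javascript
// Hypothetical sketch: prefer a CDN / CloudFront host over the raw S3 URL
// when one is configured. `cdnHost` is an assumed option name.
function publicUrl (bucket, absolutePath, cdnHost) {
  if (cdnHost) {
    // e.g. a CloudFront distribution backed by the same bucket
    return 'https://' + cdnHost + absolutePath;
  }
  return 'https://' + bucket + '.s3.amazonaws.com' + absolutePath;
}

console.log(publicUrl('my-bucket', '/images/logo.png', 'd1234abcd.cloudfront.net'));
// https://d1234abcd.cloudfront.net/images/logo.png
```

Since the stored path and filename are unchanged, only the URL-generation step needs to know about the CDN host.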
I've just started trying to migrate an app to keystone v4.0.0-beta. I previously had a working s3 setup. However, now that I've changed to using keystone-storage-adapter-s3, the uploaded files on S3 have strange filenames, and these file names do not match what is stored in the database.
The following is going to be a wall of code, but I figured I should provide all possible info.
My model is as follows:
var keystone = require('keystone');
var Types = keystone.Field.Types;
/**
* File Model
* ==========
*/
var File = new keystone.List('File', {
autokey: { path: 'slug', from: 'name', unique: true },
track: true,
});
var storage = new keystone.Storage({
adapter: require('keystone-storage-adapter-s3'),
s3: {
path: process.env.S3_BUCKET,
region: process.env.S3_REGION,
headers: {
'x-amz-acl': 'private',
},
},
schema: {
bucket: true, // optional; store the bucket the file was uploaded to in your db
etag: true, // optional; store the etag for the resource
path: true, // optional; store the path of the file in your db
url: false, // optional; generate & store a public URL
},
});
File.add({
name: { type: String, required: true },
file: {
type: Types.File,
storage: storage,
filename: function (item, file) {
return encodeURI(item._id + item.name);
},
format: function (item, file) {
return '<pre>' + JSON.stringify(file, false, 2) + '</pre>'
+ '<img src="' + file.url + '" style="max-width: 300px">';
},
},
folder: { type: Types.Relationship, ref: 'Folder', many: false },
mimetype: { type: String, hidden: true },
});
File.schema.pre('save', function (next) {
// convert() is a mimetype helper defined elsewhere in my app
if (this.file.filetype !== undefined) {
this.mimetype = convert(this.file.filetype);
}
next();
});
File.defaultColumns = 'name, folder, author, lastModifiedDate';
File.register();
I uploaded a file, which I called testing, however this is what it looks like on S3:
Bucket: jstockwin-development
Name: D:\Documents\Development\GitHub\students4students\jstockwin-development\nJmJPmxQdzsjygqw.pdf
Link: <HIDDEN>
Size: 111495
Last Modified: Sun Aug 28 11:42:30 GMT+100 2016
Owner: jstockwin2
ETag: 6584aab099ef04293e204adb3b935bd0
Note that D:\Documents\Development\GitHub\students4students\ is where my keystone app is stored locally, and not where the file is being uploaded from.
However, on keystone, the file looks like this:
{ _id: 57c2c00c148fbb401c470eac,
updatedAt: 2016-08-28T10:42:29.086Z,
createdAt: 2016-08-28T10:42:20.028Z,
slug: 'testing',
name: 'testing',
__v: 0,
file:
{ mimetype: 'application/pdf',
size: 111495,
filename: 'nJmJPmxQdzsjygqw.pdf',
etag: '"6584aab099ef04293e204adb3b935bd0"',
bucket: 'jstockwin-development' } }
So when I try to get the file, which I do using aws-sdk, I'm essentially getting a 404 error (The specified key does not exist).
Sorry if it's me doing something wrong :)
Not an issue, but can this be used or extended for multiple images?
for front-end I use:
<input id="upload" type='file' name="File-image-1001" accept=".png,.jpg,.jpeg" multiple />
<input type="hidden" name="image" value="upload:File-image-1001">
on model:
var s3Storage = new keystone.Storage({
adapter: require('keystone-storage-adapter-s3'),
s3: {
path: 'some_path',
region: process.env.S3_REGION,
headers: {
'x-amz-acl': 'public-read',
},
},
schema: {
filename: true,
size: true,
mimetype: true,
path: true,
originalname: true,
url: true
}
});
MyModel.add({ image: { type: Types.File, storage: s3Storage } });
then on server side:
keystone.list('MyModel').updateItem(model, req.body, { files: req.files }, function (err) {
if (err) return error(err, 'Some Error');
});
Hi,
I am a newbie with nodejs, so I don't understand my problem.
When I create a new document that contains an s3 file, all is ok, but if I modify the document (I modify the file) or delete the document, the associated file doesn't disappear from S3.
What's happening? Is it a configuration problem? Any suggestions?
Excuse my bad English. Thanks in advance.
ACL in uploadParams doesn't seem to work.
I've tried with keystone-storage-adapter-s3 v2.0.0.
I'm trying to assign unique paths for each object that's saved. Is there a way we can assign a dynamic path (perhaps based on the id of the object via callback) and then save that?
cover_image: {
label: "Cover Image",
type: Types.File,
storage: s3Storage(returnsId),
filename: function (item, file) {
return encodeURI(item._id + '-' + item.name);
}
}
Hi there,
Please excuse me if I'm missing something simple or misunderstanding. When running our app on the server, we found that before the uploads reach the s3 bucket, they are also stored on the server in the /tmp folder. It's beginning to fill up the server, so I'm wondering: is there a way to specify the directory these uploads go to? Furthermore, I'm planning to add a post-save hook to delete the file in /tmp when the upload is complete (not sure if this will work yet). Any ideas or other suggestions would be greatly appreciated.
thanks!
Is this adapter compatible with the latest version of keystone (beta)?
There are a lot of s3-compatible object storages out there, like minio and co. But there is currently no option to define your own endpoint to use instead of the amazon one. While I'm not too familiar with the official AWS sdk, it looks like you can define an endpoint there and expose it as a config option to users. This would make this plugin usable in a lot more situations.
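Threading such an option through could look roughly like this. `buildClientConfig` and the `endpoint` option are assumed names for illustration; the output object uses the real aws-sdk v2 client config keys:

```javascript
// Hypothetical sketch: map adapter options to an aws-sdk S3 client config,
// passing through a custom endpoint when one is given.
function buildClientConfig (options) {
  var config = {
    accessKeyId: options.key,
    secretAccessKey: options.secret,
    region: options.region,
  };
  if (options.endpoint) {
    // e.g. 'https://ams3.digitaloceanspaces.com' or a local minio server
    config.endpoint = options.endpoint;
    // many S3-compatible stores need path-style addressing
    config.s3ForcePathStyle = true;
  }
  return config;
}

console.log(buildClientConfig({
  key: 'KEY', secret: 'SECRET', region: 'ams3',
  endpoint: 'https://ams3.digitaloceanspaces.com',
}));
```

This would also cover the Digital Ocean Spaces question above, since Spaces only differs from AWS in the endpoint host.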
It looks like the generateFilename function is set to nameFunctions.randomFilename, meaning any upload using this storage adapter will always have a random name.
Suggest adding an option to allow the original name to be used instead.
Concept:
if (options.s3.originalname) {
this.options.generateFilename = function (file, attempt, callback) {
return callback(null, file.originalname);
};
} else {
this.options.generateFilename = ensureCallback(this.options.generateFilename);
}
Or should this be handled somewhere higher up in the keystone Types? Or be changed to support a filename function in the Field definition (e.g., filename: (file) => file.originalname)?
If I upload Book(2013).pdf, I see the following in the logs:
Uploading file "/resources/Book%282013%29.pdf" to "a.bucket.com.au" bucket with mimetype "application/pdf"
file upload successful /resources/Book%282013%29.pdf
The file is uploaded with the key /resources/Book%282013%29.pdf. But S3 encodes the key in the Object URL with %25 for the % symbol, which becomes /resources/Book%25282013%2529.pdf. When I try to download that file, it can't be found because S3 encodes the key a second time.
keystone-storage-adapter-s3/index.js
Lines 34 to 40 in 52f7f56
If I remove the encodeSpecialCharacters() function and upload the same file again, I see
Uploading file "/resources/Book(2013).pdf" to "a.bucket.com.au" bucket with mimetype "application/pdf"
file upload successful /resources/Book(2013).pdf
Which uploads as /resources/Book(2013).pdf, and the Object URL that is encoded by S3 becomes /resources/Book%282013%29.pdf, which I can download.
The comment says that "S3 does not like them for some reason", but I'm not sure what the reason is. I looked in PR #35 but there wasn't any mention of the change. I've also looked at the relevant aws documentation, but it says those characters "are generally safe for use in key names".
I'd be happy to put together a PR for the change. If the current behaviour isn't working, I can't imagine changing it will impact anyone?
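For context, the double-encoding described above can be reproduced with a reconstructed sketch of the helper (based on index.js lines 34 to 40; not a verbatim copy of the adapter's code):

```javascript
// Reconstructed sketch of the adapter's encodeSpecialCharacters helper:
// percent-encode characters like ( ) ! ' * that encodeURI leaves alone.
function encodeSpecialCharacters (filename) {
  return encodeURI(filename).replace(/[!'()*]/g, function (char) {
    return '%' + char.charCodeAt(0).toString(16).toUpperCase();
  });
}

var key = encodeSpecialCharacters('/resources/Book(2013).pdf');
console.log(key); // /resources/Book%282013%29.pdf

// When S3 later URL-encodes the stored key for the Object URL, the
// literal % is escaped again, producing the mismatch from the logs:
console.log(encodeURIComponent(key)); // contains %25282013%2529
```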
I'm currently failing to make S3 work at all...
I'll paste my entire model below.
Firstly, I can't always create a new item, which only requires name to be entered in the initial form. All I get then is "Unknown Error". It seems to occur when creating the first item; a page refresh fixes this.
After this, when trying to upload a file, I almost always get a flash message saying "Field errors", and nothing more. There is nothing logged anywhere, and my pre-save hook doesn't seem to be reached.
I'm not really sure how to get at more detailed errors either... help!
var keystone = require('keystone');
var Types = keystone.Field.Types;
/**
* File Model
* ==========
*/
var File = new keystone.List('File', {
autokey: { path: 'slug', from: 'name', unique: true },
track: true,
});
var storage = new keystone.Storage({
adapter: require('keystone-storage-adapter-s3'),
s3: {
path: process.env.S3_BUCKET,
region: process.env.S3_REGION,
headers: {
'x-amz-acl': 'public', // note: 'public' is not a valid canned ACL; S3 expects e.g. 'public-read'
},
},
schema: {
bucket: true, // optional; store the bucket the file was uploaded to in your db
etag: true, // optional; store the etag for the resource
path: true, // optional; store the path of the file in your db
url: true, // optional; generate & store a public URL
},
});
File.add({
name: { type: String, required: true },
file: {
type: Types.File,
storage: storage,
filename: function (item, file) {
return encodeURI(item._id + '-' + item.name);
},
format: function (item, file) {
return '<pre>' + JSON.stringify(file, false, 2) + '</pre>'
+ '<img src="' + file.url + '" style="max-width: 300px">';
},
},
link: { type: Types.Url, note: 'This will be automatically populated once you\'ve uploaded a file.' },
});
File.schema.pre('save', function (next) {
console.log(this);
if (this.file && this.file.url) {
this.link = this.file.url;
}
next();
});
File.defaultColumns = 'name';
File.register();
Like with CloudinaryImages, will there be an option for S3, or is there a current workaround for uploading multiple files?
When specifying the [s3 config].path property without a / prefix, the resulting filename at s3 includes the current working directory of the application. I believe this is less than desirable, and possibly a bug.
I believe this is due to the _resolveFilename step here:
keystone-storage-adapter-s3/index.js
Line 107 in 4fb4ce1
keystone-storage-adapter-s3/index.js
Lines 86 to 94 in 4fb4ce1
Code to reproduce:
const keystone = require('keystone');
const s3 = {
bucket: 'foobar',
key: '...',
secret: '...',
path: 'dev',
headers: { 'x-amz-acl': 'public-read' }
}
const storage = new keystone.Storage({
adapter: require('keystone-storage-adapter-s3'),
s3,
schema: {
bucket: true, // optional; store the bucket the file was uploaded to in your db
etag: true, // optional; store the etag for the resource
path: true, // optional; store the path of the file in your db
url: true, // optional; generate & store a public URL
},
});
const Types = keystone.Field.Types;
const Client = new keystone.List('Client', {
autokey: { from: 'name', path: 'key', unique: true }
});
Client.add({
name: { type: String, required: true },
image: { type: Types.File, storage },
});
Client.register();
When uploading the image to the Client through the admin, the resulting URL is
https://foobar.s3.amazonaws.com/Users/john/mysrc/[redacted]/[redacted]/app/dev/a5735c91051859368da65af294f11892.png
when it probably should be:
http://foobar.s3.amazonaws.com/dev/a5735c91051859368da65af294f11892.png
Even though the docs specify a / as the prefix, I myself failed to enter it and it took quite a while to understand what was going on.
Keystone currently supports detecting S3 options in process.env
- see https://github.com/keystonejs/keystone/blob/master/index.js#L75-L77
I think it would be worth continuing to support these, like this: https://github.com/keystonejs/keystone-email/blob/master/lib/transports/mailgun/getSendOptions.js#L7-L12
We may remove support in Keystone for the s3 config option, though. It would probably be awkward to support, and it breaks separation of concerns between the packages.
Hello, before the S3 storage adapter it was possible to use a pre:upload hook to do stuff with the file before uploading. I'm just wondering if that's still a possibility. Thanks!
Hello,
Very grateful for this adapter. I did notice a small problem with the public url:
return 'https://' + bucket + '.s3.amazonaws.com' + absolutePath;
on line 210
all of my public urls look like
'https://s3.amazonaws.com/' + bucket + absolutePath;
Am I the only one that has this problem?
I could easily submit a PR for this.
Thanks
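For reference, the line quoted above builds virtual-hosted-style URLs, while the URLs the reporter describes are path-style. Both address the same object, sketched here as plain string builders:

```javascript
// Virtual-hosted style: bucket name in the hostname (what line 210 builds).
function virtualHostedUrl (bucket, absolutePath) {
  return 'https://' + bucket + '.s3.amazonaws.com' + absolutePath;
}

// Path style: bucket name in the path (what the reporter is seeing).
function pathStyleUrl (bucket, absolutePath) {
  return 'https://s3.amazonaws.com/' + bucket + absolutePath;
}

console.log(virtualHostedUrl('my-bucket', '/images/logo.png'));
// https://my-bucket.s3.amazonaws.com/images/logo.png
console.log(pathStyleUrl('my-bucket', '/images/logo.png'));
// https://s3.amazonaws.com/my-bucket/images/logo.png
```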
Knox doesn't support IAM roles, which is becoming more and more the de-facto authentication inside AWS instead of keys and secrets. See: Automattic/knox#262
I suggest using the native aws-sdk package instead.
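The appeal of aws-sdk here is its default credential provider chain. A toy illustration of the idea follows; this is not the SDK's real implementation, and the provider names are made up for the sketch:

```javascript
// Toy credential provider chain: try each provider in order and use the
// first that yields credentials. The real aws-sdk chain covers env vars,
// the shared credentials file, and EC2 instance metadata (IAM roles),
// which is what knox cannot use.
function resolveCredentials (providers) {
  for (var i = 0; i < providers.length; i++) {
    var creds = providers[i]();
    if (creds) return creds;
  }
  return null;
}

var creds = resolveCredentials([
  function fromEnv () {
    return process.env.S3_KEY
      ? { accessKeyId: process.env.S3_KEY, secretAccessKey: process.env.S3_SECRET }
      : null;
  },
  function fromSharedFile () { return null; },      // ~/.aws/credentials (stubbed)
  function fromInstanceMetadata () { return null; } // IAM role via EC2 metadata (stubbed)
]);
```

With such a chain, key/secret become optional config: inside AWS, the instance's IAM role supplies credentials automatically.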