mikaak / s3-plugin-webpack
Uploads files to S3 after the build completes.
License: MIT License
Getting the following error using the latest version:
95% emit/Users/Dids/Documents/React/<snip>/node_modules/webpack-s3-plugin/dist/s3_plugin.js:190
_this2.getAllFilesRecursive(file).then(function (res) {
^
TypeError: Cannot read property 'getAllFilesRecursive' of undefined
at /Users/Dids/Documents/React/<snip>/node_modules/webpack-s3-plugin/dist/s3_plugin.js:190:23
at FSReqWrap.oncomplete (fs.js:82:15)
For example: if the bucket requested does not exist, the error is logged to the console, but the build does not fail as it should and returns success (exit code 0):
ERROR in S3Plugin: NoSuchBucket: The specified bucket does not exist
Child html-webpack-plugin for "index.html":
+ 3 hidden modules
Uploading [>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 100% 0.0s
I also see that the output is out of order from what I expect, with upload progress text appearing in seemingly random spots in the log.
Environment:
Example config:
const webpackMerge = require('webpack-merge');
const commonConfig = require('./webpack.prod.js');
const S3Plugin = require('webpack-s3-plugin');
module.exports = webpackMerge(commonConfig, {
plugins: [
new S3Plugin({
exclude: /^.*\.(map|map\.gz)$/,
s3Options: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: 'us-west-2'
},
s3UploadOptions: {
Bucket: 'something-that-doesnt-exist'
},
cloudfrontInvalidateOptions: {
DistributionId: process.env.CLOUDFRONT_DISTRIBUTION_ID,
Items: ['/*']
}
})
]
});
It would be cool to add gzipped versions of JS and CSS files during the upload process, to offer CloudFront compression for free on a CloudFront distribution.
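In the meantime, a sketch of one way to get this today, pairing compression-webpack-plugin with the function-valued ContentEncoding upload option shown in other configs in this thread (bucket and region are placeholders):
const CompressionPlugin = require('compression-webpack-plugin')
const S3Plugin = require('webpack-s3-plugin')

module.exports = {
  plugins: [
    // Overwrite the emitted JS/CSS assets with gzipped copies.
    new CompressionPlugin({
      asset: '[path]',
      algorithm: 'gzip',
      test: /\.(js|css)$/
    }),
    new S3Plugin({
      s3Options: { region: 'us-west-2' }, // placeholder region
      s3UploadOptions: {
        Bucket: 'my-bucket', // placeholder bucket
        // Tag the gzipped files so browsers and CloudFront decode them.
        ContentEncoding(fileName) {
          if (/\.(js|css)$/.test(fileName)) return 'gzip'
        }
      }
    })
  ]
}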
Getting this error whenever I put the require path in a webpack config.
Running Windows 8
Error: Cannot find module './helpers'
at Function.Module._resolveFilename (module.js:337:15)
at Function.Module._load (module.js:287:25)
at Module.require (module.js:366:17)
at require (module.js:385:17)
at Object.<anonymous> (C:\Repositories\Clients\VolunteerLegacy\webhost\VolunteerWidgetv3\node_modules\webpack-s3-plugin\dist\s3_plugin.js:110:16)
at Module._compile (module.js:435:26)
at Object.Module._extensions..js (module.js:442:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:311:12)
at Module.require (module.js:366:17)
at require (module.js:385:17)
at Object.<anonymous> (C:\Repositories\Clients\VolunteerLegacy\webhost\VolunteerWidgetv3\webpack.deploy.config.js:8:16)
at Module._compile (module.js:435:26)
at Object.Module._extensions..js (module.js:442:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:311:12)
where line 8 is: var S3Plugin = require('webpack-s3-plugin');
Hi!
The Amazon JS SDK (https://www.npmjs.com/package/aws-sdk) has a few different ways of letting the developer provide credentials: the ~/.aws/credentials file, the current profile, or environment variables (http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html).
It would be cool if this module also enabled this, as a fallback when the key and secret key have not been explicitly set. The s3 module uses aws-sdk itself, so this could potentially work.
For me it is much more convenient to keep my credentials where they are, in ~/.aws/credentials, where AWS's own tools also look for them.
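A minimal consumer-side sketch of that fallback, assuming the SDK's default provider chain takes over when no keys are passed (region and bucket are placeholders):
const S3Plugin = require('webpack-s3-plugin')

// Only pass explicit keys when they are set; otherwise leave them out so
// the AWS SDK falls back to its default credential provider chain
// (~/.aws/credentials, environment variables, instance profiles).
const s3Options = { region: 'us-east-1' } // placeholder region
if (process.env.AWS_ACCESS_KEY_ID && process.env.AWS_SECRET_ACCESS_KEY) {
  s3Options.accessKeyId = process.env.AWS_ACCESS_KEY_ID
  s3Options.secretAccessKey = process.env.AWS_SECRET_ACCESS_KEY
}

module.exports = {
  plugins: [
    new S3Plugin({
      s3Options,
      s3UploadOptions: { Bucket: 'my-bucket' } // placeholder bucket
    })
  ]
}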
It's uploading CSS but not JS. Please downgrade to 0.6.7 momentarily while it's being fixed. Not sure how the tests passed on this!
Hi. Here is my case:
I'm using S3 for hosting a multipage application. Since S3 doesn't provide a URL-rewriting feature, I use the following workaround: upload the HTML files (app, admin) without the .html extension and add the metadata ContentType: text/html.
I'm also going to add a cache-control attribute for all files excluding HTML.
Is there a way to set metadata as described above in the current version? If not, how can I best implement it? Do you plan to add this feature?
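A minimal sketch of what that could look like, assuming the function-valued s3UploadOptions shown in other configs in this thread (bucket name and cache lifetime are placeholders):
const S3Plugin = require('webpack-s3-plugin')

new S3Plugin({
  s3UploadOptions: {
    Bucket: 'my-bucket', // placeholder
    // Extensionless HTML entry points ("app", "admin") need an explicit type.
    ContentType(fileName) {
      if (!/\./.test(fileName) || /\.html$/.test(fileName)) return 'text/html'
    },
    // Long cache for everything except HTML and the extensionless pages.
    CacheControl(fileName) {
      if (/\./.test(fileName) && !/\.html$/.test(fileName)) return 'max-age=31536000'
    }
  }
})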
Is there any way to delete previous files? I use a hash in file names so that I only have to invalidate index.html, but every time webpack generates a new hash value I get duplicate files in my S3 bucket. Is there any way to avoid that?
I'll gladly add more information if the bug is not reproducible
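For the duplicate-files question above, a workaround sketch outside the plugin, using aws-sdk directly to prune a key prefix before uploading (bucket, prefix, and region are placeholders; listObjectsV2 returns at most 1,000 keys per call, so paginate for larger buckets):
const AWS = require('aws-sdk')

const s3 = new AWS.S3({ region: 'us-east-1' }) // placeholder region

// Delete every object under a prefix, then let the plugin upload the new build.
function emptyPrefix(bucket, prefix) {
  return s3.listObjectsV2({ Bucket: bucket, Prefix: prefix }).promise()
    .then(data => {
      const objects = (data.Contents || []).map(obj => ({ Key: obj.Key }))
      if (!objects.length) return Promise.resolve()
      return s3.deleteObjects({ Bucket: bucket, Delete: { Objects: objects } }).promise()
    })
}

// Usage: emptyPrefix('my-bucket', 'builds/').then(runWebpackBuild)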
Please complete these steps and check these boxes (by putting an x inside the brackets) before filing your issue:
Thank you for adhering to this process! This ensures that I can pay attention to issues that are relevant and answer questions faster.
Hello,
I'm not able to use the cdnizerCss option; it's not working at all. Paths for images and fonts remain exactly the same in my CSS file.
Thanks.
new S3Plugin({
include: /.*\.(eot|svg|ttf|woff|woff2|otf)/,
s3Options: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: process.env.AWS_REGION
},
s3UploadOptions: {
Bucket: 'my-bucket'
},
basePathTransform: addSha,
cdnizerCss: {
test: /fonts/,
cdnUrl: "http://test.com"
},
cdnizerOptions: {
defaultCDNBase: 'http://asdf.ca'
}
}),
I don't think these are relevant, but to knock off your checklist super quick:
node version: v5.0.0
s3-plugin-webpack: "webpack-s3-plugin": "^0.9.0"
OS: MacOS 10.12.1 (16B2555)
The README outlines that this plugin supports AWS best practices for credentials. I am experiencing an issue with this that can be reproduced with the following:
[default]
region = us-east-1
aws_access_key_id = YOUR_KEY_ID
aws_secret_access_key = YOUR_ACCESS_KEY
Set up ~/.aws/credentials as above and omit s3Options. When doing this, I receive the following error:
ERROR in S3Plugin: PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
If I create a bucket in us-west-2, update my configuration to point at it, and repeat the test, I receive no error message.
If I specify s3Options with us-east-1, it works.
The problem is the result of:
s3Options: _.merge({}, DEFAULT_S3_OPTIONS, s3Options)
Since s3Options is empty when credentials are provided via AWS best practices, your defaults always make it into the client.
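A sketch of one possible fix, as a fragment against the plugin internals quoted above (s3Options and DEFAULT_S3_OPTIONS come from the plugin source; not a drop-in patch):
const _ = require('lodash')

// Apply library defaults only when the user passed explicit options; with
// an empty s3Options, hand the SDK an empty config so its own credential
// and region resolution (shared config, env vars, profiles) applies.
const mergedS3Options = _.isEmpty(s3Options)
  ? {}
  : _.merge({}, DEFAULT_S3_OPTIONS, s3Options)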
I have a really quick question: is there an option to sync the assets up, e.g. clean the S3 bucket before I upload? Just curious how you might handle asset revving to clear out the old assets and keep the bucket clean. Sorry if I missed something obvious, and thanks again for your wonderful work on this. Using it in production with great success!
Cheers.
Thanks so much for the hard work @MikaAK!
I'm on macOS with a config looking like this:
config.plugins.push(new S3Plugin({
s3Options: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: 'us-east-1' // Bucket is US Standard
},
s3UploadOptions: {
Bucket: 'my-bucket-name'
},
basePath: 'builds'
}))
However, those keys are not a root AWS credential; instead they're attached to a group with this resource policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "arn:aws:s3:::*"
},
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::my-bucket-name/*",
"arn:aws:s3:::my-bucket-name"
]
},
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::my-bucket-name/*",
"arn:aws:s3:::my-bucket-name"
]
}
]
}
The upload throws an AccessDenied error, like so:
ERROR in S3Plugin: AccessDenied: Access Denied
Child html-webpack-plugin for "index.html":
+ 3 hidden modules
Uploading [>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆] 47% 4.3s%
Am I missing something simple? I use S3 a lot, and I use that policy for almost every S3 "uploader" config I have.
Thank you so much!
Hi,
thank you for your great work with this plugin!
I have an issue, but I don't really know exactly where it fails. The problem is that some extensions do not accept ContentEncoding.
Here is my environment:
Node: v6.9.3
npm: 3.10.10
Plugin: v0.9.2
The plugin configuration is the following one:
common.plugins.push(
new S3Plugin({
include: /.*\.(css|js|svg|png|jpe?g|gif|ico|pdf)/,
s3Options: s3Options,
s3UploadOptions: {
Bucket: siteConfiguration.parameters.amazon_bucket,
/*
* Images cache-control: 1 day
* Other resources cache-control: 1 week
* Data resources cache-control: 1 month with revalidation
* Scripts and styles cache-control: 1 year
*/
CacheControl(filename) {
var cacheControl = configuration.s3CacheControl.misc;
if (
configuration.s3CacheControl.imagesExtensions
.indexOf(path.extname(filename)) >= 0
) {
cacheControl = configuration.s3CacheControl.images;
} else if (
configuration.s3CacheControl.noCacheExtensions
.indexOf(path.extname(filename)) >= 0
) {
cacheControl = configuration.s3CacheControl.noCache;
} else if (/\.js/.test(filename) || /\.css/.test(filename)) {
cacheControl = configuration.s3CacheControl.stylesAndScripts;
}
return cacheControl;
},
ContentEncoding(filename) {
// gzip only css and js files
if (
/\.js/.test(filename) ||
/\.css/.test(filename) ||
/\.ico/.test(filename)
) {
return 'gzip';
}
},
},
basePathTransform: function () {
return configuration.s3AssetPath;
},
cloudfrontInvalidateOptions:
siteConfiguration.parameters.amazon_cloudfront_distribution_id ?
{
DistributionId: siteConfiguration.parameters.amazon_cloudfront_distribution_id,
Items: [`/${configuration.s3AssetPath}*`],
} :
{},
})
);
I launch webpack as usual, and everything worked for me until I decided to gzip the favicon file. I tried adding a console.log in the ContentEncoding condition, and the output is:
favicon.ico
bundles/mybundle/core.js
bundles/mybundle/helper.js
bundles/mybundle/layout.css
bundles/mybundle/third-party.js
application.js
assets.js
bundler.js
style.css
When I check the S3 console, the metadata is not filled in for the favicon but is filled in for both the CSS and JS files.
Do you need more information?
Thank you
Hey,
Great work on this. I am using it in a project inside a Docker container on top of Alpine Linux, and the plugin doesn't run as part of my build. Any ideas why?
Regards
The plugin reports an Access Denied error if the bucket has no Upload/Delete permission for "Everyone". No other permission settings seem to work.
Node version 4.5.0
s3-plugin-webpack version 0.9.2
OS version Windows 8.1
If filing a bug report, please include a list of steps that describe how to reproduce the bug you are experiencing. Include your config being passed to the S3Plugin.
Using the configuration below, the index.html file is updated properly by cdnizer, but the file is uploaded to S3 before cdnizer completes its path replacement.
new S3Plugin({
directory: 'dist/',
s3Options: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: 'us-east-1'
},
cdnizerOptions: {
defaultCDNBase: '/' + hash,
files: [ '/app.js' ]
},
s3UploadOptions: {
Bucket: 'mobile.' + process.env.TARGET + '.blah.com'
}
})
I have been trying to use this plugin, and although the S3 upload part does work (I can confirm files are uploaded), I get an error, and I don't see the invalidation show up in the AWS CloudFront Invalidations console for my CloudFront distribution.
I made sure to check that I have an IAM role with the CloudFrontFullAccess policy.
tl;dr: files are uploaded to S3 as expected, but there is an UnknownEndpoint error and no invalidation is recorded in my CloudFront dashboard.
$ node -v
v6.9.4
$ yarn -v
yarn install v0.21.3
[1/4] 🔍 Resolving packages...
success Already up-to-date.
✨ Done in 1.15s.
Using "webpack-s3-plugin": "^1.0.0-rc.0"
on MacOS Sierra 10.12.3
Here is my webpack config
// webpack.config.release.js
const prodConfig = require('./webpack.config.prod');
const S3Plugin = require('webpack-s3-plugin');
const webpackMerge = require('webpack-merge');
module.exports = webpackMerge(prodConfig, {
plugins: [
new S3Plugin({
s3Options: {
accessKeyId: 'xxx',
secretAccessKey: 'xxx',
region: 'us-east-1'
},
s3UploadOptions: {
Bucket: 'xxx'
},
cdnizerOptions: {
defaultCDNBase: 'xxx'
},
cloudfrontInvalidateOptions: {
DistributionId: 'xxx',
Items: ['/index.html']
}
})
]
});
I run yarn run release
"release": "webpack --config ./webpack.config.release.js --progress --profile --bail",
And this is the error I'm getting.
ERROR in S3Plugin: UnknownEndpoint: Inaccessible host: `s3.amazonaws.com'. This service may not be available in the `us-east-1' region.
Child html-webpack-plugin for "index.html":
+ 1 hidden modules
Child favicons-webpack-plugin for "icons/stats.json":
Asset Size Chunks Chunk Names
icons/stats.json 5.63 kB 0 [emitted]
+ 1 hidden modules
Uploading [>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 100% 0.0s
✨ Done in 605.86s.
Thanks for making this plugin, and let me know if you need any more information!
Trying to get this plugin working with the latest versions of webpack, React, etc., but I keep getting the following error:
ERROR in S3Plugin: Error: Non-file stream objects are not supported with SigV4 in AWS.S3
I get the following error when I'm using AWS_DEFAULT_PROFILE as input:
ERROR in S3Plugin: NoSuchDistribution: The specified distribution does not exist.
It does not happen when I use AWS access keys, and it only happens for the CloudFront invalidation; the normal S3 upload works fine with the AWS_DEFAULT_PROFILE env var.
So I reckon it's this code in invalidateCloudfront():
cloudfront.config.update({
accessKeyId: clientConfig.s3Options.accessKeyId,
secretAccessKey: clientConfig.s3Options.secretAccessKey,
})
that breaks it. I think this should only be performed if accessKeyId and secretAccessKey are given, and not when an AWS profile is used.
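A minimal sketch of that guard, using the clientConfig shape from the snippet above:
// Only override the CloudFront client's credentials when explicit keys
// were supplied; otherwise keep whatever the AWS profile resolved.
const { accessKeyId, secretAccessKey } = clientConfig.s3Options
if (accessKeyId && secretAccessKey) {
  cloudfront.config.update({ accessKeyId, secretAccessKey })
}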
I have set the Content-Type option as follows, but in my S3 it is being set as: application/x-www-form-urlencoded
ContentType: (fileName) => {
if (/\.js/.test(fileName)) {
return 'application/javascript'
}
if (/\.css/.test(fileName)) {
return 'text/css'
}
}
Environment:
Is this an issue with the s3 package itself?
I'm getting an access denied error, but my command-line tools work when executing a similar command, aws s3 sync. What is the best way to debug at this point? Thanks.
We've had issues in the past where new versions of this plugin are unstable and unusable. A test suite would help ensure new revisions still work
Hi, and thanks for submitting a fix for cache invalidation, but npm can't find the new version :(
I did look at https://www.npmjs.com/package/webpack-s3-plugin and 0.9.1 is visible, and you have published 0.9.1 in this repo, but https://registry.npmjs.org/webpack-s3-plugin still says 0.9.0 is the latest (even though 0.9.1 has a timestamp in the JSON file).
So I'm guessing that the publishing of 0.9.1 somehow failed; could you please try to republish the 0.9.1 version?
/BR
Erik
Step 1.
The license file needs to be updated as part of the transfer into webpack-contrib.
The current copyright holder (@MikaAK) needs to commit the copyright change to act as "signing over" the project to webpack-contrib, which is part of the JS Foundation.
The license file needs to be named LICENSE (no extension) if it isn't already.
The LICENSE needs to have the following copy.
DO NOT change the license prop in the package.json.
Copyright JS Foundation and other contributors
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
'Software'), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Step 2.
@bebraw & @d3viant0ne need to be added as NPM owners.
Step 3.
As you cannot transfer a repo into an organization unless you have admin-level access to it, once the above two steps have been satisfied the repo needs to be transferred to me, and I will transfer it into webpack-contrib.
How can I upload my dist/index.html?
Hi,
Is it possible to adjust file name when uploading to S3?
My current setup is this:
new CompressionPlugin({
asset: "[path]", // "[path].gz[query]"
algorithm: "gzip",
test: /\.(js|css|map)$/,
// threshold: 10240,
minRatio: 10 // 0.8
}),
new S3Plugin({
s3Options: {
accessKeyId: 'xxx',
secretAccessKey: 'xxx',
region: 'xxx'
},
s3UploadOptions: {
Bucket: 'xxx',
ContentEncoding(fileName) {
if (/\.(js|css|map)$/.test(fileName)) {
return 'gzip';
}
}
},
cloudfrontInvalidateOptions: {
DistributionId: 'xxx',
Items: ["/*"]
}
}),
As you can see, the CompressionPlugin overwrites the .js, .css, and .map files with their compressed versions. Ideally, I would like to retain the original uncompressed files, but skip them when uploading to S3 and instead upload .js.gz as .js, etc. Is this possible to do with your plugin?
Thanks!
My bucket is in ap-northeast-2, which requires SigV4.
When I build, it cannot upload to S3, failing with this message:
ERROR in S3Plugin: Error: Non-file stream objects are not supported with SigV4 in AWS.S3
Do you have a plan to add an 'option' for SigV4?
My environment is below:
Node v4.2.3
[email protected]
OSX 10.10.5
Example config:
const S3Plugin = require('webpack-s3-plugin')
module.exports = {
plugins: [
new S3Plugin({
// s3Options are required
s3Options: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: 'ap-northeast-2'
},
s3UploadOptions: {
Bucket: 'my'
}
})
]
};
After upgrading to 0.4.8, during webpack compilation I get the following error:
ERROR in S3Plugin: Error: Invalid or empty files list supplied
We had to roll back to 0.4.5 to make things work.
What happened to the progress bar? :)
After upgrading to 0.6.7, I've noticed the file name is being prepended with a backslash. Up to 0.6.6 the uploaded file was correctly named in my bucket as bundle.js, whereas now the uploaded file name reads \bundle.js.
This is how I've got the plugin configured in my webpack.config.js:
new S3Plugin({
include: /bundle.js/,
s3Options: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: 'us-east-1'
},
s3UploadOptions: {
Bucket: 'dev.bucket'
},
basePath: 'some/path/dev',
directory: path.join(__dirname, 'dist')
})
I'm running node v5.8 on Windows 7.
After the build, I got these messages in my console:
> cross-env NODE_ENV=production webpack --config internals/webpack/webpack.prod.babel.js --color -p
(node:6731) fs: re-evaluating native module sources is not supported. If you are using the graceful-fs module, please update it to a more recent version.
Uploading [>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆∆] 43% 1.0s [816] multi main 28 bytes {0} [built]
+ 1237 hidden modules
Uploading [>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 100% 0.0s
npm ERR! Darwin 14.5.0
npm ERR! argv "/usr/local/Cellar/node/6.2.2/bin/node" "/usr/local/bin/npm" "run" "build"
npm ERR! node v6.2.2
npm ERR! npm v3.9.5
npm ERR! code ELIFECYCLE
npm ERR! [email protected] build: `cross-env NODE_ENV=production webpack --config internals/webpack/webpack.prod.babel.js --color -p`
npm ERR! Exit status 2
npm ERR!
npm ERR! Failed at the [email protected] build script 'cross-env NODE_ENV=production webpack --config internals/webpack/webpack.prod.babel.js --color -p'.
Not sure what's going on under the hood; there is no error log showing.
I noticed an issue: when defining output.publicPath in a webpack configuration, cdnizerOptions.defaultCDNBasePath will not be reflected in the generated index.html.
$ node -v
v6.9.1
const ExtractTextPlugin = require('extract-text-webpack-plugin');
const HtmlWebpackPlugin = require('html-webpack-plugin');
const path = require('path');
const webpack = require('webpack');
const S3Plugin = require('webpack-s3-plugin');
module.exports = {
context: path.resolve(__dirname, 'src'),
entry: {
index: './index.js',
vendor: './vendor.js'
},
output: {
path: path.join(__dirname, '/dest/web/abc'),
publicPath: '/abc',
filename: '[name].[chunkhash].bundle.js',
sourceMapFilename: '[name].[chunkhash].bundle.map'
},
module: {
rules: [{
test: /\.js$/,
exclude: /node_modules/,
use: [{
loader: 'babel-loader',
options: {
presets: [
['es2015', { modules: false }]
]
}
}, {
loader: 'eslint-loader',
options: {
failOnWarning: false,
failOnError: true
}
}]
}, {
test: /\.html$/,
exclude: path.join(__dirname, './src/index.html'),
use: [{
loader: 'html-loader'
}]
}, {
test: /\.less$/,
use: ExtractTextPlugin.extract({
fallback: 'style-loader',
use: ['css-loader', 'less-loader']
})
}, {
test: /\.(jpg|png|gif)$/,
use: [{
loader: 'file-loader'
}]
}, {
test: /\.woff(2)?(\?v=[0-9]\.[0-9]\.[0-9])?$/,
loader: 'url-loader?limit=10000&mimetype=application/font-woff'
}, {
test: /\.(ttf|eot|svg)(\?v=[0-9]\.[0-9]\.[0-9])?$/,
loader: 'file-loader'
}]
},
plugins: [
new webpack.optimize.CommonsChunkPlugin({
name: ['vendor']
}),
new ExtractTextPlugin('[name].[chunkhash].style.css'),
new HtmlWebpackPlugin({
template: './index.html',
chunksSortMode: 'dependency'
}),
new S3Plugin({
s3Options: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: 'xxx'
},
s3UploadOptions: {
Bucket: 'xxx'
},
cdnizerOptions: {
defaultCDNBase: 'http://abc123.cloudfront.net'
},
cloudfrontInvalidateOptions: {
DistributionId: 'xxx',
Items: ['/index.html']
}
})
]
};
<!DOCTYPE html>
<html lang="en">
<head>
<base href="/">
<link href="/abc/index.61ccaa401863e6e0122e.style.css" rel="stylesheet"></head>
<body ng-app="ne">
<ne-bootstrap>Loading...</ne-bootstrap>
<script type="text/javascript" src="/abc/vendor.a42ee6ffe37f88d1821d.bundle.js"></script><script type="text/javascript" src="/abc/index.61ccaa401863e6e0122e.bundle.js"></script></body>
</html>
I expected /abc/ to be replaced with the value of defaultCDNBasePath.
Note: if the value of output.publicPath is an empty string, the expected behavior is seen.
Seems like there is an issue when uploading to servers in Europe... probably related to this: http://www.sitefinity.com/developer-network/forums/set-up-installation/amazon-s3---must-be-addressed-using-the-specified-endpoint
ERROR in S3Plugin: PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
Hi, I'm having a hard time getting CDNizer set up in a way that makes sense. Since this is webpack, a lot of my images (all of them, probably) are going to be translated to requires by webpack, so they won't match the default matchers in CDNizer (which looks for CSS urls, img tags, etc.). That in itself is no big deal; I started setting up a custom matcher, but then ran into an issue. This plugin runs CDNizer at the 'after-emit' plugin point, and the UglifyJS plugin does its work at 'optimize-chunk-assets', so CDNizer is trying to do find/replace on source that's already been uglified (assuming you're using the Uglify plugin). So if I were to write a custom matcher for the images I'm requiring, I'd have to write a pattern to match what Uglify spits out, and that seems fragile; I'm not confident that Uglify is always going to give me e.exports=n.p, for example.
Since image requires and the Uglify plugin are in pretty widespread use, this seems like a significant issue. Wouldn't it be preferable for CDNizer to work on module source at some point before any post-processing occurs? Or is there something I'm missing; can CDNizer somehow be used with image requires and Uglify without issues?
It would be nice to have the ability to access the compilation object in the basePathTransform function.
It would be very useful if you would like to use fields such as compilation.fullHash or compilation.hash:
new S3Plugin({
basePath: 'test',
basePathTransform: (basePath, compilation) => {
return basePath + compilation.fullHash;
},
s3Options: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: 'us-west-1'
},
s3UploadOptions: {
Bucket: 'MyBucket'
}
})
As an alternative, you could use https://github.com/webpack/loader-utils#interpolatename to interpolate basePath*:
new S3Plugin({
basePath: 'test/[hash]',
basePathTransform: basePath => {
return basePath + '[hash]';
},
s3Options: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: 'us-west-1'
},
s3UploadOptions: {
Bucket: 'MyBucket'
}
})
I'm trying to upload the contents of an entire directory (public/ in my case), which contains subfolders for my css, js etc.
I'm getting the following error:
95% emitfs.js:844
return binding.lstat(pathModule._makeLong(path));
^
Error: ENOENT: no such file or directory, lstat 'bootstrap.min.css'
new S3Plugin({
progress: true,
s3Options: {
accessKeyId: envr.S3_ACCESS_KEY_ID,
secretAccessKey: envr.S3_SECRET_ACCESS_KEY,
region: envr.S3_REGION,
},
s3UploadOptions: {
Bucket: envr.S3_BUCKET
},
cdnizerOptions: {
defaultCDNBase: envr.ASSETS_FQDN
},
cloudfrontInvalidateOptions: {
DistributionId: envr.CF_DISTRIBUTION_ID,
Items: [`/${envr.BUILD_STAMP}/**`]
},
basePathTransform: () => envr.BUILD_STAMP
})
For some reason, the above config doesn't seem to properly replace imported file base paths.
import CustomerLandingVideoMp4 from '../../assets/images/landing/customerLandingVideo.mp4';
^ This still resolves to
http://localhost:8002/7a173da26e30eae2eec7cc2aad967c74.jpg
Any ideas?
Getting "TypeError: undefined is not a function" when trying to use the plugin with node 0.12.10.
node_modules/webpack-s3-plugin/dist/s3_plugin.js:170
return fPath.endsWith(PATH_SEP) ? fPath : fPath + PATH_SEP;
TypeError: undefined is not a function
at S3Plugin.addSeperatorToPath (node_modules/webpack-s3-plugin/dist/s3_plugin.js:170:20)
Apparently endsWith isn't yet available in this version of the v8 engine (3.28.71.19) that node 0.12.10 uses, or so I presume.
To improve backwards compatibility, would it be possible to use lodash's endsWith method in the several places in the s3-plugin-webpack codebase where String.prototype.endsWith (and startsWith) is used? Lodash is already included, so this should be a very minor change. Happy to submit a PR if you're ok with this.
Thanks!
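A minimal sketch of the suggested change, assuming lodash is already a dependency (the helper name, with its original spelling, and the PATH_SEP constant come from the stack trace above):
const _ = require('lodash')
const PATH_SEP = require('path').sep // assumed to match the plugin's constant

// _.endsWith works on older V8 releases that lack
// String.prototype.endsWith (e.g. node 0.12).
function addSeperatorToPath(fPath) {
  return _.endsWith(fPath, PATH_SEP) ? fPath : fPath + PATH_SEP
}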
It would be nice to disable progress via options:
new S3Plugin({
progress: false,
// or
silent: true,
})
Just wondering; I had a look at the code and it doesn't seem to have the capability.
With CloudFront, if you specify a Content-Length header, CloudFront can automatically compress (gzip) your content.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
Any thoughts on how this could be achieved?
Upload works and I can see the files, but webpack errors out with "Must provide accessKeyId, secretAccessKey".
During the node security check, one vulnerability was found:
(+) 1 vulnerabilities found
Name Installed Patched Path More Info
semver 2.2.1 >=4.3.2 [email protected] > [email protected] > [email protected] > [email protected] https://nodesecurity.io/advisories/31
I've opened the PR at the end of this chain, hopefully it will be merged shortly.
github/shahata/jsdelivr-cdn-data#7
Dependencies:
node v7.8.0
OS X 10.11.2
Steps to reproduce:
nsp check
I see that I am not the first to try to solve this issue with jsdelivr-cdn-data. I would suggest giving it a try, though.
node v7.3.0, s3-plugin-webpack v0.9.2, Mac OS 10.12.2
First of all: thanks for writing this great plugin 👍 . Much appreciated :).
When entering a string in basePathTransform, it does not work, since the plugin is expecting a function.
var dateFormat = require('dateformat');
new S3Plugin({
exclude: /.*\.scss$/,
s3Options: {
accessKeyId: '...',
secretAccessKey: '...',
region: '...'
},
s3UploadOptions: {
Bucket: '...'
},
basePathTransform: dateFormat(new Date(), 'yyyy-mm-dd-HH-ss'),
});
When I rewrite the dateFormat part to a promise it works:
var timestamp = function() {
return new Promise(function(resolve, reject) {
resolve(dateFormat(new Date(), 'yyyy-dd-HH-mm-ss'));
})
}
I was expecting that a string would be okay, since the documentation says:
basePathTransform: transform the base path to add a folder name. Can return a promise or a string.
But perhaps I misunderstood :)
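If a function is required, wrapping the precomputed string is a one-line workaround (sketch, reusing the dateFormat call from the config above; the docs' "promise or string" wording then applies to the function's return value):
// Return the string from a function instead of passing it directly.
basePathTransform: () => dateFormat(new Date(), 'yyyy-mm-dd-HH-ss'),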
My webpack build actually produces images, stylesheets, JavaScript, and HTML; however, with this config:
new S3Plugin({
include: /.*/,
s3Options: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: process.env.REGION
},
s3UploadOptions: {
Bucket: process.env.BUCKET,
ACL: 'public-read'
}
})
it uploads only the CSS and JS files. Is there something wrong with what I'm doing?
The default options set a public-read ACL on every uploaded object: https://github.com/MikaAK/s3-plugin-webpack/blob/master/src/helpers.js#L10.
Object ACLs override S3 bucket policies unless the policies explicitly "Deny" (see https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-auth-workflow-object-operation.html). This pretty much opens the bucket to the world regardless of the policy users have set, and it is mentioned nowhere. The AWS bucket policy examples (https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html) provide mostly "Allow" examples, so this likely causes security holes in buckets using default policies from the docs.
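Until the default changes, a sketch of opting out by overriding ACL in s3UploadOptions ('private' is a standard S3 canned ACL; the bucket name is a placeholder):
const S3Plugin = require('webpack-s3-plugin')

new S3Plugin({
  s3UploadOptions: {
    Bucket: 'my-bucket', // placeholder
    ACL: 'private' // override the plugin's public-read default
  }
})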
Node: v6.0.0 32bits (Windows 10)
s3-plugin-webpack: 0.7.3
When using the plugin with the basePath option set, the files get correctly uploaded, but all the files in the root directory have a path separator prepended to their keys.
Example:
new S3Plugin({
// normal upload options
basePath: 'stage',
})
This will be uploaded as:
Bucket
└──stage
├──\index.html
├──\styles.css
├──\images
│ └──logo.png
└──\scripts
├──main.js
└──admin.js
I was able to fix the issue by bypassing the call to transformBasePath, but I'm not sure about the side effects of that...
I want to make publicPath the default for defaultCDNBase if it is not provided. This will match other loaders' strategies, such as in file-loader.
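A sketch of what that defaulting might look like inside the plugin's apply hook (compiler.options.output.publicPath is webpack's standard location; the surrounding plugin internals are assumed):
// Inside the plugin class: fall back to webpack's output.publicPath when
// no defaultCDNBase was supplied, mirroring file-loader's behavior.
apply(compiler) {
  const publicPath = compiler.options.output.publicPath
  if (!this.cdnizerOptions.defaultCDNBase && publicPath) {
    this.cdnizerOptions.defaultCDNBase = publicPath
  }
  // ...existing upload/cdnize logic continues here
}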
I've been trying to use this with both the options on this page and with the following:
new S3Plugin({
// s3Options are required
s3Options: {
accessKeyId: 'ACCESSKEY',
secretAccessKey: 'secretAccessKey',
},
s3UploadOptions: {
Bucket: 'myBucket',
ACL: 'public-read'
}
})
In both cases I get an error:
if (!this.cdnizerOptions.files) this.cdnizerOptions.files = [];
^
TypeError: Cannot read property 'files' of undefined
at new S3Plugin (c:\Users\Fredrik\Documents\LBC\react\twitch-visualizer\node_modules\webpack-s3-plugin\dist\s3_plugin.js:96:29)
at Object.<anonymous> (c:\Users\Fredrik\Documents\LBC\react\twitch-visualizer\webpack-production.config.js:30:5)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at module.exports (c:\Users\Fredrik\Documents\LBC\react\twitch-visualizer\node_modules\webpack\bin\convert-argv.js:80:13)
at Object.<anonymous> (c:\Users\Fredrik\Documents\LBC\react\twitch-visualizer\node_modules\webpack\bin\webpack.js:54:40)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
at startup (node.js:119:16)
npm ERR! [email protected] prod: `webpack -p --config webpack-production.config.js --progress --colors`
npm ERR! Exit status 8
npm ERR!
npm ERR! Failed at the [email protected] prod script.
npm ERR! This is most likely a problem with the Twitch-Visualizer package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! webpack -p --config webpack-production.config.js --progress --colors
npm ERR! You can get their info via:
npm ERR! npm owner ls Twitch-Visualizer
npm ERR! There is likely additional logging output above.
npm ERR! System Windows_NT 6.2.9200
npm ERR! command "c:\\Program Files\\nodejs\\node.exe" "c:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "run" "prod"
npm ERR! cwd c:\Users\Fredrik\Documents\LBC\react\twitch-visualizer
npm ERR! node -v v0.10.32
npm ERR! npm -v 1.4.28
npm ERR! code ELIFECYCLE
npm ERR!
npm ERR! Additional logging details can be found in:
npm ERR! c:\Users\Fredrik\Documents\LBC\react\twitch-visualizer\npm-debug.log
npm ERR! not ok code 0