reggionick / s3-deploy
Easily deploy a static website to AWS S3 and invalidate a CloudFront distribution
License: MIT License
This throws an error about npx not found.
Error: There was an error when attempting to execute the process '/usr/local/bin/npx'. This may indicate the process failed to start. Error: spawn /usr/local/bin/npx ENOENT
According to the s3-deploy documentation (https://www.npmjs.com/package/s3-deploy#optional-parameters), an optional parameter can be configured to set the Cache-Control: max-age=X header on objects uploaded to S3. In s3-deploy the parameter is --cache X.
PR #66 addresses the ability to set the max-age.
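If a cache input were exposed by this action, usage might look like the sketch below. This is an assumption based on the PR, not a documented feature: the input name cache, the placeholder region, and the secret name are all hypothetical.

```yaml
- uses: reggionick/s3-deploy@v3
  with:
    folder: build
    bucket: ${{ secrets.S3_BUCKET }}   # placeholder secret name
    bucket-region: us-east-1           # placeholder region
    cache: 3600                        # hypothetical input; would set Cache-Control: max-age=3600
```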
Here's a snippet from the build log. The build succeeds despite the error.
...
Unexpected error: (Forbidden) null, aborting upload for assets/img/images/ris.png
Upload finished
AccessDenied: Access Denied
at Request.extractError (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/services/s3.js:711:35)
at Request.callListeners (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
at Request.emit (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/request.js:686:14)
at Request.transition (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/request.js:688:12)
at Request.callListeners (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/sequential_executor.js:116:18) {
code: 'AccessDenied',
...
According to https://github.com/import-io/s3-deploy/issues/33#issuecomment-462415575, the --private flag of s3-deploy is needed when uploading files to S3 buckets that have static website hosting enabled. Otherwise the upload fails with Upload error: AccessDenied: Access Denied.
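A minimal sketch of enabling this through the action, assuming the private input is forwarded to the underlying --private flag (bucket and region values are placeholders):

```yaml
- uses: reggionick/s3-deploy@v3
  with:
    folder: build
    bucket: ${{ secrets.S3_BUCKET }}   # placeholder secret name
    bucket-region: us-east-1           # placeholder region
    private: true                      # passes --private to s3-deploy
```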
Hi,
I have a monorepo setup with the build output at packages/dashboard/build. But since yesterday it seems to be deploying packages/dashboard instead, even though I haven't changed anything in the past few days.
I'm getting this error when running the example code in the README:
TypeError: region.split is not a function
at generateRegionPrefix (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/region_config.js:6:22)
at derivedKeys (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/region_config.js:13:22)
at Object.configureEndpoint (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/region_config.js:38:14)
at features.constructor.initialize (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/service.js:72:45)
at features.constructor.Service [as constructor] (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/service.js:59:10)
at features.constructor (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/util.js:637:24)
at new features.constructor (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/util.js:637:24)
at features.constructor.Service [as constructor] (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/service.js:50:17)
at new features.constructor (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/aws-sdk/lib/util.js:637:24)
at _callee3$ (/home/runner/.npm/_npx/55947e5ced8dc5f0/node_modules/s3-deploy/dist/deploy.js:406:22)
I pulled your repo down and nothing jumps out as an obvious fix. All of my secrets match the pattern in your example.
The error mentions region, and I double-checked that my region secret is correct. I also deliberately made it incorrect, and that didn't seem to make a difference.
Any idea what might be happening here?
Hi Maintainers,
Thanks for creating a very handy GitHub Action! I am currently using it to deploy my static site to an S3 bucket. However, I have two CloudFront caches for the site: one for domain.com and the other for www.domain.com. Is there a way to use this action to invalidate both caches?
Thanks!
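One possible workaround (an assumption, not a documented feature of this action): invalidate the second distribution with the AWS CLI in a follow-up step. The secret name below is a placeholder.

```yaml
- name: Invalidate the second CloudFront distribution
  run: |
    aws cloudfront create-invalidation \
      --distribution-id "$SECOND_DIST_ID" \
      --paths "/*"
  env:
    SECOND_DIST_ID: ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID_WWW }}  # placeholder secret name
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```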
Currently we are uploading with no-cache, so the ETag and Last-Modified headers won't have any effect.
Can we make no-cache optional through a configuration variable?
Hi, we're using your action at @myparcelnl, but you haven't updated the v3 tag or pushed new tags since v3.2.0. The latest tagged version still runs on Node 12, which logs deprecation warnings. I don't want to use action@master; could you tag a new version?
In case you're looking to automate versioning of your action(s), we have an action for it: myparcelnl/actions/update-tags@v3
I have several sites in the same bucket (basically one site per environment), so each CloudFront distribution points at a folder in the bucket.
It would be helpful if we could parameterize the destination folder inside the bucket.
Hello. I'm new to GitHub Actions and this workflow.
I want to upload all files in the repository, so I tried configuring the folder option with values like "/", "./", and "". But nothing happened and no error was returned.
I tried running s3-deploy locally from the repository root with the same option, and it worked.
What should I do?
File Matches, skipped favicon.ico
File Matches, skipped manifest.json
File Matches, skipped logo512.png
File Matches, skipped static/js/runtime-main.5a93116f.js.map
File Matches, skipped logo192.png
Upload error: AccessDenied: Access Denied
at Request.extractError (/home/runner/.npm/_npx/2757/lib/node_modules/s3-deploy/node_modules/aws-sdk/lib/services/s3.js:837:35)
at Request.callListeners (/home/runner/.npm/_npx/2757/lib/node_modules/s3-deploy/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
at Request.emit (/home/runner/.npm/_npx/2757/lib/node_modules/s3-deploy/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/home/runner/.npm/_npx/2757/lib/node_modules/s3-deploy/node_modules/aws-sdk/lib/request.js:688:14)
at Request.transition (/home/runner/.npm/_npx/2757/lib/node_modules/s3-deploy/node_modules/aws-sdk/lib/request.js:22:10)
I double-checked that the same API key has S3 PUT permission, and I can upload from my computer using the same key; however, the GitHub Action run produces the error message above.
Where should I start debugging?
Hi! How do we add Cache-Control: no-cache? There is a noCache parameter, but it also adds no-store and must-revalidate, which is not appropriate for all use cases. Looking at [email protected], it has a --cacheControl parameter for custom use cases. Maybe we can add a cacheControl parameter?
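Until such an input exists, one possible workaround (bypassing this action for the upload and using the AWS CLI instead, with a placeholder secret name) is to set the header when syncing:

```yaml
- name: Upload with a custom Cache-Control header
  run: aws s3 sync build/ "s3://$BUCKET" --cache-control "no-cache"
  env:
    BUCKET: ${{ secrets.S3_BUCKET }}   # placeholder secret name
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```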
I'm not sure whether it is not uploading the files or the files are just not being updated.
I run the action on GitHub and nothing happens (it was working before); when I run my local script with the AWS CLI, it works.
Using: reggionick/s3-deploy@v1
I've recently been receiving the following errors when uploading files:
/usr/local/bin/npx [email protected] ./** --bucket --region af-south-1 --cwd ./ --etag --gzip xml,html,htm,js,css,ttf,otf,svg,txt --deleteRemoved --private
Deploying files: [
'.',
'./asset-manifest.json',
'./assets',
'./assets/images',
'./assets/images/flags',
...
'./assets/images/flags/GT.svg',
... 212 more items
]
► Target S3 bucket: true (af-south-1 region)
► Gzip: [
'xml', 'html',
'htm', 'js',
'css', 'ttf',
'otf', 'svg',
'txt'
]
► E-Tag: true
► Private: true
► Deleting removed files
TypeError: str.indexOf is not a function
at Object.validateARN [as validate] (/home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/node_modules/aws-sdk/lib/util.js:1008:25)
at Object.isArnInParam (/home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/node_modules/aws-sdk/lib/services/s3util.js:12:25)
at features.constructor.setupRequestListeners (/home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/node_modules/aws-sdk/lib/services/s3.js:144:56)
at features.constructor.addAllRequestListeners (/home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/node_modules/aws-sdk/lib/service.js:276:10)
at features.constructor.makeRequest (/home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/node_modules/aws-sdk/lib/service.js:203:10)
at features.constructor.svc.<computed> [as headObject] (/home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/node_modules/aws-sdk/lib/service.js:685:23)
at /home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/dist/deploy.js:188:12
at new Promise (<anonymous>)
at sync (/home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/dist/deploy.js:178:10)
at _callee2$ (/home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/dist/deploy.js:303:18)
...
TypeError: str.indexOf is not a function
at Object.validateARN [as validate] (/home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/node_modules/aws-sdk/lib/util.js:1008:25)
at Object.isArnInParam (/home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/node_modules/aws-sdk/lib/services/s3util.js:12:25)
at features.constructor.setupRequestListeners (/home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/node_modules/aws-sdk/lib/services/s3.js:144:56)
at features.constructor.addAllRequestListeners (/home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/node_modules/aws-sdk/lib/service.js:276:10)
at features.constructor.makeRequest (/home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/node_modules/aws-sdk/lib/service.js:203:10)
at features.constructor.svc.<computed> [as listObjectsV2] (/home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/node_modules/aws-sdk/lib/service.js:685:23)
at /home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/dist/deploy.js:88:12
at new Promise (<anonymous>)
at listAllKeys (/home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/dist/deploy.js:87:10)
at /home/runner/.npm/_npx/1880/lib/node_modules/s3-deploy/dist/deploy.js:111:5
Upload finished
The job finishes successfully, but new files are missing in my S3 bucket.
My workflow is set up like so:
- uses: reggionick/s3-deploy@v3
  with:
    folder: build
    bucket: ${{ needs.terraform_dev.outputs.bucket }}
    dist-id: ${{ needs.terraform_dev.outputs.distribution_id }}
    invalidation: /
    delete-removed: true
    bucket-region: ${{ needs.terraform_apply_dev.outputs.bucket_region }}
    private: true
GitHub Actions running on Node 12 are deprecated and will be replaced by Node 16 ones. Let's upgrade the action to Node 16.
Node.js 12 actions are deprecated. For more information see: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/. Please update the following actions to use Node.js 16: reggionick/s3-deploy
As per the documentation, the distribution id is required. At the moment I don't need CloudFront for one of my projects, so I don't want to invalidate anything. This results in very ugly tracebacks.
We have quite complex subdomain logic and thus multiple distributions that we need to invalidate while deploying a build.
There is no distribution-ids: ["1", "2", "3"] option, which would have been handy.
Is there a downside to supporting such an argument?
I saw that internally the call goes to deploy, which takes only one distribution-id. How could this be handled from there?
I keep getting this error
Error: There was an error when attempting to execute the process '/opt/hostedtoolcache/node/12.20.0/x64/bin/npx'. This may indicate the process failed to start. Error: spawn /opt/hostedtoolcache/node/12.20.0/x64/bin/npx ENOENT
I'm a bit unsure what is causing it. I have copied the YAML from the README pretty much verbatim and used my credentials. I can see there is a related closed issue #13, but it was not resolved.
To whom it may concern.
I came across this just last week, three months after joining a new project, when I realized that our S3 bucket had grown to an unreasonable size, containing assets from the first deployments.
The output on the runner seemed fine at first glance, printing lines like this:
Deleting files: [
'assets/abc.js',
'assets/efgs.js',
...
]
Deleting chunk: 1000
Deleting chunk: 916
<Next Job output>
Yet the S3 bucket still contained the files that were supposed to be deleted.
Unfortunately, the upstream s3-deploy package appears to have been removed from GitHub, with its last publish five years ago. That is not a good sign and should be a red flag for a package used by this repo, which still seems to have active commits.
Anyway, I looked into the code of the upstream package, and it seems there are two things wrong with the implementation:
- The call deleteRemoved(s3Client, options.globbedFiles, options) is missing a yield. I guess this leads to an early return from the process, leaving many files un-deleted.
- Even with a yield in front of the call, the Promise in deleteRemoved resolves after the first batch, so I don't see how this could work for more than one batch of files.
I assume this GitHub Action's functionality was tested, but maybe with a smaller set of items; perhaps this issue only becomes apparent after files accumulate into the thousands. For our case it definitely doesn't work, and understandably so when looking at that library. I will probably implement this myself with simple AWS CLI commands, because for our SPA we don't expect more than a page full of files.
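To make the batching concrete, here is a minimal illustrative sketch (not the package's actual code) of what a correct delete-removed pass has to compute: the remote keys missing locally, split into chunks of at most 1000 keys, which is the per-request limit of S3's DeleteObjects API; the caller must then await the delete call for every chunk, not only the first.

```python
def keys_to_delete(remote_keys, local_files):
    """Return the remote keys that no longer exist locally."""
    local = set(local_files)
    return sorted(k for k in remote_keys if k not in local)

def chunk(keys, size=1000):
    """Split keys into batches of at most `size` (S3 DeleteObjects limit)."""
    return [keys[i:i + size] for i in range(0, len(keys), size)]

# Example: 2500 remote assets, 200 still present locally -> 2300 stale keys
# that must go out in three delete batches, each of which must be awaited.
stale = keys_to_delete(
    remote_keys=[f"assets/app.{i}.js" for i in range(2500)],
    local_files=[f"assets/app.{i}.js" for i in range(200)],
)
batches = chunk(stale)
print(len(stale), [len(b) for b in batches])
```

If only the first batch is awaited, everything past the first 1000 keys silently survives, which matches the behavior described above.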
In the README Usage example:
uses: reggionick/s3-deploy@v3
with:
  folder: build
  bucket: ${{ secrets.S3_BUCKET }}
  dist-id: ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }}
the required bucket-region is missing. Do you think it should be added?
Also, dist-id isn't actually required; maybe it could be removed from the example? (Or perhaps keep it, as it's likely useful to most users.)
The docs show comma-separated glob options, which is incorrect; the correct form is an OR condition like:
files-to-include: '{.*/**,**}'
The input read in the action script is core.getInput('files-to-include'), so the input defined in action.yml should be 'files-to-include' instead of 'filesToInclude', because the mismatch throws a warning while deploying. The documentation should be updated in all the corresponding places as well.
P.S. Also, please update the Node version.
Hi,
Is there a way to disable invalidation?
Thanks!
Having trouble pushing objects after implementing OAC on the bucket. The bucket policy allows PutObject for the role that the action assumes.
Upload error: AccessDenied: Access Denied
Hello,
Would it be possible to add a files-to-exclude option? :) I want to be able to upload everything except for a specific directory.
Thanks!
Hi, I would like to know exactly what this error means.
Is it about a permission I don't have on S3?
I gave the AmazonS3FullAccess and CloudFrontFullAccess permissions to that GitHub user.
This is my config:
with:
  folder: /
  bucket: ***
  bucket-region: ***
  dist-id: ***
  invalidation: /*
  delete-removed: false
  no-cache: true
  private: false
Many thanks!!
Error: EACCES: permission denied, scandir '/boot/efi'
at Object.readdirSync (fs.js:1043:3)
at GlobSync._readdir (/home/runner/.npm/_npx/1502/lib/node_modules/s3-deploy/node_modules/glob/sync.js:286:41)
at GlobSync._readdirInGlobStar (/home/runner/.npm/_npx/1502/lib/node_modules/s3-deploy/node_modules/glob/sync.js:265:20)
at GlobSync._readdir (/home/runner/.npm/_npx/1502/lib/node_modules/s3-deploy/node_modules/glob/sync.js:274:17)
at GlobSync._processGlobStar (/home/runner/.npm/_npx/1502/lib/node_modules/s3-deploy/node_modules/glob/sync.js:348:22)
at GlobSync._process (/home/runner/.npm/_npx/1502/lib/node_modules/s3-deploy/node_modules/glob/sync.js:128:10)
at GlobSync._processGlobStar (/home/runner/.npm/_npx/1502/lib/node_modules/s3-deploy/node_modules/glob/sync.js:381:10)
at GlobSync._process (/home/runner/.npm/_npx/1502/lib/node_modules/s3-deploy/node_modules/glob/sync.js:128:10)
at GlobSync._processGlobStar (/home/runner/.npm/_npx/1502/lib/node_modules/s3-deploy/node_modules/glob/sync.js:381:10)
at GlobSync._process (/home/runner/.npm/_npx/1502/lib/node_modules/s3-deploy/node_modules/glob/sync.js:128:10)
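A likely cause, judging from the trace above (this is an assumption, not a confirmed diagnosis): folder: / makes the glob pattern resolve against the runner's filesystem root, so glob tries to scan system directories like /boot/efi, where it has no read permission. Pointing folder at the checked-out directory instead might avoid this; the other values below are placeholders.

```yaml
- uses: reggionick/s3-deploy@v3
  with:
    folder: .                  # the repository root, not the filesystem root
    bucket: my-bucket          # placeholder
    bucket-region: us-east-1   # placeholder
```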
When deploying, if credentials aren't loaded, the action still counts as a pass. This took us by surprise, since the step "passed". After poking around in CloudFront it became obvious that invalidations weren't happening, and when we finally looked at the logs of the deploy action it became very apparent that nothing was happening, yet we still had our big green check marks.
Unexpected error: (CredentialsError) Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1, aborting upload for asset-manifest.json
Unexpected error: (CredentialsError) Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1, aborting upload for index.html
Unexpected error: (CredentialsError) Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1, aborting upload for manifest.json
Unexpected error: (CredentialsError) Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1, aborting upload for precache-
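Until the action propagates upload failures, a possible mitigation (a sketch, assuming credentials are supplied via the usual AWS environment variables, with placeholder secret names) is a preflight step that fails fast when credentials are missing or invalid:

```yaml
- name: Fail fast if AWS credentials are missing
  run: aws sts get-caller-identity
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```

aws sts get-caller-identity exits non-zero when it cannot authenticate, so the job stops before the deploy step reports a false pass.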
Hi, nice module, thanks!
Is it possible to specify a destination path/folder in the S3 bucket instead of the root /?
Thanks!
Documentation says "noCache" but should be "no-cache"
Hi,
I've taken the example workflow and tried to deploy it on my repo, but got the following error.
Error: Cannot find module '/home/runner/work/markharwood.info/markharwood.info/node_modules/s3-deploy/bin/s3-deploy'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:966:15)
    at Function.Module._load (internal/modules/cjs/loader.js:842:27)
    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
    at internal/main/run_main_module.js:17:47 {
  code: 'MODULE_NOT_FOUND',
  requireStack: []
}
Hi,
Only a warning:
[email protected] is using legacy dependencies and AWS SDK v2, which will enter maintenance mode in 2023.
Great job.
Hugs
Hello!
Thank you for this great action!
However, it would be extremely helpful if someone could provide an IAM policy example with the minimal permissions required for this action to work properly.
Thanks.
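A minimal sketch of such a policy, assuming the action only needs to list, put, and delete objects in one bucket and create CloudFront invalidations. The bucket name is a placeholder, and the exact set of actions needed may vary with options such as delete-removed, private, and invalidation:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": ["cloudfront:CreateInvalidation"],
      "Resource": "*"
    }
  ]
}
```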
Hi,
I have an issue with this plugin: even with delete-removed set to true, files are not deleted and I have to remove them manually. For example, app.1234.js stays there even after the plugin pushes the new version app.5678.js, and app.1234.js is not in the folder I push.
The key/secret I use has full access rights to S3. Does anyone know what could cause this?
Many thanks
/usr/local/bin/npx [email protected] ./** --bucket *** --region *** --cwd . --distId *** --etag --gzip xml,html,htm,js,css,ttf,otf,svg,txt --invalidate / --noCache --deleteRemoved true
Error: There was an error when attempting to execute the process '/usr/local/bin/npx'. This may indicate the process failed to start. Error: spawn /usr/local/bin/npx ENOENT
Suggestion: add a GitHub Actions CodeQL scan that runs on every pull request for this repo.
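A minimal sketch of such a workflow, using GitHub's standard CodeQL actions (the workflow file name, branch name, and job name are placeholders):

```yaml
# .github/workflows/codeql.yml
name: CodeQL
on:
  pull_request:
    branches: [main]   # placeholder branch name
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: javascript
      - uses: github/codeql-action/analyze@v3
```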