aws-samples / cloudfront-authorization-at-edge

Protect downloads of your content hosted on CloudFront with Cognito authentication using cookies and Lambda@Edge

Home Page: https://aws.amazon.com/blogs/networking-and-content-delivery/authorizationedge-using-cookies-protect-your-amazon-cloudfront-content-from-being-downloaded-by-unauthenticated-users/

License: MIT No Attribution

TypeScript 94.07% HTML 2.43% CSS 1.71% JavaScript 1.78%

CloudFront authorization@edge

This repo accompanies the blog post.

That blog post explains a solution that puts Cognito authentication in front of (S3) downloads from CloudFront, using Lambda@Edge. JWTs are transferred in cookies, which makes authorization transparent to clients.

The sources in this repo implement that solution.

The purpose of this sample code is to demonstrate how Lambda@Edge can be used to implement authorization, with Cognito as identity provider (IDP). Please treat the code as an illustration––thoroughly review it and adapt it to your needs if you want to use it for anything serious.

TL;DR

Architecture

(More detailed diagrams and explanation in the blog post)

How to deploy

The solution can be deployed to your AWS account with a few clicks, from the Serverless Application Repository.

More deployment options below: Deploying the solution

Alternative: use HTTP headers

This repo is the "sibling" of another repo here on aws-samples (authorization-lambda-at-edge). The difference is that the solution in that repo uses HTTP headers (not cookies) to transfer JWTs. While that is also a valid approach, its downside is that your web app (SPA) must be altered to send these headers, as browsers do not send them along automatically (which they do for cookies).

Alternative: build an Auth@Edge solution yourself, using NPM library cognito-at-edge

This repo contains a complete Auth@Edge solution: predefined Lambda@Edge code, combined with a CloudFormation template and various CloudFormation custom resources that enable one-click deployment. The CloudFormation template has various parameters to support multiple use cases (e.g. bring your own User Pool or CloudFront distribution).

You may want to have full control and implement an Auth@Edge solution yourself. In that case, the NPM library cognito-at-edge may be of use to you. It implements the same functionality as the solution here, wrapped conveniently in an NPM package that you can easily include in your Lambda@Edge functions.

Repo contents

This repo contains (a.o.) the following files and directories:

Lambda@Edge functions in src/lambda-edge:

  • check-auth: Lambda@Edge function that checks each incoming request for valid JWTs in the request cookies
  • parse-auth: Lambda@Edge function that handles the redirect from the Cognito hosted UI, after the user signed in
  • refresh-auth: Lambda@Edge function that handles JWT refresh requests
  • sign-out: Lambda@Edge function that handles sign-out
  • http-headers: Lambda@Edge function that sets HTTP security headers (as good practice)
  • rewrite-trailing-slash: Lambda@Edge function that appends "index.html" to paths that end with a slash (optional use, intended for static site hosting, controlled via parameter RewritePathWithTrailingSlashToIndex, see below)
  • shared: Utility functions used by several Lambda@Edge functions
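
The core idea of check-auth can be sketched as follows. This is an illustration only, not the repo's actual code: the real function in src/lambda-edge/check-auth also verifies the JWT signature against the Cognito JWKS, whereas this sketch only shows the cookie-parsing and expiry-check flow.

```typescript
// Sketch only: a real check-auth implementation must also verify the JWT
// signature against the Cognito JWKS before trusting any claim.

interface CloudFrontHeaders {
  [name: string]: { key?: string; value: string }[];
}

// Parse the Cookie headers of a CloudFront request into a name -> value map
function parseCookies(headers: CloudFrontHeaders): Record<string, string> {
  const cookies: Record<string, string> = {};
  for (const { value } of headers["cookie"] ?? []) {
    for (const part of value.split(";")) {
      const [name, ...rest] = part.trim().split("=");
      if (name) cookies[name] = rest.join("=");
    }
  }
  return cookies;
}

// Decode a JWT payload (no signature check!) and test whether it is expired
function isExpired(jwt: string, nowInSeconds = Date.now() / 1000): boolean {
  const payload = JSON.parse(
    Buffer.from(jwt.split(".")[1], "base64url").toString("utf8")
  );
  return typeof payload.exp !== "number" || payload.exp <= nowInSeconds;
}
```

If no valid, unexpired JWT is found in the cookies, the handler responds with a redirect to the Cognito Hosted UI instead of letting the request through to the origin.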

CloudFormation custom resources in src/cfn-custom-resources:

  • us-east-1-lambda-stack: Lambda function that implements a CloudFormation custom resource that makes sure the Lambda@Edge functions are deployed to us-east-1 (which is a CloudFront requirement, see below.)
  • react-app: A sample React app that is protected by the solution. It uses AWS Amplify Framework to read the JWTs from cookies. The directory also contains a Lambda function that implements a CloudFormation custom resource to build the React app and upload it to S3
  • static-site: A sample static site (see SPA mode or Static Site mode?) that is protected by the solution. The directory also contains a Lambda function that implements a CloudFormation custom resource to upload the static site to S3
  • user-pool-client: Lambda function that implements a CloudFormation custom resource to update the User Pool client with OAuth config
  • user-pool-domain: Lambda function that implements a CloudFormation custom resource to lookup the User Pool's domain, at which the Hosted UI is available
  • lambda-code-update: Lambda function that implements a CloudFormation custom resource to inject configuration into the Lambda@Edge functions and publish versions
  • generate-secret: Lambda function that implements a CloudFormation custom resource that generates a unique secret upon deploying

Other files and directories:

Deploying the solution

Option 1: Deploy through the Serverless Application Repository

The solution can be deployed with a few clicks from the Serverless Application Repository.

Option 2: Deploy by including the Serverless Application in your own CloudFormation template or CDK code

See ./example-serverless-app-reuse

Option 3: Deploy with SAM CLI

Pre-requisites

  1. Download and install Node.js
  2. Download and install AWS SAM CLI
  3. Of course you need an AWS account and the necessary permissions to create resources in it. Make sure your AWS credentials can be found during deployment, e.g. by making your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY available as environment variables.
  4. You need an existing S3 bucket to use for the SAM deployment. Create an empty bucket.
  5. Ensure your system includes a Unix-like shell such as sh, bash, zsh, etc. (i.e. Windows users: please enable/install the "Windows Subsystem for Linux", Cygwin, or something similar)

Deployment

NOTE: Run the deployment commands below in a Unix-like shell such as sh, bash, zsh, etc. (i.e. Windows users: please run these in the "Windows Subsystem for Linux", in Cygwin, or something similar)

  1. Clone this repo git clone https://github.com/aws-samples/cloudfront-authorization-at-edge && cd cloudfront-authorization-at-edge
  2. Install dependencies: npm install
  3. TypeScript compile and run Webpack: npm run build
  4. Run SAM build: sam build
  5. Run SAM package: sam package --output-template-file packaged.yaml --s3-bucket <Your SAM bucket>
  6. Run SAM deploy: sam deploy --s3-bucket <Your SAM bucket> --stack-name <Your Stack Name> --capabilities CAPABILITY_IAM --parameter-overrides EmailAddress=<your email>

Providing an email address (as above in step 6) is optional. If you provide it, a user will be created in the Cognito User Pool that you can sign in with.

Option 4: Deploy as is, then test a custom application

You may want to see how your existing application works with the authentication framework before investing the effort to integrate or automate. One approach involves creating a full deploy from one of the deploy options above, then dropping your application into the bucket that's created. There are a few points to be aware of:

  • If you want your application to load by default instead of the sample React single-page app (SPA), you'll need to rename the sample React app's index.html and ensure your SPA entry page is named index.html. The renamed sample page will still work when specifically addressed in a URL.
  • It's also fine to let your SPA have its own page name, but remember to test with its actual URL, e.g. if you drop your SPA entry page into the bucket as myapp.html, your test URL will look like https://SOMECLOUDFRONTURLSTRING.cloudfront.net/myapp.html
  • Make sure none of your SPA filenames collide with the React app's. Alternatively, just remove the React app first -- but sometimes it's nice to keep it in place to validate that authentication is generally working.

You may find that your application does not render properly -- the default Content Security Policy (CSP) in the CloudFormation parameter may be the issue. As a quick test you can either remove the "Content-Security-Policy":"..." parameter from the CloudFormation's HttpHeaders parameter, or substitute your own. Leave the other headers in the parameter alone unless you have a good reason.

I already have a CloudFront distribution, I just want to add auth

Deploy the solution (e.g. from the Serverless Application Repository) while setting parameter CreateCloudFrontDistribution to false. This way, only the Lambda@Edge functions will be deployed in your account. You'll also get a User Pool and Client (unless you're bringing your own). Then you can wire the Lambda@Edge functions up into your own CloudFront distribution: create a behavior for all path patterns (root, RedirectPathSignIn, RedirectPathSignOut, RedirectPathAuthRefresh, SignOutUrl) and configure the corresponding Lambda@Edge function in each behavior.

The CloudFormation Stack's Outputs contain the Lambda Version ARNs that you can refer to in your CloudFront distribution.

See this example on how to do it: ./example-serverless-app-reuse/reuse-auth-only.yaml

When following this route, also provide parameter AlternateDomainNames upon deploying, so the correct redirect URLs can be configured for you in the Cognito User Pool Client.
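
For illustration, a single cache behavior wired to one of the functions might look like the hypothetical fragment below. The resource name LambdaEdgeProtection and the output key ParseAuthHandler are placeholders; take the actual version ARNs from the stack's Outputs, and see ./example-serverless-app-reuse/reuse-auth-only.yaml for complete, working wiring.

```yaml
# Hypothetical CloudFormation fragment (placeholder names) showing one
# behavior of a CloudFront distribution pointed at a Lambda@Edge version ARN
CacheBehaviors:
  - PathPattern: /parseauth
    TargetOriginId: dummy-origin
    ViewerProtocolPolicy: redirect-to-https
    ForwardedValues:
      QueryString: true
    LambdaFunctionAssociations:
      - EventType: viewer-request
        LambdaFunctionARN: !GetAtt LambdaEdgeProtection.Outputs.ParseAuthHandler
```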

I already have an S3 bucket, I want to use that one

You can use a pre-existing S3 bucket (e.g. from another region), by providing the bucket's regional endpoint domain through parameter "S3OriginDomainName" upon deploying. In that case it's good practice to also use a CloudFront Origin Access Identity, so you don't have to make your bucket public. If you do use a CloudFront Origin Access Identity (make sure to grant it access in the bucket policy), specify its ID in parameter "OriginAccessIdentity".

Alternatively, go for the more barebones deployment, so you can do more yourself––i.e. reuse your bucket. Refer to scenario: I already have a CloudFront distribution, I just want to add auth.

I want to use another origin behind the CloudFront distribution

You can use a pre-existing HTTPS origin (e.g. https://example.com), by providing the origins domain name (e.g. example.com) through parameter "CustomOriginDomainName" upon deploying. If you want to make sure requests to your origin come from this CloudFront distribution only (you probably do), configure a secret HTTP header that your custom origin can check for, through parameters "CustomOriginHeaderName" and "CustomOriginHeaderValue".
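
On the origin side, the check for that secret header can be as simple as the sketch below. This function is illustrative and not part of this repo; the header name and expected value would be whatever you passed for CustomOriginHeaderName and CustomOriginHeaderValue.

```typescript
// Hypothetical origin-side check (not part of this repo): reject requests
// that lack the secret header CloudFront was configured to add.
import { timingSafeEqual } from "crypto";

function isFromCloudFront(
  headers: Record<string, string | undefined>,
  headerName: string,
  expectedValue: string
): boolean {
  // HTTP header names are case-insensitive; normalize to lowercase
  const received = headers[headerName.toLowerCase()];
  if (received === undefined || received.length !== expectedValue.length) {
    return false;
  }
  // Constant-time comparison, so the secret cannot be guessed via timing
  return timingSafeEqual(Buffer.from(received), Buffer.from(expectedValue));
}
```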

Alternatively, go for the more barebones deployment, so you can do more yourself––i.e. bring your own origins. Refer to scenario: I already have a CloudFront distribution, I just want to add auth.

I already have a Cognito User Pool, I want to reuse that one

You can use a pre-existing Cognito User Pool (e.g. from another region), by providing the User Pool's ARN as a parameter upon deploying. Make sure you have already configured the User Pool with a domain for the Cognito Hosted UI. In this case, also specify a pre-existing User Pool Client ID.

If the pre-existing User Pool is in the same AWS account, the solution's callback URLs will automatically be added to the User Pool Client you provide. The User Pool's domain and the Client's secret (in static site mode only) will also be looked up automatically.

If the pre-existing User Pool is in another AWS account:

  • Also specify parameter UserPoolAuthDomain, with the domain name of the existing User Pool, e.g. my-domain-name.auth.<region>.amazoncognito.com
  • Also specify parameter UserPoolClientSecret (only needed if EnableSPAMode is set to false, i.e. for static site mode)
  • Make sure to add the redirect URIs to the pre-existing User Pool Client in the other account, otherwise users won't be able to log in ("redirect mismatch"). The redirect URIs you'll need to enter are:
    • For callback URL: https://${domain-name-of-your-cloudfront-distribution}${value-you-specified-for-RedirectPathSignIn-parameter}
    • For sign-out URL: https://${domain-name-of-your-cloudfront-distribution}${value-you-specified-for-RedirectPathSignOut-parameter}
  • Ensure the existing User Pool Client is configured to allow the scopes you provided for parameter OAuthScopes

I want to use a social identity provider

You should use the UserPoolGroupName parameter, to specify a group that users must be a member of in order to access the site.

Without this UserPoolGroupName, the Lambda@Edge functions will allow any confirmed user in the User Pool access to the site. When an identity provider is added to the User Pool, anybody that signs in through the identity provider is immediately a confirmed user. So with a social identity provider where anyone can create an account, this means anyone can access the site you are trying to protect.

With the UserPoolGroupName parameter defined, you will need to add each user to this group before they can access the site.

If the solution is creating the User Pool, it will create the User Pool Group too. If the solution is creating the User Pool and a default user (via the EmailAddress parameter), then this user will be added to the User Pool Group.

If you are using a pre-existing User Pool, you will need to create a group with a name matching the UserPoolGroupName.

Deployment region

You can deploy this solution to any AWS region of your liking (that supports the services used). If you choose a region other than us-east-1, this solution will automatically create a second CloudFormation stack in us-east-1 for the Lambda@Edge functions, because Lambda@Edge must be deployed to us-east-1 (a CloudFront requirement). Note though that this is a deployment concern only (which the solution handles automatically for you); Lambda@Edge will run in all Points of Presence globally.

SPA mode or Static Site mode?

The default deployment mode of this sample application is "SPA mode" - which entails some settings that make the deployment suitable for hosting a SPA such as a React/Angular/Vue app:

  • The User Pool client does not use a client secret, as that would not make sense for JavaScript running in the browser
  • The cookies with JWTs are not "http only", so that they can be read and used by the SPA (e.g. to display the user name, or to refresh tokens)
  • 404's (page not found on S3) will return index.html, to enable SPA-routing

If you do not want to deploy a SPA but rather a static site, then it is more secure to use a client secret and http-only cookies. Also, SPA routing is not needed then. To this end, upon deploying, set parameter EnableSPAMode to false (--parameter-overrides EnableSPAMode="false"). This will:

  • Enforce use of a client secret
  • Set cookies to be http only by default (unless you've provided other cookie settings explicitly)
  • Skip deployment of the sample React app. Instead, a sample index.html is uploaded, which you can replace with your own pages
  • Skip setting up the custom error document mapping 404's to index.html (404's will instead show the plain S3 404 page)
  • Set the refresh token's path explicitly to the refresh path, "/refreshauth" instead of "/" (unless you've provided other cookie settings explicitly), and thus the refresh token will not be sent to other paths (more secure and more performant)

In case you're choosing Static Site mode, it might make sense to set parameter RewritePathWithTrailingSlashToIndex to true (--parameter-overrides RewritePathWithTrailingSlashToIndex="true"). This will append index.html to all paths that include a trailing slash, so that e.g. when the user goes to /some/sub/dir/, this is translated to /some/sub/dir/index.html in the request to S3.
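
Conceptually, the rewrite-trailing-slash function does something like the following sketch (illustrative, not the repo's exact code):

```typescript
// Sketch of the trailing-slash rewrite: append "index.html" to any request
// URI that ends in "/", and leave every other URI untouched.
function rewriteTrailingSlash(uri: string): string {
  return uri.endsWith("/") ? `${uri}index.html` : uri;
}
```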

Deploying changes to the react-app or static-site

To deploy changes to the react-app or static-site after a successful initial deployment, you'll need to upload your react-app or static-site changes directly to the S3 bucket (with a utility like s3-spa-upload). Making changes to the code only and re-deploying with SAM will not get those code changes deployed to the S3 bucket. See Issue # 96 for an alternative to force your code changes to deploy.

Cookie compatibility

The cookies that this solution sets are compatible with AWS Amplify––which makes this solution work seamlessly with AWS Amplify.

Niche use case: If you want to use this solution as an Auth@Edge layer in front of AWS Elasticsearch Service with Cognito integration, you need cookies to be compatible with the cookie-naming scheme of that service. In that case, upon deploying, set parameter CookieCompatibility to "elasticsearch".

If choosing compatibility with AWS Elasticsearch with Cognito integration:

  • Set parameter EnableSPAMode to "false", because AWS Elasticsearch Cognito integration uses a client secret.
  • Set parameters UserPoolArn and UserPoolClientId to the ARN and ID of the pre-existing User Pool and Client, that you've configured your Elasticsearch domain with.

Additional Cookies

You can provide one or more additional cookies that will be set after successful sign-in, by setting the parameter AdditionalCookies. This may be of use to you, to dynamically provide configuration that you can read in your SPA's JavaScript.
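
As a hypothetical example, reading such a cookie in your SPA could look like this. The cookie name "myAppConfig" is made up for illustration; it would be whatever name you configured via AdditionalCookies.

```typescript
// Hypothetical helper for reading an additional cookie in your SPA; in the
// browser you would pass document.cookie as the second argument.
function readCookie(name: string, cookieHeader: string): string | undefined {
  for (const part of cookieHeader.split(";")) {
    const [key, ...rest] = part.trim().split("=");
    if (key === name) return decodeURIComponent(rest.join("="));
  }
  return undefined;
}
```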

Deleting the stack

When deleting the stack in the normal way, some of the Lambda@Edge functions may end up in DELETE_FAILED state, with an error similar to this:

An error occurred (InvalidParameterValueException) when calling the DeleteFunction operation: Lambda was unable to delete arn:aws:lambda:us-east-1:12345:function:LambdaFunctionName:1 because it is a replicated function. Please see our documentation for Deleting Lambda@Edge Functions and Replicas.

Simply wait a few hours and try deleting the nested stack again; then it will succeed. This is a limitation of Lambda@Edge and not something we can influence, unfortunately: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-delete-replicas.html

Contributing to this repo

If you want to contribute, please read CONTRIBUTING, and note the hints below.

Declaration of npm dependencies

The sources that are not webpacked but rather run through sam build should have their dependencies listed in their own package.json files––to make sam build work properly.

For the sources that are webpacked this doesn't matter.

License Summary

This sample code is made available under a modified MIT license. See the LICENSE file.

cloudfront-authorization-at-edge's People

Contributors

bartogabriel, butkiewiczp, cartermeyers, dependabot[bot], jesseadams, jmcgeheeiv, jpeddicord, leooo, matthijspiek, mikereinhold, ninitoas, njk00, ottokruse, peter-at-work, pierresouchay, pmilliotte, rpattcorner, sambler, scytacki, solp-aleios, thetrevdev, tonybranfort, troyready, voodoogq


cloudfront-authorization-at-edge's Issues

Deploy to different regions

Hi, I need to deploy some parts, such as Cognito, to another region. Currently I have to deploy everything to us-east-1 due to the hard constraint on Lambda@Edge.

Is there any way to get around this? Maybe by copying Lambdas from one region to another? I've tried breaking the CloudFormation up into two stacks (Lambdas and CloudFront in one, the rest in another) and then using !ImportValue, but that's messy.

SPA and http-only cookies

I have a SPA application and it makes calls to an API. I had some doubts based on the examples.

Can the token be refreshed silently, using a SPA and http-only cookies?
Why does the SPA example use cookies shared with the client? In the event that it can be silently refreshed using http-only, I see no reason to expose the token and refresh token to js attacks.

Is the 'dummy-origin' removable, using the 'protected-bucket' instead?

Are there any downfalls to completely omitting the dummy-origin from the distribution and instead just pointing it directly at the protected-bucket?

In another related issue the following was mentioned:

The dummy origin "example.org" can remain there, as it is the origin behind the Lambda@Edge functions for parseAuth and such; they will always respond to the request instead of allowing the request to pass through to the origin. Hence "dummy origin", it is there because a CloudFront behavior needs to have an origin, but requests will never be forwarded to it.

The above makes me wonder why a dummy-origin is needed if the requests will never be forwarded to it?

Thank you!

Only initial lambda version (1) works

For some reason, if I update the lambdas after the initial deployment and the versions update to 2 (or higher) this solution fails with a very generic error suggesting that there is an issue with the lambda permissions or the lambda configuration. If I drop the entire stack, and deploy the same change at version 1 it seems to work fine. I am using CreateCloudFrontDistribution=false

Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'self'

Deploying this to a working cloudfront distribution is causing inline javascript to fail. HTML page renders properly but javascript code does not. All scripts are local to the s3 bucket not from any external urls. Any help is much appreciated.

Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'self' https://code.jquery.com https://stackpath.bootstrapcdn.com". Either the 'unsafe-inline' keyword, a hash ('sha256-sSYP07KSXyZ0gQue3jDWS5nZaWUz7V/V0566xsjP90k='), or a nonce ('nonce-...') is required to enable inline execution.

Random 'Invalid query string' after first login with example

I executed the stateless application install, and created a new user, and when I log in, most of the time I'm presented with the following error:

ERROR: Error: Invalid query string. Your query string should include parameters "state" and "code"

the state parameter is missing from the call. If I press the provided "refresh" link, sometimes it works and I'm passed into the applications success screen. Is there any way to fix this?

After logging in, I always land on https://CFID.cloudfront.net/parseauth?code=f3c82b7a-3c4f-4386-9cb7-19b894cd3698 with Bad Request - ERROR: Error: Invalid query string. Your query string should include parameters "state" and "code"

Expired Token not getting refreshed

I wonder if anyone else is having this issue, if after having the site up for the duration of the IDTokens life, and visiting the site again (in a separate tab, or closing the window, etc), they find that they are still returned the expired token, as if it's cached? Only doing a refresh on the page, makes the token be re-generated. In the failed state, the app is delivered the IDToken with the old stale cookie value.

I've been trying to adjust configurations of the CloudFront caching policy to no success, after every expiration, the page is allowed to load without the proper authentication (old cookies), and any resources requiring authentication outside of CloudFront (Service calls into API Gateway using the token) fail, because the token is expired.

Difficulties Reusing UserPool

Having some problems reusing a userpool in a migration scenario.

@ottokruse For context, we have existing installations with a V1 auth@edge backend. I've read the advice that migrating from v1 to v2 is discouraged if not unfeasible, but we have existing application installations using v1 infrastructure with substantial artifacts we need to preserve.

We have no problem creating new installations with a@e v2, but manual migration of the artifacts is difficult and error prone and we're looking to future-proof for other breaking changes.

Our migration strategy is:

  • CloudFormation Export key artifacts from our existing stacks, most importantly the UserPool itself with its users, and a wide range of application specific artifacts
  • Build a new v2 installation CFN template that imports the above artifacts with a cross-stack import, and then:
    • Creates its own CognitoApplication Client on the imported UserPool
    • Creates its own Cloudfront and related artifacts
    • Invokes auth@edge v2.x with references to the original user pool and existing client

The only change I've made is to remove the DependsOn: UserPoolDomain because it was created in the exporting stack and is guaranteed to exist.

The relevant invocation of a@e looks like:

  LambdaEdgeProtection:
    Type: AWS::Serverless::Application
    # DependsOn:
    #   - UserPoolDomain
    Properties:
      Location:
        ApplicationId: arn:aws:serverlessrepo:us-east-1:520945424137:applications/cloudfront-authorization-at-edge
        SemanticVersion: !Ref SemanticVersion
      Parameters:
        CreateCloudFrontDistribution: "false"
        HttpHeaders: !Ref HttpHeaders
        UserPoolArn:
          Fn::ImportValue: !Sub ${ImportStackName}-UserPoolArn
        UserPoolClientId: !Ref UserPoolClient

Seems to me this should work fine. But I'm seeing:

Embedded stack arn:aws:cloudformation:us-east-1:111111111111:stack/testj6b-LambdaEdgeProtection-X0O0IFFNH3FR/feefc9d0-1e1b-11eb-a2c8-0a726edfb5e3 was not successfully created: The following resource(s) failed to create: [LambdaCodeUpdateHandlerRole, UserPoolDomainLookup].

To the best of my knowledge the missing parms (SPAMode, etc) are defaultable and therefore not needed -- and not present in the working v1 config or in working v2 standalone configs that are not used in migration.

Can you advise me how I might debug this configuration or problem, given that it doesn't look like CFN is returning much in the way of information? The UserPoolArn looks reasonable and is of the form:
arn:aws:cognito-idp:us-east-1:111111111111:userpool/us-east-1_ABCDEFGHI

I'm concerned about the domain lookup especially because a UserPool can have only one domain as far as I can tell. We're just using the Cognito default domain.

As an alternative, do you have any advice on actual inplace migration from V1 to V2? Sounds difficult.

Headers configuration reference looks to be invalid

Hey, I've just deployed these functions but in order to get them to work I had to make one minor change, I'm not sure if there was an issue with the way I'm deploying it or it's a genuine issue.

The headers in the http-headers function are loaded from the configuration.json but then when setting the headers into the correct key/val format it seemed to be using the entire JSON rather than just the httpHeaders section of the JSON.

https://github.com/aws-samples/cloudfront-authorization-at-edge/blob/master/src/lambda-edge/http-headers/index.ts#L18

I changed the JSON.parse() in the above URL to pass only the httpHeaders key into the asCloudFrontHeaders() function and everything worked perfectly.

Should the configuration.json in the http-headers function only contain the headers anyway rather than the whole configuration that's the same as all the other functions or is this a legit bug?

Thank you!

Change the User Pool

Hi!

I need to change the User Pool of the already created application on my one. What are the proper ways to do this?

[Question] Does this work with the default Cognito UI?

I don't know if I have done something stupid - but I have extracted the lambda logic out and integrated it into a very basic
Cloudfront hosted website.

Everything works as expected, unless you are a new user trying to register using the hosted UI, when once you have filled in the form you are sent to /error page on the hosted UI. The user is added to the pool.

So I can't understand what's going on! The logic for stopping, signing out, refreshing etc. all works fine. Is this meant for use with the hosted Cognito UI?

Cheers

Missing dependency adm-zip

Summary: Fresh clone following the build steps for Option 2 fails on third step. "adm-zip" dependency is missing.

From CONTRIBUTING.md

"A reproducible test case or series of steps": git clone then npm i then npm run build
"The version of our code being used": Master as of this commit
"Any modifications you've made relevant to the bug": No modifications made.
"Anything unusual about your environment or deployment": Here's a useful output gatsbyjs's CLI produced about my environment...

System:
OS: Linux 5.0 Ubuntu 18.04.3 LTS (Bionic Beaver)
CPU: (4) x64 Intel(R) Core(TM) i3-7100H CPU @ 3.00GHz
Shell: 4.4.20 - /bin/bash
Binaries:
Node: 10.10.0 - ~/.nvm/versions/node/v10.10.0/bin/node
npm: 6.9.0 - ~/.nvm/versions/node/v10.10.0/bin/npm
Languages:
Python: 3.6.3 - /home/public/.pyenv/shims/python
Browsers:
Chrome: 76.0.3809.100
Firefox: 68.0.2
npmGlobalPackages:
gatsby-cli: 2.7.8

[Feature request] Handle custom JWT expiration setting (eg 5 minutes)

Using semantic version 1.2.1

Doing some timeout work and believe there might be an issue between auth@edge and the (very) new AWS feature to set a custom value on token expiration, documented here.

We start with a working angular SPA and default values. Verify operation and log out.

Now we change the identity and access tokens to 5 minutes and save:

image

On relogin we get the cognito email/password challenge, and then:
image

So that's not encouraging. We reset the timeouts to 60m and can log in fine with the "try again".

We can then reset the cognito to 5m but it's ineffective for testing (does not time out after 5m), probably because the app already has the 60m cookie.

Thoughts?

Using Cognito from a different account

I'm attempting to run the Cognito provider from another account but getting a permissions error from the UserPoolDomainLookup resource. Is there a way to disable this resource if I'm using a substack OR is there another suggestion you might have.

Cannot teardown lambda@edge when using custom distribution

I have a stack that has several lambdas for lambda@edge on a cloudfront distribution. The distribution is in stack A. I've launched this application as stack B and did not use the cloudfront distribution included with it.

I can't delete stack B because stack A references the lambdas in the stack as imports. So I teardown stack A. Now I go to tear down stack B. I can't because the lambdas are replicated for lambda@edge. I get an error when I try to manually delete the lambdas.

An error occurred when deleting your function: Lambda was unable to delete arn:aws:lambda:region:accountId:function:stack-name-ParseAuthHandler-someid:1 because it is a replicated function. Please see our documentation for Deleting Lambda@Edge Functions and Replicas.

I read the documentation, it says to go to my distribution (stack A) and delete the lambda. Well, that distribution was already deleted. I can't do that.

I reached out to support to help me delete these lambdas. I think this is a bug. I know it's probably not possible for you to fix this. So, assuming I'm correct and this is a bug, and that you have no ability to actually prevent this behavior, I'd like to suggest some features and changes.

Feature:
Output the lambda arns to SSM so I can reference them that way instead of using ImportValue in my own stack. You could do this in addition to the output values.

Request:
Add a note in the documentation that this can happen, and advise users to use the SSM parameters and tear down this stack (my stack B) first, before attempting to tear down stack A (my CloudFront distro).

Access denied from sample SPA

Hello,

I deployed the sample app from the AWS Serverless Application Repository.
Upon navigating to the URL for the CloudFront distribution, I log into my Cognito User Pool, update my temporary password, and I get a 403 Access Denied.

I checked into the /parseAuth endpoint, seems there's some issue here. My Lambda logs show nothing for the parseAuth function (it says there was an error loading Log Streams). I tried hitting parseAuth directly in my browser by supplying the code and state like this:

https://{cloudfrontID}.cloudfront.net/parseauth?code={CODE}&state={STATE}

The result was:

Bad Request
ERROR: Error: HTTP POST to https://auth-1fdb0555.auth.us-east-1.amazoncognito.com/oauth2/token failed

Is this some issue with the parseAuth Lambda function, or something else entirely?

Should refresh_auth validate the nonce hmac?

Thanks for this example; it's been really helpful for me as I try to get a deeper understanding of oauth.

From my reading of the code, it appears that the refresh_auth doesn't ever actually use the nonce hmac cookie to validate that either the original nonce (cookie) or the querystring nonce was actually signed with the nonce signing secret. Since there is a test that those nonces match, it seems like we are concerned with some kind of verification there.

I may be misreading the code or missing something about oauth, but I wanted to ask the question just in case.

Login bypass: index.html reachable without login

My understanding is that nothing on the S3 bucket should be reachable without being authenticated. However, as far as I can tell the authentication check only really happens for GET requests.

After setting up this demo from the AWS Serverless Application Repository:

$ curl -X GET d1irs8pwnygew6.cloudfront.net/                                  
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>CloudFront</center>
</body>
</html>
$ curl -X POST d1irs8pwnygew6.cloudfront.net/                                  
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="shortcut icon" href="/favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><title>Protected Single Page App</title><link href="/static/css/main.80b9fb4f.chunk.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div><script src="/static/js/runtime-main.8f658356.js"></script><script src="/static/js/2.a948a276.chunk.js"></script><script src="/static/js/main.cd8094a6.chunk.js"></script></body></html>

I am able to bypass the login with HTTP methods other than GET (even non-existent ones).

I don't know if any other resource in the S3 bucket is reachable (probably not since index.html is served for every path), but I think even just serving index.html for an SPA could be a security issue.

Is this something that can easily be fixed?
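One possible mitigation, sketched below under the assumption that the fix belongs in the viewer-request handler (the function and type names here are made up, not code from this repo): short-circuit any HTTP method the auth check doesn't cover, so non-GET requests can't slip past the cookie check.

```typescript
// Hypothetical guard (not from this repo): reject methods the auth check
// doesn't cover, before any content is served.
const ALLOWED_METHODS = new Set(["GET", "HEAD"]);

interface EdgeResponseStub {
  status: string;
  statusDescription: string;
}

// Returns a 405 response stub for disallowed methods, or null to let the
// request continue to the normal authentication check.
function methodGuard(method: string): EdgeResponseStub | null {
  if (!ALLOWED_METHODS.has(method.toUpperCase())) {
    return { status: "405", statusDescription: "Method Not Allowed" };
  }
  return null;
}
```

A guard like this would run before the cookie validation, so even unknown methods fall through to the 405 branch rather than reaching S3.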

Not working with Federated SAML Auth

This works fine with the user pool. When I switched to federated SAML authentication (by changing the config in Cognito), the Login with Corporate button appears but doesn't do anything -- no IdP sign-in page. I have handcrafted the login link using the tutorials at https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-saml-idp.html and I can log in, but the redirect to /preauth then gives an error because there is no state parameter. I tried to pass the state parameter, but then the login page doesn't appear at all.

Have you tested this with the Federated SAML Auth? I am blocked and any help would be great.
It would be great if you could test and update the docs on how to integrate with a federated IdP using SAML -- many folks like us building SPAs could easily deploy their apps on S3 and use this shim to protect them.

Thanks

Serverless Application not available in SAR

Is this application no longer published in the SAR?

I have tested from multiple accounts using administrative roles (with access to serverlessrepo), and am unable to access the application specified in the README (spa-authorization-at-edge) or the application specified in the reuse-auth-only.yaml template (cloudfront-lambda-edge-cognito-auth):

An error occurred (AccessDeniedException) when calling the CreateCloudFormationTemplate operation: User: *** is not authorized to perform: serverlessrepo:CreateCloudFormationTemplate on resource: arn:aws:serverlessrepo:us-east-1:520945424137:applications/spa-authorization-at-edge

An error occurred (AccessDeniedException) when calling the CreateCloudFormationTemplate operation: User: *** is not authorized to perform: serverlessrepo:CreateCloudFormationTemplate on resource: arn:aws:serverlessrepo:us-east-1:520945424137:applications/cloudfront-lambda-edge-cognito-auth

I can follow the process to build myself, but initial deployment and testing would be easier if the SAR application was accessible.

DeprecationWarning: Buffer() is deprecated due to security and usability issues in check-auth

Hello,

I am getting the following error in the check-auth Lambda function, using the latest code with Node.js 12.x:

2020-07-17T11:37:44.524Z e3eb0320-90bb-4930-a588-f56ed4f273de ERROR (node:8) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.

Is this an issue with a referenced lib in the check-auth Lambda function? Please advise.

Thanks,
Dilip

Error 413 after successful login

Dear developers,

thank you for the great login functionality! I successfully deployed the solution using one of my existing CloudFront distributions. However, I am struggling to do the same for another distribution. After logging in, I receive an error 413 because the URL string is longer than the maximum permitted by CloudFront. For some reason, the value of the state parameter is longer than 9000 characters. I don't understand why this happens. The settings of both CloudFront distributions are identical, except they point to different S3 buckets and use different lambda functions (all of them were created using your template). Both S3 buckets contain a different SPA. All lambda functions point to the same user pool. CNAMES were configured for both distributions and in the cognito app settings. Do you have any idea what could be going wrong?

All the best.
Ben

Question about Origins

Thanks a lot for such a complete and useful work! When looking at the example of reusing auth with my own Cloudfront distribution, I see the following:

        Origins:
          - DomainName: example.org
            Id: dummy-origin
            CustomOriginConfig:
              OriginProtocolPolicy: match-viewer
          - DomainName: example.com
            Id: protected-origin
            CustomOriginConfig:
              OriginProtocolPolicy: match-viewer

Are we supposed to replace example.org and example.com there with other values? Also, are we supposed to add our own S3 Bucket where we'll be deploying our SPA?

Thanks again,
Sammy

Change cognito auth domain

Hello!

I am interested in the following: I've specified my custom domain (auth.example.com) for Cognito authorization, and I want to use it for my Serverless application. Is there a way to do this?

[Question] Redirect to localhost for application authentication debugging

@ottokruse
We're working on the problem of bootstrapping authentication in an SPA. By bootstrapping I mean supplying the SPA with the basic configuration information it needs to make AWS API calls without having to compile information into the code. The key piece of configuration information needed is the bucket that the SPA is running in and the cognito identity pool that gives the app its internal role/permissions.

The debug cycle is very difficult, as currently you need to:

  1. Recompile the app with debug statements
  2. Reupload the compiled SPA to S3
  3. Invalidate
  4. Test
  5. Repeat

We have a version of the (Node) app that runs locally on 8080 and would like to be able to redirect to the local instance instead of the S3-hosted version -- and do so with the entire authentication chain intact. That is, since we're examining the results of authentication, we want the entire authorization-at-edge framework to execute as usual, and then, instead of sending the final redirect to the index.html in the bucket, send it to the localhost:8080/index.html URL.

From my reading of the blog, the place to do the change in the final destination is the checkConfig handler. Is that right?

When I inspect the lambda code in the repository there's an easy and obvious place: the checkauth index.ts line 18:

const domainName = request.headers['host'][0].value;

suggesting that a simple change to set domainName to 'localhost:8080' should do the job.

  1. Is this understanding correct? Is there a better way to accomplish the goal of local debugging with authentication active?
  2. I then tried to simply update the lambda version in place. However when I do so I do not see index.ts, but rather a bundle.js

I'm figuring the bundle is a webpacked build of index.ts, so I thought no problem, as webpack minification generally just removes spaces and punctuation. But when I examine bundle.js the code seems to bear no relationship to checkAuth's index.ts, and in fact does not reference the domainName variable I want to change.

I'd like to avoid forking the entire application, as that would be a whole new toolchain to learn and set up - part of the appeal was consuming an existing app.

Do you have any thoughts on how to modify checkAuth for local debugging (if my approach seems viable in the first place) or how to approach the overall goal?
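Not an authoritative answer, but a sketch of the idea in the question: if the goal is only to change where redirects land, an override near the quoted line could look like this (the function and parameter names are hypothetical, not the repo's actual code, which derives the domain from the Host header):

```typescript
// Hypothetical override for local debugging -- NOT the repo's actual code.
// The real check-auth handler derives the domain from the Host header; this
// just shows where a debug host could be substituted.
function resolveDomainName(hostHeaderValue: string, debugHost?: string): string {
  // Use the override when provided (e.g. "localhost:8080"), else the real host
  return debugHost ?? hostHeaderValue;
}
```

Note that for a localhost redirect to complete the OAuth flow, the Cognito app client's callback URLs would also need to include the localhost URL; Cognito requires HTTPS callback URLs but makes an exception for http://localhost during development.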

[Enhancement] Redirect trailing / pages to index.html

Some CMSes (like Hugo) use links with trailing slashes, expecting index.html resolution semantics: e.g. example.com/test/ is expected to return example.com/test/index.html. CloudFront does not support this out of the box on anything but the root level. However, resolving index.html in subfolders is the usual behavior of the S3 website functionality; here we cannot resort to S3 web hosting, since we don't want to publish a public S3 site.

I solved this in my basic-auth solution with a short snippet like the second solution here:
https://stackoverflow.com/questions/49082709/redirect-to-index-html-for-s3-subfolder

It should be a quick fix. I can do it if it's accepted.
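For reference, the rewrite described in that Stack Overflow answer boils down to a URI transformation like this (a hedged sketch, not code from this repo; the assumption is that it runs in an origin-request trigger before the request reaches S3):

```typescript
// Sketch of the trailing-slash rewrite: map "/test/" to "/test/index.html"
// so S3 can serve the page, mimicking S3 website-hosting semantics.
function rewriteUri(uri: string): string {
  return uri.endsWith("/") ? uri + "index.html" : uri;
}
```

An origin-request trigger is the natural place for this because the rewrite then happens after authentication but before the S3 lookup, and the browser's address bar keeps the clean trailing-slash URL.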

Invalid login token. Token expired Issues and Unwanted Local Storage in Brave Browser

@ottokruse, this is a resolved issue, but has information that might help others, so I'm not immediately closing it. Feel free to do so whenever it seems right.

In testing we discovered a persistent situation where an a@edge installation using a CloudFront alternative domain would fail with an error of the form Invalid login token. Token expired.

  • The alternative domain had previously been part of another a@edge install
  • The problem did not occur with a new alternate domain
  • The problem did not occur with no alternate domain, e.g. raw CloudFront URL
  • The problem could be resolved by removing the alternate domain and reverting to Cloudfront URL (by cloudformation update, so that everything in a@edge was kept in sync)

Turns out the solution is simple ...

  • a@edge creates a long lived ID token that is stored in a browser's local storage.
  • The token is stored under the domain of the web application -- in this case the alternate domain -- not the AWS or Cognito domain, which can be confusing
  • The token is not exactly a cookie, it's a local storage item, sometimes called Web Storage
  • Finally, and most fun, not all browsers clear local storage when you select the "Clear all cookies" item. Brave is one such browser

So, if this happens to you, find the ID token 'cookie' in local storage associated with the alternate domain, and delete it. Here's one example:

(screenshot)

Interestingly, clearing this token also seems to resolve the failed logout seen in #94, not sure why.

ZIP does not support timestamps before 1980

I ran into this issue with babel: babel/babel#12125

I don't understand how updating yarn is going to fix old files in the babel npm packages without a new release of babel. So I'd guess this will be broken until a release. But perhaps there is some other way to get it to work more cleanly.

My temporary work around is to run:
find .aws-sam/build/ReactAppHandler/node_modules/ -mtime +16000 -print -exec touch {} \;
between steps 4 and 5 in the readme.

When trying to track this down I see this kind of issue seems to happen every so often with various packages. It'd be really nice if the AWS tools could do something with the zip library to just ignore the dates on the files. Or at least they could catch the error and print out the file name of the file that has the problem.

In case it is useful for the AWS Sam team here is the stack trace:

Unable to export
Traceback (most recent call last):
  File "/usr/local/Cellar/aws-sam-cli/1.4.0/libexec/lib/python3.7/site-packages/samcli/lib/package/artifact_exporter.py", line 270, in export
    self.do_export(resource_id, resource_dict, parent_dir)
  File "/usr/local/Cellar/aws-sam-cli/1.4.0/libexec/lib/python3.7/site-packages/samcli/lib/package/artifact_exporter.py", line 286, in do_export
    uploaded_url = upload_local_artifacts(resource_id, resource_dict, self.PROPERTY_NAME, parent_dir, self.uploader)
  File "/usr/local/Cellar/aws-sam-cli/1.4.0/libexec/lib/python3.7/site-packages/samcli/lib/package/artifact_exporter.py", line 155, in upload_local_artifacts
    return zip_and_upload(local_path, uploader)
  File "/usr/local/Cellar/aws-sam-cli/1.4.0/libexec/lib/python3.7/site-packages/samcli/lib/package/artifact_exporter.py", line 172, in zip_and_upload
    with zip_folder(local_path) as (zip_file, md5_hash):
  File "/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.7/lib/python3.7/contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "/usr/local/Cellar/aws-sam-cli/1.4.0/libexec/lib/python3.7/site-packages/samcli/lib/package/artifact_exporter.py", line 188, in zip_folder
    zipfile_name = make_zip(filename, folder_path)
  File "/usr/local/Cellar/aws-sam-cli/1.4.0/libexec/lib/python3.7/site-packages/samcli/lib/package/artifact_exporter.py", line 206, in make_zip
    zf.write(full_path, relative_path)
  File "/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.7/lib/python3.7/zipfile.py", line 1730, in write
    zinfo = ZipInfo.from_file(filename, arcname)
  File "/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.7/lib/python3.7/zipfile.py", line 530, in from_file
    zinfo = cls(arcname, date_time)
  File "/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.7/lib/python3.7/zipfile.py", line 357, in __init__
    raise ValueError('ZIP does not support timestamps before 1980')
ValueError: ZIP does not support timestamps before 1980

Insert Content Security Policy Headers

We're working our way through adapting this promising approach to securing an AngularJS app. In replacing the sample React application we are finding our CSS blocked, and it looks like a problem with Content Security Policy headers. There's good information on adding headers here, but because of the complexity of the authentication solution it's not clear where in the chain of requests and responses to add code that creates the headers.

Do you have a suggestion or recommendation?

And a request ... it would be great to have a little documentation or discussion on taking the deployed solution to the next step and using it for an actual app, including issues like this.

Thanks for a promising (if complex) solution!

Export significant stack outputs to allow cross-stack referencing

Exporting more of the stack's outputs would allow users to more easily integrate their own CF stacks with the application. I'm particularly interested in S3Bucket and UserPoolId. These would allow me to:

  1. Configure my stack to deploy my own front-end assets to the existing S3 bucket, and

  2. Create an ID pool on top of the existing user pool

respectively. As-is, if I want to do these things via CF I need to manually create mappings for the bucket and user pool.

[Question] Accessing S3 bucket from within the SPA?

Hi,

Sorry if this is slightly off topic, but I was wondering if it is possible to access the private S3 bucket hosting the SPA with a jQuery .get call from within the SPA?

Background:

  • I'm trying to setup authentication to access a private S3 bucket, and using this open source repository (https://github.com/rufuspollock/s3-bucket-listing) to list all items within the S3 bucket once authenticated. (I have also opened a ticket there rufuspollock/s3-bucket-listing#101, where I go through some of the steps I've done).

  • I've run the CloudFormation template and replaced the React SPA with the index.html mentioned in the above ticket. Logging in works great and I can see nearly the entire index.html page, but it doesn't display anything from the S3 bucket itself.

I have some theories about this, but wasn't sure who to ask, so hoping someone could point me in the right direction.

  • In the list.js, it uses a $.get(s3_rest_url), however this returns an empty string (from what I can tell), and I think this is due to the list.js not passing in the authentication tokens required to access the S3 bucket (in the same way the user needed to sign in to access the index.html).

If this is the case, would a solution be to read the cookies from the browser after the user logs in, and pass one of the auth tokens in the $.get(s3_rest_url) call so that it can access the S3 bucket?

Please let me know if you need any more information, and any advice would be greatly appreciated!

Thank you.

Migration to 2.0 Problem -- RedirectUrisSignOut

Working on migrating our application to a@e 2.0 to look further at #81 and other features, and have run into a roadblock. The demonstration stack for 2.0 that illustrates how to define the Cognito UserPool etc. in the parent stack works fine, but ...

I've pulled the Cognito User Pool and other related artifacts into the main stack, and successfully created all artifacts until we get to the calls to the nested a@e stack. In creating the framework I see:

Embedded stack arn:aws:cloudformation:us-east-1:REDACTED:stack/ae200d-LambdaEdgeProtection-L97QEYQVDZLR/4cb7b300-ef9e-11ea-ace7-126c97cb5bc1 was not successfully created: Cannot export output RedirectUrisSignOut. Exported values must not be empty or whitespace-only.

My call to the nested stack which worked fine in 1.2 has changed only in minor ways ... I've added parameters for the UserPoolArn and UserPoolClientId I've created in the parent stack, just as the a@e example for 2.0 does. The failing call looks like this:

  LambdaEdgeProtection:
    Type: AWS::Serverless::Application
    DependsOn: UserPoolDomain
    Properties:
      Location:
        ApplicationId: arn:aws:serverlessrepo:us-east-1:520945424137:applications/cloudfront-authorization-at-edge
        SemanticVersion: 2.0.0
      Parameters:
        CreateCloudFrontDistribution: "false"
        HttpHeaders: !Ref HttpHeaders
        UserPoolArn: !GetAtt UserPool.Arn
        UserPoolClientId: !Ref UserPoolClient

A visit through the code shows that RedirectUrisSignOut is mainly referenced in src/cfn-custom-resources/user-pool-client/index.ts and in the main template, suggesting that maybe I've got the dependencies wrong, but they seem obvious ... an explicit DependsOn for the UserPoolDomain, and implicit ones for the UserPoolClient and UserPool.Arn. All we're really doing here is adding the HttpHeaders parameter from the main template.

Another possibility: because this stack does not use custom domains but just relies on the CloudFront URL, as the example does, there may be some problem with the sentinel values -- but I left those alone, as they are in the example.

Any idea what might be going on?

/signout logs and debugging

We're having some problems debugging the /signout functionality, which isn't described quite as thoroughly in the blog as the authorization scenarios.

Let me start with my understanding of the /signout chain, which may be incorrect:

  1. App or user hits the /signout URL
  2. Cloudfront intercepts and sends request to the lambda@edge configured for the /signout Behaviour
  3. Lambda@edge neuters/kills authentication cookies and invokes logout on Cognito
  4. Cognito app client finds the signout URL and redirects there (actually set to the front page of the app)
  5. CheckAuth or ParseAuth (I forget which) intercepts the redirect, fails because of the neutered cookies, and sends the user to the Cognito login

What we see is:

  • App signs in successfully
  • App signs out by redirecting to the /signout behaviour configured in CloudFront, e.g. var url = "https://"+host+'/signout'; window.location = url
  • Cloudfront points at a versioned lambda@edge for signout. The console shows no versioning on that lambda, which may just mean that versioning on the edge doesn't show in the GUI ... or it may be a problem
  • In the App we get a confusing range of failures from a CSP fail referencing a nonexistent Cognito image to (when allowing that in policy) a CORB block Cross-Origin Read Blocking (CORB) blocked cross-origin response https://auth-2dc...
  • A simple refresh brings up Cognito authentication screen and all is well

The immediate issue is that I can't seem to see the CloudWatch logs for the /signout lambda@edge. The lambda is present in the function list, and accessing its CW logs the easy way via console/monitoring tab/view logs in cloudwatch yields The specific log group: /aws/lambda/...-LambdaEdgeProte-SignOutHandler-12345 does not exist in this account or region.

A closer look at the CW logs directly specifying the Lambda function name yields some logs but they are old with respect to the run in question and contain only start/end messages anyway.

I've seen the 'logs don't exist' scenario in a few different debug situations and wonder how this happens?
I also note that all of the lambda@edge functions for this application have the same version. We regularly update the lambdas in debugging, usually modifying the CSP headers. Do all the lambdas update in sync with any change to the a@edge?

Looking for hints to debug these kinds of scenarios where a lambda might be the issue but it leaves no logs we can reach to see what happened.

Logged a call with AWS, who believe this is an a@edge issue.

[Question] - use of "CookieSettings" for custom data

@ottokruse
We're working on a well known problem of providing configuration data to an S3 SPA, while trying to keep the "cloudfront-authorization-at-edge" (CFAAE ) an intact authorization 'black box'.

The problem in a nutshell is that a SPA in S3 has no way to acquire configuration unless it's compiled in -- not even the name of the bucket it resides in.

One of our developers came up with a great idea -- since our CloudFormation that invokes the CFAAE knows everything we need to pass through to the SPA, we could insert the CFN data in a cookie using the "CookieSettings" parameter to the SAM application, like this example for a bucketname:

  LambdaEdgeProtection:
    Type: AWS::Serverless::Application
    Properties:
      Location:
        ApplicationId: arn:aws:serverlessrepo:us-east-1:520945424137:applications/cloudfront-authorization-at-edge
        SemanticVersion: !Ref SemanticVersion
      Parameters:
        CreateCloudFrontDistribution: "false"
        HttpHeaders: !Ref HttpHeaders
        AlternateDomainNames: !Join [ ",", !Ref AlternateDomainNames ]
        CookieSettings: "{\"bucketName\":\"hardcoded\", \"idToken\": null,\"accessToken\": null,\"refreshToken\": null,\"nonce\": null}"

That worked brilliantly in the sense that it dynamically altered the CookieSettings in each Lambda@Edge's configuration.json, for example:

  "cookieSettings": {
    "bucketName": "hardcoded",
    "idToken": null,
    "accessToken": null,
    "refreshToken": null,
    "nonce": null
  },

but it had no effect whatsoever on the app ... the bucketName cookie was never created or passed to the application, although all the others were.

Which makes me think I'd misunderstood the use of cookie settings ... perhaps they are a way to set behavioural metadata on already-defined cookies, and defining a new cookie would require modifying (all?) the lambdas to recognize and propagate the cookie end to end. Which is a place we do not want to go, per the 'black box' constraint.

I tried one more time, and attempted to sneak a value into a predefined cookie, in this case the accessToken:

CookieSettings: "{\"idToken\": null,\"accessToken\": \"bucketname\",\"refreshToken\": null,\"nonce\": null}"

but the bucketname just gets replaced by the real access token, as one might expect.

Is there a way to pass custom values into cookies all the way through the framework to the SPA that I'm just not seeing?

If not, that would be a terrific feature, as almost every SPA has need of some data that is available in the installing cloudFormation script. But I'm all on the application side right now ...

R.

How to deploy updated react-app code with SAM CLI

Changes that I've made in the react-app (src\cfn-custom-resources\react-app\src) are not getting deployed after successful initial deployment.

Steps I've taken:

  1. Deploy successfully for the first time using the SAM CLI (Option 2 in the README) straight from this git repo without any changes (npm install, npm run build, sam build --use-container, sam package ..., sam deploy ...). After this deploy, it's working as I'd expect - I can sign in and see the sample React app page.
  2. Make minor change to src\cfn-custom-resources\react-app\src\App.js (just an update to explanation text to test code deploy)
  3. Attempt to deploy the update using the same steps as in step 1 (npm install, npm run build, sam build --use-container, sam package ..., sam deploy ...). All complete without errors.
  4. Clear the CloudFront cache (just to make sure) : aws cloudfront create-invalidation --distribution-id E16D.. --paths "/*"

After the update is deployed with # 3 I can see:

  • The only updated sam zip package in the s3 build bucket is the react-app handler (as I would expect) and I can see the change from # 2 in that zip file.
  • Stack shows UPDATE_COMPLETE and stack events specifically shows ReactAppHandler UPDATE_COMPLETE with times matching update from # 3
  • ReactAppHandler lambda fn has an updated time of when I executed # 3
  • No log entries in the CloudWatch logs for ReactAppHandler when # 3 was executed. The only log entries are from when # 1 was executed (logs checked > 1 hour after # 3 completed)
  • react-app s3 bucket has no new files including no new files in statics/js/ - all are from the # 1 deployment.

I'm guessing this is something obvious I'm missing - thanks for pointing me in the right direction.

state value is not json in parse-auth

I have set up the whole stack using CloudFormation and it works great.

At one point I got this as the state value: eyJub25jZSI6Ii4xQUZkZVVONVQuZn5 VXciLCJyZXF1ZXN0ZWRVcmkiOiIvIn0=

When I try console.log(Buffer.from(stateval, 'base64').toString()) I get this value:

{"nonce":".1AFdeUN5T.f~U]��\]Y\�Y\�H���ȟ, which is not valid JSON, and it errors at

https://github.com/aws-samples/cloudfront-authorization-at-edge/blob/master/src/lambda-edge/parse-auth/index.ts#L20

What is the root cause for this issue?
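One likely explanation (an assumption based on the space visible in the quoted state value, not confirmed from the code): the state's base64 contained a "+", which querystring decoding turns into a space, corrupting the payload before it is parsed. A sketch of the repair (the function name is made up):

```typescript
// Assumption: querystring decoding turned "+" into " " in the base64 state.
// Restoring "+" before decoding yields valid JSON again; base64url encoding
// (which uses "-" and "_" instead of "+" and "/") avoids the problem entirely.
function decodeState(rawStateFromQuery: string): string {
  const repaired = rawStateFromQuery.replace(/ /g, "+");
  return Buffer.from(repaired, "base64").toString("utf8");
}
```

The cleaner fix is for the encoder to emit URL-safe base64 (or to percent-encode the state), so the value survives querystring handling untouched.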

Request for an example of token refresh after timeout in SPA?

Would love to avoid reinventing this particular wheel, and clearly many of us have already implemented stale-token detection and a token-refresh (or warn) action in a JavaScript-based SPA. But I can't seem to find one, although I bet one is around somewhere in the issues or elsewhere. If anybody has one to share, would you please post a link here?

Current state is the obvious one:

  • Everything works
  • Time passes and (presumably) the token eventually expires
  • The user tries to do something and it fails in odd ways
  • The user refreshes the screen and all is well again
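In the absence of a canonical example, here is a minimal sketch of the timing half of the problem, assuming the SPA can read the ID token (the helper name and the refresh margin are made up, not from this repo):

```typescript
// Minimal sketch: read the exp claim from the (unverified) JWT payload and
// compute when a refresh should fire. Uses Node's Buffer for brevity; in the
// browser, atob() would be used instead.
function msUntilRefresh(jwt: string, nowMs: number, marginMs = 60_000): number {
  const payload = JSON.parse(
    Buffer.from(jwt.split(".")[1], "base64").toString("utf8")
  );
  // exp is in seconds since the epoch; refresh marginMs before actual expiry
  return Math.max(0, payload.exp * 1000 - marginMs - nowMs);
}
```

In the browser one might then setTimeout a navigation to this solution's refresh path (/refreshauth) shortly before expiry; the exact query parameters that path expects should be checked against the deployed configuration.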
