aws-samples / cloudfront-authorization-at-edge
Protect downloads of your content hosted on CloudFront with Cognito authentication using cookies and Lambda@Edge
License: MIT No Attribution
I'd love to avoid reinventing this particular wheel: clearly many of us have already implemented stale-token detection and a token-refresh-or-warn action in a JavaScript-based SPA. I can't seem to find one to reuse, although I bet one is around somewhere in the issues or elsewhere. If anybody has one to share, would you please post a link here?
Current state is the obvious one:
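In case it helps others landing here, a minimal sketch of such stale-token detection (helper names are illustrative, not from this repo; the SPA would call isStale before API calls or on a timer and trigger a refresh or warning):

```javascript
// Decode a JWT payload (no signature check -- this is only for reading
// the exp claim client-side; a browser SPA would use atob instead of Buffer).
function decodeJwtPayload(jwt) {
  const payload = jwt.split(".")[1];
  // JWTs are base64url-encoded; normalize before decoding
  const normalized = payload.replace(/-/g, "+").replace(/_/g, "/");
  return JSON.parse(Buffer.from(normalized, "base64").toString("utf8"));
}

// True if the token expires within skewSeconds from now
function isStale(jwt, skewSeconds = 60) {
  const { exp } = decodeJwtPayload(jwt);
  return exp * 1000 - Date.now() < skewSeconds * 1000;
}
```

On a stale result the SPA could silently call the refresh endpoint, or warn the user before their next action fails.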
Having some problems reusing a userpool in a migration scenario.
@ottokruse For context, we have existing installations with a v1 auth@edge backend. I've read the advice that migrating from v1 to v2 is discouraged if not infeasible, but we have existing application installations using v1 infrastructure with substantial artifacts we need to preserve.
We have no problem creating new installations with a@e v2, but manual migration of the artifacts is difficult and error-prone, and we're looking to future-proof against other breaking changes.
Our migration strategy is:
The only change I've made is to remove the DependsOn: UserPoolDomain because it was created in the exporting stack and is guaranteed to exist.
The relevant invocation of a@e looks like:
LambdaEdgeProtection:
  Type: AWS::Serverless::Application
  # DependsOn:
  #   - UserPoolDomain
  Properties:
    Location:
      ApplicationId: arn:aws:serverlessrepo:us-east-1:520945424137:applications/cloudfront-authorization-at-edge
      SemanticVersion: !Ref SemanticVersion
    Parameters:
      CreateCloudFrontDistribution: "false"
      HttpHeaders: !Ref HttpHeaders
      UserPoolArn:
        Fn::ImportValue: !Sub ${ImportStackName}-UserPoolArn
      UserPoolClientId: !Ref UserPoolClient
Seems to me this should work fine. But I'm seeing:
Embedded stack arn:aws:cloudformation:us-east-1:111111111111:stack/testj6b-LambdaEdgeProtection-X0O0IFFNH3FR/feefc9d0-1e1b-11eb-a2c8-0a726edfb5e3 was not successfully created: The following resource(s) failed to create: [LambdaCodeUpdateHandlerRole, UserPoolDomainLookup].
To the best of my knowledge the missing params (SPAMode, etc.) are defaultable and therefore not needed -- and not present in the working v1 config or in working v2 standalone configs that are not used in migration.
Can you advise me how I might debug this configuration or problem, given that it doesn't look like CFN is returning much in the way of information? The UserPoolArn looks reasonable and is of the form:
arn:aws:cognito-idp:us-east-1:111111111111:userpool/us-east-1_ABCDEFGHI
I'm concerned about the domain lookup especially because a UserPool can have only one domain as far as I can tell. We're just using the Cognito default domain.
As an alternative, do you have any advice on actual in-place migration from v1 to v2? It sounds difficult.
Team
I don't want to trigger multiple Lambdas. Can we do everything in a single Lambda, and can we get the Authorization header passed through to CloudFront after logging in to Cognito?
I am referring to the example below, but I am not getting the header after a successful login through AWS Cognito.
Alternatively, could you share reference code in Node.js for parse-auth?
I have a SPA application and it makes calls to an API. I had some doubts based on the examples.
Can the token be refreshed silently, using a SPA and http-only cookies?
Why does the SPA example use cookies readable by the client? If the token can be silently refreshed using http-only cookies, I see no reason to expose the token and refresh token to JS-based attacks.
I have set up the whole stack with CloudFormation. At times I get the error Nonce mismatch with a "Try again" link. What might be leading to this experience?
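For context, a hedged sketch of the kind of check that typically produces this error (names are illustrative, not this repo's actual source): the nonce carried in the state parameter must equal the nonce cookie set when sign-in started, so completing login in a different tab, with blocked or expired cookies, or from a stale bookmark makes them diverge:

```javascript
// Illustrative only: the nonce from the state parameter must match the
// nonce cookie written when the sign-in redirect was issued.
function checkNonce(stateNonce, cookieNonce) {
  if (!cookieNonce || stateNonce !== cookieNonce) {
    // Divergence: second tab, cleared/blocked cookies, or a stale bookmark
    throw new Error("Nonce mismatch");
  }
}
```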
Summary: Fresh clone following the build steps for Option 2 fails on third step. "adm-zip" dependency is missing.
From CONTRIBUTING.md
"A reproducible test case or series of steps": git clone
then npm i
then npm run build
"The version of our code being used": Master as of this commit
"Any modifications you've made relevant to the bug": No modifications made.
"Anything unusual about your environment or deployment": Here's useful output that Gatsby's CLI produced about my environment:
System:
OS: Linux 5.0 Ubuntu 18.04.3 LTS (Bionic Beaver)
CPU: (4) x64 Intel(R) Core(TM) i3-7100H CPU @ 3.00GHz
Shell: 4.4.20 - /bin/bash
Binaries:
Node: 10.10.0 - ~/.nvm/versions/node/v10.10.0/bin/node
npm: 6.9.0 - ~/.nvm/versions/node/v10.10.0/bin/npm
Languages:
Python: 3.6.3 - /home/public/.pyenv/shims/python
Browsers:
Chrome: 76.0.3809.100
Firefox: 68.0.2
npmGlobalPackages:
gatsby-cli: 2.7.8
Thanks a lot for such a complete and useful work! When looking at the example of reusing auth with my own Cloudfront distribution, I see the following:
Origins:
  - DomainName: example.org
    Id: dummy-origin
    CustomOriginConfig:
      OriginProtocolPolicy: match-viewer
  - DomainName: example.com
    Id: protected-origin
    CustomOriginConfig:
      OriginProtocolPolicy: match-viewer
Are we supposed to replace example.org and example.com there with other values? Also, are we supposed to add our own S3 bucket where we'll be deploying our SPA?
Thanks again,
Sammy
When the token expires and a new token is fetched, shouldn't the refresh token also be updated?
Currently only the id_token and access_token are updated:
https://github.com/aws-samples/cloudfront-authorization-at-edge/blob/master/src/lambda-edge/refresh-auth/index.ts#L43
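A hedged sketch of the change being suggested (shapes and names are hypothetical, not the repo's actual code): when the token endpoint returns a rotated refresh_token, it should be written back too, falling back to the existing one otherwise:

```javascript
// Hypothetical: after calling the Cognito token endpoint, write back every
// token it returned, including a rotated refresh token if one is present.
function cookiesToSet(tokenResponse, existingRefreshToken) {
  return {
    idToken: tokenResponse.id_token,
    accessToken: tokenResponse.access_token,
    // Cognito may or may not rotate the refresh token; keep the old one
    // only when no new one is returned
    refreshToken: tokenResponse.refresh_token || existingRefreshToken,
  };
}
```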
I executed the stateless application install, and created a new user, and when I log in, most of the time I'm presented with the following error:
ERROR: Error: Invalid query string. Your query string should include parameters "state" and "code"
The state parameter is missing from the call. If I press the provided "refresh" link, sometimes it works and I'm taken to the application's success screen. Is there any way to fix this?
After logging in, I always land on https://CFID.cloudfront.net/parseauth?code=f3c82b7a-3c4f-4386-9cb7-19b894cd3698 with Bad Request - ERROR: Error: Invalid query string. Your query string should include parameters "state" and "code"
Thanks for this example; it's been really helpful for me as I try to get a deeper understanding of oauth.
From my reading of the code, it appears that refresh_auth never actually uses the nonce HMAC cookie to validate that either the original nonce (cookie) or the query-string nonce was actually signed with the nonce signing secret. Since there is a test that those nonces match, it seems like some kind of verification is intended there.
I may be misreading the code or missing something about oauth, but I wanted to ask the question just in case.
Hi!
I need to change the User Pool of an already-created application to my own. What is the proper way to do this?
Hello!
I have a question. I've specified my custom domain (auth.example.com) for Cognito authorization. I want to use it for my serverless application as well. Is there a way to do this?
Hello,
I am getting the following error in the check-auth Lambda function, using the latest code with Node.js 12.x:
2020-07-17T11:37:44.524Z e3eb0320-90bb-4930-a588-f56ed4f273de ERROR (node:8) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
Is this an issue with a referenced library in the check-auth Lambda function? Please advise.
Thanks,
Dilip
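For what it's worth, DEP0005 is emitted whenever something (your code or a dependency) calls the deprecated Buffer constructor; the modern replacements look like this:

```javascript
// Deprecated (the pattern that triggers DEP0005 somewhere in the tree):
//   const buf = new Buffer("some-token", "base64");
// Modern equivalents:
const fromString = Buffer.from("some-token", "base64"); // decode existing data
const zeroed = Buffer.alloc(16);                        // new zero-filled buffer
const uninitialized = Buffer.allocUnsafe(16);           // faster, contents undefined
```

The warning is non-fatal; it only points at code that should eventually migrate to these factory methods.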
E.g. this could be the Cognito redirect:
"error_description" and "error" in the query string identify this situation.
The HTML error response should then display this error, instead of "Cannot parse query string".
Some CMSes (like Hugo) generate links with trailing slashes, expecting index.html path-extension semantics. E.g. example.com/test/ is expected to return example.com/test/index.html. CloudFront does not support this out of the box on anything but the root level. However, appending index.html in subfolders is the usual behavior of the S3 website functionality. Here we cannot resort to S3 web hosting, since we don't want to publish a public S3 site.
I solved this in my basic-auth solution with a short snippet like the second solution here:
https://stackoverflow.com/questions/49082709/redirect-to-index-html-for-s3-subfolder
It should be a quick fix. I can do it if it's accepted.
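The kind of snippet referred to looks roughly like this (a sketch of the StackOverflow approach, not this repo's code):

```javascript
// Append index.html to viewer URIs ending in "/", emulating the S3
// website index-document behavior in a CloudFront origin-request trigger.
function rewriteTrailingSlash(request) {
  if (request.uri.endsWith("/")) {
    request.uri += "index.html";
  }
  return request;
}

// The Lambda@Edge origin-request entry point would then be:
// exports.handler = async (event) =>
//   rewriteTrailingSlash(event.Records[0].cf.request);
```

Using an origin-request trigger (rather than viewer-request) keeps the rewrite invisible to the browser and cache-friendly.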
I ran into this issue with babel: babel/babel#12125
I don't understand how updating yarn is going to fix old files in the babel npm packages without a new release of babel. So I'd guess this will be broken until a release. But perhaps there is some other way to get it work more cleanly.
My temporary work around is to run:
find .aws-sam/build/ReactAppHandler/node_modules/ -mtime +16000 -print -exec touch {} \;
between steps 4 and 5 in the readme.
When trying to track this down I see this kind of issue seems to happen every so often with various packages. It'd be really nice if the AWS tools could do something with the zip library to just ignore the dates on the files. Or at least they could catch the error and print out the file name of the file that has the problem.
In case it is useful for the AWS Sam team here is the stack trace:
Unable to export
Traceback (most recent call last):
File "/usr/local/Cellar/aws-sam-cli/1.4.0/libexec/lib/python3.7/site-packages/samcli/lib/package/artifact_exporter.py", line 270, in export
self.do_export(resource_id, resource_dict, parent_dir)
File "/usr/local/Cellar/aws-sam-cli/1.4.0/libexec/lib/python3.7/site-packages/samcli/lib/package/artifact_exporter.py", line 286, in do_export
uploaded_url = upload_local_artifacts(resource_id, resource_dict, self.PROPERTY_NAME, parent_dir, self.uploader)
File "/usr/local/Cellar/aws-sam-cli/1.4.0/libexec/lib/python3.7/site-packages/samcli/lib/package/artifact_exporter.py", line 155, in upload_local_artifacts
return zip_and_upload(local_path, uploader)
File "/usr/local/Cellar/aws-sam-cli/1.4.0/libexec/lib/python3.7/site-packages/samcli/lib/package/artifact_exporter.py", line 172, in zip_and_upload
with zip_folder(local_path) as (zip_file, md5_hash):
File "/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.7/lib/python3.7/contextlib.py", line 112, in __enter__
return next(self.gen)
File "/usr/local/Cellar/aws-sam-cli/1.4.0/libexec/lib/python3.7/site-packages/samcli/lib/package/artifact_exporter.py", line 188, in zip_folder
zipfile_name = make_zip(filename, folder_path)
File "/usr/local/Cellar/aws-sam-cli/1.4.0/libexec/lib/python3.7/site-packages/samcli/lib/package/artifact_exporter.py", line 206, in make_zip
zf.write(full_path, relative_path)
File "/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.7/lib/python3.7/zipfile.py", line 1730, in write
zinfo = ZipInfo.from_file(filename, arcname)
File "/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.7/lib/python3.7/zipfile.py", line 530, in from_file
zinfo = cls(arcname, date_time)
File "/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.7/lib/python3.7/zipfile.py", line 357, in __init__
raise ValueError('ZIP does not support timestamps before 1980')
ValueError: ZIP does not support timestamps before 1980
I have a stack that has several lambdas for lambda@edge on a cloudfront distribution. The distribution is in stack A. I've launched this application as stack B and did not use the cloudfront distribution included with it.
I can't delete stack B because stack A references the lambdas in the stack as imports. So I teardown stack A. Now I go to tear down stack B. I can't because the lambdas are replicated for lambda@edge. I get an error when I try to manually delete the lambdas.
An error occurred when deleting your function: Lambda was unable to delete arn:aws:lambda:region:accountId:function:stack-name-ParseAuthHandler-someid:1 because it is a replicated function. Please see our documentation for Deleting Lambda@Edge Functions and Replicas.
I read the documentation, it says to go to my distribution (stack A) and delete the lambda. Well, that distribution was already deleted. I can't do that.
I reached out to support to help me delete these lambdas. I think this is a bug. I know it's probably not possible for you to fix this. So, assuming I'm correct and this is a bug, and that you have no ability to actually prevent this behavior, I'd like to suggest some features and changes.
Feature:
Output the lambda arns to SSM so I can reference them that way instead of using ImportValue in my own stack. You could do this in addition to the output values.
Request:
Add a note in the documentation that this can happen, and advise users to use the SSM parameters and tear down this stack (my stack B) first before attempting to tear down stack A (my CloudFront distro).
https://github.com/aws-samples/cloudfront-authorization-at-edge/tree/master/src/lambda-edge
Do you have a version, or plans to maintain a generic version, of the lambda-edge auth that can work with other identity providers like Azure AD, Azure B2C, Auth0, etc.?
We're having some problems debugging the /signout functionality, which isn't described quite as thoroughly in the blog as the authorization scenarios.
Let me start with my understanding of the /signout chain, which may be incorrect:
What we see is:
var url = "https://"+host+'/signout'; window.location = url
Cross-Origin Read Blocking (CORB) blocked cross-origin response https://auth-2dc...
The immediate issue is that I can't seem to see the CloudWatch logs for the /signout lambda@edge. The lambda is present in the function list, and accessing its CW logs the easy way via console/monitoring tab/view logs in cloudwatch yields The specific log group: /aws/lambda/...-LambdaEdgeProte-SignOutHandler-12345 does not exist in this account or region.
A closer look at the CW logs directly specifying the Lambda function name yields some logs but they are old with respect to the run in question and contain only start/end messages anyway.
I've seen the 'logs don't exist' scenario in a few different debug situations and wonder how this happens?
I also note that all of the lambda@edge functions for this application have the same version. We regularly update the lambdas in debugging, usually modifying the CSP headers. Do all the lambdas update in sync with any change to the a@edge?
Looking for hints to debug these kinds of scenarios where a lambda might be the issue but it leaves no logs we can reach to see what happened.
Logged a call with AWS, who believe this is an a@edge issue.
For some reason, if I update the lambdas after the initial deployment and the versions update to 2 (or higher) this solution fails with a very generic error suggesting that there is an issue with the lambda permissions or the lambda configuration. If I drop the entire stack, and deploy the same change at version 1 it seems to work fine. I am using CreateCloudFrontDistribution=false
This works fine with the user pool. When I switched to federated SAML authentication (by changing the config in Cognito), the Login with Corporate button appears but doesn't show anything -- no IdP sign-in page. I have handcrafted the login link using the tutorials at https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-saml-idp.html and I can log in, but then the redirect URI to /preauth gives an error as there is no state parameter. I tried to pass the state parameter, but then the login page doesn't appear at all.
Have you tested this with the Federated SAML Auth? I am blocked and any help would be great.
It would be great if you can test and update the docs on how to integrate with the Federated IDP using SAML -- most folks like us using SPA could easily deploy their apps on S3 and use this shim to protect those SPAs.
Thanks
Dear developers,
thank you for the great login functionality! I successfully deployed the solution using one of my existing CloudFront distributions. However, I am struggling to do the same for another distribution. After logging in, I receive a 413 error because the URL string is longer than the maximum permitted by CloudFront. For some reason, the value of the state parameter is longer than 9000 characters. I don't understand why this happens. The settings of both CloudFront distributions are identical, except they point to different S3 buckets and use different Lambda functions (all of them were created using your template). Both S3 buckets contain a different SPA. All Lambda functions point to the same user pool. CNAMEs were configured for both distributions and in the Cognito app settings. Do you have any idea what could be going wrong?
All the best.
Ben
This would speed up all access to the Cognito hosted domain, thanks to CloudFront's high-speed network.
Using semantic version 1.2.1
Doing some timeout work and believe there might be an issue between auth@edge and the (very) new AWS feature to set a custom value on token expiration, documented here.
We start with a working angular SPA and default values. Verify operation and log out.
Now we change the identity and access tokens to 5 minutes and save:
On relogin we get the cognito email/password challenge, and then:
So that's not encouraging. We reset the timeouts to 60m and can log in fine with the "try again".
We can then reset the cognito to 5m but it's ineffective for testing (does not time out after 5m), probably because the app already has the 60m cookie.
Thoughts?
Are there any downsides to completely omitting the dummy-origin from the distribution and instead just pointing it directly at the protected-bucket?
In another related issue the following was mentioned:
The dummy origin "example.org" can remain there, as it is the origin behind the Lambda@Edge functions for parseAuth and such; they will always respond to the request instead of allowing the request to pass through to the origin. Hence "dummy origin", it is there because a CloudFront behavior needs to have an origin, but requests will never be forwarded to it.
The above makes me wonder why a dummy-origin is needed if requests will never be forwarded to it?
Thank you!
@ottokruse, this is a resolved issue, but has information that might help others, so I'm not immediately closing it. Feel free to do so whenever it seems right.
In testing we discovered a persistent situation where an a@edge installation using a CloudFront alternate domain would fail with an error of the form Invalid login token. Token expired.
Turns out the solution is simple ...
So, if this happens to you, find the ID token 'cookie' in local storage associated with the alternate domain, and delete it. Here's one example:
Interestingly, clearing this token also seems to resolve the failed logout seen in #94, not sure why.
We're working our way through adapting this promising approach to securing an AngularJS app. In replacing the sample React application we are finding our CSS blocked, and it looks as if it's a problem with Content-Security-Policy headers. There's good information on adding headers here, but because of the complexity of the authentication solution it's not clear where in the chain of requests and responses to add code to create the headers.
Do you have a suggestion or recommendation?
And a request ... it would be great to have a little documentation or discussion on taking the deployed solution to the next step and using it for an actual app, including issues like this.
Thanks for a promising (if complex) solution!
@ottokruse
We're working on a well-known problem of providing configuration data to an S3 SPA, while trying to keep "cloudfront-authorization-at-edge" (CFAAE) an intact authorization 'black box'.
The problem in a nutshell is that a SPA in S3 has no way to acquire configuration unless it's compiled in -- not even the name of the bucket it resides in.
One of our developers came up with a great idea -- since our CloudFormation that invokes the CFAAE knows everything we need to pass through to the SPA, we could insert the CFN data in a cookie using the "CookieSettings" parameter to the SAM application, like this example for a bucketname:
LambdaEdgeProtection:
  Type: AWS::Serverless::Application
  Properties:
    Location:
      ApplicationId: arn:aws:serverlessrepo:us-east-1:520945424137:applications/cloudfront-authorization-at-edge
      SemanticVersion: !Ref SemanticVersion
    Parameters:
      CreateCloudFrontDistribution: "false"
      HttpHeaders: !Ref HttpHeaders
      AlternateDomainNames: !Join [ ",", !Ref AlternateDomainNames ]
      CookieSettings: "{\"bucketName\":\"hardcoded\", \"idToken\": null,\"accessToken\": null,\"refreshToken\": null,\"nonce\": null}"
That worked brilliantly in the sense that it dynamically altered the CookieSettings in each Lambda@Edge's configuration.json, for example:
"cookieSettings": {
  "bucketName": "hardcoded",
  "idToken": null,
  "accessToken": null,
  "refreshToken": null,
  "nonce": null
},
but it had no effect whatever on the app ... the bucketName cookie was never created or passed to the application, although all the others were.
Which makes me think I'd misunderstood the use of cookie settings ... perhaps they are a way to set behavioural metadata on already-defined cookies, and defining a new cookie would require modifying (all?) the lambdas to recognize and propagate the cookie end to end. Which is a place we do not want to go, per the 'black box' constraint.
I tried one more time, and attempted to sneak a value into a predefined cookie, in this case the accessToken:
CookieSettings: "{\"idToken\": null,\"accessToken\": \"bucketname\",\"refreshToken\": null,\"nonce\": null}"
but the bucketname just gets replaced by the real access token, as one might expect.
Is there a way to pass custom values into cookies all the way through the framework to the SPA that I'm just not seeing?
If not, that would be a terrific feature, as almost every SPA has need of some data that is available in the installing cloudFormation script. But I'm all on the application side right now ...
R.
Exporting more of the stack's outputs would allow users to more easily integrate their own CF stacks with the application. I'm particularly interested in S3Bucket and UserPoolId. These would allow me to:
Configure my stack to deploy my own front-end assets to the existing S3 bucket, and
Create an ID pool on top of the existing user pool
respectively. As-is, if I want to do these things via CF I need to manually create mappings for the bucket and user pool.
archiver is better maintained and (hopefully) will allow adding files to the zip without corrupting it (as adm-zip does)
Deploying this to a working CloudFront distribution is causing inline JavaScript to fail. The HTML page renders properly but the JavaScript code does not. All scripts are local to the S3 bucket, not loaded from any external URLs. Any help is much appreciated.
Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'self' https://code.jquery.com https://stackpath.bootstrapcdn.com". Either the 'unsafe-inline' keyword, a hash ('sha256-sSYP07KSXyZ0gQue3jDWS5nZaWUz7V/V0566xsjP90k='), or a nonce ('nonce-...') is required to enable inline execution.
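One way out (hedged: the exact header value depends on your setup) is to add either the hash that CloudFront reported, or 'unsafe-inline', to the script-src directive in the Content-Security-Policy header you pass via the application's HttpHeaders parameter. The hash is the safer choice; 'unsafe-inline' weakens CSP for all inline scripts. An illustrative headers object (as a JSON string, this would be the HttpHeaders parameter value):

```json
{
  "Content-Security-Policy": "default-src 'self'; script-src 'self' 'sha256-sSYP07KSXyZ0gQue3jDWS5nZaWUz7V/V0566xsjP90k=' https://code.jquery.com https://stackpath.bootstrapcdn.com"
}
```

Note the hash changes whenever the inline script's contents change, so it must be kept in sync with the deployed HTML.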
I have set up the whole stack using CloudFormation and it works great.
At one point I got this as the state value: eyJub25jZSI6Ii4xQUZkZVVONVQuZn5 VXciLCJyZXF1ZXN0ZWRVcmkiOiIvIn0=
When I tried console.log(Buffer.from(stateval, 'base64').toString()) I got this value:
{"nonce":".1AFdeUN5T.f~U]��\]Y\�Y\�H���ȟ
which is not valid JSON, and it errors at
What is the root cause of this issue?
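One plausible root cause (an assumption, not a confirmed diagnosis): the space in the middle of the state value was originally a +, which URL decoding turns into a space. Node's base64 decoder skips invalid characters such as spaces, which shifts the decoding frame and garbles everything after that point -- matching the output above. Restoring the + before decoding would then recover the JSON:

```javascript
// Hypothetical repair: undo '+' -> ' ' URL-decoding damage (and normalize
// base64url characters) before base64-decoding the state value.
function decodeState(stateval) {
  const normalized = stateval
    .replace(/ /g, "+")
    .replace(/-/g, "+")
    .replace(/_/g, "/");
  return JSON.parse(Buffer.from(normalized, "base64").toString("utf8"));
}
```

The real fix would be to base64url-encode (or URL-encode) the state so it survives the round trip through the query string.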
I feel like I'm probably missing something obvious, but it looks like some of the expiration logic in _generateCookieHeaders references keys that don't exist.
I.e. the cookies object is defined with certain keys just above it, and they don't include keys like spa-auth-edge-nonce.
Working on migrating our application to a@e 2.0 to look further at #81 and other features, I've run into a roadblock. The demonstration stack for 2.0 that illustrates how to define the Cognito UserPool etc. in the parent stack works fine, but ...
I've pulled the Cognito User Pool and other related artifacts into the main stack, and successfully created all artifacts until we get to the calls to the nested a@e stack. In creating the framework I see:
Embedded stack arn:aws:cloudformation:us-east-1:REDACTED:stack/ae200d-LambdaEdgeProtection-L97QEYQVDZLR/4cb7b300-ef9e-11ea-ace7-126c97cb5bc1 was not successfully created: Cannot export output RedirectUrisSignOut. Exported values must not be empty or whitespace-only.
My call to the nested stack which worked fine in 1.2 has changed only in minor ways ... I've added parameters for the UserPoolArn and UserPoolClientId I've created in the parent stack, just as the a@e example for 2.0 does. The failing call looks like this:
LambdaEdgeProtection:
  Type: AWS::Serverless::Application
  DependsOn: UserPoolDomain
  Properties:
    Location:
      ApplicationId: arn:aws:serverlessrepo:us-east-1:520945424137:applications/cloudfront-authorization-at-edge
      SemanticVersion: 2.0.0
    Parameters:
      CreateCloudFrontDistribution: "false"
      HttpHeaders: !Ref HttpHeaders
      UserPoolArn: !GetAtt UserPool.Arn
      UserPoolClientId: !Ref UserPoolClient
A visit through the code shows that RedirectUrisSignOut is mainly referenced in src/cfn-custom-resources/user-pool-client/index.ts and in the main template, suggesting that maybe I've got the dependencies wrong, but they seem obvious ... an explicit DependsOn for the UserPoolDomain, and implicit ones for the UserPoolClient and UserPool.Arn. All we're really doing here is adding the HttpHeaders parameter from the main template.
Another possibility: because this setup does not use custom domains, but just relies on the CloudFront URL (as does the example), there may be some problem with the sentinel values -- but I left those alone, as they are in the example.
Any idea what might be going on?
Hi friends,
Any chance support can be added to redirect to a custom sign in page?
I'd rather use my own page instead of the Cognito UI hosted option.
Hi,
Sorry if this is slightly off topic, but I was wondering if it is possible to access the private S3 bucket hosting the SPA with a jQuery .get call from within the SPA?
Background:
I'm trying to setup authentication to access a private S3 bucket, and using this open source repository (https://github.com/rufuspollock/s3-bucket-listing) to list all items within the S3 bucket once authenticated. (I have also opened a ticket there rufuspollock/s3-bucket-listing#101, where I go through some of the steps I've done).
I've run the CloudFormation template and replaced the React SPA with the index.html mentioned in the above ticket. Logging in works great and I can nearly see the entire index.html page, but it doesn't display anything from the S3 bucket itself.
I have some theories about this, but wasn't sure who to ask, so hoping someone could point me in the right direction.
The listing script calls $.get(s3_rest_url), but this returns an empty string (from what I can tell). I think this is because list.js does not pass in the authentication tokens required to access the S3 bucket (in the same way the user needed to sign in to access the index.html). If this is the case, would a solution be to read the cookies in the browser after the user logs in, and pass one of the auth tokens in the $.get(s3_rest_url) call so that it can access the S3 bucket?
Please let me know if you need any more information, and any advice would be greatly appreciated!
Thank you.
Hello,
I deployed the sample app from the AWS Serverless Application Repository.
Upon navigating to the URL for the CloudFront distribution, I log into my Cognito User Pool, update my temporary password, and I get a 403 Access Denied.
I checked into the /parseAuth endpoint, seems there's some issue here. My Lambda logs show nothing for the parseAuth function (it says there was an error loading Log Streams). I tried hitting parseAuth directly in my browser by supplying the code and state like this:
https://{cloudfrontID}.cloudfront.net/parseauth?code={CODE}&state={STATE}
The result was:
Bad Request
ERROR: Error: HTTP POST to https://auth-1fdb0555.auth.us-east-1.amazoncognito.com/oauth2/token failed
Is this some issue with the parseAuth Lambda function, or something else entirely?
@ottokruse
We're working on the problem of bootstrapping authentication in an SPA. By bootstrapping I mean supplying the SPA with the basic configuration information it needs to make AWS API calls without having to compile information into the code. The key piece of configuration information needed is the bucket that the SPA is running in and the cognito identity pool that gives the app its internal role/permissions.
The debug cycle is very difficult, as currently you need to:
We have a version of the (Node) app that runs locally on 8080 and would like to be able to redirect to the local instance instead of the S3-hosted version -- and do so with the entire authentication chain intact. That is, since we're examining the results of authentication, we want the entire authorization-at-edge framework to execute as usual, then instead of sending the final redirect call to the index.html in the bucket, sending it to the localhost:8080/index.html URL.
From my reading of the blog, the place to do the change in the final destination is the checkConfig handler. Is that right?
When I inspect the lambda code in the repository there's an easy and obvious place: the checkauth index.ts line 18:
const domainName = request.headers['host'][0].value;
suggesting that a simple change to set domainName to 'localhost:8080' should do the job.
I'm figuring the bundle is a compressed webpack of index.ts so I thought no problem, as generally a webpack obfuscation just removes spaces and punctuation. But when I examine bundle.ts the code seems to bear no relationship to checkAuth:index.ts, and in fact does not reference the domainName variable I want to change.
I'd like to avoid forking the entire application, as that would be a whole new toolchain to learn and set up - part of the appeal was consuming an existing app.
Do you have any thoughts on how to modify checkAuth for local debugging (if my approach seems viable in the first place) or how to approach the overall goal?
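For the narrow question of overriding the domain, a hedged sketch of the kind of change described (this assumes rebuilding the bundles from the TypeScript source rather than patching the deployed bundle.js, since the bundles are compiled, not just minified):

```javascript
// Hypothetical local-debug override. Note Lambda@Edge does not support
// environment variables, so in practice this value would be baked into
// the function's configuration at deploy time.
const DEV_OVERRIDE = ""; // e.g. "localhost:8080" when debugging locally

function resolveDomain(request) {
  // Production behavior: take the domain from the Host header,
  // as the real check-auth handler does
  const hostHeader = request.headers["host"][0].value;
  return DEV_OVERRIDE || hostHeader;
}
```

With the override set, the final redirect would target the local dev server while the rest of the auth chain runs unchanged.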
Hi Team,
First, thank you very much for all your work :)
Hey, I've just deployed these functions but in order to get them to work I had to make one minor change, I'm not sure if there was an issue with the way I'm deploying it or it's a genuine issue.
The headers in the http-headers function are loaded from configuration.json, but when setting the headers into the correct key/value format it seemed to be using the entire JSON rather than just the httpHeaders section of the JSON.
I changed the JSON.parse() in the above URL to pass only the httpHeaders key into the asCloudFrontHeaders() function, and everything worked perfectly.
Should the configuration.json in the http-headers function contain only the headers anyway, rather than the whole configuration shared with all the other functions, or is this a legit bug?
Thank you!
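To illustrate the one-line change described above (the shapes here are simplified stand-ins, not the repo's actual implementation):

```javascript
// Simplified illustration: pass only the httpHeaders section of
// configuration.json, not the whole configuration object.
function asCloudFrontHeaders(headers) {
  // { "X-Frame-Options": "DENY" } -> { "x-frame-options": [{ key, value }] }
  return Object.entries(headers).reduce((acc, [key, value]) => {
    acc[key.toLowerCase()] = [{ key, value }];
    return acc;
  }, {});
}

const configuration = JSON.parse('{"httpHeaders":{"X-Frame-Options":"DENY"}}');
// The fix: pass configuration.httpHeaders, not configuration
const cloudFrontHeaders = asCloudFrontHeaders(configuration.httpHeaders);
```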
Using Cognito client secret
And without SPA routing (the error document for 404 and default root object)
Hi, I need to deploy some parts, such as Cognito, to another region. Currently I have to deploy everything to us-east-1 due to the hard constraint on Lambda@Edge.
Is there any way to get around this? Maybe by copying Lambdas from one region to another? I've tried breaking the CloudFormation up in two (with the Lambdas and CloudFront in one script and the rest in another) and then using !ImportValue, but that's messy.
I'm attempting to run the Cognito provider from another account but am getting a permissions error from the UserPoolDomainLookup resource. Is there a way to disable this resource if I'm using a substack, or do you have another suggestion?
Is this application no longer published in the SAR?
I have tested from multiple accounts using administrative roles (with access to serverlessrepo), and am unable to access the application specified in the README, spa-authorization-at-edge, nor the application specified in the reuse-auth-only.yaml template, cloudfront-lambda-edge-cognito-auth:
An error occurred (AccessDeniedException) when calling the CreateCloudFormationTemplate operation: User: *** is not authorized to perform: serverlessrepo:CreateCloudFormationTemplate on resource: arn:aws:serverlessrepo:us-east-1:520945424137:applications/spa-authorization-at-edge
An error occurred (AccessDeniedException) when calling the CreateCloudFormationTemplate operation: User: *** is not authorized to perform: serverlessrepo:CreateCloudFormationTemplate on resource: arn:aws:serverlessrepo:us-east-1:520945424137:applications/cloudfront-lambda-edge-cognito-auth
I can follow the process to build myself, but initial deployment and testing would be easier if the SAR application was accessible.
I wonder if anyone else is having this issue: after having the site open for the duration of the ID token's life and then visiting it again (in a separate tab, after closing the window, etc.), the expired token is still returned, as if it's cached. Only a refresh of the page causes the token to be regenerated. In the failed state, the app is delivered the ID token from the old, stale cookie value.
I've been trying to adjust the CloudFront caching policy without success. After every expiration, the page is allowed to load without proper authentication (old cookies), and any resources requiring authentication outside of CloudFront (service calls into API Gateway using the token) fail because the token is expired.
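As a client-side stopgap (not part of this repo; the helper name is my own), one could decode the ID token's exp claim and force a full reload when it has lapsed, so the Lambda@Edge check runs again instead of a cached page being trusted:

```javascript
// Hypothetical helper: report whether a JWT's exp claim has passed.
// No signature verification here; this is only a UX guard, the real
// check still happens at the edge / in the backend.
function isTokenExpired(jwt, nowSeconds = Math.floor(Date.now() / 1000)) {
  const payloadB64 = jwt.split(".")[1];
  const payload = JSON.parse(Buffer.from(payloadB64, "base64").toString("utf8"));
  return payload.exp <= nowSeconds;
}

// Example with a fabricated, unsigned token whose exp is in the past:
const fakePayload = Buffer.from(JSON.stringify({ exp: 1000 })).toString("base64");
const fakeJwt = `header.${fakePayload}.signature`;
console.log(isTokenExpired(fakeJwt, 2000)); // true: exp 1000 <= now 2000
```

In a browser you would use atob() instead of Buffer, and on expiry call window.location.reload() (or redirect to the sign-in path) so the edge auth check re-runs with fresh cookies.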
It would be great if this app could be wired into an existing user pool.
Changes that I've made in the react-app (src\cfn-custom-resources\react-app\src) are not getting deployed after a successful initial deployment.
Steps I've taken:
1. Initial deploy (npm install, npm run build, sam build --use-container, sam package..., sam deploy..). After this deploy, it's working as I'd expect - I can sign in & see the sample react app page.
2. Change src\cfn-custom-resources\react-app\src\App.js (just an update to explanation text, to test code deploy).
3. Redeploy (npm install, npm run build, sam build --use-container, sam package..., sam deploy..). All complete without errors.
4. aws cloudfront create-invalidation --distribution-id E16D.. --paths "/*"
After the update is deployed with #3 I can see:
- ReactAppHandler UPDATE_COMPLETE, with times matching the update from #3
- the ReactAppHandler lambda fn has an updated time of when I executed #3
- no log entries for ReactAppHandler from when #3 was executed; the only log entries are from when #1 was executed (logs checked > 1 hour after #3 completed)
I'm guessing this is something obvious I'm missing - thanks for pointing me in the right direction.
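One possible explanation (my assumption, not confirmed against this template): CloudFormation only re-invokes a Lambda-backed custom resource when the resource's properties change, so a rebuilt handler whose custom-resource properties are byte-identical is never called again, which would match the "updated code but no new logs" symptom. A hypothetical way to force a re-run is to thread a value that changes per deployment into the custom resource's properties:

```yaml
# Hypothetical sketch; resource and parameter names are illustrative,
# not taken from the actual template.
Parameters:
  DeployTimestamp:
    Type: String
    Description: Pass a fresh value (e.g. a build hash or timestamp) each deploy

Resources:
  ReactApp:
    Type: Custom::ReactApp
    Properties:
      ServiceToken: !GetAtt ReactAppHandler.Arn
      # Changing this property on every update forces CloudFormation to
      # invoke the custom resource handler again.
      ForceRedeploy: !Ref DeployTimestamp
```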
My understanding is that nothing in the S3 bucket should be reachable without being authenticated. However, as far as I can tell, the authentication check only really happens for GET requests.
After setting up this demo from the AWS Serverless Application Repository:
$ curl -X GET d1irs8pwnygew6.cloudfront.net/
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>CloudFront</center>
</body>
</html>
$ curl -X POST d1irs8pwnygew6.cloudfront.net/
<!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="shortcut icon" href="/favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><title>Protected Single Page App</title><link href="/static/css/main.80b9fb4f.chunk.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div><script src="/static/js/runtime-main.8f658356.js"></script><script src="/static/js/2.a948a276.chunk.js"></script><script src="/static/js/main.cd8094a6.chunk.js"></script></body></html>
I am able to bypass the login with any HTTP method other than GET (even non-existent ones).
I don't know if any other resource in the S3 bucket is reachable (probably not, since index.html is served for every path), but I think even just serving index.html for an SPA could be a security issue.
Is this something that can easily be fixed?
It would be great if this app could be wired into an existing S3 bucket.
That would then make it possible to use an S3 bucket in a region other than us-east-1.
I don't know if I have done something stupid, but I have extracted the lambda logic and integrated it into a very basic CloudFront-hosted website.
Everything works as expected, unless you are a new user trying to register using the hosted UI: once you have filled in the form, you are sent to the /error page on the hosted UI. The user is still added to the pool.
So I can't understand what's going on! The logic for stopping, signing out, refreshing etc. all works fine. Is this meant for use with the hosted Cognito UI?
Cheers