
Comments (25)

mrmachine commented on August 20, 2024

Thanks for your help. Yes. My use cases are:

  • I need to implement a progressive migration from S3 to R2 (to reduce bandwidth charges and avoid a window where some files will be unavailable during a full sync migration) for a private bucket that is accessed via presigned URLs.

  • I also need to implement a "staging" bucket with read-through from a "production" bucket so that writes do not affect production data, and all production data is available to staging environments on-demand via presigned URLs. We currently use S3 replication for this which maintains a full replica (overkill).

Progressive migration is on Cloudflare's roadmap, but it is not yet implemented and it is uncertain how the implementation will work. E.g. when public buckets landed, support was limited to whole buckets. There could be similar unexpected caveats with CF's progressive migration.

Kian's s3-to-r2-migration takes care of the progressive migration, but effectively makes the bucket public.

Kian pointed me to your Denoflare example, and it was the only example I could find of a CF worker validating presigned URLs. (Thanks!)

My plan is to modify your r2-presigned-url-worker example to include a re-implementation of Kian's s3-to-r2 logic as a fallback, and rely on your presigned URL validation.
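As a sketch of the fallback logic I have in mind (hypothetical helper names and string bodies for brevity, not Kian's actual s3-to-r2 code): serve from R2 when the object exists, otherwise fetch from S3, copy it into R2, and serve it:

```typescript
// Minimal read-through shape: R2 hit avoids S3 bandwidth; a miss falls back
// to S3 and migrates the object into R2 for next time. `Store`, `readThrough`,
// and `fetchFromS3` are illustrative names, not Denoflare or Kian APIs.
type Store = {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
};

async function readThrough(
  key: string,
  r2: Store,
  fetchFromS3: (key: string) => Promise<string | null>,
): Promise<string | null> {
  const cached = await r2.get(key);
  if (cached !== null) return cached;           // R2 hit: done
  const fresh = await fetchFromS3(key);         // R2 miss: fall back to S3
  if (fresh !== null) await r2.put(key, fresh); // migrate the object into R2
  return fresh;
}
```

Each miss migrates one object, so the bucket converges to R2 over time without a bulk-sync window.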

In testing this, I got Kian's worker and your example both working independently, but I noticed that r2-presigned-url-worker appears to assume the path addressing style. Even though it appears to not actually do anything with the bucket name it extracts from the path (beyond logging), it is still required to get the right key.

This makes it a bit awkward to plan the domains where my (multiple) buckets will be available.

  • I could use a path style at a domain like r2.example.com for multiple distinct buckets, with the bucket name in the path, except for the fact that r2-presigned-url-worker has only a single r2 bucket binding. So I need a different worker on a different domain for each bucket. E.g. bucket1.example.com/bucket1/path/to/file... and bucket2.example.com/bucket2/path/to/file... which seems redundant.
  • I could use a virtual style (if r2-presigned-url-worker would support it) and two separate domains like bucket1.example.com/path/to/file... and bucket2.example.com/path/to/file....

Even with virtual style addressing, I believe there would still be a limitation where the domain where I deploy the worker must exactly match the bucket name. So I can't have R2 buckets named example-production and example-staging and make them available at r2.example.com and r2.staging.example.com.

I am not sure if the bucket name given in either virtual or path style is actually used in the validation, or if it is just used for routing. Either way, I believe that this limitation could be removed if r2-presigned-url-worker would strip the bucket name from both virtual and path style URLs when it gets the key, and either discard it (if not needed for validation) or use it instead of the actual R2 bucket name in its presigned URL validation.
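For illustration, the key extraction I'm describing might look like this (a sketch; `parseBucketAndKey` and the `virtualHost` parameter are hypothetical, not the worker's actual code):

```typescript
// Derive the bucket name and object key from either addressing style,
// so the worker's own R2 bucket binding can be used regardless of which
// bucket name appears in the URL.
function parseBucketAndKey(url: URL, virtualHost?: string): { bucket: string; key: string } {
  const path = url.pathname.replace(/^\/+/, ""); // drop leading slash(es)
  if (virtualHost !== undefined && url.host === virtualHost) {
    // vhost style: bucket is the first host label; the whole path is the key
    return { bucket: url.host.split(".")[0], key: path };
  }
  // path style: the first path segment is the bucket; the rest is the key
  const i = path.indexOf("/");
  return i < 0
    ? { bucket: path, key: "" }
    : { bucket: path.slice(0, i), key: path.slice(i + 1) };
}
```

With something like this, the extracted bucket name could be discarded (or checked against a binding) while the key alone drives the R2 lookup.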

from denoflare.

mrmachine commented on August 20, 2024

Also, CF's implementation has the limitation that credentials grant access to all buckets. With your implementation, we can create multiple buckets (for different clients) and give them different access credentials.


JustinGrote commented on August 20, 2024

For what it's worth I'm getting this error even with a simple hello world.


johnspurlock-skymethod commented on August 20, 2024

Hmm, the fact that it's failing on caches might indicate it's a problem when running on Deno 1.26+, which was only recently released (I'm still on 1.25.4).

Deno introduced their own global caches implementation in 1.26, which is probably clashing with the polyfill I had to do before.
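The fix is essentially feature detection before installing the polyfill, along these lines (a sketch of the shape of the fix, not Denoflare's actual code; `installPolyfill` is an illustrative name):

```typescript
// Feature-detect before installing: never clobber a runtime-provided global.
function installPolyfill<T>(globals: Record<string, unknown>, name: string, make: () => T): void {
  if (globals[name] === undefined) {
    globals[name] = make(); // only fill the gap when the runtime lacks it
  }
}
```

On Deno 1.25 the guard installs the polyfill; on 1.26+ the native `caches` global is left in place.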

Try running it on Deno 1.25.4 in the meantime (deno upgrade --version 1.25.4), and I'll try to get a fix for 1.26 out shortly.


mrmachine commented on August 20, 2024

I installed deno via brew install deno on macOS, and it doesn't seem to want to let me downgrade:

deno upgrade --version 1.25.4
error: You do not have write permission to "/opt/homebrew/bin/deno"

Same even with sudo (which I believe should not be needed).

I also tried the

can be deployed as is to a custom domain in a zone you own with a single push command

from the 0.5.0 release notes, which fails with an authentication error (but I'm pretty sure my account ID and API token are correct):

$ denoflare push https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/examples/r2-presigned-url-worker/worker.ts \
    --name ... \
    --r2-bucket-binding bucket:$BUCKET \
    --secret-binding credentials:$USER_ACCESS_KEY_ID:$USER_SECRET_ACCESS_KEY \
    --custom-domain $DOMAIN \
    --account-id $CF_ACCOUNT_ID \
    --api-token $CF_API_TOKEN

bundling ... into bundle.js...
bundle finished (process) in 72ms
computed bindings in 0ms
putting module-based worker ... (66.1kb) (14.5kb compressed)
error: Uncaught (in promise) Error: putScript failed: status=400, errors=10000 Authentication error
    { code: 10000, message: "Authentication error" }
    at execute (https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/common/cloudflare_api.ts:1063:15)
    at async putScript (https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/common/cloudflare_api.ts:96:13)
    at async buildAndPutScript (https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/cli/cli_push.ts:83:9)
    at async Object.push [as handler] (https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/cli/cli_push.ts:117:5)
    at async CliCommand.routeSubcommand (https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/cli/cli_command.ts:104:13)
    at async https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/cli/cli.ts:33:5


mrmachine commented on August 20, 2024

Managed to downgrade to 1.25.4. That fixed the denoflare serve error, but not the single push command (authentication) error.

But I'm now having trouble passing through the config to denoflare serve:

$ denoflare serve https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/examples/r2-presigned-url-worker/worker.ts \
    --r2-bucket-binding "bucket:$BUCKET" \
    --secret-binding "credentials:$USER_ACCESS_KEY_ID:$USER_SECRET_ACCESS_KEY" \
    --account-id "$CF_ACCOUNT_ID" \
    --api-token "$CF_API_TOKEN"

Compiling https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/cli-webworker/worker.ts into worker contents...
⚠️  ️Deno requests write access to "/var/folders/2j/ymfzssrx66gg87hypqbw1ptr0000gn/T/". Run again with --allow-write to bypass this prompt.
   Allow? [y/n (y = yes allow, n = no deny)]  y
Bundled https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/cli-webworker/worker.ts (process) in 1383ms
runScript: https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/examples/r2-presigned-url-worker/worker.ts
Bundled https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/examples/r2-presigned-url-worker/worker.ts (process) in 47ms
worker: start
Error running script Error: verifyToken failed: status=400, errors=6003 Invalid request headers
    at execute (https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/common/cloudflare_api.ts:1063:15)
    at async verifyToken (https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/common/cloudflare_api.ts:576:13)
    at async Function.parseAccountAndCredentials (https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/cli/api_r2_bucket.ts:31:29)
    at async computeR2BucketProvider (https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/cli/worker_manager.ts:139:44)
    at async WorkerManager.run (https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/cli/worker_manager.ts:111:34)
    at async runScript (https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/cli/cli_serve.ts:165:21)
    at async createLocalRequestServer (https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/cli/cli_serve.ts:171:13)
    at async Object.serve [as handler] (https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/cli/cli_serve.ts:188:32)
    at async CliCommand.routeSubcommand (https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/cli/cli_command.ts:104:13)
    at async https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/cli/cli.ts:33:5


mrmachine commented on August 20, 2024

Seems to be the --r2-bucket-binding "bucket:$BUCKET" option that is breaking it. But I've double and triple checked the bucket name, account ID and API token.

Without that option denoflare serve ... does run, but I can't get it to believe that any of my presigned URLs are valid:

isPresignedUrlAuthorized error: Invalid X-Amz-Signature: ...

I'm not sure if it's because of the missing bucket option. But I've tried to generate presigned URLs like so:

$ AWS_ACCESS_KEY_ID="$USER_ACCESS_KEY_ID" AWS_SECRET_ACCESS_KEY="$USER_SECRET_ACCESS_KEY" aws s3 presign s3://$BUCKET/media/dd/product/images/0aff9ccd1acc3212a8bc3e43cf70a78d.jpg --endpoint-url "https://$CF_ACCOUNT_ID.r2.cloudflarestorage.com"

which generates a live URL in which I change the domain and port to test, and:

$ AWS_ACCESS_KEY_ID="$USER_ACCESS_KEY_ID" AWS_SECRET_ACCESS_KEY="$USER_SECRET_ACCESS_KEY" aws s3 presign s3://$BUCKET/media/dd/product/images/0aff9ccd1acc3212a8bc3e43cf70a78d.jpg --endpoint-url http://localhost:3030

which generates a local URL. But both fail with the same error.


mrmachine commented on August 20, 2024

I created a custom API token with these permissions (per docs for r2-public-read worker):

[screenshot: custom API token permissions]

With that change, the --r2-bucket-binding "bucket:$BUCKET" option no longer causes an error.

I was able to successfully deploy the worker:

$ denoflare push https://raw.githubusercontent.com/skymethod/denoflare/v0.5.8/examples/r2-presigned-url-worker/worker.ts \
    --name "$WORKER_NAME" \
    --r2-bucket-binding "bucket:$BUCKET" \
    --secret-binding "credentials:$USER_ACCESS_KEY_ID:$USER_SECRET_ACCESS_KEY" \
    --custom-domain "$DOMAIN" \
    --account-id "$CF_ACCOUNT_ID" \
    --api-token "$CF_API_TOKEN"

When generating presigned URLs I also changed the --endpoint-url option to https://$DOMAIN (the domain where the worker is deployed), and it works!

$ AWS_ACCESS_KEY_ID="$USER_ACCESS_KEY_ID" AWS_SECRET_ACCESS_KEY="$USER_SECRET_ACCESS_KEY" aws s3 presign "s3://$BUCKET/media/dd/product/images/0aff9ccd1acc3212a8bc3e43cf70a78d.jpg" --endpoint-url "https://$DOMAIN"

But it still does not work locally via denoflare serve with a presigned URL generated with --endpoint-url http://localhost:3030.


mrmachine commented on August 20, 2024

Also, how do I get rid of the bucket prefix in the resulting presigned URL path?

$ AWS_ACCESS_KEY_ID="$USER_ACCESS_KEY_ID" AWS_SECRET_ACCESS_KEY="$USER_SECRET_ACCESS_KEY" aws s3 presign "s3://$BUCKET/media/dd/product/images/0aff9ccd1acc3212a8bc3e43cf70a78d.jpg" --endpoint-url "https://$DOMAIN"
https://$DOMAIN/$BUCKET/media/dd/product/images/0aff9ccd1acc3212a8bc3e43cf70a78d.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=3173c4beebe4c40c6a96b223b000a6cb%2F20221007%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221007T052945Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=51b461d60fda15e2280630e3cd59b5ff1d7cbf001f15c37388c44826e03232a7

If I literally just remove it from the s3://$BUCKET/... argument, the resulting URL returns a 404.


johnspurlock-skymethod commented on August 20, 2024

Alright, so it looks like there are two issues here:

  1. the caches clash, which affects every worker in denoflare serve on Deno 1.26+
    Just committed a fix for this, and published a new v0.5.9 release that includes it (since it affects everyone using the latest stable version of Deno).
  2. An option on the presigned worker to omit the bucket part of the path-based url.
    Since this just affects this worker, it can easily be tweaked post a Denoflare release. @mrmachine: can you confirm the official Cloudflare presigned url support does not work for you, and you really need Denoflare's version? It was written prior to CF supporting it natively, but it can still be useful for using creds that are not CF creds.


johnspurlock-skymethod commented on August 20, 2024

Ah, thanks for the detail - I think I get where you're going. So ideally you'd deploy the worker to multiple subdomains, each bound to a separate backing bucket, but perhaps want the subdomain to differ from the bucket name.

Then use vhost style presigned urls, using the "vanity" subdomain, not the underlying bucket name. I wonder, does aws s3 presign even support using custom hostnames? Iirc even for vhost sigs it's required to hang off of the canonical region.amazonaws.com root.

You can provide an --endpoint-url, perhaps that's actually all we need to check the sig on the backend, not the bucket name per se...

If so, I should be able to support this with a single new optional text binding to the existing worker, called something like virtualCustomHostname. If set, this would be verified against the incoming request url and used in the sig check, but the underlying bucket binding would still be used for serving the data back out.


johnspurlock-skymethod commented on August 20, 2024

Indeed, computeExpectedAwsSignature only needs the full url, not the bucket name per se, so this should be totally doable.
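For reference, a rough sketch of why the full URL suffices (assuming a GET with only `host` signed and an UNSIGNED-PAYLOAD, as aws s3 presign produces; `expectedSignature` is an illustrative stand-in, not Denoflare's actual computeExpectedAwsSignature):

```typescript
import { createHmac, createHash } from "node:crypto";

// SigV4 query-string verification: every signed input (credential scope, date,
// query params, path, host) is recoverable from the presigned URL itself.
const hmac = (key: string | Uint8Array, data: string) =>
  createHmac("sha256", key).update(data).digest();

function expectedSignature(url: URL, secretKey: string): string {
  const q = url.searchParams;
  const [, date, region, service] = q.get("X-Amz-Credential")!.split("/");
  const scope = `${date}/${region}/${service}/aws4_request`;
  // Canonical query string: every param except the signature, sorted by name.
  const canonicalQuery = [...q.entries()]
    .filter(([k]) => k !== "X-Amz-Signature")
    .sort(([a], [b]) => (a < b ? -1 : 1))
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
    .join("&");
  const canonicalRequest = [
    "GET", url.pathname, canonicalQuery,
    `host:${url.host}\n`, "host", "UNSIGNED-PAYLOAD",
  ].join("\n");
  const stringToSign = [
    "AWS4-HMAC-SHA256", q.get("X-Amz-Date"), scope,
    createHash("sha256").update(canonicalRequest).digest("hex"),
  ].join("\n");
  // Derive the signing key from the secret via the chained HMACs.
  const signingKey = hmac(hmac(hmac(hmac("AWS4" + secretKey, date), region), service), "aws4_request");
  return createHmac("sha256", signingKey).update(stringToSign).digest("hex");
}
```

Note that `url.host` (not the bucket name) goes into the canonical request, which is exactly why a vanity hostname can be used in the sig check.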


JustinGrote commented on August 20, 2024

FWIW 0.5.9 fixed my deployments. Thanks!


johnspurlock-skymethod commented on August 20, 2024

FWIW 0.5.9 fixed my deployments. Thanks!

Glad to hear it! Denoflare is between a rock and a hard place, chasing two relatively moving targets: Deno (breaking changes in every major and sometimes minor releases) and the production Cloudflare runtime (new services in particular can change quite a bit week to week).

So I appreciate anything you see before I do, please do not hesitate to let me know if you see something that doesn't look right. Often I personally will not upgrade to the .0 Deno releases right away - since I've now been bitten too many times by showstopper bugs in new major releases.


mrmachine commented on August 20, 2024

So ideally you'd deploy the worker to multiple subdomains, each bound to a separate backing bucket, but perhaps want the subdomain to differ from the bucket name.

Then use vhost style presigned urls, using the "vanity" subdomain, not the underlying bucket name. I wonder, does aws s3 presign even support using custom hostnames? Iirc even for vhost sigs it's required to hang off of the canonical region.amazonaws.com root.

You got it, exactly.

AWS does allow you to use any DNS compatible bucket name and point that DNS record to S3 via CNAME -- https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html#VirtualHostingCustomURLs

So in theory I believe it should be possible to generate a presigned URL somehow that points to the vanity CNAME (the whole "DNS compatible" bucket name).

However a Google search reveals that in practice this is not really possible because tools that generate presigned URLs assume virtual addressing style will be a $BUCKET_NAME.s3.amazonaws.com subdomain, rather than a full custom domain, and therefore they disallow any otherwise valid DNS name with a . in it because S3's wildcard certificate will only work for first level *.s3.amazonaws.com subdomains.

That said, AWS does support virtual addressing style with a single level subdomain which is applied as a prefix to the endpoint URL, and as you suggested we can provide a vanity bucket name which is a subdomain to the provided --endpoint-url.

Example:

  • R2 bucket named acme-production

  • r2-presigned-url-worker deployed to vanity domain files.acme.com with vanity access key id and secret access key, and vanity bucket name of files and an r2 bucket binding for acme-production

  • Generate vanity presigned URLs with:

    $ aws configure set default.s3.addressing_style virtual
    $ AWS_ACCESS_KEY_ID="$VANITY_ACCESS_KEY_ID" \
        AWS_SECRET_ACCESS_KEY="$VANITY_SECRET_ACCESS_KEY" \
        bash -c 'aws s3 presign "s3://files/path/to/file.ext" --endpoint-url "https://acme.com"'
    
    https://files.acme.com/path/to/file.ext?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=6ff412844208dd86ba8596a7358ec952%2F20221008%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221008T114945Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=ebbf1db08c652436ae6862e05096574a15921879b028a8c44a2007f1846d234e
  • Worker validates vanity credentials and proxies request via acme-production r2 bucket binding


johnspurlock-skymethod commented on August 20, 2024
  • Generate vanity presigned URLs with:

So I can't seem to generate vhost style urls with aws s3 presign at all. Using your exact command line (on a mac), I get:

https://acme.com/files/path/to/file.ext?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=<elided>%2F20221008%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221008T180703Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=<elided>

Which version of the aws cli are you using?

Mine is: aws-cli/2.5.8 Python/3.9.11 Darwin/21.6.0 exe/x86_64 prompt/off


mrmachine commented on August 20, 2024

I'm using aws-cli/2.4.23 Python/3.9.10 Darwin/21.6.0 source/arm64 prompt/off

It seems I had previously enabled virtual addressing style via config:

aws configure set default.s3.addressing_style virtual

When I remove that from my ~/.aws/config I get the same results as you. So it seems the AWS_S3_ADDRESSING_STYLE='virtual' env var from my command was a mistake that did nothing. I'm not sure how to set this on the command line, but the config file seems to work.


johnspurlock-skymethod commented on August 20, 2024

Ah ok - too bad there's no equivalent command line option in aws s3 presign.

I just implemented support for this use case in the Denoflare presigned worker and in the denoflare r2 presign cli command. Would you mind testing them out before I tag an official release?

You can use the latest commit hash 824beb75c5de1770a7957a517b8e89fb47552d0d to grab the worker code where you were using a version tag before (to test the worker changes), and install denoflare using a commit hash instead of a release tag (to test denoflare r2 presign).

e.g. to install denoflare as of a specific commit:

deno install --unstable --allow-read --allow-net --allow-env --allow-run --name denoflare --force \
https://raw.githubusercontent.com/skymethod/denoflare/824beb75c5de1770a7957a517b8e89fb47552d0d/cli/cli.ts

The worker now takes a new optional env binding called virtualHostname. Would set to files.acme.com in your example.

Should work with aws s3 presign with a custom endpoint and vhost-style requests as configured above.

The denoflare r2 presign command now takes three new options: --endpoint-origin, --access-key, and --secret-key. These can be used when pointing to a non-r2 s3 endpoint (like the presigned worker).

Would look something like this in your example:

$ denoflare r2 presign files /path/to/file.ext --endpoint-origin https://acme.com \
--access-key $VANITY_ACCESS_KEY_ID \
--secret-key $VANITY_SECRET_ACCESS_KEY

Note the files in the bucket-name argument in this case is your custom subdomain name and doesn't have to match the underlying bucket name.

It should produce presigned urls of the form: https://files.acme.com/path/to/file.ext?X-Amz-Algorithm=... as you're expecting.


mrmachine commented on August 20, 2024

Thanks for this! I've just tested it, and found a few issues...

  1. There is a small difference between the URLs produced by aws s3 presign and denoflare r2 presign (the double slash at the beginning of the path):
$ AWS_ACCESS_KEY_ID="$WORKER_ACCESS_KEY_ID" \
    AWS_SECRET_ACCESS_KEY="$WORKER_SECRET_ACCESS_KEY" \
    bash -c 'aws s3 presign "s3://$WORKER_BUCKET_NAME/media/dd/product/images/0aff9ccd1acc3212a8bc3e43cf70a78d.jpg" --endpoint-url "$WORKER_SCHEME://$WORKER_ENDPOINT"'

http://files.lvh.me:3030/media/dd/product/images/0aff9ccd1acc3212a8bc3e43cf70a78d.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=3173c4beebe4c40c6a96b223b000a6cb%2F20221010%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221010T235717Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=d1c86d977d3cfbae5c550e440c58a026d0c3c5947715a13f865ba00e7bcbf145

$ denoflare r2 presign \
    "$WORKER_BUCKET_NAME" \
    /media/dd/product/images/0aff9ccd1acc3212a8bc3e43cf70a78d.jpg \
    --access-key $WORKER_ACCESS_KEY_ID \
    --endpoint-origin "http://$WORKER_ENDPOINT" \
    --secret-key $WORKER_SECRET_ACCESS_KEY

http://files.lvh.me:3030//media/dd/product/images/0aff9ccd1acc3212a8bc3e43cf70a78d.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=3173c4beebe4c40c6a96b223b000a6cb%2F20221011%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20221011T000253Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=5d6902218251023a34e205643c7c17cef323202ada17871a820a0c683d5d0f96

I guess I am supposed to omit the leading / from the key name with denoflare r2 presign? No problem, just something to be aware of coming from aws s3 presign which includes the bucket name and key in one URL. Maybe we could strip the leading /?

  2. virtualHostname is not detected for endpoints on a non-standard port (e.g. denoflare serve).

After running the worker locally via:

$ denoflare serve hello-local \
    --account-id "$CF_ACCOUNT_ID" \
    --api-token "$CF_API_TOKEN" \
    --r2-bucket-binding "bucket:$WORKER_R2_BUCKET_NAME" \
    --secret-binding "credentials:$WORKER_ACCESS_KEY_ID:$WORKER_SECRET_ACCESS_KEY" \
    --text-binding "virtualHostname:$WORKER_BUCKET_NAME.$WORKER_ENDPOINT"

When I try to open the denoflare r2 presign URL, both with and without the leading / in the key name, I get this error in the browser:

Error: Unexpected status 403, code=AccessDenied, message=Access Denied
    at throwIfUnexpectedStatus (https://raw.githubusercontent.com/skymethod/denoflare/824beb75c5de1770a7957a517b8e89fb47552d0d/common/r2/r2.ts:147:11)
    at async getOrHeadObject (https://raw.githubusercontent.com/skymethod/denoflare/824beb75c5de1770a7957a517b8e89fb47552d0d/common/r2/get_head_object.ts:36:5)
    at async getObject (https://raw.githubusercontent.com/skymethod/denoflare/824beb75c5de1770a7957a517b8e89fb47552d0d/common/r2/get_head_object.ts:6:12)
    at async ApiR2Bucket.get (https://raw.githubusercontent.com/skymethod/denoflare/824beb75c5de1770a7957a517b8e89fb47552d0d/cli/api_r2_bucket.ts:60:21)
    at async https://raw.githubusercontent.com/skymethod/denoflare/824beb75c5de1770a7957a517b8e89fb47552d0d/common/rpc_r2_bucket.ts:39:34
    at async RpcChannel.receiveMessage (https://raw.githubusercontent.com/skymethod/denoflare/824beb75c5de1770a7957a517b8e89fb47552d0d/common/rpc_channel.ts:41:36)
    at async Worker.worker.onmessage (https://raw.githubusercontent.com/skymethod/denoflare/824beb75c5de1770a7957a517b8e89fb47552d0d/cli/worker_manager.ts:70:17)

And this in the terminal log:

GET http://files.lvh.me:3030/media/dd/product/images/0aff9ccd1acc3212a8bc3e43cf70a78d.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=3173c4beebe4c40c6a96b223b000a6cb%2F20221011%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20221011T002829Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=d99ce616fb42c3a800968874c2cfcbbe09b90ba446de31bbacad4bb773e7ef38
{"accessKeyIds":["3173c4beebe4c40c6a96b223b000a6cb"],"flags":[]}
request headers:
accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
accept-encoding: gzip, deflate
accept-language: en-AU,en-GB;q=0.9,en;q=0.8,en-US;q=0.7
cf-connecting-ip: 167.179.156.36
cf-ray: 9b8dbd81d7efbaee
connection: keep-alive
dnt: 1
host: files.lvh.me:3030
upgrade-insecure-requests: 1
user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 Edg/105.0.1343.53
{"style":"path","hostname":"files.lvh.me","bucketName":"media","keyEncoded":"dd/product/images/0aff9ccd1acc3212a8bc3e43cf70a78d.jpg","key":"dd/product/images/0aff9ccd1acc3212a8bc3e43cf70a78d.jpg"}
Authorized request from 3173c4beebe4c40c6a96b223b000a6cb
bucket.get dd/product/images/0aff9ccd1acc3212a8bc3e43cf70a78d.jpg {}

The issue appears to be caused by { hostname, ... } = new URL(url), because URL.hostname does not include the port. When I changed this to { host, ... } = new URL(url) and changed virtualHostname to virtualHost, this issue was fixed.

  3. The aws s3 presign URL does not validate. I get unauthorized in the browser and this terminal log:
GET http://files.lvh.me:3030/media/dd/product/images/0aff9ccd1acc3212a8bc3e43cf70a78d.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=3173c4beebe4c40c6a96b223b000a6cb%2F20221011%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20221011T004603Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=d877c43ea5b3e9f392a3a0883c5d8b13286d7d0d0b2d746c4c8cfa64108fdcd1
{"accessKeyIds":["3173c4beebe4c40c6a96b223b000a6cb"],"flags":[]}
request headers:
accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
accept-encoding: gzip, deflate
accept-language: en-AU,en-GB;q=0.9,en;q=0.8,en-US;q=0.7
cf-connecting-ip: 167.179.156.36
cf-ray: 90b1c7b323ab14c7
connection: keep-alive
dnt: 1
host: files.lvh.me:3030
upgrade-insecure-requests: 1
user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36 Edg/105.0.1343.53
{"style":"vhost","host":"files.lvh.me:3030","bucketName":"files","keyEncoded":"media/dd/product/images/0aff9ccd1acc3212a8bc3e43cf70a78d.jpg","key":"media/dd/product/images/0aff9ccd1acc3212a8bc3e43cf70a78d.jpg"}
expectedSignature !== xAmzSignature: 54617b87ae025366c572524c41d1738e88c10a1455400a5889bdf830e90b00c5 !== d877c43ea5b3e9f392a3a0883c5d8b13286d7d0d0b2d746c4c8cfa64108fdcd1
isPresignedUrlAuthorized error: Invalid X-Amz-Signature: d877c43ea5b3e9f392a3a0883c5d8b13286d7d0d0b2d746c4c8cfa64108fdcd1

This includes some additional logging added by me, which indicates the expectedSignature !== xAmzSignature.

  4. Both aws s3 presign and denoflare r2 presign URLs return unauthorized in the browser when deployed to CF via:
denoflare push hello-local \
    --account-id "$CF_ACCOUNT_ID" \            
    --api-token "$CF_API_TOKEN" \                                        
    --custom-domain "$WORKER_BUCKET_NAME.$WORKER_ENDPOINT" \
    --name "$WORKER_BUCKET_NAME.$WORKER_ENDPOINT-r2-presigned-url-worker" \
    --r2-bucket-binding "bucket:$WORKER_R2_BUCKET_NAME" \
    --secret-binding "credentials:$WORKER_ACCESS_KEY_ID:$WORKER_SECRET_ACCESS_KEY" \
    --text-binding "virtualHost:$WORKER_BUCKET_NAME.$WORKER_ENDPOINT"

Not sure if or how I can access more detailed logging for the deployed worker. I tried the "real-time logs" option on the CF Workers page, but I never saw anything come through.


johnspurlock-skymethod commented on August 20, 2024
  1. The key name doesn't actually include the leading slash; it's consistent in the denoflare r2 tools and in what the cf api returns, so it's just something you'll have to translate if you're breaking down a presigned url (also "/" is a valid key name/prefix!)

  2. I didn't test non-standard ports; it's a good point. I think virtualHost makes sense, I'll change it. The logging includes the problem there: {"style":"path",...}, i.e. it did not detect it as a vhost-style request.

  3. Maybe the non-standard port thing again? I'll see if I can repro that. The aws s3 presign urls worked for me.

  4. Hmm, again hard to say without more logging. Both worked for me testing from a browser pointing to the new worker deployed to cf.

Check out webtail.denoflare.dev, I built it as an easier way to view those real-time logs coming back from workers. The Denoflare cli also has a tail command.


mrmachine commented on August 20, 2024

Sorry, got distracted. I intend to test this further soon hopefully with more logging.


johnspurlock-skymethod commented on August 20, 2024

These changes are included in today's tagged release v0.5.10.


mrmachine commented on August 20, 2024

@johnspurlock-skymethod

I have finally gotten around to testing this. Apologies for the very long delay.

I don't think the change to virtualHost (from virtualHostname) made it into your release. I have local modifications that change virtualHostname to virtualHost (and also const { host, pathname, searchParams } = new URL(url);) and this fixes vhost style detection with denoflare serve ... on port 3030. I could submit a PR, if you'd still like to make this change?
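The underlying WHATWG URL behavior, for reference (a standalone illustration using my local test hostname):

```typescript
// URL.hostname omits the port while URL.host includes it, so matching a
// binding like "files.lvh.me:3030" against hostname can never succeed.
const u = new URL("http://files.lvh.me:3030/media/photo.jpg");
console.log(u.hostname); // "files.lvh.me"       (port dropped)
console.log(u.host);     // "files.lvh.me:3030"  (port kept)
```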

Also, I believe the reason for the Error: Unexpected status 403, code=AccessDenied, message=Access Denied error after the worker successfully authorized the vanity bucket name and credentials was caused by not having assigned sufficient permissions to my Cloudflare API token.

I had already configured permissions per your R2 public read example, but I had to add Workers R2 Storage:Edit as well. I tested the API token permissions with the denoflare r2 list-buckets command which only worked after adding the new permission.

So now, the URL generated by denoflare r2 presign works. I am able to read the file from the backing bucket via the worker with vanity bucket name, access key ID and secret access key. (Yay!)

However, I still get unauthorized for the URL generated with aws s3 presign. My extra logging still says:

isPresignedUrlAuthorized error: Invalid X-Amz-Signature: 7b8f9610d8106b27f97d9f13c35c24923b4dbcb099c784aff538e8bf53ed3481

I tried to deploy to Cloudflare where there would be no port number in the host to confuse things, but I still get unauthorized for both aws s3 presign and denoflare r2 presign URLs when deployed to Cloudflare. Even though denoflare r2 presign works via denoflare serve on port 3030.

I tried using denoflare tail $WORKER_NAME --format pretty but I get no output when I hit the Cloudflare worker's URL:

[screenshot: denoflare tail showing no output]

I also checked the log stream via Cloudflare Workers but it also shows no output:

[screenshot: Cloudflare Workers log stream showing no output]

Should I be seeing the same logging that I see locally from denoflare serve ... when I view the log stream in Cloudflare or use denoflare tail ... ?

I get the same unauthorized response from the worker, so it seems the vanity bucket name and credentials or X-Amz-Signature might be failing rather than a failure to read the R2 backing bucket.


mrmachine commented on August 20, 2024

Not sure what happened. I tried redeploying to Cloudflare, both my local version (which accepts virtualHost) and your v0.5.11 (which accepts virtualHostname), and both now work with URLs generated by aws s3 presign and denoflare r2 presign. So the only outstanding questions are:

  • Should I expect to see the console.log() outputs from the worker via denoflare tail and via real-time logs (at Cloudflare)?
  • Would you like me to submit a PR to change virtualHostname to virtualHost to better support denoflare serve on a non-standard port?


johnspurlock-skymethod commented on August 20, 2024
  • All console.log calls should appear in tailing, yes (sometimes there is lag - cf actually redeploys your worker behind the scenes when toggling tail vs not)
  • All PRs are welcome! Can you please remind me of the context in the PR, as this thread is pretty gigantic at this point.

