Range header not signable · aws4fetch · OPEN · 18 comments

Comments (18)

mhart commented on August 25, 2024

@robtimmer @kyranb @alukach Apologies for leaving it so long, but if you're still using this library (or can remember), can you tell me what errors you're encountering?

In the testing I've tried for S3, there's no error if the range header is included but isn't signed. Is it some other tool that's expecting this?

It was removed back in #5 to support CDN use cases.

I'll be publishing a new v1.1 version that allows people to specify their own unsignable headers – and I'm considering a v2 that actually removes the range header from the default list of unsignable headers (i.e., it would be signed by default again) – but I need to know what problems it's actually causing because so far I've been unable to reproduce it.
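
Something along these lines (a sketch only – the unsignableHeaders option name below is purely illustrative, not a published aws4fetch API):

import { AwsClient } from "aws4fetch";

// Hypothetical sketch: `unsignableHeaders` is an illustrative option name, not a
// published aws4fetch API. An empty list would mean every header supplied on the
// request (including `range`) gets signed.
const aws = new AwsClient({
  accessKeyId: "AKIA...",
  secretAccessKey: "...",
  unsignableHeaders: [],
});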

kyranb commented on August 25, 2024

Also running into this error

robtimmer commented on August 25, 2024

Hi @kyranb @drappier-charles, have a look at my fork. I've committed a fix if you are interested. https://github.com/robtimmer/aws4fetch/commits/master

alukach commented on August 25, 2024

I'm a bit late to this discussion, but can one of you help me understand the basis of the issue?

I am currently successfully making requests to S3 in a manner similar to:

new AwsClient(creds).fetch(
  "https://my-bucket.s3.amazonaws.com/a-file.json",
  {
    method: "GET",
    body: null,
    headers: {
      range: "bytes=16777216-",
    }
  }
)

Would you expect that to work?

The reason I wound up on this ticket is that my signatures started to fail when I added an accept-encoding header:

new AwsClient(creds).fetch(
  "https://my-bucket.s3.amazonaws.com/a-file.json",
  {
    method: "GET",
    body: null,
    headers: {
      range: "bytes=16777216-",
      "accept-encoding": "gzip",
    }
  }
)

I'm struggling to make sense of why.

robtimmer commented on August 25, 2024

@mhart For default S3 it should work. It's only when doing multipart uploads/downloads that a Range header needs to be present (the file is downloaded in parts, e.g. bytes 0-342423, then 342423-574884), if I remember correctly.
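
For illustration, a part-by-part ranged download might look roughly like this (a sketch only; the bucket, key, credentials and part size are placeholders):

import { AwsClient } from "aws4fetch";

// Sketch of downloading an object in fixed-size parts via Range headers.
const client = new AwsClient({
  accessKeyId: "...",
  secretAccessKey: "...",
  region: "us-west-2",
});
const url = "https://my-bucket.s3.us-west-2.amazonaws.com/big-file.bin";
const partSize = 8 * 1024 * 1024; // 8 MiB per part

const parts = [];
for (let start = 0; ; start += partSize) {
  const res = await client.fetch(url, {
    headers: { range: `bytes=${start}-${start + partSize - 1}` },
  });
  if (!res.ok) throw new Error(`Range request failed: ${res.status}`);
  parts.push(await res.arrayBuffer());
  // Content-Range looks like "bytes 0-8388607/123456789"; stop after the last part.
  const total = Number(res.headers.get("content-range")?.split("/")[1]);
  if (!Number.isFinite(total) || start + partSize >= total) break;
}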

mhart commented on August 25, 2024

@robtimmer I actually meant it worked when I included the range header (and didn't sign it). I can do multipart downloads no problem.

What platform were you using this lib on when it errored, do you remember?

alukach commented on August 25, 2024

@mhart I'm pretty confused about what is going on with my problem. It may be part of this issue, or it may be something entirely different.

I have a HonoJS API with some test endpoints:

const app = new Hono<{ Bindings: Env }>();

app.get("/test1", async (c) => {
  return new AwsClient({
    accessKeyId: c.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: c.env.AWS_SECRET_ACCESS_KEY,
    region: "us-west-2",
  }).fetch("https://s3-event-bucket-test.s3.amazonaws.com/user_data.json", {
    method: "GET",
    body: null,
    headers: {
      range: "bytes=0-5",
    },
  });
});

app.get("/test2", async (c) => {
  return new AwsClient({
    accessKeyId: c.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: c.env.AWS_SECRET_ACCESS_KEY,
    region: "us-west-2",
  }).fetch("https://s3-event-bucket-test.s3.amazonaws.com/user_data.json", {
    method: "GET",
    body: null,
    headers: {
      "accept-encoding": "gzip",
      range: "bytes=0-5",
    },
  });
});

app.get("/test3", async (c) => {
  return new AwsClient({
    accessKeyId: c.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: c.env.AWS_SECRET_ACCESS_KEY,
    region: "us-west-2",
  }).fetch("https://aws4fetch-err-example.s3.amazonaws.com/data");
});

I wrote my original comment after noticing that a range request (e.g. /test1) would work, but as soon as I added the accept-encoding header, it failed (e.g. /test2):

➤ curl http://localhost:8787/test1
{
  "A%
➤ curl http://localhost:8787/test2
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>AKIAOBSCURED</AWSAccessKeyId><StringToSign>AWS4-HMAC-SHA256
20221009T053354Z
20221009/us-west-2/s3/aws4_request
42b48badfba9efdfd095e2f0531b2216bb4cf350eb3be2e719b37c13c6c36518</StringToSign><SignatureProvided>62838b16df14856b7cd38e20fc654ae479327fb5819dbfc04713197ae662aec5</SignatureProvided><StringToSignBytes>41 57 53 34 2d 48 4d 41 43 2d 53 48 41 32 35 36 0a 32 30 32 32 31 30 30 39 54 30 35 33 33 35 34 5a 0a 32 30 32 32 31 30 30 39 2f 75 73 2d 77 65 73 74 2d 32 2f 73 33 2f 61 77 73 34 5f 72 65 71 75 65 73 74 0a 34 32 62 34 38 62 61 64 66 62 61 39 65 66 64 66 64 30 39 35 65 32 66 30 35 33 31 62 32 32 31 36 62 62 34 63 66 33 35 30 65 62 33 62 65 32 65 37 31 39 62 33 37 63 31 33 63 36 63 33 36 35 31 38</StringToSignBytes><CanonicalRequest>GET
/user_data.json

accept-encoding:gzip, identity
host:s3-event-bucket-test.s3.amazonaws.com
x-amz-content-sha256:UNSIGNED-PAYLOAD
x-amz-date:20221009T053354Z

accept-encoding;host;x-amz-content-sha256;x-amz-date
UNSIGNED-PAYLOAD</CanonicalRequest><CanonicalRequestBytes>47 45 54 0a 2f 75 73 65 72 5f 64 61 74 61 2e 6a 73 6f 6e 0a 0a 61 63 63 65 70 74 2d 65 6e 63 6f 64 69 6e 67 3a 67 7a 69 70 2c 20 69 64 65 6e 74 69 74 79 0a 68 6f 73 74 3a 73 33 2d 65 76 65 6e 74 2d 62 75 63 6b 65 74 2d 74 65 73 74 2e 73 33 2e 61 6d 61 7a 6f 6e 61 77 73 2e 63 6f 6d 0a 78 2d 61 6d 7a 2d 63 6f 6e 74 65 6e 74 2d 73 68 61 32 35 36 3a 55 4e 53 49 47 4e 45 44 2d 50 41 59 4c 4f 41 44 0a 78 2d 61 6d 7a 2d 64 61 74 65 3a 32 30 32 32 31 30 30 39 54 30 35 33 33 35 34 5a 0a 0a 61 63 63 65 70 74 2d 65 6e 63 6f 64 69 6e 67 3b 68 6f 73 74 3b 78 2d 61 6d 7a 2d 63 6f 6e 74 65 6e 74 2d 73 68 61 32 35 36 3b 78 2d 61 6d 7a 2d 64 61 74 65 0a 55 4e 53 49 47 4e 45 44 2d 50 41 59 4c 4f 41 44</CanonicalRequestBytes><RequestId>NBJWGEJ2QZ51TC9V</RequestId><HostId>Wch/vGpPh5S6lnj8GwThlziDlkaNlIonpEloNajiEv/tu6L908Hamhd3ijKkkFrWj+oRSvFzT40=</HostId></Error>%         

However, in writing this comment, I'm left really scratching my head as I'm somehow hitting these SignatureDoesNotMatch errors even with the most basic request on a recently created bucket with a public file:

➤ aws s3 mb s3://aws4fetch-err-example
➤ echo 🚀 | aws s3 cp - s3://aws4fetch-err-example/data --acl public-read
➤ curl http://localhost:8787/test3
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>AKIA5Q2DAUVI6BFTMLF2</AWSAccessKeyId><StringToSign>AWS4-HMAC-SHA256
20221009T141218Z
20221009/us-west-2/s3/aws4_request
9d5aed65d4ee4f6ce55f3204713339d8b3b0ea90975622f3f868034022602e61</StringToSign><SignatureProvided>ddc72979e1941802cbad093ed3f19693bcd9b22d54611cc95d5a23f847c553a3</SignatureProvided><StringToSignBytes>41 57 53 34 2d 48 4d 41 43 2d 53 48 41 32 35 36 0a 32 30 32 32 31 30 30 39 54 31 34 31 32 31 38 5a 0a 32 30 32 32 31 30 30 39 2f 75 73 2d 77 65 73 74 2d 32 2f 73 33 2f 61 77 73 34 5f 72 65 71 75 65 73 74 0a 39 64 35 61 65 64 36 35 64 34 65 65 34 66 36 63 65 35 35 66 33 32 30 34 37 31 33 33 33 39 64 38 62 33 62 30 65 61 39 30 39 37 35 36 32 32 66 33 66 38 36 38 30 33 34 30 32 32 36 30 32 65 36 31</StringToSignBytes><CanonicalRequest>GET
/data

host:aws4fetch-err-example.s3-us-west-2.amazonaws.com
x-amz-content-sha256:UNSIGNED-PAYLOAD
x-amz-date:20221009T141218Z

host;x-amz-content-sha256;x-amz-date
UNSIGNED-PAYLOAD</CanonicalRequest><CanonicalRequestBytes>47 45 54 0a 2f 64 61 74 61 0a 0a 68 6f 73 74 3a 61 77 73 34 66 65 74 63 68 2d 65 72 72 2d 65 78 61 6d 70 6c 65 2e 73 33 2d 75 73 2d 77 65 73 74 2d 32 2e 61 6d 61 7a 6f 6e 61 77 73 2e 63 6f 6d 0a 78 2d 61 6d 7a 2d 63 6f 6e 74 65 6e 74 2d 73 68 61 32 35 36 3a 55 4e 53 49 47 4e 45 44 2d 50 41 59 4c 4f 41 44 0a 78 2d 61 6d 7a 2d 64 61 74 65 3a 32 30 32 32 31 30 30 39 54 31 34 31 32 31 38 5a 0a 0a 68 6f 73 74 3b 78 2d 61 6d 7a 2d 63 6f 6e 74 65 6e 74 2d 73 68 61 32 35 36 3b 78 2d 61 6d 7a 2d 64 61 74 65 0a 55 4e 53 49 47 4e 45 44 2d 50 41 59 4c 4f 41 44</CanonicalRequestBytes><RequestId>NAMGPE8R9MR455ES</RequestId><HostId>DQeMWIjkIikUEfYMYhQ+wLNoffBz0GTCD+CzEQ/IXBU/2j7c5pLDxS9nIDeW+9CPBZOyUK0c8xk=</HostId></Error>%

So I really have no idea what is going on. It feels like I'm doing something wrong, but /test3 leaves me totally unsure what that could be. All buckets and objects were created using the same AWS account.

mhart commented on August 25, 2024

@alukach interesting – I notice that S3 is telling you the header needs to match accept-encoding:gzip, identity – I wonder if that's Cloudflare (or whatever provider you're using) adding the extra header on the fetch.

Maybe try:

    headers: {
      "accept-encoding": "gzip, identity",
      range: "bytes=0-5",
    },

Also, you only need to create AwsClient once – you don't need to create it per request (you could hold it in a global variable, or perhaps Hono has a way to store objects beyond a single request)
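
A minimal sketch of that pattern (assuming the credentials don't change per request; in a Worker, env is only available inside the handler, hence the lazy init):

import { Hono } from "hono";
import { AwsClient } from "aws4fetch";

const app = new Hono<{ Bindings: Env }>();

// Reuse one AwsClient across requests instead of constructing it per request.
let aws: AwsClient | undefined;

app.get("/test1", async (c) => {
  aws ??= new AwsClient({
    accessKeyId: c.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: c.env.AWS_SECRET_ACCESS_KEY,
    region: "us-west-2",
  });
  return aws.fetch("https://s3-event-bucket-test.s3.amazonaws.com/user_data.json", {
    headers: { range: "bytes=0-5" },
  });
});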

mhart commented on August 25, 2024

@alukach also I think the <CanonicalRequest> details from the error message were truncated in the last part of your message (for /test3) – not sure exactly what's going on there

alukach commented on August 25, 2024

I notice that S3 is telling you the header needs to match accept-encoding:gzip, identity – I wonder if that's Cloudflare (or whatever provider you're using) adding the extra header on the fetch.

Yes, I think that there may be something to this idea. When I do add the identity header myself, it appears that it gets double-added:

app.get("/test2", async (c) => {
  return new AwsClient({
    accessKeyId: c.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: c.env.AWS_SECRET_ACCESS_KEY,
    region: "us-west-2",
  }).fetch("https://s3-event-bucket-test.s3.amazonaws.com/user_data.json", {
    method: "GET",
    body: null,
    headers: {
      "accept-encoding": "gzip, identity",
      range: "bytes=0-5",
    },
  });
});
➤ curl http://localhost:8787/test2
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>AKIAOBSCURED</AWSAccessKeyId><StringToSign>AWS4-HMAC-SHA256
20221009T141522Z
20221009/us-west-2/s3/aws4_request
5e3bc4735e22db6bc1889ad0d52c779950181eeb5896e1be6485350b80c539ad</StringToSign><SignatureProvided>b3eaad4ac4e882ff22fb283b1202d380f6d96aaf992d8af013c652e8a7c6f06d</SignatureProvided><StringToSignBytes>41 57 53 34 2d 48 4d 41 43 2d 53 48 41 32 35 36 0a 32 30 32 32 31 30 30 39 54 31 34 31 35 32 32 5a 0a 32 30 32 32 31 30 30 39 2f 75 73 2d 77 65 73 74 2d 32 2f 73 33 2f 61 77 73 34 5f 72 65 71 75 65 73 74 0a 35 65 33 62 63 34 37 33 35 65 32 32 64 62 36 62 63 31 38 38 39 61 64 30 64 35 32 63 37 37 39 39 35 30 31 38 31 65 65 62 35 38 39 36 65 31 62 65 36 34 38 35 33 35 30 62 38 30 63 35 33 39 61 64</StringToSignBytes><CanonicalRequest>GET
/user_data.json

accept-encoding:gzip, identity, identity
host:s3-event-bucket-test.s3.amazonaws.com
x-amz-content-sha256:UNSIGNED-PAYLOAD
x-amz-date:20221009T141522Z

accept-encoding;host;x-amz-content-sha256;x-amz-date
UNSIGNED-PAYLOAD</CanonicalRequest><CanonicalRequestBytes>47 45 54 0a 2f 75 73 65 72 5f 64 61 74 61 2e 6a 73 6f 6e 0a 0a 61 63 63 65 70 74 2d 65 6e 63 6f 64 69 6e 67 3a 67 7a 69 70 2c 20 69 64 65 6e 74 69 74 79 2c 20 69 64 65 6e 74 69 74 79 0a 68 6f 73 74 3a 73 33 2d 65 76 65 6e 74 2d 62 75 63 6b 65 74 2d 74 65 73 74 2e 73 33 2e 61 6d 61 7a 6f 6e 61 77 73 2e 63 6f 6d 0a 78 2d 61 6d 7a 2d 63 6f 6e 74 65 6e 74 2d 73 68 61 32 35 36 3a 55 4e 53 49 47 4e 45 44 2d 50 41 59 4c 4f 41 44 0a 78 2d 61 6d 7a 2d 64 61 74 65 3a 32 30 32 32 31 30 30 39 54 31 34 31 35 32 32 5a 0a 0a 61 63 63 65 70 74 2d 65 6e 63 6f 64 69 6e 67 3b 68 6f 73 74 3b 78 2d 61 6d 7a 2d 63 6f 6e 74 65 6e 74 2d 73 68 61 32 35 36 3b 78 2d 61 6d 7a 2d 64 61 74 65 0a 55 4e 53 49 47 4e 45 44 2d 50 41 59 4c 4f 41 44</CanonicalRequestBytes><RequestId>6HASSH6RFKNK6N61</RequestId><HostId>nlZbinSJ/a6c0PefnmJNuAnJ6w8V0Xk/JxD1MO/3074rhZ8CFOLlbOYmAJOkJL46aRoYPB5mVdU=</HostId></Error>%   

How do you typically test this library? I'm happy to run some tests to remove Cloudflare from the equation but I'm not sure of the simplest path to do that. Do you typically run it in the browser? I struggled calling the lib directly from node.js (which seems to be outside the target of this package).
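
(One possible path, assuming Node 18+ for the global fetch, with Web Crypto wired up from node:crypto on versions where it isn't global yet – a rough sketch, not something I've verified:)

// Rough sketch of exercising aws4fetch outside Cloudflare, assuming Node 18+
// (global fetch). Web Crypto is taken from node:crypto if it isn't already global.
import { webcrypto } from "node:crypto";
import { AwsClient } from "aws4fetch";

if (!globalThis.crypto) globalThis.crypto = webcrypto;

const client = new AwsClient({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  region: "us-west-2",
});

const res = await client.fetch(
  "https://s3-event-bucket-test.s3.us-west-2.amazonaws.com/user_data.json",
  { headers: { range: "bytes=0-5" } }
);
console.log(res.status, await res.text());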

Also, you only need to create AwsClient once

Thanks, my code makes more sense in my actual use case (using unique credentials per request).

also I think the details from the error message were truncated in the last part of your message (for /test3) – not sure exactly what's going on there

Confirmed, I think that was a copy/paste issue on my side. Fixed now.

mhart commented on August 25, 2024

@alukach I can see you're using different hosts. If your bucket is in the us-west-2 region, shouldn't your host be aws4fetch-err-example.s3.us-west-2.amazonaws.com?
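
For example, /test3 pointed at the region-qualified virtual-hosted-style endpoint would look something like this sketch:

// Sketch of /test3 using the regional virtual-hosted-style endpoint, so the
// signed Host header matches the endpoint the bucket is actually served from.
app.get("/test3", async (c) => {
  return new AwsClient({
    accessKeyId: c.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: c.env.AWS_SECRET_ACCESS_KEY,
    region: "us-west-2",
  }).fetch("https://aws4fetch-err-example.s3.us-west-2.amazonaws.com/data");
});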

mhart commented on August 25, 2024

See: https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html#virtual-hosted-style-access

mhart commented on August 25, 2024

Actually, if you're using Cloudflare, it will always be accept-encoding: gzip – regardless of whether you specify it or not.

https://github.com/cloudflare/miniflare/blob/9265fa4d53262ce66278b91f2001b090e10696e8/packages/http-server/src/index.ts#L117-L123

I'd just remove the header altogether and try again. I'm not sure what's appending the identity, though.

kyranb commented on August 25, 2024

@mhart I ran into the issue while using aws4fetch in a Cloudflare Worker, connecting to Backblaze B2's S3-compatible API.

mhart commented on August 25, 2024

@kyranb ah ok, thanks. I guess Backblaze must require that the range header is signed.

alukach commented on August 25, 2024

Actually, if you're using Cloudflare, it will always be accept-encoding: gzip – regardless of if you specify it or not.

So are you saying that Cloudflare will alter the outgoing requests made by the fetch API?

mhart commented on August 25, 2024

Actually, if you're using Cloudflare, it will always be accept-encoding: gzip – regardless of if you specify it or not.

So are you saying that Cloudflare will alter the outgoing requests made by the fetch API?

Correct

robtimmer commented on August 25, 2024

@robtimmer I actually meant it worked when I included the range header (and didn't sign it). I can do multipart downloads no problem.

What platform were you using this lib on when it errored, do you remember?

I was using Cloudflare Workers (without any framework like HonoJS).
