
libs3's Issues

Build fails with `format-overflow` warning on almalinux:9 with `-Werror`

This warning (-Wformat-overflow) was added no later than GCC 7, yet I have not seen it trip on any platform until now.

This build was using gcc-toolset-12.

From the log...

INFO -   stderr: src/s3.c: In function ‘listBucketCallback’:
src/s3.c:1230:35: error: ‘%4llu’ directive writing between 4 and 17 bytes into a region of size 16 [-Werror=format-overflow=]
 1230 |                 sprintf(sizebuf, "%4lluK",
      |                                   ^~~~~
src/s3.c:1230:34: note: directive argument in the range [0, 18014398509481983]
 1230 |                 sprintf(sizebuf, "%4lluK",
      |                                  ^~~~~~~~
src/s3.c:1230:17: note: ‘sprintf’ output between 6 and 19 bytes into a destination of size 16
 1230 |                 sprintf(sizebuf, "%4lluK",
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~~~
 1231 |                         ((unsigned long long) content->size) / 1024ULL);
      |                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
make[1]: *** [GNUmakefile:222: build/obj/s3.o] Error 1
make[1]: *** Waiting for unfinished jobs....
INFO -          
ERROR - build failed
Building [libs3]

Originally reported here: irods/externals#185
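
A minimal sketch of the kind of fix the diagnostic calls for, as a standalone program (illustrative only, not the actual libs3 patch): a 64-bit size divided by 1024 can need up to 17 digits, so "%4lluK" plus the terminating NUL can require 19 bytes, overflowing the 16-byte buffer. Sizing the buffer for the worst case and using snprintf makes the bound explicit and silences the warning.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The flagged code wrote "%4lluK" into a 16-byte buffer. The worst
     * case is 17 digits + 'K' + NUL = 19 bytes, so size for that. */
    char sizebuf[24];                       /* was 16 in the flagged code */
    uint64_t size = UINT64_MAX;             /* worst-case input */

    snprintf(sizebuf, sizeof(sizebuf), "%4lluK",
             (unsigned long long) (size / 1024ULL));
    printf("%s\n", sizebuf);
    return 0;
}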

Ceph S3 is complaining on range copy

When doing a range copy to Ceph S3, the server complains that the Content-Length header is missing. According to the AWS documentation, this header is not required.

The same request works against AWS and MinIO.

Look at updating libs3 to add the header, perhaps conditionally. A sketch follows the trace below.

> PUT <snip>/f2?partNumber=2&uploadId=<snip>
> Host: <snip>
> User-Agent: Mozilla/4.0 (Compatible; s3; libs3 4.1; Linux x86_64)
> Accept: */*
> Range: bytes=67108864-134217727
> Authorization: <snip>
> x-amz-date: 20201207T185949Z
> x-amz-copy-source: <snip>/f1
> x-amz-copy-source-range: bytes=67108864-134217727
> x-amz-content-sha256: <snip>
> Expect: 100-continue
>
< HTTP/1.1 411 Length Required
< Content-Length: 278
< x-amz-request-id: <snip>
< Accept-Ranges: bytes
< Content-Type: application/xml
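
A minimal sketch of how the header could be added, assuming the request headers are assembled as a libcurl header list (as libs3 does); the surrounding setup is elided and the placement is illustrative, not libs3's actual internals. UploadPartCopy carries no request body, so a fixed Content-Length: 0 is safe:

#include <curl/curl.h>

int main(void)
{
    CURL *curl = curl_easy_init();
    struct curl_slist *headers = NULL;

    /* The copy-part PUT has no request body; some servers (here, Ceph
     * RGW) respond 411 Length Required if the header is absent. */
    headers = curl_slist_append(headers, "Content-Length: 0");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);

    /* ... remaining request setup and curl_easy_perform() ... */

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return 0;
}

If AWS or MinIO ever objected to the explicit header, the append could be made conditional on a request flag, which is presumably what "conditionally" means above.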

S3StatusErrorQuotaExceeded not listed as retryable.

S3_status_is_retryable() does not list S3StatusErrorQuotaExceeded as a retryable error.

At least with Oracle's S3-compatible service, this status is returned when there are too many simultaneous requests, in which case it would be retryable. In other implementations it may not be.

Consider adding it to the list; a sketch of the change follows.
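
A minimal sketch of the proposed change, assuming the switch-over-statuses shape of S3_status_is_retryable(); the existing cases shown are an illustrative subset and may not match the real function exactly:

int S3_status_is_retryable(S3Status status)
{
    switch (status) {
    /* existing retryable cases (illustrative subset) */
    case S3StatusNameLookupError:
    case S3StatusFailedToConnect:
    case S3StatusConnectionFailed:
    case S3StatusErrorInternalError:
    case S3StatusErrorSlowDown:
    /* proposed addition */
    case S3StatusErrorQuotaExceeded:
        return 1;
    default:
        return 0;
    }
}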

Range is set improperly for S3_copy_object_range

Affected branches:

  • v4
  • master

The end of the range in the code below should be params->startByte + params->byteCount - 1.

Because the end byte is one past the intended last byte, adjacent range copies overlap by one byte (those bytes are written more than once), and the final range attempts to copy one byte past the end.

libs3/src/request.c

Lines 399 to 403 in 59b6237

if (params->byteCount > 0) {
    headers_append(1, "x-amz-copy-source-range: bytes=%lld-%lld",
                   (unsigned long long) params->startByte,
                   (unsigned long long) (params->startByte + params->byteCount) );
}
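
For illustration, the corrected version would subtract 1 so the end of the range is the last byte actually copied, matching HTTP range semantics (bytes=first-last, both inclusive). The format specifiers are also changed to %llu here to match the unsigned casts. A sketch of the fix, not a committed patch:

if (params->byteCount > 0) {
    /* A copy of byteCount bytes starting at startByte ends at
     * startByte + byteCount - 1, since both ends are inclusive. */
    headers_append(1, "x-amz-copy-source-range: bytes=%llu-%llu",
                   (unsigned long long) params->startByte,
                   (unsigned long long) (params->startByte +
                                         params->byteCount - 1));
}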

Timeout when complete multipart upload takes a long time.

When I upload a very large file (>30 GiB) to a MinIO server on my machine, I get timeouts on the complete multipart upload call while MinIO assembles the parts. In my case the operation takes about 18 minutes, likely because this is a non-production MinIO setup.

Nevertheless, this does indicate a need to allow the complete multipart operation more time to finish.

All of the libs3 operations take a timeout parameter (in milliseconds), which is passed directly to the CURLOPT_TIMEOUT_MS curl option.

However, two other settings can also cause a timeout in this scenario: CURLOPT_LOW_SPEED_LIMIT and CURLOPT_LOW_SPEED_TIME. The latter is set to 15 seconds in libs3. Because the server sends very little data in the response to a complete multipart upload (I believe it sends occasional whitespace as a keep-alive), the transfer rate falls below the limit and the 15-second low-speed timeout triggers.

To resolve this, I propose that the CURLOPT_LOW_SPEED_* settings be applied only when no timeout is set. (Zero means no timeout.) So if no timeout is set, use the low-speed options to detect a stall; if a timeout is set, just use that timeout. A sketch of this logic follows.
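
A minimal sketch of the proposed logic, assuming a helper called during request setup; the function and parameter names are illustrative, not libs3's actual ones:

#include <curl/curl.h>

static void set_request_timeouts(CURL *curl, long timeoutMs)
{
    if (timeoutMs > 0) {
        /* The caller asked for a hard cap on the whole transfer; rely
         * on it alone so a slow-but-progressing operation (such as a
         * long complete multipart upload) is not killed by the stall
         * detector. */
        curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, timeoutMs);
    }
    else {
        /* No hard timeout requested (0 means no timeout): fall back to
         * the stall detector, aborting only if less than 1 byte/s
         * arrives for 15 consecutive seconds. */
        curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 1L);
        curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, 15L);
    }
}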

Add glacier support

Update the HEAD object operation to read and store the Glacier-related response headers (a sketch for parsing x-amz-restore follows below):

  • x-amz-storage-class or x-goog-storage-class
  • x-amz-restore or x-goog-restore

Add RestoreObject support.
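
As a starting point, a sketch of parsing the x-amz-restore response header that a HEAD request returns for a restoring or restored object. Per the AWS documentation the value looks like ongoing-request="false", expiry-date="Fri, 21 Dec 2012 00:00:00 GMT"; the struct and function names below are illustrative, not a proposed libs3 API:

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    bool ongoingRequest;     /* is the restore still in progress? */
    char expiryDate[64];     /* empty string if not present */
} RestoreStatus;

/* Parse the value of an x-amz-restore (or x-goog-restore) header.
 * Returns 0 on success, -1 if the ongoing-request field is missing. */
static int parse_restore_header(const char *value, RestoreStatus *out)
{
    const char *p = strstr(value, "ongoing-request=\"");
    if (!p) {
        return -1;
    }
    p += strlen("ongoing-request=\"");
    out->ongoingRequest = (strncmp(p, "true", 4) == 0);

    out->expiryDate[0] = '\0';
    p = strstr(value, "expiry-date=\"");
    if (p) {
        p += strlen("expiry-date=\"");
        const char *end = strchr(p, '"');
        if (end && (size_t) (end - p) < sizeof(out->expiryDate)) {
            memcpy(out->expiryDate, p, (size_t) (end - p));
            out->expiryDate[end - p] = '\0';
        }
    }
    return 0;
}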
