irods / libs3
This project forked from bji/libs3
License: Other
FujiFilm uses 64 bytes instead of 32. Set it to 128 bytes just in case that becomes necessary in the future.
Update the partContentLength in S3_upload_part from int to int64_t so that parts of size > 2^31 may be uploaded.
This is to fix irods/irods_resource_plugin_s3#2104.
This goes with irods/irods_resource_plugin_s3#2025
This warning was added no later than gcc 7, yet I have not seen it trip on any platform until now. This build was using gcc-toolset-12.
From the log...
INFO - stderr: src/s3.c: In function ‘listBucketCallback’:
src/s3.c:1230:35: error: ‘%4llu’ directive writing between 4 and 17 bytes into a region of size 16 [-Werror=format-overflow=]
1230 | sprintf(sizebuf, "%4lluK",
| ^~~~~
src/s3.c:1230:34: note: directive argument in the range [0, 18014398509481983]
1230 | sprintf(sizebuf, "%4lluK",
| ^~~~~~~~
src/s3.c:1230:17: note: ‘sprintf’ output between 6 and 19 bytes into a destination of size 16
1230 | sprintf(sizebuf, "%4lluK",
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
1231 | ((unsigned long long) content->size) / 1024ULL);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
make[1]: *** [GNUmakefile:222: build/obj/s3.o] Error 1
make[1]: *** Waiting for unfinished jobs....
INFO -
ERROR - build failed
Building [libs3]
Originally reported here: irods/externals#185
Oracle S3 seems to reject an object copy with x-amz-metadata-directive set to REPLACE when no additional metadata is added. Setting it to COPY works fine; I've tested this with MinIO, Ceph, and Amazon.
When doing a range copy to Ceph S3, the server complains about the missing Content-Length header, even though according to the AWS documentation this header is not required. With AWS or MinIO this works.
Look at updating libs3 to (conditionally?) add it.
> PUT <snip>/f2?partNumber=2&uploadId=<snip>
Host: <snip>
User-Agent: Mozilla/4.0 (Compatible; s3; libs3 4.1; Linux x86_64)
Accept: */*
Range: bytes=67108864-134217727
Authorization: <snip>
x-amz-date: 20201207T185949Z
x-amz-copy-source: <snip>/f1
x-amz-copy-source-range: bytes=67108864-134217727
x-amz-content-sha256: <snip>
Expect: 100-continue
< HTTP/1.1 411 Length Required
< Content-Length: 278
< x-amz-request-id: <snip>
< Accept-Ranges: bytes
< Content-Type: application/xml
S3_status_is_retryable() does not list S3StatusErrorQuotaExceeded as a retryable error.
At least with Oracle S3, this status occurs when there are too many simultaneous requests, in which case it would be retryable; for other providers it may not be.
Consider adding it to the list.
The end of the range for the following should be params->startByte + params->byteCount - 1.
As written, bytes in the middle are written more than once, and it then attempts to write one extra byte.
Lines 399 to 403 in 59b6237
When I upload a very large file (>30GiB) to a MinIO server on my machine, I am getting timeouts on complete multipart upload while MinIO is putting the parts together. In my case it takes about 18 minutes to complete the multipart. This is likely due to a non-production level MinIO server setup.
Nevertheless, this does indicate a need to allow the complete multipart operation longer to complete.
All of the libs3 operations have a timeout parameter (ms) which is passed directly to the CURLOPT_TIMEOUT_MS curl option.
However, two other settings can also cause a timeout in this scenario: CURLOPT_LOW_SPEED_LIMIT and CURLOPT_LOW_SPEED_TIME. The latter is set to 15 seconds in libs3. Because the server sends very little data in the response to a complete multipart upload (I believe it sends occasional spaces as a keep-alive), the transfer rate falls below the limit and the 15-second timeout triggers.
To resolve this, I propose that the CURLOPT_LOW_SPEED* settings only be set if there is a non-zero timeout set. (Zero means no timeout.) So if there is no timeout set, use the low speed options to trigger a timeout. If there is a timeout set, just use that timeout.
On platforms that use OpenSSL 3.0, many functions and types that are still in use have been deprecated or hidden. We need to define a flag that keeps them visible while remaining compatible with 1.1.0.
See the OpenSSL documentation for more information:
https://www.openssl.org/docs/man3.0/man7/openssl_user_macros.html#OPENSSL_API_COMPAT
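Per the page linked above, one way to do this is to define OPENSSL_API_COMPAT before any OpenSSL header is included; the value below selects the 1.1.0 API level and could equally be passed as -DOPENSSL_API_COMPAT=0x10100000L on the compiler command line:

```c
/* Request the OpenSSL 1.1.0 API surface so that functions and types
   deprecated or hidden in 3.0 remain visible.  Must be defined before
   the first OpenSSL header is included. */
#define OPENSSL_API_COMPAT 0x10100000L

#include <openssl/evp.h>
```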
Update HEAD to read and store glacier related parameters.
Add RestoreObject support.