taylorfinnell / awscr-s3
A Crystal shard for S3.
Home Page: https://taylorfinnell.github.io/awscr-s3/
License: MIT License
I see that the Ruby gem equivalent lets you set acl: 'public-read'
as an option when uploading. Can you do something similar here?
If I try this, it doesn't work, though:
uploader = Awscr::S3::FileUploader.new(client)

headers = {
  "acl"           => "public-read",
  "Content-Type"  => "image/png",
  "Cache-Control" => "max-age=31536000",
}

File.open(filename, "r") do |file|
  uploader.upload(bucket, obj_key, file, headers)
end
Any other ways to achieve it?
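One thing worth trying: S3 expects canned ACLs in the x-amz-acl request header rather than a plain acl key (a later report in this thread uses exactly that header). A sketch of the same upload with that header; region, credentials, bucket, and file names are placeholders:

```crystal
# Hedged sketch: pass the canned ACL via "x-amz-acl" instead of "acl".
# All names below are placeholders, not values from the original report.
require "awscr-s3"

client   = Awscr::S3::Client.new("us-east-1", ENV["AWS_KEY"], ENV["AWS_SECRET"])
uploader = Awscr::S3::FileUploader.new(client)

headers = {
  "x-amz-acl"     => "public-read",
  "Content-Type"  => "image/png",
  "Cache-Control" => "max-age=31536000",
}

File.open("image.png", "r") do |file|
  uploader.upload("my-bucket", "image.png", file, headers)
end
```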
The following error occurs when calling head_object:
#<Time::Format::Error:Invalid hour for 12-hour clock at 25: "Fri, 26 Mar 2021 16:26:13>>">
The value of the Last-Modified header is shown in the error message. I was able to temporarily fix the problem by overriding either HeadObjectOutput.from_response or HeadObjectOutput.parse_date.
I was trying to generate a pre-signed expiring URL for an existing object and was getting a PermanentRedirect error: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
I edited presigned/url.cr line 49 and changed the Host header to s3-#{@region}.amazonaws.com, and this worked.
I'm not sure if this is always needed or not. Do you know anything about this?
Using a slightly modified version of examples/file_upload.cr on Crystal 1.0.0 and uploading a 10 MB image file, I'm seeing the following error:
Unhandled exception in spawn: The Content-MD5 you specified did not match what we received. (Awscr::S3::BadDigest)
from lib/awscr-s3/src/awscr-s3/http.cr:116:50 in 'handle_response!'
from lib/awscr-s3/src/awscr-s3/http.cr:88:16 in 'exec'
from lib/awscr-s3/src/awscr-s3/http.cr:46:7 in 'put'
from lib/awscr-s3/src/awscr-s3/client.cr:111:14 in 'upload_part'
from lib/awscr-s3/src/awscr-s3/multipart_file_uploader.cr:87:17 in 'upload_part'
from lib/awscr-s3/src/awscr-s3/multipart_file_uploader.cr:80:9 in '->'
from /opt/crystal/1.0.0/src/primitives.cr:255:3 in 'run'
from /opt/crystal/1.0.0/src/fiber.cr:92:34 in '->'
The first 5.2 MB is uploaded, but then the above error is raised and the method returns true. This is with an S3-compatible service.
If I try to use Amazon's S3 service, the error is as follows:
Unhandled exception in spawn: XAmzContentSHA256Mismatch: The provided 'x-amz-content-sha256' header does not match what was computed. (Awscr::S3::Exception)
from lib/awscr-s3/src/awscr-s3/http.cr:116:50 in 'handle_response!'
from lib/awscr-s3/src/awscr-s3/http.cr:88:16 in 'exec'
from lib/awscr-s3/src/awscr-s3/http.cr:46:7 in 'put'
from lib/awscr-s3/src/awscr-s3/client.cr:111:14 in 'upload_part'
from lib/awscr-s3/src/awscr-s3/multipart_file_uploader.cr:87:17 in 'upload_part'
from lib/awscr-s3/src/awscr-s3/multipart_file_uploader.cr:80:9 in '->'
from /opt/crystal/1.0.0/src/primitives.cr:255:3 in 'run'
from /opt/crystal/1.0.0/src/fiber.cr:92:34 in '->'
Either way, the "large" file does not upload successfully.
Using a 1.1 MB image file, uploads succeed as expected for both services.
I noticed that with release 0.5.0, object keys containing equals signs broke. Steps to reproduce:
print "Testing harmless id... "
client.start_multipart_upload("test", "test")
print "Works! "
print "Testing id containing '='... "
begin
  client.start_multipart_upload("test", "test=")
  puts "Works!"
rescue error : Awscr::S3::Http::ServerError
  raise error unless error.message.try(&.starts_with? "SignatureDoesNotMatch")
  puts "Broken!"
end
Last known working version: 0.4.0.
This could also be caused by a Crystal update, since version 0.5.0 is the first to support Crystal 0.30.0.
In order to use IAM instance profiles, one must pass the session token.
It should be as simple as passing it as a header:
x-amz-security-token:FwoGZXIvYXdzEMn...long-string-of-base64-encoded-gibberish...Tsaf=
Taken directly from the Ruby API:
client = Awscr::S3::Client.new ...
bucket = client.bucket("foo")

# enumerate every object in a bucket
bucket.objects.each do |obj|
  puts "#{obj.key} => #{obj.etag}"
end

# batch operations, delete objects in batches of 1k
bucket.objects(prefix: '/tmp-files/').delete

# single object operations
obj = bucket.object('hello')
I use DigitalOcean Spaces and would like to upload a file to a key like 2020/09/26/12/08/5b640175f6a4c62aba090ab1c5a69175. Unfortunately, I got an exception:
#<Awscr::S3::SignatureDoesNotMatch:>.
lib/awscr-s3/src/awscr-s3/http.cr:93:9 in 'handle_response!'
lib/awscr-s3/src/awscr-s3/http.cr:48:7 in 'put'
lib/awscr-s3/src/awscr-s3/client.cr:231:7 in 'put_object'
lib/awscr-s3/src/awscr-s3/file_uploader.cr:33:9 in 'upload'
src/console.cr:15:5 in 'run_console'
On the other hand, the key 2020_09_26_12_08_5b640175f6a4c62aba090ab1c5a69175 works fine, but I need the first one.
Is there anything that could solve this issue, or any workaround?
I'm using this shard to upload a file, and I want it to have public read access. I set these header options, but it doesn't seem to work for me: the object on S3 still doesn't have public read permissions. Is there something I'm missing?
Hey there! Love the shard, so thank you very much. It's fantastic.
Question for you: have you run into issues uploading where the target S3 object key has spaces in it? When I attempt to do that, I get the following error:
HttpVersionNotSupported: The HTTP version specified is not supported. (Awscr::S3::Http::ServerError)
Which is a little odd in itself. When I change the key's spaces to underscores, however, all is well. Thanks so much!
Line 48 in 36c0f3b
Unhandled exception: Nil assertion failed (NilAssertionError)
from /usr/share/crystal/src/nil.cr:108:5 in 'not_nil!'
from lib/awscr-s3/src/awscr-s3/xml.cr:48:9 in 'namespaces'
from lib/awscr-s3/src/awscr-s3/xml.cr:39:12 in 'namespace'
from lib/awscr-s3/src/awscr-s3/xml.cr:29:14 in 'build_path'
from lib/awscr-s3/src/awscr-s3/xml.cr:11:21 in 'string'
from lib/awscr-s3/src/awscr-s3/xml.cr:53:33 in 'Awscr::S3::XML#string<String>:String'
from lib/awscr-s3/src/awscr-s3/exceptions.cr:88:7 in 'from_response'
from lib/awscr-s3/src/awscr-s3/http.cr:114:9 in 'handle_response!'
from lib/awscr-s3/src/awscr-s3/http.cr:88:16 in 'exec'
from lib/awscr-s3/src/awscr-s3/http.cr:46:7 in 'put'
from lib/awscr-s3/src/awscr-s3/client.cr:243:7 in 'put_object'
from lib/awscr-s3/src/awscr-s3/file_uploader.cr:34:9 in 'upload'
Since the version number in the awscr-s3 shard.yml is locked to awscr-signer version 0.3.6, awscr-signer version 0.3.8 can't be used.
Could it use something like version: ~> 0.3.6 instead?
head_object raises XML::Exception. I think most providers use HTTP status codes for HEAD to indicate whether an object exists or not. All of the errors may behave similarly, but I haven't had time to test them.
Tested with minio 2019-07...
You could add return stop if (lo = @last_output) && lo.truncated? && lo.next_token.empty? in Awscr::S3::Paginator::ListObjectsV2#next. Without a continuation token, an infinite loop occurs if the number of objects exceeds the max_keys request parameter.
Thank you
You can't get more than the first page using the list_objects call on the normal client, at least with DigitalOcean.
I'm upgrading a project to Lucky 0.27.2, Crystal 1.0.0, and awscr-s3 0.8.2 (from 0.8.0).
Previously, all files uploaded to S3 without any problems. Since my upgrade, files greater than 5 MB do not upload.
My code:
client = Awscr::S3::Client.new(ENV.fetch("AWS_ACCESS_REGION"), ENV.fetch("AWS_ACCESS_KEY"), ENV.fetch("AWS_SECRET_KEY"))
uploader = Awscr::S3::FileUploader.new(client)
File.open(File.expand_path(path), "r") do |file|
  path = uploader.upload(ENV.fetch("AWS_BUCKET_NAME"), filename, file, {"x-amz-acl" => "public-read"})
end
A file smaller than 5 MB (single request) works:
9:44:15 PM web.1 | file # => #<File:/var/folders/55/y42kx6yj78q75hh7z_dypf7h0000gn/T/.W8eN9gfile>
9:44:15 PM web.1 | ▸ Performing request
9:44:16 PM web.1 | ▸ Sent 200 OK (786.2ms)
A file bigger than 5 MB (multipart) fails:
9:40:38 PM web.1 | file # => #<File:/var/folders/55/y42kx6yj78q75hh7z_dypf7h0000gn/T/.5H2kXrfile>
9:40:38 PM web.1 | ▸ Performing request
9:40:38 PM web.1 | ▸ Performing request
9:40:38 PM web.1 | ▸ Performing request
9:40:38 PM web.1 | ▸ Performing request
9:40:38 PM web.1 | ▸ Performing request
9:40:39 PM web.1 | Unhandled exception in spawn: XAmzContentSHA256Mismatch: The provided 'x-amz-content-sha256' header does not match what was computed. (Awscr::S3::Exception)
9:40:39 PM web.1 | from lib/awscr-s3/src/awscr-s3/http.cr:114:9 in 'handle_response!'
9:40:39 PM web.1 | from lib/awscr-s3/src/awscr-s3/http.cr:88:16 in 'exec'
9:40:39 PM web.1 | from lib/awscr-s3/src/awscr-s3/http.cr:46:7 in 'put'
9:40:39 PM web.1 | from lib/awscr-s3/src/awscr-s3/client.cr:111:14 in 'upload_part'
9:40:39 PM web.1 | from lib/awscr-s3/src/awscr-s3/multipart_file_uploader.cr:87:17 in 'upload_part'
9:40:39 PM web.1 | from lib/awscr-s3/src/awscr-s3/multipart_file_uploader.cr:80:9 in '->'
9:40:39 PM web.1 | from /usr/local/Cellar/crystal/1.0.0/src/primitives.cr:255:3 in 'run'
9:40:39 PM web.1 | from /usr/local/Cellar/crystal/1.0.0/src/fiber.cr:92:34 in '->'
9:40:41 PM web.1 | Unhandled exception in spawn: XAmzContentSHA256Mismatch: The provided 'x-amz-content-sha256' header does not match what was computed. (Awscr::S3::Exception)
9:40:41 PM web.1 | from lib/awscr-s3/src/awscr-s3/http.cr:114:9 in 'handle_response!'
9:40:41 PM web.1 | from lib/awscr-s3/src/awscr-s3/http.cr:88:16 in 'exec'
9:40:41 PM web.1 | from lib/awscr-s3/src/awscr-s3/http.cr:46:7 in 'put'
9:40:41 PM web.1 | from lib/awscr-s3/src/awscr-s3/client.cr:111:14 in 'upload_part'
9:40:41 PM web.1 | from lib/awscr-s3/src/awscr-s3/multipart_file_uploader.cr:87:17 in 'upload_part'
9:40:41 PM web.1 | from lib/awscr-s3/src/awscr-s3/multipart_file_uploader.cr:80:9 in '->'
9:40:41 PM web.1 | from /usr/local/Cellar/crystal/1.0.0/src/primitives.cr:255:3 in 'run'
9:40:41 PM web.1 | from /usr/local/Cellar/crystal/1.0.0/src/fiber.cr:92:34 in '->'
9:40:41 PM web.1 | ▸ Performing request
9:40:41 PM web.1 | ▸ The XML you provided was not well-formed or did not validate against our published schema
https://github.com/taylorfinnell/awscr-s3/blob/master/src/awscr-s3/responses/list_objects_v2.cr#L13 doesn't work with @digitalocean Spaces, at least with large buckets: the key is simply not in the XML response. Presumably they've not implemented the spec, or have implemented an old one (2006?).
Either way, this crashes:
$ crystal run src/lister.cr
Invalid Int32: (ArgumentError)
from /usr/local/Cellar/crystal-lang/0.24.2_1/src/string.cr:418:5 in 'to_i32'
from /usr/local/Cellar/crystal-lang/0.24.2_1/src/string.cr:319:5 in 'to_i'
from lib/awscr-s3/src/awscr-s3/responses/list_objects_v2.cr:28:25 in 'from_response'
from lib/awscr-s3/src/awscr-s3/paginators/list_object_v2.cr:23:7 in 'next'
from /usr/local/Cellar/crystal-lang/0.24.2_1/src/iterator.cr:390:15 in '__crystal_main'
from /usr/local/Cellar/crystal-lang/0.24.2_1/src/crystal/main.cr:11:3 in '_crystal_main'
from /usr/local/Cellar/crystal-lang/0.24.2_1/src/crystal/main.cr:112:5 in 'main_user_code'
from /usr/local/Cellar/crystal-lang/0.24.2_1/src/crystal/main.cr:101:7 in 'main'
from /usr/local/Cellar/crystal-lang/0.24.2_1/src/crystal/main.cr:135:3 in 'main'
I've been experiencing version issues with the shrine shard. Locally, the latest tag is used (v0.7.0) while Travis seems to use the master branch. I don't know how and why, but specs fail locally, while they pass on Travis CI, and vice versa.
The issues I was having were related to http vs https calls using Webmock. As it turns out, the master uses https:
https://github.com/taylorfinnell/awscr-s3/blob/master/src/awscr-s3/http.cr#L103
https://github.com/taylorfinnell/awscr-s3/blob/master/src/awscr-s3/http.cr#L113
While the latest tag doesn't:
https://github.com/taylorfinnell/awscr-s3/blob/v0.7.0/src/awscr-s3/http.cr#L119
https://github.com/taylorfinnell/awscr-s3/blob/v0.7.0/src/awscr-s3/http.cr#L129
There was also an issue with http.head
because master takes headers:
https://github.com/taylorfinnell/awscr-s3/blob/master/src/awscr-s3/http.cr#L57
While v0.7.0 doesn't:
https://github.com/taylorfinnell/awscr-s3/blob/v0.7.0/src/awscr-s3/http.cr#L73
Could you release a v0.8.0 with the latest changes? I've been banging my head against the wall while figuring out why those specs were failing 😄
S3 supports the Range HTTP header. The ability to pass this header with get_object should be easy to implement and would probably "just work".
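A hypothetical caller-side sketch of what that could look like, assuming get_object grew an optional headers parameter (this is not the current API; all names are placeholders):

```crystal
# Hypothetical sketch only: assumes Client#get_object accepts extra
# request headers, which it may not yet do.
require "awscr-s3"

client = Awscr::S3::Client.new("us-east-1", ENV["AWS_KEY"], ENV["AWS_SECRET"])

# Ask S3 for only the first 1024 bytes of the object.
resp = client.get_object("my-bucket", "big-file.bin",
  headers: {"Range" => "bytes=0-1023"})
puts resp.body.bytesize
```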
Hi!
I'm using your project to upload files to S3, and it has worked without problems so far, but when I run spec tests against it, I get this error:
In lib/awscr-s3/src/awscr-s3/multipart_file_uploader.cr:46:26
46 | @pending << Part.new(
^--
Error: no overload matches 'Awscr::S3::Part.new', offset: (Float64 | Int32), size: Int32, number: Int32
Overloads are:
- Awscr::S3::Part.new(offset : Int32, size : Int32, number : Int32)
Couldn't find overloads for these types:
- Awscr::S3::Part.new(offset : Float64, size : Int32, number : Int32)
So I've found that the compute_default_part_size function apparently returns a Float64. If I convert the output to an Int32 at line 42, it compiles.
I'm sorry for the trouble; I just want to make sure this works consistently.
awscr-s3/src/awscr-s3/presigned/url.cr, line 33 in 3bd5718: request.host needs to be changed to request.hostname.
In the docs you have create_bucket, but it looks like the method is called put_bucket:
https://github.com/taylorfinnell/awscr-s3/blob/master/README.md#create-a-bucket
https://github.com/taylorfinnell/awscr-s3/blob/master/src/awscr-s3/client.cr#L55
I have a scenario (full thread) where I am generating a file too big for RAM. I have successfully gzipped it in Crystal before writing it to disk.
Is it possible to stream this file directly to S3?
require "csv"
require "compress/gzip"
# time crystal run --release convert_gzip.cr
counter = 0
output_file_name = "data_after.csv.gz"
input_file_name = "data_before.csv" # placeholder; the original path was not shown
# File.open(output_file_name = "data_after.csv", "w") do |outfile|
Compress::Gzip::Writer.open(output_file_name) do |outfile|
  File.open(input_file_name) do |infile|
    reader = CSV.new(infile, headers: true)
    index_cust_id = reader.indices["cust_id"]
    CSV.build(outfile) do |writer|
      writer.row reader.headers
      reader.each do |entry|
        row = entry.row.to_a
        row[index_cust_id] = "#{entry.row[index_cust_id]}{loop_index}"
        writer.row row
        counter += 1
        if counter % 2000000 == 0
          outfile.flush
        end
      end
    end
  end
  outfile.flush
end
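Since the multipart path in FileUploader reads its IO in parts (the 5 MB threshold elsewhere in these issues suggests as much), handing the already-gzipped file back to the uploader may stream it without holding the whole thing in RAM. A sketch, with bucket and credentials as placeholders; whether only one part at a time is buffered depends on the uploader's internals:

```crystal
# Sketch: stream the gzipped file from disk through FileUploader.
# Treat this as an experiment, not a guarantee of constant memory use.
require "awscr-s3"

client   = Awscr::S3::Client.new("us-east-1", ENV["AWS_KEY"], ENV["AWS_SECRET"])
uploader = Awscr::S3::FileUploader.new(client)

File.open("data_after.csv.gz", "r") do |file|
  uploader.upload("my-bucket", "data_after.csv.gz", file)
end
```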
awscr-s3/src/awscr-s3/client.cr
Line 231 in d16e884
If I use slashes in the object name, like uploads/image.jpg, it gets turned into uploads%2Fimage.jpg, which is unexpected. The method that does this is URI.encode_www_form, but maybe you meant to use URI.encode instead? It leaves the slash, but I'm not entirely sure of the differences.
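For reference, the two escaping behaviors can be compared directly in Crystal's stdlib. In current Crystal the old URI.encode has been split into URI.encode_path and URI.encode_uri_component; encode_path is the one that preserves slashes:

```crystal
# The two escaping behaviors side by side (Crystal stdlib).
require "uri"

puts URI.encode_www_form("uploads/image.jpg") # "/" becomes %2F
puts URI.encode_path("uploads/image.jpg")     # "/" is preserved
```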
First of all, thank you for the work you've done to produce such a wonderful library!
I'm using a couple of shards (https://github.com/jwoertink/sitemapper and https://github.com/jetrockets/shrine.cr) that use ~> 0.8.0 of this library, but with Crystal 0.35.1 they throw a deprecation warning, and with the nightly build they throw errors.
It looks like this issue was already resolved here: taylorfinnell/awscr-signer#51
Would it be possible to cut a release from the latest code in master
so that other shards can safely upgrade to remove this warning?
Should we add a method for easily uploading a directory?
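A userland sketch of what that could look like on top of the existing FileUploader; upload_directory is a made-up name, not shard API, and keys mirror the paths relative to the given directory:

```crystal
# Sketch of a hypothetical directory upload helper.
require "awscr-s3"

def upload_directory(uploader : Awscr::S3::FileUploader, bucket : String, dir : String)
  Dir.glob(File.join(dir, "**", "*")) do |path|
    next unless File.file?(path)           # skip subdirectories
    key = Path[path].relative_to(dir).to_s # key mirrors the relative path
    File.open(path, "r") do |file|
      uploader.upload(bucket, key, file)
    end
  end
end
```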
Presigned host is calculated correctly: https://github.com/taylorfinnell/awscr-s3/blob/master/src/awscr-s3/presigned/url.cr#L65
Direct client is not: https://github.com/taylorfinnell/awscr-s3/blob/master/src/awscr-s3/http.cr#L19
Thanks for creating the library! It works, but I ran into two issues. The first is merely an observation, while the second is a question about the implementation of X-Amz-Security-Token, which looks incomplete.
What I was trying to do is list buckets like aws s3 ls. I'm using aws-vault and have the credentials available as environment variables.
I was assuming the client would automatically default to the variables from the environment, as many libraries do. However, when setting them manually, I also ran into a problem. Here is what I tried:
require "awscr-s3"
client = Awscr::S3::Client.new(ENV["AWS_REGION"], ENV["AWS_ACCESS_KEY_ID"], ENV["AWS_SECRET_ACCESS_KEY"])
puts client.list_buckets.buckets
It turns out that aws-vault generates temporary credentials: there is an AWS_SECURITY_TOKEN that has to be passed in the X-Amz-Security-Token header. When I tested with non-temporary credentials, the code above worked.
Regarding the security token, I did find it implemented in the awscr-signer library, in the Awscr::Signer::Signers::V4 class. But from awscr-s3, I did not see a way to pass the value. Thus, it is always nil, which explains why AWS fails with:
Unhandled exception: The AWS Access Key Id you provided does not exist in our records. (Awscr::S3::InvalidAccessKeyId)
from lib/awscr-s3/src/awscr-s3/http.cr:114:9 in 'handle_response!'
from lib/awscr-s3/src/awscr-s3/http.cr:88:16 in 'exec'
from lib/awscr-s3/src/awscr-s3/http.cr:83:13 in 'exec:headers'
from lib/awscr-s3/src/awscr-s3/http.cr:66:7 in 'get'
from lib/awscr-s3/src/awscr-s3/client.cr:46:7 in 'list_buckets'
from src/crystal-s3-hallo-world.cr:4:6 in '__crystal_main'
When I change the code in line 17 from
def initialize(... @amz_security_token : String? = nil)
to
def initialize(... @amz_security_token : String? = ENV["AWS_SECURITY_TOKEN"])
it works.
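A slightly safer variant of the same idea, so that static (non-STS) credentials keep working when the variable is absent; this is just a sketch of a possible default, not the shard's actual code:

```crystal
# Fall back to nil when no session token is set. AWS_SESSION_TOKEN is
# the more standard name; aws-vault also exports AWS_SECURITY_TOKEN.
def session_token : String?
  ENV["AWS_SESSION_TOKEN"]? || ENV["AWS_SECURITY_TOKEN"]?
end
```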
My questions:
I'm using Crystal 1.4.1 and my shard.lock looks like this:
version: 2.0
shards:
  awscr-s3:
    git: https://github.com/taylorfinnell/awscr-s3.git
    version: 0.8.3
  awscr-signer:
    git: https://github.com/taylorfinnell/awscr-signer.git
    version: 0.8.2
The environment setup that I get from aws-vault looks like this (I think the names of the environment variables are pretty standard):
AWS_DEFAULT_REGION=us-east-1
AWS_REGION=us-east-1
AWS_SESSION_EXPIRATION=2022-05-06T16:50:53Z
AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXuXXXXXXXXXXXXXXXXXXXXX
AWS_ACCESS_KEY_ID=XXXXXXXXTXXXXXXXXXXX
AWS_SESSION_TOKEN=IQo..xXxXxxxXxXxXXXxxxxxxxxxxxXxXXXxXXXxxxXxXXXXXXXXXXxxxxxXxxxxXxxxXXXXXXXxxxXXXxXxxXXxXXxXXxxXXxXXxxXxxxXxxXxxxXxxXXXxXxxxxxXxxXxxxXxxxXXxxxXxxXXXxxxxxxxxxxxxXXXXXxxxXXXxXXxxXXXxXXXxXXXXXxXxxXxXxxxxxXxxXxxXxXxxxxxxXXXxxXxxxXxxxxxxXXxXxXXXxxXXxxXXxxXxxxXxXxxxxxXxXxxXxXxXxXXXxXxxXxXXxXxXxXXxXxXxxxxxxXXXxXXxxxXxxxXXXxxXxxxXxxxXXxXxXXxxxxxxxXXXXXxxXxxxxxxxXxXXxxXxxXXxxXxXxxxXxXXxXXxXXxXXXXXxxxxXXXxxxXXxxXXXxXXxxxXXXxxxXxXXxxXxXxXXxxxXxXxxxXxxXXXxxxxxxXxxxxxxXXXxXXXxxxxxxXXxXXXxxxXxXXxxXxXXxxxxxXxxXXXxxxXXxxXxxxxxXXxxXxXXXXxXxxxXxXxXxxxxxxxXxXxxXXXxXxxxXXXxxXXxXxXXXxxXxXXXXxxXXxxXXXXxxXxXXxxxxxXXXxxxXxXXxXxXxxXxxxXXxxXxxxXXxXxxxXxxxxXxxxXxXXXXxXxXxxxxXxxxxxXXXXXXXXxxxXxXxXxXxxxxXXXxxxXXXxxxXXXXXxxxXXxXxXxXXXXXxxxxxXxXXxXXXXXxXXxxXxXxxxxxxXxXxxxxxxxXxXXxxxxxXxxxxxxxxxxxxxxxxXXXXXxxxXXxxxxXxxXxXxxXxXxxXxXXXxxxxxxXxXxxXXxxxxxxXxXxxxxxxXXxxxXxxxXxxXxXxxxxxXxXxxxXXXXxXxxXXxXXXxXxxXxXxXXxxxXxxxxXxXxxxxXXxxxxxxxxXZQ=
AWS_SECURITY_TOKEN=IQo..xXxXxxxXxXxXXXxxxxxxxxxxxXxXXXxXXXxxxXxXXXXXXXXXXxxxxxXxxxxXxxxXXXXXXXxxxXXXxXxxXXxXXxXXxxXXxXXxxXxxxXxxXxxxXxxXXXxXxxxxxXxxXxxxXxxxXXxxxXxxXXXxxxxxxxxxxxxXXXXXxxxXXXxXXxxXXXxXXXxXXXXXxXxxXxXxxxxxXxxXxxXxXxxxxxxXXXxxXxxxXxxxxxxXXxXxXXXxxXXxxXXxxXxxxXxXxxxxxXxXxxXxXxXxXXXxXxxXxXXxXxXxXXxXxXxxxxxxXXXxXXxxxXxxxXXXxxXxxxXxxxXXxXxXXxxxxxxxXXXXXxxXxxxxxxxXxXXxxXxxXXxxXxXxxxXxXXxXXxXXxXXXXXxxxxXXXxxxXXxxXXXxXXxxxXXXxxxXxXXxxXxXxXXxxxXxXxxxXxxXXXxxxxxxXxxxxxxXXXxXXXxxxxxxXXxXXXxxxXxXXxxXxXXxxxxxXxxXXXxxxXXxxXxxxxxXXxxXxXXXXxXxxxXxXxXxxxxxxxXxXxxXXXxXxxxXXXxxXXxXxXXXxxXxXXXXxxXXxxXXXXxxXxXXxxxxxXXXxxxXxXXxXxXxxXxxxXXxxXxxxXXxXxxxXxxxxXxxxXxXXXXxXxXxxxxXxxxxxXXXXXXXXxxxXxXxXxXxxxxXXXxxxXXXxxxXXXXXxxxXXxXxXxXXXXXxxxxxXxXXxXXXXXxXXxxXxXxxxxxxXxXxxxxxxxXxXXxxxxxXxxxxxxxxxxxxxxxxXXXXXxxxXXxxxxXxxXxXxxXxXxxXxXXXxxxxxxXxXxxXXxxxxxxXxXxxxxxxXXxxxXxxxXxxXxXxxxxxXxXxxxXXXXxXxxXXxXXXxXxxXxXxXXxxxXxxxxXxXxxxxXXxxxxxxxxXZQ=
Thank you for this shard! I'm using it on https://archive.fm/ and it's working great.
I'd like to be able to ask if a GetObject request is successful or not. HTTP::Client::Response provides both #success? and #status_code / #status. At a bare minimum, it would be sufficient to just provide an accessor for the http response itself within GetObjectOutput.
What do you think? I'll be happy to submit a PR if you think it's worthwhile to add.
Hey! I got a problem with Shrine (https://github.com/jetrockets/shrine.cr). In some cases (one of our applications that worked with 0.35 before, and the new Shrine specs), we are getting the error below. Maybe it is not really a problem of FileUploader, but for sure not all subclasses of IO respond to size.
Do you have any ideas how this can be solved?
In src/shrine/storage/s3.cr:57:18
57 | uploader.upload(bucket, object_key(id), io, options)
^-----
Error: instantiating 'Awscr::S3::FileUploader#upload(String, String, IO+, Hash(String, String))'
In lib/awscr-s3/src/awscr-s3/file_uploader.cr:32:13
32 | if io.size < UPLOAD_THRESHOLD
^---
Error: undefined method 'size' for IO::ARGF (compile-time type is IO+)
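One possible call-site workaround while the uploader requires a sized IO: copy the unsized stream into an IO::Memory first, which does respond to size. This trades memory for compatibility, so it is only sensible for payloads that fit in RAM; the helper name is invented:

```crystal
# Buffer an arbitrary IO into memory so FileUploader sees an IO with
# a #size. Fine for small uploads only.
require "awscr-s3"

def upload_unsized(uploader : Awscr::S3::FileUploader, bucket : String, key : String, io : IO)
  buffered = IO::Memory.new
  IO.copy(io, buffered)  # drain the unsized stream into memory
  buffered.rewind        # reset position so the uploader reads from the start
  uploader.upload(bucket, key, buffered)
end
```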
Would the form @client need to send a delete instead of a post? e.g.:
failed to compile:
Showing last frame. Use --error-trace for full trace.
In /usr/local/Cellar/crystal/0.30.0_1/src/comparable.cr:83:16
83 | abstract def <=>(other : T)
^--
Error: abstract `def Comparable(T)#<=>(other : T)` must be implemented by Awscr::Signer::Header
Would you be interested in a PR to add support for DigitalOcean Spaces and Minio?
This basically implies allowing the host (and, for Minio, the port) to be set. Something like:
# Spaces
Awscr::S3::Client.new(
  "ams1",
  "access_key",
  "secret_key",
  host: "digitaloceanspaces.com"
)

# Minio
Awscr::S3::Client.new(
  "ams1",
  "access_key",
  "secret_key",
  host: "127.0.0.1",
  port: 9000
)
This will enable the use of most operations.
Presigned URLs in DigitalOcean Spaces are a different story: they can only use the V2 signature (for now). In order to get those to work I'd probably need to add V2 support to https://github.com/taylorfinnell/awscr-signer (though I'm not sure it's worth the trouble; I might be able to work around this in my own project).
I would like to hear your thoughts on this 🤓
It may be strange that FileUploader#upload can return either a Response::PutObjectOutput or a Response::CompleteMultipartUploadOutput depending on the size of the uploaded file. The former response cannot contain things like the bucket or key (it only has headers in the response, no body), whereas the second one does have bucket and key. Does it make sense to return true on success and leave it up to the method callers to handle exceptions?
With a large number of requests I get the following error:
Unhandled exception: Error connecting to '...': Can't assign requested address (Socket::ConnectError)
Could a connection pool with persistent connections be used instead of 1 connection per request?
Hi,
I'm having trouble using your shard with DigitalOcean Spaces.
I'm simply trying to list all buckets, and I'm getting Exception: SignatureDoesNotMatch.
client = Awscr::S3::Client.new("ams3", ENV["AMS3_SPACES_KEY"], ENV["AMS3_SPACES_SECRET"], endpoint: "https://ams3.digitaloceanspaces.com")
resp = client.list_buckets
resp.buckets
What is weird about it is that presigned URLs work without issue. So my key and secret are correct.
Full error:
Exception: SignatureDoesNotMatch: (Awscr::S3::Http::ServerError)
from lib/awscr-s3/src/awscr-s3/http.cr:111:31 in 'handle_response!'
from lib/awscr-s3/src/awscr-s3/http.cr:86:7 in 'get'
from lib/awscr-s3/src/awscr-s3/client.cr:46:7 in 'list_buckets'
from src/dwoom-account-backend-image_manipulation.cr:23:5 in '->'
from lib/kemal/src/kemal/route.cr:255:3 in '->'
from lib/kemal/src/kemal/route_handler.cr:255:3 in 'process_request'
from lib/kemal/src/kemal/route_handler.cr:17:7 in 'call'
from /usr/local/Cellar/crystal/0.29.0/src/http/server/handler.cr:26:7 in 'call_next'
from lib/kemal/src/kemal/websocket_handler.cr:13:14 in 'call'
from /usr/local/Cellar/crystal/0.29.0/src/http/server/handler.cr:26:7 in 'call_next'
from lib/kemal/src/kemal/static_file_handler.cr:14:11 in 'call'
from /usr/local/Cellar/crystal/0.29.0/src/http/server/handler.cr:26:7 in 'call_next'
from lib/kemal/src/kemal/exception_handler.cr:8:7 in 'call'
from /usr/local/Cellar/crystal/0.29.0/src/http/server/handler.cr:26:7 in 'call_next'
from lib/kemal/src/kemal/log_handler.cr:10:35 in 'call'
from /usr/local/Cellar/crystal/0.29.0/src/http/server/handler.cr:26:7 in 'call_next'
from lib/kemal/src/kemal/init_handler.cr:12:7 in 'call'
from /usr/local/Cellar/crystal/0.29.0/src/http/server/request_processor.cr:39:11 in 'process'
from /usr/local/Cellar/crystal/0.29.0/src/http/server/request_processor.cr:16:3 in 'process'
from /usr/local/Cellar/crystal/0.29.0/src/http/server.cr:462:5 in 'handle_client'
from /usr/local/Cellar/crystal/0.29.0/src/http/server.cr:428:13 in '->'
from /usr/local/Cellar/crystal/0.29.0/src/fiber.cr:255:3 in 'run'
from /usr/local/Cellar/crystal/0.29.0/src/fiber.cr:47:34 in '->'
It would be really nice if we implemented automatic Content-Type tagging based on file extension. See https://github.com/andrewrk/node-s3-client for an example.
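Crystal's standard library already maps file extensions to MIME types, so a sketch of the lookup side could be as small as this; wiring the result into the uploader's default headers is the part that would need a PR:

```crystal
# Look up a Content-Type from a filename, with a conservative default.
require "mime"

def content_type_for(filename : String) : String
  MIME.from_filename?(filename) || "application/octet-stream"
end

puts content_type_for("photo.png") # typically "image/png" with the default registry
```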
I'm getting this error:
Unhandled exception: Hostname lookup for s3-us-east1.amazonaws.com failed: No address found (Socket::Addrinfo::Error)
from /usr/local/Cellar/crystal/0.34.0/src/socket/tcp_socket.cr:124:9 in 'initialize'
from /usr/local/Cellar/crystal/0.34.0/src/socket/tcp_socket.cr:27:3 in 'new'
from /usr/local/Cellar/crystal/0.34.0/src/http/client.cr:762:5 in 'socket'
from /usr/local/Cellar/crystal/0.34.0/src/http/client.cr:650:19 in 'send_request'
from /usr/local/Cellar/crystal/0.34.0/src/http/client.cr:587:5 in 'exec_internal_single'
from /usr/local/Cellar/crystal/0.34.0/src/http/client.cr:574:5 in 'exec_internal'
from /usr/local/Cellar/crystal/0.34.0/src/http/client.cr:570:5 in 'exec'
from /usr/local/Cellar/crystal/0.34.0/src/http/client.cr:692:5 in 'exec'
from /usr/local/Cellar/crystal/0.34.0/src/http/client.cr:396:3 in 'put'
from lib/awscr-s3/src/awscr-s3/http.cr:47:7 in 'put'
from lib/awscr-s3/src/awscr-s3/client.cr:231:7 in 'put_object'
from lib/awscr-s3/src/awscr-s3/file_uploader.cr:33:9 in 'upload'
from src/xxxxxx.cr:51:13 in '__crystal_main'
from /usr/local/Cellar/crystal/0.34.0/src/crystal/main.cr:105:5 in 'main_user_code'
from /usr/local/Cellar/crystal/0.34.0/src/crystal/main.cr:91:7 in 'main'
from /usr/local/Cellar/crystal/0.34.0/src/crystal/main.cr:114:3 in 'main'
I noticed it's looking for s3-us-east1.amazonaws.com. Shouldn't it be s3.us-east1.amazonaws.com instead?
From Working with Amazon S3 Buckets > AWS SDK > Creating a client:
I'm not sure, but this may be the related line of code:
URI.parse("https://#{SERVICE_NAME}-#{@region}.amazonaws.com")
It would be handy to support unsigned requests to the S3 API, for the use case of public-read buckets.