marcel / aws-s3

AWS-S3 is a Ruby implementation of Amazon's S3 REST API

Home Page: http://amazon.rubyforge.org

License: MIT License

Ruby 99.27% CSS 0.73%

aws-s3's People

Contributors

aercolino, bokor, jordimassaguerpla, marcandre, marcel, nbibler, orenhe, pelargir, rafbm, technoweenie, yuki24


aws-s3's Issues

proper handling of errors for s3 copy

From the Documentation:

There are two opportunities for a copy request to return an error. One can occur when Amazon S3 receives the copy request and the other can occur while Amazon S3 is copying the files. If the error occurs before the copy operation starts, you receive a standard Amazon S3 error. If the error occurs during the copy operation, the error response is embedded in the 200 OK response. This means that a 200 OK response can contain either a success or an error. Make sure to design your application to parse the contents of the response and handle it appropriately.

Current error checking is implemented as:

def error?
  !success? && response['content-type'] == 'application/xml' && parsed.root == 'error'
end

where success? only returns true for response code 200..299

Basically, it looks like aws-s3 won't handle the case specified in the documentation
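
A minimal sketch of a broader check, assuming response and parsed behave as in the snippet above: since a copy response whose body parses to an error document is a failure regardless of status code, the !success? guard could simply be dropped.

def error?
  # An S3 COPY can return 200 OK with an <Error> document in the body,
  # so don't require a non-2xx status before checking for an error payload.
  response['content-type'] == 'application/xml' && parsed.root == 'error'
end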

"Invalid group: uri" while setting :authenticated_read

My object was private and I wanted to grant the READ permission to authenticated users:

policy = S3Object.acl(name, bucket)
policy.grants << ACL::Grant.grant(:authenticated_read)
# persist it:
S3Object.acl(name, bucket, policy)  # BOOM! doesn't work

The exception is from the server: "Invalid group uri". This is what it was trying to send (for the offending grant):

<Grantee xsi:type="Group" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <URI>http://acs.amazonaws.com/groups/global/Authenticated</URI>
</Grantee>

I think Amazon choked because it wanted this URI instead:

http://acs.amazonaws.com/groups/global/AuthenticatedUsers
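
As a stopgap, one could build the grant and then fix up the group URI before persisting the policy. A sketch only: the group_uri accessor is an assumption and may not match the gem's real grantee API.

grant = ACL::Grant.grant(:authenticated_read)
# Hypothetical accessor; point the grantee at the URI S3 actually accepts.
grant.grantee.group_uri = 'http://acs.amazonaws.com/groups/global/AuthenticatedUsers'
policy.grants << grant
S3Object.acl(name, bucket, policy)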

S3 0.6.1 aborts with wrong number of arguments

Code snippet (a Rake task):
task :boog => :environment do
  require 'aws/s3'
  include AWS::S3

  Base.establish_connection!(
    :access_key_id     => S3[:access_key_id],
    :secret_access_key => S3[:secret_access_key]
  )

  begin
    puts "Checking delete..."
    S3Object.delete("TEST", S3[:bucket])
    puts "...Success"
  rescue S3Exception => e
    p e.message
    puts "Fail: Could not delete"
  end
end

stack trace:
trunk/vendor/gems/aws-s3-0.6.1/lib/aws/s3/extensions.rb:137:in `__method__'
trunk/vendor/gems/aws-s3-0.6.1/lib/aws/s3/extensions.rb:137:in `expirable_memoize'
trunk/vendor/gems/aws-s3-0.6.1/lib/aws/s3/extensions.rb:176:in `canonical_string'
trunk/vendor/gems/aws-s3-0.6.1/lib/aws/s3/authentication.rb:72:in `encoded_canonical'
trunk/vendor/gems/aws-s3-0.6.1/lib/aws/s3/authentication.rb:94:in `initialize'
trunk/vendor/gems/aws-s3-0.6.1/lib/aws/s3/connection.rb:130:in `new'
trunk/vendor/gems/aws-s3-0.6.1/lib/aws/s3/connection.rb:130:in `authenticate!'
trunk/vendor/gems/aws-s3-0.6.1/lib/aws/s3/connection.rb:34:in `request'

problem with installation

In my Gemfile I added gem 'aws-s3', :git => 'git://github.com/marcel/aws-s3.git' and this message appears instead of installing:

Could not find gem 'aws-s3 (>= 0) ruby' in git://github.com/marcel/aws-s3.git (at master).
Source does not contain any versions of 'aws-s3 (>= 0) ruby'

Anyone know what is wrong?
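
Bundler can only install from a git source if the repository contains a .gemspec file at its root, and this error means it didn't find one at master. Assuming you don't need unreleased changes, pointing at the released gem on RubyGems avoids the problem:

gem 'aws-s3'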

S3Object.store $stdin

Hey,

I've been trying to build a tool that allows me to put some data from STDIN into S3, and I have hit a brick wall with your S3 library. When I try to call AWS::S3::S3Object.store(path, $stdin, bucket) I get an exception thrown...

/Library/Ruby/Gems/1.8/gems/aws-s3-0.6.2/lib/aws/s3/connection.rb:41:in `request': undefined method `size' for #<IO:0x106464bd0> (NoMethodError)
    from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/http.rb:543:in `start'
    from /Library/Ruby/Gems/1.8/gems/aws-s3-0.6.2/lib/aws/s3/connection.rb:52:in `request'
    from /Library/Ruby/Gems/1.8/gems/aws-s3-0.6.2/lib/aws/s3/base.rb:69:in `request'
    from /Library/Ruby/Gems/1.8/gems/aws-s3-0.6.2/lib/aws/s3/base.rb:88:in `put'
    from /Library/Ruby/Gems/1.8/gems/aws-s3-0.6.2/lib/aws/s3/object.rb:241:in `store'
    from ./s3pipe.rb:115

Everything works great if I call $stdin.read; however, the data I'm piping into my Ruby program is upwards of 2GB, and I'm running this on low-memory machines.

By the looks of things, you're relying on a few methods that are only defined in the File class, and not handling raw IO objects. It would be awesome if you could fix this bug :-)! I can't wait to open source this little Ruby file I'm writing!

Thanks.
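
Until the gem handles plain IO objects, one workaround is to spool the stream to a Tempfile, which does respond to #size, trading memory for disk. A sketch assuming path and bucket as in the call above:

require 'tempfile'

tmp = Tempfile.new('s3pipe')
tmp.binmode
# Copy STDIN to disk in 1MB chunks so the 2GB stream never sits in memory.
while (chunk = $stdin.read(1024 * 1024))
  tmp.write(chunk)
end
tmp.rewind
AWS::S3::S3Object.store(path, tmp, bucket)
tmp.close!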

AWS::S3::Bucket.objects

I have a bucket on S3 with 5 directories in it. When I try to get all the objects in the bucket from the Rails console using

AWS::S3::Bucket.objects(bucket_name)

it gives me objects from only the first directory. I do not get objects from the remaining 4 directories.

Can you please help me out?
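
For what it's worth, S3 list requests return at most 1,000 keys per call, sorted lexicographically, which can look like "only the first directory". A sketch of paging through the whole bucket with the :marker option (though note a separate report below that :marker may be broken in some gem versions):

objects = []
marker = nil
loop do
  batch = AWS::S3::Bucket.objects(bucket_name, :marker => marker)
  break if batch.empty?
  objects.concat(batch)
  marker = batch.last.key  # resume listing after the last key we saw
end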

mybucket['file_name'] returns nil while S3Object.find('file_name', 'bucket_name') returns the file

This bug does not happen consistently. When I put this gem under a capacity test of 100 requests/hour, it happens around 5% of the time.

I have a file on S3 and I am sure it is there and valid. If I do mybucket['file_name'] I get nil. But for the same file if I do S3Object.find('file_name', 'bucket_name') it returns the file.

The documentation on http://amazon.rubyforge.org/ gave me the impression that I can use these two methods interchangeably and expect the same result.

Find doesn't work on buckets with many thousand items (patch included)

The current find wasn't working for me on a bucket with a few thousand items that I'm using to cache documents. I was basically trying to find a doc, then store it if it didn't exist. But I could never find the document.

If I switched to a brand new bucket with zero items, it saved items properly.

Digging through the find code, it seemed like we didn't find things because we weren't in the first 'chunk' of the bucket.

Attached is a patch that will go through each chunk until it finds the item, or raise a NoSuchKey error if the item isn't found.

It fixed my problem locally.
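
The attached patch isn't reproduced here, but here is a sketch of the approach it describes (find_in_chunks is a hypothetical helper name, and the NoSuchKey constructor usage is an assumption about the gem's internals):

def find_in_chunks(key, bucket_name)
  marker = nil
  loop do
    chunk = Bucket.objects(bucket_name, :marker => marker)
    # An empty chunk means we've walked the whole bucket without a match.
    raise NoSuchKey.new("No such key '#{key}'", bucket_name) if chunk.empty?
    found = chunk.detect { |object| object.key == key }
    return found if found
    marker = chunk.last.key
  end
end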

Kernel __method__ collision

Just a heads up, I think there's a collision with ruby facets on the Kernel method patching. If anybody gets a 1 for 0 argument error, chances are something is loading facets and patching Kernel after aws-s3 has been loaded up.

POST Object restore support

Hi.

I accidentally backed up my whole bucket to glacier. This is great, except that the files are no longer in S3. I need to restore them all to S3 and since there are a lot - I would prefer to do this programmatically.

I have tried hacking together a method on S3Object called restore_form_glacier but I am having no luck with the signature.

Any plans to support this in the gem? Any ideas why the signature is not working for my custom method?
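
For reference, the REST operation is POST Object restore: a POST to the object's key with the restore subresource and a small XML body (bucket and key below are placeholders):

POST /my-archived-key?restore HTTP/1.1
Host: my-bucket.s3.amazonaws.com
Date: ...
Authorization: AWS access_key:signature

<RestoreRequest xmlns="http://s3.amazonaws.com/doc/2006-03-01">
  <Days>7</Days>
</RestoreRequest>

One likely cause of the signature trouble: with signature version 2, the ?restore subresource has to be included in the CanonicalizedResource that gets signed, and the gem's canonical_string may not account for subresources it doesn't know about.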

AWS::S3::S3Object.copy method doesn't work with additional options

The AWS::S3::S3Object.copy method doesn't properly merge the options hash passed to it into the store call it makes internally.

steps to reproduce:

  1. AWS::S3::S3Object.copy(source_path, copy_path, bucket, :access => :public_read)
  2. creates copy of file at copy_path
  3. trying to access the copy_path file will return "access denied" xml in Firefox

workaround:
use AWS::S3::S3Object.store(copy_path, open(AWS::S3::S3Object.url_for(source_path, bucket)), bucket, :access => :public_read)
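
A lighter workaround sketch, avoiding the re-download through url_for: copy first, then grant access on the copy (Grant.grant(:public_read) follows the documented ACL API):

AWS::S3::S3Object.copy(source_path, copy_path, bucket)
policy = AWS::S3::S3Object.acl(copy_path, bucket)
policy.grants << AWS::S3::ACL::Grant.grant(:public_read)
AWS::S3::S3Object.acl(copy_path, bucket, policy)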

Ruby 1.9 Encoding Issue

Adding the following as the first line of lib/aws/s3/extensions.rb fixes a problem where the gem will not load in Ruby 1.9:

# encoding: BINARY

Without this you will get the error:

gems/aws-s3-0.6.2/lib/aws/s3/extensions.rb:84: invalid multibyte escape: /[\x80-\xFF]/

Allow setting of open_timeout

Sometimes we get timeouts from S3 that are on the order of 10 minutes. We'd much prefer to have timeouts set to something under a minute so that our application can deal with the failure and move on.
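
A workaround sketch in the meantime: reach into the live connection's Net::HTTP object (it's stored in the @http instance variable; there may be no public accessor) and set the timeouts directly. Values are illustrative.

http = AWS::S3::Base.connection.instance_variable_get(:@http)
http.open_timeout = 10  # seconds allowed for the TCP connect
http.read_timeout = 30  # seconds allowed for each response read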

Object#returning has been deprecated in favor of Object#tap

I have an authenticated download method in my asset model as follows:

AWS::S3::S3Object.url_for(upload.path(style || upload.default_style), upload.bucket_name, :use_ssl => upload.s3_protocol == 'https', :expires_in => expires_in)

This generates a secure, expiring link for the asset. After upgrading from Ruby 1.8.7/Rails 2.3.5 to Ruby 1.9.2/Rails 3.0.3 I get the warning in the subject of this issue.

P.S. Is there a more active fork of this gem? A bit disconcerting that Paperclip uses it for S3 when it hasn't been updated in so long. Cheers.

License

Is there a license for this? If so I can't seem to find it.

Conflict with right_http_connection-1.2.4

... right_http_connection-1.2.4/lib/net_fix.rb

... aws-s3-0.6.2/lib/aws/s3/extensions.rb

Original version of send_request_with_body_stream takes 5 arguments, but rewritten one takes only 4.

Rename should copy acl too by default

I'm not sure if this is an issue or a feature request, but I believe that when you use the S3Object#rename method, it should pass the :copy_acl => true option to the copy operation.

And anyway, it should be documented...
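
In the meantime, a sketch of the equivalent done by hand, assuming the installed version's copy supports the :copy_acl option:

# Rename is copy-then-delete; doing it manually lets us carry the ACL over.
AWS::S3::S3Object.copy(old_key, new_key, bucket, :copy_acl => true)
AWS::S3::S3Object.delete(old_key, bucket)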

HTTP SSL Options Error

AWS::S3::Connection.create_connection has the following bit of logic...

http.use_ssl = !options[:use_ssl].nil? || options[:port] == 443

:use_ssl => false thus turns SSL on - which is incorrect/unexpected behaviour
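
A one-line fix sketch: treat only a truthy :use_ssl (or port 443) as SSL, rather than any non-nil value:

http.use_ssl = options[:use_ssl] || options[:port] == 443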

Question about usage

I have an empty folder (bar) in my s3 bucket

bucket = Bucket.find("my.bucket", :prefix => "bar")
bucket.objects.size # => 1000

I'm fairly certain this should return 0, since there aren't any files that match :prefix, yet I'm getting back a bunch of files from other folders on s3.

Is this expected behavior?

:marker option no longer seems to work with AWS::S3::Bucket.objects()

These both return the same set of 1000 elements, despite all filenames starting with 'p' and using 'q' as the :marker option.

ruby-1.9.3-p125 :012 > AWS::S3::Bucket.objects(@bucket_label, :prefix => @remote_path).first
=> #<AWS::S3::S3Object:0x193416160 '/lumos-data-dump-prod01/reports/purchase-events/part-00000'>

ruby-1.9.3-p125 :013 > AWS::S3::Bucket.objects(@bucket_label, :prefix => @remote_path, :marker => "q").first
=> #<AWS::S3::S3Object:0x196253020 '/lumos-data-dump-prod01/reports/purchase-events/part-00000'>

AWS::S3::S3Object.store doesn't support transfer encoding chunked

When using AWS::S3::S3Object.store with a generic IO object, say one end of an IO.pipe, store should use chunked transfer encoding, since it can't compute the size of the read end of the pipe. This would allow people to stream uploads. The example use case: you are downloading files from another machine via net/http, net/scp, or net/ftp; instead of downloading each full document and writing it to disk, it would be nice to stream those files down in chunks and upload the chunks directly to S3, rather than downloading them completely and then uploading them.
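
At the HTTP layer, Net::HTTP already supports this: setting body_stream plus a chunked Transfer-Encoding makes it stream the body without a Content-Length. A sketch of the mechanics only; note that S3's REST API has historically required a Content-Length on PUT, so the service side would also have to accept the request:

require 'net/http'

reader, writer = IO.pipe
request = Net::HTTP::Put.new('/my-bucket/my-key')  # placeholder path
request['Transfer-Encoding'] = 'chunked'           # no size needed up front
request.body_stream = reader                       # Net::HTTP reads and chunks this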

Invalid multi byte escape error in Ruby 2.0.0-p0

Getting this error in the following line

.rvm/gems/ruby-2.0.0-p0/gems/aws-s3-0.6.2/lib/aws/s3/extensions.rb:84: invalid multibyte escape: /[\x80-\xFF]/ (SyntaxError)

Ruby - 2.0.0-p0
Rails - 3.2.11

Updating to 0.6.4 (or later)?

It looks like there are several pretty active forks of this package, some of which seem to be forks of 37signals' fork, which updated the version number to 0.6.4 in August 2010. Would it be possible to get their fork merged into this repo... or are things at a point where amazon.rubyforge.org should point to a different branch?

Error during loading of aws-s3 gem

Hi
We have encountered an issue in our project which manifests itself when starting the Rails server.
Within the s3.rb file, the require_library_or_gem 'xml/libxml' call succeeds, but XML::Parser is not defined, raising a NameError on what is currently line 54.
This appears to happen as a result of our use of the savon gem.

We have validated this in a test project that just uses the aws-s3 gem, and it works fine (the require_library_or_gem call fails and thus falls back to XmlSimple).
In our main project, other work brings in the savon gem and the problem appears.
Modifying our test project to require the savon gem then exhibits the problem.
Removing the require from our Gemfile removes the problem.

S3Object.url_for does not work correctly with CNAME host-specifications.

:001 > AWS::S3::Base.establish_connection! :access_key_id => 'key', :secret_access_key => 'secret', :server => 'my.cname'
 => #<AWS::S3::Connection:0x3317060 @http=#<Net::HTTP my.cname:80 open=false>, @options={:access_key_id=>"key", :secret_access_key=>"secret", :server=>"my.cname", :port=>80}, @secret_access_key="secret", @access_key_id="key"> 
:002 > AWS::S3::S3Object.url_for 'some-file.jpg', 'my.cname'
 => "http://my.cname/my.cname/some-file.jpg?AWSAccessKeyId=key&Expires=1304600700&Signature=fCWKsMFoWhul3JNlU8eR7VPVHRs%3D" 
:003 > AWS::S3::S3Object.url_for 'some-file.jpg', nil
AWS::S3::CurrentBucketNotSpecified: No bucket name can be inferred from your current connection's address (`my.cname')
    from /usr/local/rvm/gems/ree-1.8.7-2011.03@global/gems/aws-s3-0.6.2/bin/../lib/aws/s3/base.rb:107:in `current_bucket'
    from /usr/local/rvm/gems/ree-1.8.7-2011.03@global/gems/aws-s3-0.6.2/bin/../lib/aws/s3/base.rb:179:in `bucket_name'
    from /usr/local/rvm/gems/ree-1.8.7-2011.03@global/gems/aws-s3-0.6.2/bin/../lib/aws/s3/object.rb:300:in `path!'
    from /usr/local/rvm/gems/ree-1.8.7-2011.03@global/gems/aws-s3-0.6.2/bin/../lib/aws/s3/object.rb:291:in `url_for'
    from (irb):3
:004 > AWS::S3::Version
 => "0.6.2" 

The URL returned from :002 (and presumably :003 also) should be:
http://my.cname/some-file.jpg?...
and not
http://my.cname/my.cname/some-file.jpg?...
because the bucket-name should not appear in the path since it can be inferred from the host.

AWS::S3::Connection.prepare_path doesn't properly escape plus sign '+' in key

AWS::S3::Connection.prepare_path isn't properly escaping the plus sign in a key. This is because the plus sign is sometimes used to represent a space (at least according to the Ruby devs), i.e. URI.escape() doesn't convert '+' to '%2B'.

Solution: add a gsub to the URI.escape(path) call on line 11, changing it to:

URI.escape(path).gsub('+', '%2B')

Add invalidation for S3 backed Cloudfront files

I would like to see the ability to mark S3 backed Cloudfront files as invalid so that Cloudfront will replace them sooner rather than later.

The API call might look something like this:
AWS::S3::S3Object.invalidate(file_key, 'my_bucket')

Perhaps it could also be an option to delete or upload as well. Something like this:
AWS::S3::S3Object.delete(file_key, 'my_bucket', :invalidate => true)

Here is more information on the new invalidation feature from Amazon:
http://aws.amazon.com/about-aws/whats-new/2010/08/31/cloudfront-adds-invalidation-feature/

Numeric Keys

Buckets that have numeric keys are unable to list the contents of the bucket. For example, a key equal to '20092521' will not work correctly, but '20092521.txt' will.

Random method_missing called on unexpected T_ZOMBIE object

I'm getting some random method_missing called on unexpected T_ZOMBIE object. I'm using RVM + 1.9.2-p180 on SUSE ES 11.

I'm establishing the conn and keeping it opened

AWS::S3::Base.establish_connection! # account keys

After a few minutes calling this

bucket = AWS::S3::Bucket.find @bucket
files = bucket.objects

I got this error

method `method_missing' called on unexpected T_ZOMBIE object (0x804e450 flags=0x3e klass=0x0)
...gems/aws-s3-0.6.2/support/faster-xml-simple/lib/faster_xml_simple.rb:162:in `block in collapse_text'
...gems/aws-s3-0.6.2/support/faster-xml-simple/lib/faster_xml_simple.rb:162:in `each'
...gems/aws-s3-0.6.2/support/faster-xml-simple/lib/faster_xml_simple.rb:162:in `map'
...gems/aws-s3-0.6.2/support/faster-xml-simple/lib/faster_xml_simple.rb:162:in `collapse_text'
...gems/aws-s3-0.6.2/support/faster-xml-simple/lib/faster_xml_simple.rb:78:in `collapse'
...gems/aws-s3-0.6.2/support/faster-xml-simple/lib/faster_xml_simple.rb:83:in `block in collapse'
...gems/aws-s3-0.6.2/support/faster-xml-simple/lib/faster_xml_simple.rb:81:in `each'
...gems/aws-s3-0.6.2/support/faster-xml-simple/lib/faster_xml_simple.rb:81:in `inject'
...gems/aws-s3-0.6.2/support/faster-xml-simple/lib/faster_xml_simple.rb:81:in `collapse'
...gems/aws-s3-0.6.2/support/faster-xml-simple/lib/faster_xml_simple.rb:83:in `block in collapse'
...gems/aws-s3-0.6.2/support/faster-xml-simple/lib/faster_xml_simple.rb:81:in `each'
...gems/aws-s3-0.6.2/support/faster-xml-simple/lib/faster_xml_simple.rb:81:in `inject'
...gems/aws-s3-0.6.2/support/faster-xml-simple/lib/faster_xml_simple.rb:81:in `collapse'
...gems/aws-s3-0.6.2/support/faster-xml-simple/lib/faster_xml_simple.rb:64:in `out'
...gems/aws-s3-0.6.2/support/faster-xml-simple/lib/faster_xml_simple.rb:53:in `xml_in'
...gems/aws-s3-0.6.2/lib/aws/s3/parsing.rb:64:in `parse'
...gems/aws-s3-0.6.2/lib/aws/s3/parsing.rb:55:in `initialize'
...gems/aws-s3-0.6.2/lib/aws/s3/response.rb:55:in `new'
...gems/aws-s3-0.6.2/lib/aws/s3/response.rb:55:in `parsed'
...gems/aws-s3-0.6.2/lib/aws/s3/extensions.rb:177:in `block in parsed'
...gems/aws-s3-0.6.2/lib/aws/s3/extensions.rb:146:in `expirable_memoize'
...gems/aws-s3-0.6.2/lib/aws/s3/extensions.rb:176:in `parsed'
...gems/aws-s3-0.6.2/lib/aws/s3/response.rb:68:in `bucket'
...gems/aws-s3-0.6.2/lib/aws/s3/bucket.rb:102:in `find'

Super Weird Class loading when I have a class ending in Bucket

Hi guys,

I had written a class, ConversionsByBucket, and weird things started to happen.

After much hand-wringing I tracked it down to s3/extensions.rb overriding const_missing and hijacking modules/classes ending in 'Bucket'.

So:

  1. At the very least add a warning message, maybe:
    puts "Transforming #{sym.to_s} into a AWS::S3::Bucket, if using AWS/S3, ending modules/class with Bucket is reserved"

  2. Is the overriding of const_missing really necessary? It seems heavy-handed.

Thanks,

Jonathan

Blank URL should not exist as an S3Object

Hello,

I think this is personal taste so I would not count this as an issue, but here is my doubt:

AWS::S3::S3Object.exists?( "", "mybucket")

I don't think this should return true because there is no S3Object.
I understand the logic behind it, as the URL exists: it is the address of the bucket itself.
But in the way it is accessed here, it does not make much sense.
Especially when the find method raises an error:

AWS::S3::S3Object.find( "", "mybucket")   #=> NoMethodError: undefined method `-' for nil:NilClass

At least both should be consistent in terms of failure-or-not.
Also I understand that looking for a blank filename is stupid, but it makes sense if the name is generated.
I personally encountered this while building some tests/specs.

My test originally had nil as a filename, which raises an error, so I thought I could just pass it through to_s, and that is when my test started failing even though it was, IMHO, correct.

What do you guys think about that?
I'm looking forward to hearing your opinion.

Thank you for reading
Mig
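
A sketch of the guard being asked for, assuming exists? is implemented in terms of about and NoSuchKey (which may not match the gem's actual code):

def self.exists?(key, bucket = nil)
  return false if key.to_s.empty?  # a blank key names the bucket, not an object
  about(key, bucket)
  true
rescue NoSuchKey
  false
end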

Segmentation fault

~/.bundler/ruby/1.8/gems/aws-s3-0.6.2/lib/aws/../../support/faster-xml-simple/lib/faster_xml_simple.rb:162: [BUG] Segmentation fault
ruby 1.8.7 (2009-06-12 patchlevel 174) [x86_64-linux], MBARI 0x6770, Ruby Enterprise Edition 2009.10

Connection reset by peer and slow uploads

I have about 100 000 files which I'm migrating to S3. Most of the files are under 1MB.

I'm using the aws-s3 gem with Rails 3.1. The bucket is located on the EU server and this is the contents of initializers/s3.rb

require 'aws/s3'

AWS::S3::DEFAULT_HOST.replace "s3-eu-west-1.amazonaws.com"

AWS::S3::Base.establish_connection!(
   :access_key_id     => '...',
   :secret_access_key => '...'
)

The background task responsible for migrating the files simply loops the database rows and uploads the associated files to S3:

for image in Image.unmigrated.limit(20)
  AWS::S3::S3Object.store(image.relative_path, open(image.absolute_path), "thebucket")
  if AWS::S3::Service.response.success?
    image.update_attribute(:s3, true)

    File.delete(image.absolute_path)
  end
end

I run this through a rake task and two bad things happen:

  • At some point (can be after just a few, or even 500 images) the "Connection reset by peer" error comes
  • Sometimes the upload speed becomes really slow (sometimes it recovers, sometimes it gives the above error)

I tried the two fixes suggested at https://forums.aws.amazon.com/message.jspa?messageID=86028, that is, changing the TCP window scaling as well as changing the above code to use

bytes = nil
File.open(image.absolute_path, "rb") { |f| bytes = f.read }
AWS::S3::S3Object.store(image.relative_path, bytes, "thebucket")

However, I still keep getting the same fatal error. Any ideas as what I could try?
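
One more thing worth trying: treat the resets as transient and retry a bounded number of times, so a single failure doesn't kill the whole rake task. A sketch following the names in the snippets above:

for image in Image.unmigrated.limit(20)
  attempts = 0
  begin
    File.open(image.absolute_path, "rb") do |file|
      AWS::S3::S3Object.store(image.relative_path, file, "thebucket")
    end
    image.update_attribute(:s3, true)
    File.delete(image.absolute_path)
  rescue Errno::ECONNRESET, AWS::S3::S3Exception
    attempts += 1
    retry if attempts < 3  # give up after three tries
    raise
  end
end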

Key with leading slash

Hi - I was wondering whether anyone has come across an issue where, if your key starts with "/", you get back AWS::S3::NoSuchKey.

Thanks,

Patrick

#previous! depending on Ruby version

Starting on line 29 in /lib/aws/s3/extensions.rb:

  if RUBY_VERSION <= '1.9'
    def previous!
      self[-1] -= 1
      self
    end
  else
    def previous!
      self[-1] = (self[-1].ord - 1).chr
      self
    end
  end

Would subtracting 1 from a string ever work in Ruby 1.9? I'm guessing this is a feature in 2.0, but this bit seems to be breaking the S3 gem for us while we're running 1.9.3.
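
To answer the question: no. The first branch only works where String#[] returns an Integer, i.e. Ruby 1.8. Note also that the guard is a plain string comparison, so "1.9.3" <= "1.9" is false and 1.9.3 should already be taking the ord/chr branch; if you're hitting the Fixnum branch, it may be worth checking which gem version is actually loaded. An illustration of the String#[] difference:

s = "abc"
s[-1]       # 1.8: 99 (a Fixnum)    1.9+: "c" (a one-character String)
s[-1] -= 1  # 1.8: works            1.9+: NoMethodError (undefined method `-')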

multithreaded environment

Can this library work in a multithreaded environment? Can a connection object be created rather than establishing an interpreter-global connection with AWS::S3::Base.establish_connection! ?

Cannot set metadata on creation

It would be nice if we were able to add metadata on object creation.

If I do something like:

obj = bucket.new_object
obj.key = "test.txt"
obj.value = "some text"
obj.metadata[:subject] = "My subject"

I get
ArgumentError: wrong number of arguments (0 for 1)
aws-s3-0.6.2/lib/aws/s3/object.rb:513:in `initialize'

After that, if I fetch the object with obj = bucket["test.txt"] and then call obj.metadata, I get the same error.

I didn't find any way to modify the metadata of this object without restarting the app.
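
A possible workaround sketch: pass the metadata as an x-amz-meta-* header at store time; the assumption (worth verifying against your gem version) is that store forwards extra options as request headers:

AWS::S3::S3Object.store(
  "test.txt", "some text", bucket.name,
  "x-amz-meta-subject" => "My subject"  # becomes object metadata on S3
)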
