lits3's Issues

Consider adding a mailing list for commit notifications

Consider adding a mailing list (e.g. using Google Groups) where commit 
notifications can be sent and to which project members or anyone else can 
subscribe to monitor the project. The group should be set up for 
announcements only, so no posting is allowed except by 
codesite-[email protected].

Original issue reported on code.google.com by azizatif on 3 Dec 2008 at 6:19

slow upload performance on windows 2003

What steps will reproduce the problem?
1. Upload to S3 from Windows 2003
2. Take a large file, e.g. 20 MB
3. Upload speed is only ~130 KB/s when it should be between 1 and 10 MB/s

What is the expected output? What do you see instead?


What version of the product are you using? On what operating system?
1.0.1 (downloaded 20090920)

Please provide any additional information below.

Running the library on Windows 2003 R2 with 4 GB RAM and a 100 Mbit line to 
the internet. Other services perform at up to 80% of this capacity.


TEST CODE RUN IN TEST CLASS FROM SVN

            DateTime starttime = DateTime.Now;
            s3.CreateBucket(bucket);

            TimeSpan ts = DateTime.Now.Subtract(starttime);
            Console.WriteLine("bucket created in " + ts.TotalSeconds + " seconds");
            starttime = DateTime.Now;

            s3.ListAllObjects(bucket);

            ts = DateTime.Now.Subtract(starttime);
            Console.WriteLine("objects listed in " + ts.TotalSeconds + " seconds");
            starttime = DateTime.Now;
            string filetoupload = @"C:\wmdownloads\testupload";

            // Dispose the stream so the file handle is released after upload.
            using (var uploadStream = new FileStream(filetoupload, FileMode.Open))
                s3.AddObject(uploadStream, bucket, "testupload");

            ts = DateTime.Now.Subtract(starttime);
            Console.WriteLine("file uploaded in " + ts.TotalSeconds + " seconds");
            FileInfo fi = new FileInfo(filetoupload);
            long size = fi.Length;
            Console.WriteLine("total: " + size / 1024 + " Kbytes in " +
                (size / 1024) / ts.TotalSeconds + " Kbytes/sec");

            starttime = DateTime.Now;
            string filetodownload = @"C:\wmdownloads\testupload2";
            using (var downloadStream = new FileStream(filetodownload, FileMode.OpenOrCreate))
                s3.GetObject(bucket, "testupload", downloadStream);

            ts = DateTime.Now.Subtract(starttime);
            Console.WriteLine("file downloaded in " + ts.TotalSeconds + " seconds");
            fi = new FileInfo(filetodownload);
            size = fi.Length;
            Console.WriteLine("total: " + size / 1024 + " Kbytes in " +
                (size / 1024) / ts.TotalSeconds + " Kbytes/sec");

            s3.DeleteObject(bucket, "testupload");
            s3.DeleteBucket(bucket);

Original issue reported on code.google.com by [email protected] on 20 Oct 2009 at 2:22

Can't Upload to Subfolders of buckets.

What steps will reproduce the problem?
1. Upload to a subfolder of a bucket 
e.g. s3.AddObject(@"C:\MyFile.txt", "MyBucket/mysubfolder", "test-file");


What is the expected output? What do you see instead?
file makes it into subfolder of a bucket

What version of the product are you using? On what operating system?
0.8.2, Server 2003

Please provide any additional information below.
Most people (myself included) want to use this for uploading images to S3 
for bandwidth improvements. To make for an easy transition to S3, I found 
using subfolders of a bucket to work best.

e.g. Values for images stored in the database are relative URLs like 
"~/images/cars/image1.jpg"; when any image on the site is loaded, it looks 
for a global resource holding the domain name that points to where the 
images are, e.g. http://images.mysite.com/

Now to duplicate this on S3 you need a bucket to emulate your site root: 
http://bucket.s3.amazonaws.com/subfolder/subfolder/
This way you can just change the CNAME on your DNS server and point it to 
your Amazon bucket.
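
Since S3 has no real folders (a key containing slashes merely looks like a 
path), a minimal workaround sketch is to keep the bucket name clean and put 
the "folder" prefix in the key, using the AddObject(inputFile, bucketName, 
key) overload that appears elsewhere in these reports:

 // Hypothetical fix for the repro step above: the bucket stays "MyBucket"
 // and the subfolder lives in the key, not in the bucket name.
 s3.AddObject(@"C:\MyFile.txt", "MyBucket", "mysubfolder/test-file");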


Original issue reported on code.google.com by [email protected] on 18 Feb 2009 at 1:08

Unable to write data to the transport connection

I am unable to upload files to the Amazon server through LitS3.
While uploading the files, the following exception is thrown:
"Unable to write data to the transport connection: An existing connection 
was forcibly closed by the remote host."
Please kindly give me a solution for this issue.

This is my code:

                           var s3 = new S3Service();

                           string filepath1 = FileUpload1.PostedFile.FileName;
                           s3.AccessKeyID = "dfgdfgzdfgzdfgzdf";
                           s3.SecretAccessKey = "dfggdfg/dfgadgfadfgadsfgdfg";

                           // AppSettings values are already strings, so
                           // calling ToString() on them is unnecessary.
                           s3.AddObject(filepath1,
                               ConfigurationManager.AppSettings["BucketName"],
                               fileid + ConfigurationManager.AppSettings["FileExtension"],
                               null, CannedAcl.PublicReadWrite);
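
If the client-side path is the culprit rather than the connection, note that 
FileUpload1.PostedFile.FileName is the client's file name, not a path on the 
server. A sketch (an assumption as a fix, but using the AddObject(Stream, 
Int64, ...) overload that appears verbatim in the stack trace of the 
large-file issue below) that streams the posted file directly:

                           // Upload the request stream itself; length and
                           // content type come from the posted file.
                           var posted = FileUpload1.PostedFile;
                           s3.AddObject(posted.InputStream, posted.ContentLength,
                               ConfigurationManager.AppSettings["BucketName"],
                               fileid + ConfigurationManager.AppSettings["FileExtension"],
                               posted.ContentType, CannedAcl.PublicReadWrite);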

Original issue reported on code.google.com by [email protected] on 25 May 2009 at 7:06

The GetObjectStream method doesn't return the full stream (only gets a few lines of the file)

I tried to get an object from S3 using the GetObjectStream method. I passed 
bucket and key as parameters, and got the full length of the stream.

below is my code.

 var stream = s3Service.GetObjectStream(bucketFullPath, key);

 int streamLength = FileSize;

 byte[] Data = new byte[streamLength];

 stream.Read(Data, 0, streamLength);

 stream.Close(); 

When I checked Data[], I found it only contains a few thousand valid bytes, 
followed by a massive run of whitespace.
I am using version 1.0.1 zip, with VS 2008

Any Help?
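
The symptom is consistent with Stream.Read returning fewer bytes than 
requested, which it is allowed to do, especially on a network stream. A 
sketch of a read loop (against the same variables as above) that drains the 
stream fully:

 byte[] Data = new byte[streamLength];

 int offset = 0;
 while (offset < Data.Length)
 {
     // Read may return fewer bytes than asked for; 0 means end of stream.
     int read = stream.Read(Data, offset, Data.Length - offset);
     if (read == 0)
         break;
     offset += read;
 }

 stream.Close();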

Original issue reported on code.google.com by [email protected] on 23 Sep 2009 at 7:22

Add progress events for getting and adding objects

S3Service methods like AddObject and GetObject (and family) represent 
potentially long-running operations but do not provide a simple means for a 
client application to show progress to the user. The attached patch 
proposes two events on S3Service:

- AddObjectProgressChanged
- GetObjectProgressChanged

These events fire during the processing of any of the AddObject or 
GetObject family of methods. The event sink receives arguments of type 
ObjectTransferProgressChangedEventArgs whose properties supply information 
about the progress of a transfer:

- BucketName
- Key
- BytesTransferred
- TotalBytesToTransfer
- ProgressPercentage (inherited)
- UserState (inherited)
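
For illustration, a hypothetical subscription (the event and property names 
come from the patch description above; the exact delegate signature is an 
assumption):

 var s3 = new S3Service();
 s3.AddObjectProgressChanged += (sender, e) =>
     Console.WriteLine("{0}/{1}: {2} of {3} bytes ({4}%)",
         e.BucketName, e.Key, e.BytesTransferred,
         e.TotalBytesToTransfer, e.ProgressPercentage);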


Original issue reported on code.google.com by azizatif on 25 Nov 2008 at 9:10

Creating subfolders

What steps will reproduce the problem?
1. Create new S3Service object.
2. Set AccessKeyID and SecretAccessKey
3. Call: CreateBucket("rootbucketname/subfoldername")

What is the expected output? What do you see instead?
You'd expect to see it create a subfolder called "subfoldername" under the
existing bucket "rootbucketname". Instead it throws an exception: You must
provide the Content-Length HTTP header.

What version of the product are you using? On what operating system?
lits3 1.0.1, Windows 7 x64

Please provide any additional information below.
I was able to fix the problem by making the change in the attached version
of CreateBucket.cs. Basically just added these two lines in
CreateBucketResponse, right before the call to return base.GetResponse()
(at line 45):

if (!EuropeRequested)
  WebRequest.ContentLength = 0;

Original issue reported on code.google.com by [email protected] on 1 Dec 2009 at 6:08

Circular Recursive Call :: S3Service.GetObject(string bucketName, string key, Stream outputStream)

I'm surprised the compiler didn't flag this, really, so I tried it just to
see what would happen - sure enough, it crashes (the call recurses into
itself until the stack overflows).

This is the implementation of the current version (Oct 21)

Current implementation:

 /// <summary>
 /// Gets an existing object in S3 and copies its data to the given Stream.
 /// </summary>
 public void GetObject(string bucketName, string key, Stream outputStream)
 {
   GetObject(bucketName, key, outputStream);
 }

Using the same model you had for other methods, I fixed this with:

 /// <summary>
 /// Gets an existing object in S3 and copies its data to the given Stream.
 /// </summary>
 public void GetObject(string bucketName, string key, Stream outputStream)
 {
   long contentLength;
   string contentType;
   using (Stream objectStream = GetObjectStream(bucketName, key,
       out contentLength, out contentType))
     CopyStream(objectStream, outputStream, contentLength);
 }


Original issue reported on code.google.com by [email protected] on 3 Jan 2009 at 1:19

Problem uploading large file

What steps will reproduce the problem?
Tried to upload a file that was 3.4 GB in size.

What is the expected output? What do you see instead?

Progress: 0
Progress: 5
Progress: 10
Progress: 15
Progress: 20
Progress: 25
Progress: 30
Progress: 35
Progress: 40
Progress: 45
Progress: 50
Progress: 55
Progress: -57
Progress: -52
Progress: -47
Progress: -42
Progress: -37
Progress: -32
Progress: -27
Progress: -22
Progress: -17

Unhandled Exception: System.Exception: Unexpected end of stream while copying.
  at LitS3.S3Service.CopyStream (System.IO.Stream source, System.IO.Stream dest, Int64 length, System.Action`1 progressCallback) [0x00000]
  at LitS3.S3Service+<AddObject>c__AnonStorey3.<>m__0 (System.IO.Stream stream) [0x00000]
  at LitS3.AddObjectRequest.PerformWithRequestStream (System.Action`1 action) [0x00000]
  at LitS3.S3Service.AddObject (System.String bucketName, System.String key, Int64 bytes, System.String contentType, CannedAcl acl, System.Action`1 action) [0x00000]
  at LitS3.S3Service.AddObject (System.IO.Stream inputStream, Int64 bytes, System.String bucketName, System.String key, System.String contentType, CannedAcl acl) [0x00000]
  at LitS3.S3Service.AddObject (System.String inputFile, System.String bucketName, System.String key, System.String contentType, CannedAcl acl) [0x00000]
  at LitS3.S3Service.AddObject (System.String inputFile, System.String bucketName, System.String key) [0x00000]


What version of the product are you using? On what operating system?

SVN revision 103 On mono 2.4.2


Please provide any additional information below.

I'll run some tests to see if I can track down exactly what size file
causes this.
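
One plausible explanation for the negative progress values (an assumption; 
the report does not confirm it) is a byte counter held in signed 32-bit 
arithmetic wrapping once the transfer passes 2 GiB. A minimal demonstration:

 // Simulate a 3.4 GB transfer: a 32-bit counter wraps past 2 GiB and the
 // derived percentage goes negative; 64-bit arithmetic stays correct.
 long fileLength = 3400000000L;
 for (long sent = 0; sent <= fileLength; sent += fileLength / 20)
 {
     int wrapped = unchecked((int)sent);      // the suspected bug
     Console.WriteLine("{0,4} vs {1,3}",
         wrapped * 100L / fileLength,         // negative after ~63%
         sent * 100 / fileLength);            // correct
 }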

Original issue reported on code.google.com by [email protected] on 14 Aug 2009 at 5:10

Compilation for Mono fails with CS0310 errors

What steps will reproduce the problem?
1. Change current directory to the root of the distribution or working copy
2. Compile using gmcs like this:
gmcs /target:library /out:LitS3.dll LitS3\*.cs

What is the expected output? What do you see instead?

Expected compilation to succeed. Instead 9 CS0310 errors are reported. See
attached build.txt for details.

What version of the product are you using? On what operating system?

LitS3: r74.
Tested on Mono 2.2 under Windows Vista and OpenSUSE 11.

Original issue reported on code.google.com by azizatif on 10 Mar 2009 at 8:15

PermanentRedirect is thrown, not followed

What steps will reproduce the problem?
1. Create an EU bucket.
2. Access it with `UseSubdomains` disabled.

What is the expected output? What do you see instead?
* I expect the library to follow redirects, even if they are not per the HTTP 
specification. As such, if Amazon returns the error code "PermanentRedirect", 
the request should be retried against the new endpoint specified.

What version of the product are you using? On what operating system?
* 1.0

Please provide any additional information below.
* This may also be the case for error code "Redirect".

Original issue reported on code.google.com by [email protected] on 4 Aug 2009 at 7:39

Consider removing ListEntryReader

Purpose of code changes on this branch:

Remove ListEntryReader, which seems to be serving little purpose, by in-
lining its Entries implementation directly into S3.ListAllObjects.

When reviewing my code changes, please focus on:

ListEntryReader has odd semantics and not much raison-d'être. Removing it 
simplifies both the implementation and call sites (one less using statement) 
by eliminating a whole type/abstraction.

ListEntryReader has odd semantics because:

- It is not really a reader.
- It has no Read method as one would expect.
- It is not enumerable in itself.
- Entries is a property with a non-trivial implementation.
- As a result of above, Entries should really be a method.
- Dispose disposes the last request but this is already taken care of by 
Entries. The IEnumerable<ListEntry> object returned by Entries is by 
itself disposable and gets taken care of by a foreach or using statement 
at the call site.

After the review, I'll merge this branch into:
/trunk


Original issue reported on code.google.com by azizatif on 14 Mar 2009 at 12:33

Consider adding a mailing list for issue notifications

Consider adding a mailing list (e.g. using Google Groups) where issue 
notifications can be sent and to which project members or anyone else can 
subscribe to monitor progress. The group should be set up for 
announcements only, so no posting is allowed except by 
codesite-[email protected].

Original issue reported on code.google.com by azizatif on 3 Dec 2008 at 6:20

UseSubdomains is false by default

What is the expected output? What do you see instead?
* `UseSubdomains` should be true, like the XML comment for the property 
states.

What version of the product are you using? On what operating system?
* 1.0

Original issue reported on code.google.com by [email protected] on 4 Aug 2009 at 7:39

Uses culture-sensitive number and date methods

LitS3 uses culture-sensitive methods instead of culture-invariant ones when 
communicating with Amazon S3.

I assume that S3 responds in a culture-invariant manner, so this means that 
LitS3 won't operate correctly when the server's default culture isn't en-US. 
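
For illustration, a small sketch (not LitS3 code) of how formatting drifts 
under a non-en-US culture:

 // Requires System.Globalization and System.Threading. Under a German
 // culture the decimal separator becomes a comma, which breaks any value
 // the S3 wire format expects in invariant form.
 Thread.CurrentThread.CurrentCulture = new CultureInfo("de-DE");
 double value = 1234.5;
 Console.WriteLine(value.ToString());                             // "1234,5"
 Console.WriteLine(value.ToString(CultureInfo.InvariantCulture)); // "1234.5"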

I'd be happy to provide a patch if I hear back from the maintainer.

Original issue reported on code.google.com by [email protected] on 25 Jan 2012 at 8:45

Return Uri instances instead of string values

`GetUrl()` and `GetAuthorizedUrl()` return strings (the reason for this 
design is explained in an XML comment).

The behaviour in .NET is as-designed, as you shouldn't rely on the 
`ToString()` for redirects or similar. If you will use the URI in a browser, 
in a link or anything else, you should use the `AbsoluteUri` property of the 
instance (that is how `WebRequest` uses it too).

If you like, you could override `ToString()` to return the unescaped value, 
thus gaining the benefit of returning a Uri in the first place without 
breaking support for callers unaware of the `AbsoluteUri` property.
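
For illustration (a sketch with a made-up key, not taken from the patch):

 var uri = new Uri("http://bucket.s3.amazonaws.com/my key.txt");
 Console.WriteLine(uri.ToString());   // http://bucket.s3.amazonaws.com/my key.txt
 Console.WriteLine(uri.AbsoluteUri);  // http://bucket.s3.amazonaws.com/my%20key.txt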

I have attached a patch where I show this approach.

Original issue reported on code.google.com by [email protected] on 22 Aug 2009 at 7:28

Header param x-amz-copy-source must be URL encoded

What steps will reproduce the problem?
1. Use the CopyObject method from the service with a sourceKey param that 
has a special character (like Cópia.txt)

What is the expected output? What do you see instead?
An HTTP response with code 200 was expected, but instead we got a 403 error.

What version of the product are you using? On what operating system?
1.0.1

Please provide any additional information below.
Fixed that by changing line 42 of CopyObjectRequest 
(https://code.google.com/p/lits3/source/browse/trunk/LitS3/CopyObject.cs#42)

Diff line:
Before:
 WebRequest.Headers[S3Headers.CopySource] = sourceBucketName + "/" + sourceObjectKey;
After:
 WebRequest.Headers[S3Headers.CopySource] = sourceBucketName + "/" + HttpUtility.UrlEncode(sourceObjectKey);

Original issue reported on code.google.com by [email protected] on 4 Dec 2013 at 9:13

CreateBucketInEurope fails with authorization error

What steps will reproduce the problem?
1. Call CreateBucketInEurope in place of CreateBucket

What is the expected output? What do you see instead?
Expected: Bucket gets created
Instead: An exception is thrown; the response contents are:

<Error><Code>AccessDenied</Code><Message>Anonymous access is forbidden for this operation</Message><RequestId>47BF77CB4C646D51</RequestId><HostId>L5hWeJXgZigLFpz+XcTbClhYuo21Sse3jGEMsOf1VW/FWon3ZoOnXxeDscfL91Ia</HostId></Error>


What version of the product are you using? On what operating system?
Version: 0.8.2.0 OS: Windows XP Pro SP2

Please provide any additional information below.


Original issue reported on code.google.com by [email protected] on 5 Mar 2009 at 8:07

ObjectExists throws exception

What steps will reproduce the problem?
1. Call the ObjectExists method passing in a bucket that exists but a key
that does not exist

What is the expected output? What do you see instead?
I expect it to return false; instead an XmlException is thrown.


What version of the product are you using? On what operating system?
0.9, Windows Vista.

Please provide any additional information below.


Original issue reported on code.google.com by [email protected] on 26 Jun 2009 at 10:17

.NET seems to hang when expecting a 100 continue but receiving a 5xx error

When an object is uploaded to S3 via a PUT, LitS3 sets the 
Expect100Continue property of the ServicePoint for that HttpWebRequest to 
true. In certain circumstances that are very difficult to duplicate, .NET 
(using 3.5, not sure about other versions) seems to hang when a 5xx error 
is sent from S3 instead of a 100 continue. These circumstances have caused 
a great deal of grief for us, but we recently found that we were able to 
sidestep the issue completely by setting Expect100Continue to false. For 
more information, see 
http://groups.google.com/group/lits3/browse_thread/thread/e2bc21b107c28237.

While it would be ideal for LitS3 and .NET to transparently handle this 
issue, it doesn't seem to work correctly. Attached is a patch that exposes 
the Expect100Continue property of the service point so that this issue can 
be plugged.
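
A minimal workaround sketch in the meantime (the process-wide flag on 
ServicePointManager; the attached patch exposes the per-service-point 
equivalent):

 // Disable the Expect: 100-continue handshake for all outgoing requests.
 System.Net.ServicePointManager.Expect100Continue = false;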

Original issue reported on code.google.com by [email protected] on 15 May 2009 at 11:45

[PATCH] CopyObject metadata implementation

Attached is a patch that implements the request metadata for the 
CopyObjectRequest request. The specifications for the metadata were taken 
from here: 
http://docs.amazonwebservices.com/AmazonS3/2006-03-01/index.html?RESTObjectCOPY.html. 
Assigning a value to the new CannedAcl 
property is the only functionality that has been tested; the rest of the 
metadata implementation has not been tested.

Original issue reported on code.google.com by [email protected] on 24 Apr 2009 at 3:58

S3Authorizer.signer is never disposed

S3Authorizer.signer is an instance of the HMACSHA1 class, which implements 
IDisposable. signer is never disposed, so undisposed instances will 
accumulate if multiple requests are made over the course of a program.

I suggest moving the usage of the HMACSHA1 class into a using block inside 
the Sign method, since that is the only place it is used.
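
A minimal sketch of that shape (assuming Sign produces the base64 HMAC-SHA1 
of the string to sign; the parameter names are illustrative):

 // Requires System.Security.Cryptography and System.Text.
 string Sign(byte[] secretKey, string stringToSign)
 {
     using (var signer = new HMACSHA1(secretKey))
         return Convert.ToBase64String(
             signer.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
 }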

Original issue reported on code.google.com by [email protected] on 26 Jan 2010 at 5:11

Avoid escaping '/' in key

I would request that the slash, '/', isn't escaped, as this is causing the 
file names for browser-based downloads (i.e. via the URL received from 
`GetAuthorizedUrl`) to behave oddly.

The slash character doesn't need to be escaped (if the slash is escaped and 
the URL is used in a browser, the suggested filename will include the 
directory part).

The current method returns "/directory%2Ffilename", where I favor 
"/directory/filename". The latter allows for better use when hosting 
multiple files that are supposed to be in the same directory (when the slash 
is escaped, the browser treats the files as if they were placed in the root 
directory).
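
For illustration, one way to escape a key per segment while leaving the 
slashes intact (a sketch; the attached patch may differ):

 // Escape each path segment but keep '/' as the separator.
 string key = "directory/file name.txt";
 string escaped = string.Join("/",
     Array.ConvertAll(key.Split('/'), Uri.EscapeDataString));
 // escaped == "directory/file%20name.txt"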

I've attached a patch of a simple, yet not pretty, way to fix the 
behaviour.

Original issue reported on code.google.com by [email protected] on 22 Aug 2009 at 5:26

NuGet Support

Are there any plans for NuGet support? If not, I would like to help out.

Thanks,
Dave


Original issue reported on code.google.com by [email protected] on 22 Mar 2011 at 6:31

How to use Browser-Based Uploads Using POST with the LitS3 Library

What steps will reproduce the problem?
1. I am able to upload an object by uploading a file to the server and then
from the server using AddObject(file,..,..);
2. I am also able to upload the same object using a stream, but I still
think that goes through the server first.


What is the expected output? What do you see instead?
I would like to see how to do a browser-based upload using POST with the 
LitS3 library.




Original issue reported on code.google.com by [email protected] on 26 Sep 2009 at 12:23

Patch to Submit : Signed Header Request

I have a nice patch to submit. It allows any client app to communicate directly 
with Amazon S3 - without the Secret Key.

We needed this because our client-side app needed to upload directly to Amazon 
S3, without proxying through our web/app server.

To accomplish this, the headers of any S3Request can be serialized into a 
SignedHeaderRequest & sent to our web/app server.  Our server can examine the 
request and sign the headers with the Secret Key, returning a 
SignedHeaderResponse to the client.  The client can then upload directly to 
Amazon S3 using the Authorization header our server signed.

We've added a new class SignedHeaderRequest / SignedHeaderResponse, along 
with the extension methods and VS2010 unit tests to implement them.

Enjoy!

Ryan D. Hatch   @rdkhatch
Jeremiah Redekop  @jredekop

Original issue reported on code.google.com by [email protected] on 2 Aug 2011 at 1:52

Support for browser based uploads?

This is a great library for s3!  Can this library be used to help with 
browser based uploads?  http://doc.s3.amazonaws.com/proposals/post.html


Original issue reported on code.google.com by [email protected] on 12 Oct 2009 at 7:34

Invalid signature

I'm probably doing something dumb, but can't figure this one out.

I'm using a VB.NET winforms app and have declared a class variable for my 
s3 object:

Private s3 As LitS3.S3Service

Then instantiating it in Form_Load:

s3 = New LitS3.S3Service()
s3.AccessKeyID = _strKey
s3.SecretAccessKey = _strPrivateKey

Then dumping all my buckets into a listbox:

For Each objBucket As LitS3.Bucket In s3.GetAllBuckets
    Me.lstBuckets.Items.Add(objBucket.Name)
Next

This works fine. However, when the user clicks on a bucket name in the 
listbox I want to populate another listbox with the files in that bucket:

Me.lstFiles.Items.Clear()
If String.IsNullOrEmpty(strBucket) = False Then
    Dim lr As LitS3.ListEntryReader = s3.ListAllObjects(strBucket, "")
    For Each objFile As LitS3.ListEntry In lr.Entries
        Me.lstFiles.Items.Add(objFile.Name)
    Next
End If

This throws an exception (on the For Each line):

The request signature we calculated does not match the signature you 
provided. Check your key and signing method.

Any ideas?

Original issue reported on code.google.com by [email protected] on 11 Mar 2009 at 11:06

Recommendation

The issue occurs when using S3Service with UseSsl = true:
GetUrl and GetAuthorizedUrl do not return an HTTPS address.

The GetUrl method should read:

 public string GetUrl(string bucketName, string key)
 {
     var uriString = new StringBuilder();
     if (UseSsl)
         uriString.Append("https://");
     else
         uriString.Append("http://");
     // ... rest of the method unchanged
 }


Original issue reported on code.google.com by [email protected] on 25 Jan 2010 at 4:48

Add support for new versioning of Object

A new feature released by Amazon allows a bucket to be put into versioning
mode, suspended mode, etc.

Allow buckets to set their mode.

Allow getting versioned items, listing versions, restoring a version, and
deleting all versions.


Original issue reported on code.google.com by [email protected] on 9 Feb 2010 at 9:29

CloudFront Support

Support for
http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/

Original issue reported on code.google.com by [email protected] on 18 Nov 2008 at 5:18

S3 Keys containing a period are not encoded correctly

What steps will reproduce the problem?

1. Create an S3 object with a period in the name using some other tool.
2. Try to retrieve the object using LitS3.S3Service.GetObject(bucket, key,
stream).


What is the expected output? What do you see instead?

The object should be retrieved; instead the request fails with a 404.


Original issue reported on code.google.com by [email protected] on 18 Feb 2010 at 10:06

No metadata support

I would like to be able to retrieve key/value pairs of metadata for stored
objects. I'm going to look into adding support for this.


Original issue reported on code.google.com by [email protected] on 4 Nov 2008 at 7:48
