
s3-tests's Introduction

S3 compatibility tests

This is a set of unofficial Amazon AWS S3 compatibility tests that can be useful to people implementing software that exposes an S3-like API. The tests use the Boto2 and Boto3 libraries.

The tests use the Tox tool. To get started, ensure you have the tox software installed; e.g. on Debian/Ubuntu:

sudo apt-get install tox

You will need to create a configuration file with the location of the service and two different credentials. A sample configuration file named s3tests.conf.SAMPLE has been provided in this repo. This file can be used to run the s3 tests on a Ceph cluster started with vstart.
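For orientation, the sketch below shows one way a client could be built from such a config file. The section and option names here are assumptions modeled on s3tests.conf.SAMPLE, not a guarantee of what the suite actually reads:

    # Hedged sketch: load an s3tests-style config and build a boto3 client from it.
    # Section/option names (DEFAULT host/port/is_secure, "s3 main" access_key/secret_key)
    # are assumed to mirror s3tests.conf.SAMPLE.
    import configparser
    import os

    import boto3

    cfg = configparser.RawConfigParser()
    with open(os.environ['S3TEST_CONF']) as f:
        cfg.read_file(f)

    scheme = 'https' if cfg.getboolean('DEFAULT', 'is_secure') else 'http'
    endpoint = '%s://%s:%d' % (scheme, cfg.get('DEFAULT', 'host'),
                               cfg.getint('DEFAULT', 'port'))

    client = boto3.client(
        's3',
        endpoint_url=endpoint,
        aws_access_key_id=cfg.get('s3 main', 'access_key'),
        aws_secret_access_key=cfg.get('s3 main', 'secret_key'))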

Once you have that file copied and edited, you can run the tests with:

S3TEST_CONF=your.conf tox

You can specify which directory of tests to run:

S3TEST_CONF=your.conf tox -- s3tests_boto3/functional

You can specify which file of tests to run:

S3TEST_CONF=your.conf tox -- s3tests_boto3/functional/test_s3.py

You can specify which test to run:

S3TEST_CONF=your.conf tox -- s3tests_boto3/functional/test_s3.py::test_bucket_list_empty

Some tests have attributes set based on their current reliability and things like AWS not enforcing their spec strictly. You can filter tests based on their attributes:

S3TEST_CONF=aws.conf tox -- -m 'not fails_on_aws'
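For reference, attributes like fails_on_aws are ordinary pytest markers attached to individual tests, roughly as in this sketch (the test name and body are hypothetical):

    import pytest

    @pytest.mark.fails_on_aws  # deselected by: tox -- -m 'not fails_on_aws'
    def test_example_behaviour_not_enforced_by_aws():
        # hypothetical test body
        assert True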

Most of the tests have both Boto3 and Boto2 versions. Tests written in Boto2 are in the s3tests directory. Tests written in Boto3 are located in the s3tests_boto3 directory.

You can run only the boto3 tests with:

S3TEST_CONF=your.conf tox -- s3tests_boto3/functional

STS compatibility tests

This section contains some basic tests for the AssumeRole, GetSessionToken and AssumeRoleWithWebIdentity APIs. The test file is located under s3tests_boto3/functional.
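As a rough illustration of the pattern these tests exercise, the boto3 calls look like the sketch below; the endpoint, credentials and role ARN are placeholders rather than values taken from this repository:

    # Hedged sketch of the AssumeRole flow; all literal values are placeholders.
    import boto3

    sts = boto3.client('sts',
                       endpoint_url='http://localhost:8000',
                       aws_access_key_id='ACCESSKEY',
                       aws_secret_access_key='SECRETKEY')

    resp = sts.assume_role(RoleArn='arn:aws:iam:::role/example-role',
                           RoleSessionName='example-session')
    creds = resp['Credentials']

    # The temporary credentials returned by STS are then used for S3 calls.
    s3 = boto3.client('s3',
                      endpoint_url='http://localhost:8000',
                      aws_access_key_id=creds['AccessKeyId'],
                      aws_secret_access_key=creds['SecretAccessKey'],
                      aws_session_token=creds['SessionToken'])
    s3.list_buckets()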

To run the STS tests, the vstart cluster should be started with the following parameter (in addition to any parameters already used with it):

vstart.sh -o rgw_sts_key=abcdefghijklmnop -o rgw_s3_auth_use_sts=true

Note that the rgw_sts_key can be set to anything that is 128 bits in length. After the cluster is up the following command should be executed:

radosgw-admin caps add --tenant=testx --uid="9876543210abcdef0123456789abcdef0123456789abcdef0123456789abcdef" --caps="roles=*"

You can run only the STS tests (all three APIs) with:

S3TEST_CONF=your.conf tox -- s3tests_boto3/functional/test_sts.py

You can filter tests based on their attributes. There is an attribute named test_of_sts for the AssumeRole and GetSessionToken tests, and webidentity_test for the AssumeRoleWithWebIdentity tests. If you want to execute only the test_of_sts tests, you can apply that filter as below:

S3TEST_CONF=your.conf tox -- -m test_of_sts s3tests_boto3/functional/test_sts.py

For running webidentity_test you'll need to have Keycloak running.

In order to run any STS test you'll need to add an "iam" section to the config file. For further reference on how your config file should look, check s3tests.conf.SAMPLE.

IAM policy tests

This is a set of IAM policy tests. This section covers tests for user policies such as Put, Get, List and Delete, user policies with S3 actions, conflicting user policies, etc. These tests use the Boto3 library. Tests are written in the s3tests_boto3 directory.

These IAM policy tests use two users with the profile names "iam" and "s3 alt", as mentioned in s3tests.conf.SAMPLE. If the Ceph cluster is started with vstart, these two users are created as part of vstart with the same access key, secret key, etc. as mentioned in s3tests.conf.SAMPLE. Of those two users, the "iam" user has the capabilities --caps=user-policy=* and the "s3 alt" user has no capabilities. Adding these capabilities to the "iam" user is also taken care of by vstart (if the Ceph cluster is started with vstart).
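For a sense of what the user-policy tests exercise, the corresponding boto3 IAM calls look roughly like this sketch; the endpoint, credentials, user name and policy document are placeholders:

    # Hedged sketch: attach, read and remove a user policy with boto3's IAM client.
    import json

    import boto3

    iam = boto3.client('iam',
                       endpoint_url='http://localhost:8000',
                       aws_access_key_id='ACCESSKEY',
                       aws_secret_access_key='SECRETKEY')

    policy = {
        'Version': '2012-10-17',
        'Statement': [{'Effect': 'Allow',
                       'Action': 's3:ListAllMyBuckets',
                       'Resource': '*'}],
    }

    iam.put_user_policy(UserName='example-user', PolicyName='AllowListBuckets',
                        PolicyDocument=json.dumps(policy))
    iam.get_user_policy(UserName='example-user', PolicyName='AllowListBuckets')
    iam.delete_user_policy(UserName='example-user', PolicyName='AllowListBuckets')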

To run these tests, create a configuration file with the sections "iam" and "s3 alt" (refer to s3tests.conf.SAMPLE). Once you have that configuration file copied and edited, you can run all the tests with:

S3TEST_CONF=your.conf tox -- s3tests_boto3/functional/test_iam.py

You can also specify a specific test to run:

S3TEST_CONF=your.conf tox -- s3tests_boto3/functional/test_iam.py::test_put_user_policy

Some tests have attributes set such as "fails_on_rgw". You can filter tests based on their attributes:

S3TEST_CONF=your.conf tox -- s3tests_boto3/functional/test_iam.py -m 'not fails_on_rgw'

s3-tests's People

Contributors

albin-antony, alfredodeza, alimaredia, calebamiles, cbodley, clwluvw, cmccabe, four2five, galsalomon66, gaul, iraj465, jdurgin, jmunhoz, liewegas, mattbenjamin, mwodrich, oritwas, pritha-srivastava, robbat2, soumyakoduri, tchaikov, theanalyst, tianshan, tipabu, tobegit3hub, tobias-urdin, vasukulkarni, xylv, yehudasa, yuvalif


s3-tests's Issues

test_s3:test_bucket_list_return_data_versioning bug

@zhangsw, for the test case s3tests.functional.test_s3:test_bucket_list_return_data_versioning, changes are also needed in the endElement function of boto/s3/key.py, otherwise the test case always fails. Could you help check? Thanks.
def endElement(self, name, value, connection):
    if name == 'Key':
        self.name = value
    elif name == 'ETag':
        self.etag = value
    elif name == 'IsLatest':
        if value == 'true':
            self.is_latest = True
        else:
            self.is_latest = False
    elif name == 'LastModified':
        self.last_modified = value
    elif name == 'Size':
        self.size = int(value)
    elif name == 'StorageClass':
        self.storage_class = value
    elif name == 'Owner':
        pass
    elif name == 'VersionId':
        self.version_id = value
    else:
        setattr(self, name, value)

need to set key's 'Content-Encoding'

Config file

E..................................

ERROR: test suite for <module 's3tests.functional' from '/home/fongy/Desktop/Working/Ceph/s3-tests-master/s3tests/functional/__init__.pyc'>

Traceback (most recent call last):
File "/home/fongy/Desktop/Working/Ceph/s3-tests-master/virtualenv/local/lib/python2.7/site-packages/nose/suite.py", line 209, in run
self.setUp()
File "/home/fongy/Desktop/Working/Ceph/s3-tests-master/virtualenv/local/lib/python2.7/site-packages/nose/suite.py", line 292, in setUp
self.setupContext(ancestor)
File "/home/fongy/Desktop/Working/Ceph/s3-tests-master/virtualenv/local/lib/python2.7/site-packages/nose/suite.py", line 315, in setupContext
try_run(context, names)
File "/home/fongy/Desktop/Working/Ceph/s3-tests-master/virtualenv/local/lib/python2.7/site-packages/nose/util.py", line 471, in try_run
return func()
File "/home/fongy/Desktop/Working/Ceph/s3-tests-master/s3tests/functional/init.py", line 266, in setup
cfg.readfp(f)
File "/usr/lib/python2.7/ConfigParser.py", line 324, in readfp
self._read(fp, filename)
File "/usr/lib/python2.7/ConfigParser.py", line 512, in _read
raise MissingSectionHeaderError(fpname, lineno, line)
MissingSectionHeaderError: File contains no section headers.
file: aws.conf, line: 1
'fixtures:\n'


Ran 34 tests in 1.202s

FAILED (errors=1)

I realise that this issue had been covered here (and closed)

#121

but I don't understand the answer :(

If someone could elaborate I'd be most grateful :)

regards

kai

Check grants on test_object_header_acl_grants failing due to a sorting problem

test_object_header_acl_grants is failing with
Traceback (most recent call last):
File "/tmp/s3-tests-master/virtualenv/lib64/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/tmp/s3-tests-master/s3tests/functional/test_s3.py", line 3861, in test_object_header_acl_grants
type='CanonicalUser',
File "/tmp/s3-tests-master/s3tests/functional/test_s3.py", line 80, in check_grants
eq(g.permission, w.pop('permission'))
AssertionError: u'FULL_CONTROL' != 'READ'

It seems the sorting is what fails. I changed check_grants to sort by permission and the comparisons succeed.

def check_grants(got, want):
    ...
    eq(len(got), len(want))
    got = sorted(got, key=operator.attrgetter('permission'))
    want = sorted(want, key=operator.itemgetter('permission'))
    ...

Upload unicode object metadata

The ceph test: "test_object_set_get_unicode_metadata" assumes that s3 can get unicode metadata but according to AWS s3api, object metadata should contain only ascii characters.

test_put_obj_with_tags fails on tag set order

The ceph test "test_put_obj_with_tags" contains the check:
eq(response['TagSet'], tagset).
It fails because of the tag ordering, but the AWS spec does not specify in which order the tags are supposed to be returned.
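One way to make the comparison order-insensitive (a sketch, not necessarily how the test was eventually fixed) is to sort both tag sets before comparing:

    def tagsets_equal(got, want):
        # Compare two S3 tag sets while ignoring the order of the tags.
        key = lambda tag: (tag['Key'], tag['Value'])
        return sorted(got, key=key) == sorted(want, key=key)

    assert tagsets_equal(
        [{'Key': 'b', 'Value': '2'}, {'Key': 'a', 'Value': '1'}],
        [{'Key': 'a', 'Value': '1'}, {'Key': 'b', 'Value': '2'}])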

Test assertion on bucket names with invalid dns rules

Hi there, when I run these tests they report "E" with exceptions saying a 400 InvalidBucketName response was received, but that is actually expected behaviour according to the AWS bucket naming rules, so I was wondering whether this is caused by the error message being interpreted incorrectly? The assertion is "fails with subdomain", but I was unable to find where this assertion gets defined; do you have any suggestions?
s3tests.functional.test_s3.test_bucket_create_naming_dns_dash_at_end
s3tests.functional.test_s3.test_bucket_create_naming_dns_dash_dot
s3tests.functional.test_s3.test_bucket_create_naming_dns_dot_dash
s3tests.functional.test_s3.test_bucket_create_naming_dns_dot_dot
s3tests.functional.test_s3.test_bucket_create_naming_dns_long
s3tests.functional.test_s3.test_bucket_create_naming_dns_underscore

bootstrap is failing in RHEL/Centos 8

bootstrap is failing in RHEL/Centos 8
It looks like the reason is that in RHEL/Centos 8:

  1. python-virtualenv was replaced with python2-virtualenv
  2. python-devel was replaced with python2-devel
  3. virtualenv was replaced with virtualenv-2

$ ls -la /etc/redhat-release
lrwxrwxrwx 1 root root 14 Aug 14 06:42 /etc/redhat-release -> centos-release

$ cat /etc/redhat-release
CentOS Linux release 8.0.1905 (Core)

$ cd src/test/system_tests/s3-tests/
$ bash -x ./bootstrap

...
+ echo './bootstrap: missing required RPM packages. Installing via sudo.'
./bootstrap: missing required RPM packages. Installing via sudo.
+ sudo yum -y install python-virtualenv python-devel libevent-devel libffi-devel libxml2-devel libxslt-devel zlib-devel
CentOS-8 - AppStream                                                                                                                                                                                                                          2.0 MB/s | 6.3 MB     00:03
CentOS-8 - Base                                                                                                                                                                                                                               2.9 MB/s | 7.9 MB     00:02
CentOS-8 - Extras                                                                                                                                                                                                                             893  B/s | 2.1 kB     00:02
No match for argument: python-virtualenv
No match for argument: python-devel
Error: Unable to find a match

Boto's API for lifecycle is not complete.

I want to add some test cases about non-current version expiration in lifecycle. Boto doesn't have an API to do this while boto3 does. What should we do about this? Could we update from boto to boto3?
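For reference, boto3 does expose this; a non-current version expiration rule can be set roughly as in the sketch below (bucket name and day count are placeholders, endpoint and credentials are omitted):

    # Hedged sketch: lifecycle rule that expires non-current object versions.
    import boto3

    s3 = boto3.client('s3')  # endpoint_url and credentials omitted in this sketch

    s3.put_bucket_lifecycle_configuration(
        Bucket='example-bucket',
        LifecycleConfiguration={
            'Rules': [{
                'ID': 'expire-noncurrent',
                'Filter': {'Prefix': ''},
                'Status': 'Enabled',
                'NoncurrentVersionExpiration': {'NoncurrentDays': 2},
            }],
        })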

Something in the test quite repeatedly loads http://hans-moleman.w3.org/MarkUp/DTD/xhtml-image-1.mod

This is fetched over and over again, and each time it takes several seconds, rendering the test very slow:

socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 9
connect(9, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("128.30.52.100")}, 16) = 0
sendto(9, "GET /MarkUp/DTD/xhtml-csismap-1.mod HTTP/1.0\r\nHost: www.w3.org\r\nUser-Agent: Python-urllib/1.17\r\nAccept: /\r\n\r\n", 111, 0, NULL, 0) = 111

Can't it just be cached the first time?

Using AWS4

What's the proper way to invoke the AWS4 authentication path with s3-tests? I looked through some old issues and found one indicating something like this to try:

S3TEST_CONF=aws.conf S3_USE_SIGV4=1 nosetests s3tests.functional.test_s3:test_object_raw_get_x_amz_expires_not_expired -a 'auth_aws4'

I also tried

S3TEST_CONF=aws.conf S3_USE_SIGV4=true nosetests s3tests.functional.test_s3:test_object_raw_get_x_amz_expires_not_expired -a 'auth_aws4'

But neither resulted in AWS4 authentication headers, so I was wondering what the proper way to do it is. BTW, I have boto version 2.6.0 in my environment.

Thanks.

Pull request backlog

This project has had a large backlog of open pull requests for many years. Looking at different snapshots from archive.org suggests that unreviewed pull requests steadily increased from 40 in June 2018 to 61 today. I have repeatedly raised this issue on ceph-devel and via private mails to the committers:

Is this project still alive? Does it accept pull requests from external contributors? If not please add a warning to the README.

cc @alimaredia @cbodley @liewegas

Provide Docker image

s3-tests has Python environment dependencies which are irritating to satisfy even with virtualenv. Providing a Docker image would reduce this configuration burden.

check_grants will raise an exception when a grantee of "Group" type doesn't have an ID in the response

The check_grants method expects all grantees in the response to have an ID, but for a grantee of "Group" type an ID isn't required in the response, so an AttributeError will be raised. According to the AWS docs, it is perfectly valid for a "Group" grantee not to contain an ID in the response, so I guess check_grants should handle such a case.

https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#sample-acl

The AttributeError is raised by the sorted() call shown below:

def check_grants(got, want):
    """
    Check that grants list in got matches the dictionaries in want,
    in any order.
    """
    eq(len(got), len(want))
    got = sorted(got, key=operator.attrgetter('id', 'permission'))  #  This will fail if grantee doesn't have ID
    want = sorted(want, key=operator.itemgetter('id', 'permission'))
    for g, w in zip(got, want):
        w = dict(w)
        eq(g.permission, w.pop('permission'))
        eq(g.id, w.pop('id'))
        #eq(g.display_name, w.pop('display_name'))
        w.pop('display_name')
        eq(g.uri, w.pop('uri'))
        #eq(g.email_address, w.pop('email_address'))
        w.pop('email_address')
        eq(g.type, w.pop('type'))
        eq(w, {})
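One possible way to tolerate the missing ID (a sketch, not the project's actual fix) is a sort key that falls back to the grantee URI:

    def grant_sort_key(grant):
        # Sort grants even when a Group grantee carries no ID: fall back to the URI.
        return (getattr(grant, 'id', None) or getattr(grant, 'uri', '') or '',
                grant.permission)

    # Usage inside check_grants would then be:
    #     got = sorted(got, key=grant_sort_key)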

Ceph test copy not owned bucket is wrong

After checking in AWS, the S3 test "test_object_copy_not_owned_bucket" is wrong; it is not supposed to return a 403 error code.
The operation that is supposed to fail actually succeeds and the object can be copied.

"409 conflict" error while running these tests on openstack swift cluster.

I am running an OpenStack Swift cluster with the current Rocky version. While running these Ceph S3 tests, once in a while I get the following error:

S3ResponseError: S3ResponseError: 409 Conflict

BucketNotEmpty: The bucket you tried to delete is not empty (request id tx0f9654a1cec04f4c99215-005be9aaea)

This means it is trying to delete the bucket, but because the bucket is not empty, this error comes up. Please help if anyone has come across this error, or explain how this works internally. Any kind of help is fine.
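For reference, the bucket has to be emptied before it can be deleted; with boto2 (which these tests use) that looks roughly like the sketch below, given an existing connection and bucket name:

    def delete_bucket_and_contents(conn, bucket_name):
        # Hedged sketch (boto2): delete every key, then delete the bucket itself.
        bucket = conn.get_bucket(bucket_name)
        for key in bucket.list():
            key.delete()
        conn.delete_bucket(bucket_name)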

SSL: CERTIFICATE_VERIFY_FAILED

I'm trying to run the tests with a self-signed certificate and almost every test fails with the error:
SSL: CERTIFICATE_VERIFY_FAILED

is_secure is set to 'True' in the config file.

Is there any way to ignore SSL certificate error?
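For what it's worth, a boto3 client built by hand can skip certificate verification with verify=False; a sketch with placeholder endpoint and credentials (the s3-tests config itself may not expose such a switch):

    # Hedged sketch: talk to a self-signed HTTPS endpoint without verifying the cert.
    import boto3

    s3 = boto3.client('s3',
                      endpoint_url='https://localhost:7480',
                      aws_access_key_id='ACCESSKEY',
                      aws_secret_access_key='SECRETKEY',
                      verify=False)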

File contains no section headers? How to deal with it?

ERROR: test suite for

module 's3tests.functional' from '/home/deploy/s3-tests/s3tests/functional/__init__.pyc'

Traceback (most recent call last):
File "/home/deploy/s3-tests/virtualenv/lib/python2.7/site-packages/nose/suite.py", line 209, in run
self.setUp()
File "/home/deploy/s3-tests/virtualenv/lib/python2.7/site-packages/nose/suite.py", line 292, in setUp
self.setupContext(ancestor)
File "/home/deploy/s3-tests/virtualenv/lib/python2.7/site-packages/nose/suite.py", line 315, in setupContext
try_run(context, names)
File "/home/deploy/s3-tests/virtualenv/lib/python2.7/site-packages/nose/util.py", line 471, in try_run
return func()
File "/home/deploy/s3-tests/s3tests/functional/init.py", line 266, in setup
cfg.readfp(f)
File "/usr/lib64/python2.7/ConfigParser.py", line 324, in readfp
self._read(fp, filename)
File "/usr/lib64/python2.7/ConfigParser.py", line 512, in _read
raise MissingSectionHeaderError(fpname, lineno, line)
MissingSectionHeaderError: File contains no section headers.
file: config.yaml.SAMPLE, line: 1
'fixtures:\n'

s3-tests failed with CEPH on Raspberry Pi

Hello, I am trying to create a testbucket on a Ceph Raspberry Pi cluster (local) and I get the following error message:

OS:Debian Jessie
Ceph: v12.2.12 Luminous
s3cmd:2.0.2

[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host admin and default port 7480


cephuser@admin:~/mycluster $ ceph -s
  cluster:
    id:     745d44c2-86dd-4b2f-9c9c-ab50160ea353
    health: HEALTH_WARN
            too few PGs per OSD (24 < min 30)

  services:
    mon: 1 daemons, quorum admin
    mgr: admin(active)
    osd: 4 osds: 4 up, 4 in
    rgw: 1 daemon active

  data:
    pools:   4 pools, 32 pgs
    objects: 80 objects, 1.09KiB
    usage:   4.01GiB used, 93.6GiB / 97.6GiB avail
    pgs:     32 active+clean

  io:
    client:   5.83KiB/s rd, 0B/s wr, 7op/s rd, 1op/s wr

After one minute the service (rgw: 1 daemon active) is no longer visible:

cephuser@admin:~/mycluster $ ceph -s
  cluster:
    id:     745d44c2-86dd-4b2f-9c9c-ab50160ea353
    health: HEALTH_WARN
            too few PGs per OSD (24 < min 30)

  services:
    mon: 1 daemons, quorum admin
    mgr: admin(active)
    osd: 4 osds: 4 up, 4 in

  data:
    pools:   4 pools, 32 pgs
    objects: 80 objects, 1.09KiB
    usage:   4.01GiB used, 93.6GiB / 97.6GiB avail
    pgs:     32 active+clean

./s3cmd --debug mb s3://testbucket

Debug Message:

DEBUG: Unicodising 'mb' using UTF-8
DEBUG: Unicodising 's3://testbucket' using UTF-8
DEBUG: Command: mb
DEBUG: CreateRequest: resource[uri]=/
DEBUG: Using signature v2
DEBUG: SignHeaders: u'PUT\n\n\n\nx-amz-date:Wed, 15 Jan 2020 02:28:25 +0000\n/testbucket/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(testbucket): 192.168.178.50:7480
DEBUG: ConnMan.get(): creating new connection: http://192.168.178.50:7480
DEBUG: non-proxied HTTPConnection(192.168.178.50, 7480)
DEBUG: Response:


DEBUG: Unicodising './s3cmd' using UTF-8
DEBUG: Unicodising '--debug' using UTF-8
DEBUG: Unicodising 'mb' using UTF-8
DEBUG: Unicodising 's3://testbucket' using UTF-8
Invoked as: ./s3cmd --debug mb s3://testbucket
Problem: error: [Errno 111] Connection refused
S3cmd:   2.0.2
python:   2.7.17 (default, Oct 19 2019, 23:36:22)
[GCC 9.2.1 20190909]
environment LANG=en_GB.UTF-8

Traceback (most recent call last):
  File "./s3cmd", line 3092, in <module>
    rc = main()
  File "./s3cmd", line 3001, in main
    rc = cmd_func(args)
  File "./s3cmd", line 237, in cmd_bucket_create
    response = s3.bucket_create(uri.bucket(), cfg.bucket_location)
  File "/home/cephuser/s3cmd-2.0.2/S3/S3.py", line 398, in bucket_create
    response = self.send_request(request)
  File "/home/cephuser/s3cmd-2.0.2/S3/S3.py", line 1258, in send_request
    conn = ConnMan.get(self.get_hostname(resource['bucket']))
  File "/home/cephuser/s3cmd-2.0.2/S3/ConnMan.py", line 253, in get
    conn.c.connect()
  File "/usr/lib/python2.7/httplib.py", line 831, in connect
    self.timeout, self.source_address)
  File "/usr/lib/python2.7/socket.py", line 575, in create_connection
    raise err
error: [Errno 111] Connection refused

Does anyone know about this error?

Ceph Tests make specific assumptions about RGW custom headers

The following tests make assumptions about custom RGW headers in the S3 responses:

  1. s3tests_boto3.functional.test_s3.test_abort_multipart_upload
  2. s3tests_boto3.functional.test_s3.test_bucket_head_extended
  3. s3tests_boto3.functional.test_s3.test_encryption_sse_c_multipart_upload
  4. s3tests_boto3.functional.test_s3.test_multipart_upload
  5. s3tests_boto3.functional.test_s3.test_sse_kms_multipart_upload

The tests expect responses that contain custom headers like x-rgw-bytes-used, x-rgw-object-count, etc., which are not part of the Amazon AWS S3 spec.
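For illustration, such a test typically reads the extra headers from the raw response metadata, roughly as in this sketch (the function and argument names are hypothetical):

    # Hedged sketch: inspect an RGW-specific response header via boto3.
    def rgw_object_count(client, bucket_name):
        resp = client.head_bucket(Bucket=bucket_name)
        headers = resp['ResponseMetadata']['HTTPHeaders']
        return headers.get('x-rgw-object-count')  # None when run against AWS S3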

Does AWS S3 support conditional put?

I am testing OpenStack Swift using s3-tests; all cases about conditional put/update output FAIL/ERROR.

Like: s3tests.functional.test_s3.test_put_object_ifmatch_good

Then I tested conditional put with AWS S3, and it raises a NotImplemented error.

./s3curl/s3curl.pl --id test  --   http://testbucket.s3-ap-southeast-1.amazonaws.com/aaa -X PUT  --data @/tmp/aaa  -H 'If-Match: abc'
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NotImplemented</Code><Message>A header you provided implies functionality that is not implemented</Message><Header>If-Match</Header><RequestId>301498F0A024E98Q</RequestId><HostId>Yd9JPz7BG8ixUiSCtVOSl90fGjqNtxH45WJa0hb+bli6x88TNjyxhzVFxRI6CHjHDA6GR8cIE8Y=</HostId></Error>

Does S3 not actually support conditional put?

Outdated Bucket Naming Tests

According to the newest AWS bucket naming policy, the bucket name length should be in the range [3, 63]; is there any particular reason these tests treat such long names as correct?
s3tests.functional.test_s3.test_bucket_create_naming_good_long_250
s3tests.functional.test_s3.test_bucket_create_naming_good_long_251
s3tests.functional.test_s3.test_bucket_create_naming_good_long_252
s3tests.functional.test_s3.test_bucket_create_naming_good_long_253
s3tests.functional.test_s3.test_bucket_create_naming_good_long_254
s3tests.functional.test_s3.test_bucket_create_naming_good_long_255

Document exactly what permissions [s3 main] and [s3 alt] user_id's should have

It seems the "s3 main" user must be able to create buckets and the "s3 alt" user must be able to set the bucket acl even when the bucket was created by the "[s3 main]" user_id?
So should both users have all permissions on the account? What is the minimum set of permissions (policy actions) they can have?

[s3 main]
## the tests assume two accounts are defined, "main" and "alt".
...
[s3 alt]
## another user account, used for ACL-related tests

test_object_create_bad_contentlength_negative expects error_code to be None?

Is it correct for the test_object_create_bad_contentlength_negative (and a few others) to expect no error_code when sending an invalid ContentLength? Perhaps the error_code should be ignored or something like 'InvalidURI' instead?

def test_object_create_bad_contentlength_negative():
    key = _setup_bad_object({'Content-Length': -1})

    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
    eq(e.status, 400)
    eq(e.reason, 'Bad Request')
    eq(e.error_code, None)
    ^^^^^^^^^^^^^^^^^^^^^^

bump gevent version >=1.0

I find gevent 0.13.6 has several bugs on DNS resolving here. Is it possible to bump to the 1.0 version?

new test case suggestion: bucket creation with ACL

Hi,

One common action for many S3 clients, such as Cyberduck, is to set ACLs for a bucket at the same time as it is being created. This would be a valuable test case, as having the X-Amz-Acl header set could be treated differently when creating a bucket - it caught me out when working on Swift, at least :)

I'm not sure what boto sends, but here's an example request from Cyberduck:

PUT /lol/ HTTP/1.1
Date: Tue, 16 Oct 2012 22:56:57 GMT
x-amz-acl: public-read
Content-Type: application/octet-stream
Authorization: AWS admin:admin:qibESADS8jtU8aZTJa2K0KJqo1g=
Content-Length: 0
Host: 192.168.1.20:443
Connection: Keep-Alive
User-Agent: Cyberduck/4.2.1 (Mac OS X/10.8.2) (i386)

When to use a tenant user?

Hi there. I found there are three users in s3tests.conf: main, alt and tenant. In which situation should we use tenant? I'm writing a test case for object copy. The purpose of the case is to check whether a tenant user can copy objects. Should I add another function, or modify the current function to use the tenant user instead of the main user? Thanks!

s3tests.functional.test_s3:test_bucket_acl_no_grants

AWS: I find that when we create a bucket in AWS and revoke the ACL, the owner of the bucket is still able to create objects in that bucket.
Ceph test: In this test, test_bucket_acl_no_grants, when the ACLs are revoked and object creation is attempted, the test expects 'denied':

https://github.com/ceph/s3-tests/blob/master/s3tests/functional/test_s3.py

# can't write
key = bucket.new_key('baz')
check_access_denied(key.set_contents_from_string, 'bar')

Could someone please advise, as I feel this is contradictory to AWS behaviour. Thanks.

TooManyBuckets test failures against AWS

AWS only allows 100 buckets by default and s3-tests has over 300 tests. Since s3-tests only nukes its buckets after the end of all tests, this results in many spurious failures:

Ran 305 tests in 250.788s

FAILED (SKIP=4, errors=158, failures=10)

Any suggestions on the least invasive way to address this? nosetests allows defining a single teardown for all test methods in a class but not for all functions in a source file:

http://nose.readthedocs.org/en/latest/writing_tests.html
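That said, nose also honours module-level fixtures (a function named teardown_module, or teardown, runs once after every test in the file has finished), so one option is a per-file cleanup; a sketch using boto2 calls, where the prefix and connection setup are hypothetical:

    import boto

    PREFIX = 'test-'  # hypothetical prefix used for buckets created by the tests

    def teardown_module():
        # nose runs this once after all tests in this module have finished.
        conn = boto.connect_s3()  # would really use the suite's configured endpoint
        for bucket in conn.get_all_buckets():
            if bucket.name.startswith(PREFIX):
                for key in bucket.list():
                    key.delete()
                conn.delete_bucket(bucket.name)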

--no-site-packages flag is deprecated

When running ./bootstrap on today's code, this warning appears:

The --no-site-packages flag is deprecated; it is now the default behavior.

This is with the virtualenv in Ubuntu precise:
Package: python-virtualenv
Priority: optional
Section: universe/python
Installed-Size: 2257
Maintainer: Ubuntu Developers [email protected]
Original-Maintainer: Debian Python Modules Team [email protected]
Architecture: all
Version: 1.7.1.2-1
Depends: python-pkg-resources, python-setuptools, python2.7, python (>= 2.7.1-0ubuntu2), python (<< 2.8)
Recommends: python-pip (>= 0.7.2)
Filename: pool/universe/p/python-virtualenv/python-virtualenv_1.7.1.2-1_all.deb
Size: 2111826
MD5sum: 20a2b9da21d37f1c8a761faf793eadbf
SHA1: ee48febaf31cb88c1b4f739cf1028c1ea668ab4b
SHA256: 28c187627844cf05c122f34366554835e269c1a1825f40e6d50454f074dfda84
Description-en: Python virtual environment creator
The virtualenv utility creates virtual Python instances, each invokable
with its own Python executable. Each instance can have different sets
of modules, installable via easy_install. Virtual Python instances can
also be created without root access.
Homepage: http://pypi.python.org/pypi/virtualenv
Description-md5: 21e2f7e142e8ff29fdbe9a4424e017d0
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Origin: Ubuntu

Problem to install with Python 3.7

Hi,
Has anyone tried this with a newer version of Python?
I have tried to install it with both Python 2.7 and Python 3.7, but both end up with the same issue.

(s3-tests) [chrisp@issen-lnx s3-tests]$ S3TEST_CONF=your.conf ./virtualenv/bin/nosetests
EEEEEE
======================================================================
ERROR: Failure: ModuleNotFoundError (No module named 'httplib')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/failure.py", line 39, in runTest
    raise self.exc_val.with_traceback(self.tb)
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/loader.py", line 418, in loadTestsFromName
    addr.filename, addr.module)
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/importer.py", line 94, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/chrisp/github/s3-tests/virtualenv/lib64/python3.7/imp.py", line 244, in load_module
    return load_package(name, filename)
  File "/home/chrisp/github/s3-tests/virtualenv/lib64/python3.7/imp.py", line 216, in load_package
    return _load(spec)
  File "<frozen importlib._bootstrap>", line 696, in _load
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/chrisp/github/s3-tests/s3tests/functional/__init__.py", line 11, in <module>
    from httplib import HTTPConnection, HTTPSConnection
ModuleNotFoundError: No module named 'httplib'

======================================================================
ERROR: Failure: SyntaxError (invalid syntax (headers.py, line 341))
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/failure.py", line 39, in runTest
    raise self.exc_val.with_traceback(self.tb)
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/loader.py", line 418, in loadTestsFromName
    addr.filename, addr.module)
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/importer.py", line 94, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/chrisp/github/s3-tests/virtualenv/lib64/python3.7/imp.py", line 234, in load_module
    return load_source(name, filename, file)
  File "/home/chrisp/github/s3-tests/virtualenv/lib64/python3.7/imp.py", line 171, in load_source
    module = _load(spec)
  File "<frozen importlib._bootstrap>", line 696, in _load
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/chrisp/github/s3-tests/s3tests/fuzz/test/test_fuzzer.py", line 21, in <module>
    from ..headers import *
  File "/home/chrisp/github/s3-tests/s3tests/fuzz/headers.py", line 341
    except BotoServerError, e:
                          ^
SyntaxError: invalid syntax

======================================================================
ERROR: Failure: SyntaxError (Missing parentheses in call to 'print'. Did you mean print('original  hash: ', self.original_hash)? (realistic.py, line 50))
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/failure.py", line 39, in runTest
    raise self.exc_val.with_traceback(self.tb)
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/loader.py", line 418, in loadTestsFromName
    addr.filename, addr.module)
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/importer.py", line 94, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/chrisp/github/s3-tests/virtualenv/lib64/python3.7/imp.py", line 234, in load_module
    return load_source(name, filename, file)
  File "/home/chrisp/github/s3-tests/virtualenv/lib64/python3.7/imp.py", line 171, in load_source
    module = _load(spec)
  File "<frozen importlib._bootstrap>", line 696, in _load
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/chrisp/github/s3-tests/s3tests/tests/test_realistic.py", line 1, in <module>
    from s3tests import realistic
  File "/home/chrisp/github/s3-tests/s3tests/realistic.py", line 50
    print 'original  hash: ', self.original_hash
                           ^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print('original  hash: ', self.original_hash)?

======================================================================
ERROR: test suite for <module 's3tests_boto3.functional' from '/home/chrisp/github/s3-tests/s3tests_boto3/functional/__init__.py'>
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/suite.py", line 210, in run
    self.setUp()
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/suite.py", line 293, in setUp
    self.setupContext(ancestor)
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/suite.py", line 316, in setupContext
    try_run(context, names)
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/util.py", line 471, in try_run
    return func()
  File "/home/chrisp/github/s3-tests/s3tests_boto3/functional/__init__.py", line 131, in setup
    cfg = ConfigParser.RawConfigParser()
AttributeError: type object 'ConfigParser' has no attribute 'RawConfigParser'

======================================================================
ERROR: Failure: ModuleNotFoundError (No module named 'httplib')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/failure.py", line 39, in runTest
    raise self.exc_val.with_traceback(self.tb)
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/loader.py", line 418, in loadTestsFromName
    addr.filename, addr.module)
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/importer.py", line 94, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/chrisp/github/s3-tests/virtualenv/lib64/python3.7/imp.py", line 234, in load_module
    return load_source(name, filename, file)
  File "/home/chrisp/github/s3-tests/virtualenv/lib64/python3.7/imp.py", line 171, in load_source
    module = _load(spec)
  File "<frozen importlib._bootstrap>", line 696, in _load
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/chrisp/github/s3-tests/s3tests_boto3/fuzz/test/test_fuzzer.py", line 21, in <module>
    from ..headers import *
  File "/home/chrisp/github/s3-tests/s3tests_boto3/fuzz/headers.py", line 4, in <module>
    from httplib import BadStatusLine
ModuleNotFoundError: No module named 'httplib'

======================================================================
ERROR: Failure: SyntaxError (Missing parentheses in call to 'print'. Did you mean print('original  hash: ', self.original_hash)? (realistic.py, line 50))
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/failure.py", line 39, in runTest
    raise self.exc_val.with_traceback(self.tb)
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/loader.py", line 418, in loadTestsFromName
    addr.filename, addr.module)
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/home/chrisp/github/s3-tests/virtualenv/lib/python3.7/site-packages/nose/importer.py", line 94, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/home/chrisp/github/s3-tests/virtualenv/lib64/python3.7/imp.py", line 234, in load_module
    return load_source(name, filename, file)
  File "/home/chrisp/github/s3-tests/virtualenv/lib64/python3.7/imp.py", line 171, in load_source
    module = _load(spec)
  File "<frozen importlib._bootstrap>", line 696, in _load
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/chrisp/github/s3-tests/s3tests_boto3/tests/test_realistic.py", line 1, in <module>
    from s3tests import realistic
  File "/home/chrisp/github/s3-tests/s3tests/realistic.py", line 50
    print 'original  hash: ', self.original_hash
                           ^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print('original  hash: ', self.original_hash)?

----------------------------------------------------------------------
Ran 5 tests in 0.376s

FAILED (errors=6)

Cannot run using Fedora 31

Fedora 31 removes many of its Python 2.7 packages:

https://fedoraproject.org/wiki/Releases/31/ChangeSet#F31_Mass_Python_2_Package_Removal

This includes virtualenv so now s3-tests cannot run:

./bootstrap: missing required RPM packages. Installing via sudo.
Fedora Modular 31 - x86_64                                                                    36 kB/s |  15 kB     00:00    
Fedora Modular 31 - x86_64 - Updates                                                          43 kB/s |  17 kB     00:00    
Fedora 31 - x86_64 - Updates                                                                 9.3 kB/s |  17 kB     00:01    
Fedora 31 - x86_64                                                                            38 kB/s |  14 kB     00:00    
No match for argument: python2-virtualenv
Error: Unable to find a match: python2-virtualenv

ETags differ due to double quotes in test_bucket_list_return_data

Traceback (most recent call last):

File "/tmp/s3-tests-master/virtualenv/lib64/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/tmp/s3-tests-master/s3tests/functional/test_s3.py", line 719, in test_bucket_list_return_data
eq(key.etag, key_data['etag'])
AssertionError: u'37b51d194a7513e45b56f6524f2d51f2' != '"37b51d194a7513e45b56f6524f2d51f2"'

Suggestion (line 705 of s3tests/functional/test_s3.py):
'etag': key.etag.strip('"'),

test_object_lock_put_obj_lock_invalid_days assert is incorrect according to AWS

The ceph test "test_object_lock_put_obj_lock_invalid_days" contains the check:
eq(error_code, 'InvalidRetentionPeriod')
Running the same code in AWS throws an error that contains the error code InvalidArgument, not InvalidRetentionPeriod.

In addition, test_object_lock_put_obj_lock_invalid_years appears twice with different content (and one of them has the same problem as test_object_lock_put_obj_lock_invalid_days).
