
s3tester's Introduction

s3tester - S3 Performance Benchmarking

s3tester aims to be a lightweight S3 performance testing utility, focused solely on S3 testing.

This tool is in active development - please submit feature requests on the issues page.

Minimum Requirements

  • Go 1.18 or higher

Installation

$ go install github.com/s3tester/s3tester@latest

If you don't want to build from source, you can download precompiled s3tester binaries for Windows or Linux from github.com/s3tester/s3tester/releases.

Usage

Setting your S3 credentials

There are multiple options for setting credentials.

  • Using environment variables:
$ export AWS_ACCESS_KEY_ID=AKIAINZFCN46TISVUUCA
$ export AWS_SECRET_ACCESS_KEY=VInXxOfGtEIwVck4AdtUDavmJf/qt3jaJEAvSKZO
  • Using an AWS credential file: see the -profile option below for details.
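
For the credential-file route, a minimal sketch (the profile name s3testing and the key values are illustrative placeholders, not real credentials):

$ cat ~/.aws/credentials
[s3testing]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>

$ ./s3tester -profile=s3testing -bucket=test -operation=put -requests=100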

Command line options

Each option below is listed as parameter (type): description.

addressing-style (string): Whether to use virtual-hosted style addresses (bucket name is in the hostname) or path-style addresses (bucket name is part of the path). Value must be one of virtual or path. Default: path
bucket (string): Bucket name (mandatory). Default: test
concurrency (int): Maximum concurrent requests. 0: scan concurrency; run with ulimit -n 16384. Default: 1
consistency (string): The StorageGRID consistency control to use for all requests. Does nothing against non-StorageGRID systems. One of: all, available, strong-global, strong-site, read-after-new-write, weak
cpuprofile (string): Write CPU profile to file
days (int): The number of days that the restored object will be available for. Default: 1
debug (boolean): Print response body on request failure
describe (boolean): Instead of running tests, show the consolidated list of test parameters that will be used when a test is run
duration (int): Test duration in seconds. Duration must be used without 'requests' for operations that do not need existing objects, such as options, put, and multipartput. Duration must be used with 'requests' for operations that do need existing objects, such as get (the run wraps around to the beginning if the number of requests is exceeded). Duration cannot be used with operations that remove objects
endpoint (string): Target endpoint(s). If multiple endpoints are specified, separate them with commas. Note: the concurrency must be a multiple of the number of endpoints. Default: "https://127.0.0.1:18082"
header: Specify one or more headers of the form <header-name>: <header-value>
incrementing (boolean): Force the key naming to be lexicographically increasing. This is achieved by zero-padding the numerical suffix. For most use cases, suffix-naming should be set to together if this parameter is set to true
json (boolean): The result will be printed in JSON format if this flag is present. Default: false
logdetail (string): Write detailed per-request log to file
loglatency (string): Write latency histogram to file
metadata (string): The metadata to use for the objects, formatted as 'key1=value1&key2=value2'. Used for put, updatemeta, multipartput, putget and putget9010r
metadata-directive (string): Specifies whether the metadata is copied from the source object or replaced with the metadata provided in the object copy request. Value must be one of COPY or REPLACE. Default: COPY
mixed-workload (string): Path to a JSON file that specifies a mixture of operations
no-sign-request (boolean): Do not sign requests. Credentials will not be loaded if this argument is provided
operation (string): Operation type: put, multipartput, get, puttagging, updatemeta, randget, delete, options, head, restore. Default: put
overwrite (int): Turns a PUT/GET/HEAD into an operation on the same S3 key. 1: all writes/reads go to the same object; 2: threads clobber each other, but each thread's writes/reads go to unique objects
partsize (int): Size of each part in bytes. Only has an effect when a multipart PUT is used. Metric and binary byte size entries are valid (for example, 5MiB = 5242880 and 5MB = 5000000)
prefix (string): Object name prefix. Default: testobject
profile (string): Use a specific profile from the AWS CLI credential file
query-params (string): Specify one or more custom query parameters of the form <queryparam-name>=<queryparam-value> or <queryparam-name>, separated by ampersands
random-range (string): Used to perform random range GET requests. Format is <min>-<max>/<size>, where <size> is the number of bytes per GET request and <min>-<max> is an inclusive byte range within the object. Example: use 0-399/100 to perform random 100-byte reads within the first 400 bytes of an object
range (string): Specify a range header for GET requests
ratelimit (float): The total number of operations per second across all threads. Default: unlimited (1.7976931348623157e+308)
region (string): Region to send requests to. Default: us-east-1
repeat (int): Repeat each S3 operation this many times. Default: 0 (do not repeat)
requests (int): Total number of requests. Default: 1000
retries (int): Number of retry attempts. Default: 0
retrysleep (int): How long to sleep between retries, in milliseconds. Default: 0 (exponential backoff)
rr: Use reduced redundancy storage for PUT requests
size (int): Object size in bytes. Metric and binary byte size entries are valid (for example, 5MiB = 5242880 and 5MB = 5000000). Default: 30720
suffix-naming (string): Determines how the numerical key names are divided between concurrent threads. One of: separate, together. Default: separate. With separate, each thread gets a separate numerical range to handle; with together, the threads are assigned numbers that increase at the same rate (this does not force the threads to sync with each other)
tagging (string): The tag-set for the object, formatted as 'tag1=value1&tag2=value2'. Used for put, puttagging, putget and putget9010r
tagging-directive (string): Specifies whether the object tag-set is copied from the source object or replaced with the tag-set provided in the object copy request. Value must be one of COPY or REPLACE. Default: COPY
tier (string): The retrieval option for restoring an object. One of: expedited, standard, or bulk. The AWS default is standard if not specified. Default: standard
uniformDist (string): Generates a uniform distribution of object sizes given a min-max size (for example, 10-20)
verify (int): Verify the retrieved data on a get operation. 0: disable verify (default); 1: verify against normal put data; 2: verify against multipart put data. If verify equals 2, partsize is required (default partsize is 5242880 bytes)
workload (string): File path to a JSON file that describes a workload to be run. The file is parsed with the Go template package and must produce JSON that is valid according to the workload schema
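
To illustrate the metadata, tagging, and header formats above, a hedged example on a put workload (the header name and all values are illustrative placeholders):

./s3tester -operation=put -bucket=test -prefix=meta -requests=100 \
  -metadata="key1=value1&key2=value2" \
  -tagging="tag1=value1&tag2=value2" \
  -header="x-custom-header: somevalue"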

workload JSON Sample File

{
  "global": {
    "concurrency": 4,
    "prefix": "test",
    "requests": 20
  },
  "workload": [
    {
      "bucket": "b1",
      "operation": "put"
    },
    {
      "bucket": "b2",
      "copy-source-bucket": "b1",
      "operation": "copy"
    },
    {
      "bucket": "b2",
      "operation": "get"
    },
    {
      "bucket": "b2",
      "operation": "head"
    },
    {
      "bucket": "b1",
      "operation": "delete"
    },
    {
      "bucket": "b2",
      "operation": "delete"
    }
  ]
}

NOTE: See the repository documentation for additional workload file samples and Go template support.
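
Assuming the sample above is saved as workload.json, a run could look like the following sketch (the global block appears to supply defaults shared by every workload entry; the endpoint is illustrative):

./s3tester -endpoint="https://10.96.105.5:18443" -workload=workload.json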

mixedWorkload JSON Sample File

{
  "mixedWorkload": [
    {
      "operationType": "put",
      "ratio": 25
    },
    {
      "operationType": "get",
      "ratio": 25
    },
    {
      "operationType": "updatemeta",
      "ratio": 25
    },
    {
      "operationType": "delete",
      "ratio": 25
    }
  ]
}

NOTE: Requests are generated in the order the operations are specified. That is, if a DELETE precedes a PUT but there are no objects on your grid to delete, all of the deletes will fail.
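
If the file above is saved as mixed.json, it could be run with the -mixed-workload option (a sketch; the bucket, endpoint, and sizing values are illustrative):

./s3tester -mixed-workload=mixed.json -bucket=test -concurrency=4 -size=4096 -requests=1000 -endpoint="https://10.96.105.5:18443"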

Exit codes

  • 1: one or more requests have failed
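
In a wrapper script you can branch on the exit code (a sketch assuming the conventional zero exit status on success, which the list above implies but does not state):

# assumes s3tester exits 0 when every request succeeds (see exit codes above)
./s3tester -operation=put -bucket=test -requests=1000
if [ $? -ne 0 ]; then
    echo "one or more requests failed" >&2
fi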

Examples

Writing objects into a bucket

./s3tester -concurrency=128 -size=20MB -operation=put -requests=20000 -endpoint="https://10.96.105.5:18443" -prefix=3
  • Starts writing objects into the default bucket test.
  • The bucket needs to be created prior to running s3tester.
  • The ingested objects will be named 3-object#, where 3 is the specified prefix and object# is a sequential number starting from zero and increasing up to the number of requests.
  • This command will perform a total of 20,000 PUT requests (in this case slightly fewer, because 20,000 is not evenly divisible by 128; see the check below this list).
  • The object size is 20MB (20,000,000 bytes).
  • Replace the sample IP/port combination with the one you are using.
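
The "slightly fewer" figure follows from splitting the requests evenly across threads; a quick arithmetic check (156 requests per thread across 128 threads gives 19,968 total):

$ awk 'BEGIN { per_thread = int(20000 / 128); print per_thread, per_thread * 128 }'
156 19968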

Reading objects from a bucket (and other operations)

./s3tester -concurrency=128 -operation=get -requests=200000 -endpoint="https://10.96.105.5:18443" -prefix=3
  • Matches the request above and will read the same objects written in the same sequence.
  • If you use the randget operation the objects will be read in random order simulating a random-access workload.
  • If you use the head operation then the S3 HEAD operation will be performed against the objects in sequence.
  • If you use the delete operation then the objects will be deleted.

As of version 2.1.0, the concurrency of a retrieval operation can differ from the concurrency used to ingest the objects. The goal is to save time by ingesting data once and then retrieving it at different concurrencies to observe the impact on performance. However, the number of requests has to match the number that was actually ingested. For example, if we ingest with concurrency 1000 and requests set to 1100, only 1000 requests will actually be ingested (1100 - 1100%1000 = 1000) to keep the number of requests per client thread equal. When performing the retrieval, the number of requests specified must therefore be 1000, not 1100.
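
A sketch of that workflow (endpoint and bucket are illustrative):

# Ingest once: 1100 requested, but only 1000 are performed (1100 - 1100%1000 = 1000)
./s3tester -operation=put -concurrency=1000 -requests=1100 -bucket=test -endpoint="https://10.96.105.5:18443"

# Retrieve at a different concurrency; requests must match the 1000 actually ingested
./s3tester -operation=get -concurrency=100 -requests=1000 -bucket=test -endpoint="https://10.96.105.5:18443"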

Interpreting the results

        --- Total Results ---
Operation: put
Concurrency: 64
Total number of requests: 99968
Total number of unique objects: 99968
Failed requests: 0
Total elapsed time: 2m43.251246249s
Average request time: 101.057175ms
Minimum request time: 13.84ms
Maximum request time: 712.75ms
Nominal requests/s: 633.3
Actual requests/s: 612.4
Content throughput: 2.392018 MB/s
Average Object Size: 4096
Response Time Percentiles
50     :   93.91 ms
75     :   114.68 ms
90     :   140.4 ms
95     :   166 ms
99     :   331.71 ms
99.9   :   492.57 ms
Latency(ms) : Operations
  0 - 1   : 0     |
  2 - 3   : 0     |
  4 - 7   : 0     |
  8 - 15  : 7     |
 16 - 31  : 945   ||
 32 - 63  : 12093 ||||||||||||||
 64 - 127 : 71662 |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
128 - 255 : 13671 ||||||||||||||||
256 - 511 : 1505  ||
512 - 713 : 85    |
  • Nominal requests/s is calculated ignoring any client side overheads. This number will always be higher than actual requests/s. If those two numbers diverge significantly it can be an indication that the client machine isn't capable of generating the required workload and you may want to consider using multiple machines.
  • Actual requests/s is the total number of requests divided by the total elapsed time in seconds.
  • Content throughput is the total amount of data ingested and retrieved in MB divided by the total elapsed time in seconds.
  • Total number of unique objects is the total number of unique objects being operated on successfully.
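
You can sanity-check the reported rates from the sample output above: the actual rate is total requests over elapsed time, and the nominal rate is consistent with concurrency divided by the average request time (an inference from these numbers, not a documented formula):

$ awk 'BEGIN { printf "%.1f\n", 99968 / 163.251246 }'   # requests / elapsed seconds (2m43.251246s)
612.4
$ awk 'BEGIN { printf "%.1f\n", 64 / 0.101057175 }'     # concurrency / average request time
633.3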

For per-request details, s3tester can be run with the -logdetail option to capture all of the request latencies in a .csv file.

s3tester's People

Contributors

blakewedwards, crankbird, felixlau19, greg-kent, guysirton, lirukshman88, pierreboerkoel, rohan-thomas, scaleoutsean, tchutw, vardhanv, vmore, wqjennette


s3tester's Issues

Print results when s3tester is killed

When using s3tester for performance testing, we would like to be able to stop s3tester and still get the results printout. I noticed the newest version 3.0.0 has some signal handling, but killing the s3tester pid does not result in printing the results. I have changes that worked on the previous 2.1.0 version; what are the chances of making the necessary changes to the latest codebase to get the results printout when s3tester is killed?

Here are the changes I have working in the previous version; however, I'm not sure how to integrate them with the latest 3.0.0, which added its own signal handling code.

diff --git a/s3tester.go b/s3tester.go
index 9393cfc..6fcd246 100644
--- a/s3tester.go
+++ b/s3tester.go
@@ -10,10 +10,12 @@ import (
 	"net"
 	"net/http"
 	"os"
+	"os/signal"
 	"runtime/pprof"
 	"strconv"
 	"strings"
 	"sync"
+	"syscall"
 	"time"
 
 	"golang.org/x/net/context"
@@ -180,7 +182,9 @@ var detailed []detail
 func runtest(args parameters) (float64, results) {
 	c := make(chan result, args.concurrency)
 	startTime := time.Now()
+	fmt.Println("s3tester starting")
 	startTestWorker(c, args)
+	fmt.Println("s3tester running")
 	testResult := collectWorkerResult(c, args, startTime)
 
 	if args.optype != "validate" {
@@ -211,6 +215,7 @@ func startTestWorker(c chan<- result, args parameters) {
 	}
 
 	for i, endpoint := range args.endpoints {
+		fmt.Println("starting worker for endpoint:", endpoint)
 		endpointStartTime := time.Now()
 		for currEndpointWorkerId := 0; currEndpointWorkerId < workersPerEndpoint; currEndpointWorkerId++ {
 			workerId := i*workersPerEndpoint + currEndpointWorkerId
@@ -297,8 +302,13 @@ func worker(results chan<- result, args parameters, credentials *credentials.Cre
 		source = rand.New(rand.NewSource(time.Now().UnixNano()))
 	}
 
+	signal_c := make(chan os.Signal, 1)
+	signal.Notify(signal_c, syscall.SIGINT, syscall.SIGTERM)
+
 	durationLimit := NewDurationSetting(args.duration, runstart)
 
+	// fmt.Println("starting worker:", id)
+
 	if workerChan != nil {
 		ReceiveS3Op(svc, httpClient, &args, durationLimit, limiter, workerChan, &r)
 	} else {
@@ -333,6 +343,13 @@ func worker(results chan<- result, args parameters, credentials *credentials.Cre
 					results <- r
 					return
 				}
+
+				select {
+				case <-signal_c:
+					results <- r
+					return
+				default:
+				}
 			}
 		}
 	}

5TB ingest failed

I tried to ingest a 5TB multipart object with s3tester. I started it with:

s3tester-11.1.0-20180326.2306.78769a3
./s3tester -endpoint https://10.96.105.161:8082 -operation multipartput -bucket test5tb1 -prefix test -requests 1 -size 5000000000000 -partsize 500000000 --retries 1

After around 2 hours this failed with:

fatal error: unexpected signal during runtime execution
[signal SIGBUS: bus error code=0x2 addr=0x958bec pc=0x44db09]

runtime stack:
fatal error: unexpected signal during runtime execution
panic during panic
[signal SIGBUS: bus error code=0x2 addr=0x869d0c pc=0x44db09]

runtime stack:
fatal error: unexpected signal during runtime execution

I repeated this multiple times, and it always failed with the same error.

Can someone look into it? I assume there is a defect in s3tester.

Would like to get logdetail output for active requests, content throughput, and failed requests for every second

Summary:

Currently logdetail includes the elapsed time for every second. Along the same lines, it would be nice to have active requests, content throughput, and failed request details for every second; currently we are able to get these details only after the load has completed.

We need to check those details every second when we run load for a longer time.

Is there currently any option to check those details per second?

codahale/hdrhistogram repo url has been transferred under the github HdrHistogram umbrella

Problem

The codahale/hdrhistogram repo has been transferred under the github HdrHistogram umbrella with help from the original author in Sept 2020 (new repo url: https://github.com/HdrHistogram/hdrhistogram-go). The main reasons are to group all implementations under the same roof and to enable more active contribution from the community, as the original repository was archived several years ago.

The dependency URL should be modified to point to the new repository URL. The tag "v0.9.0" was applied at the point of transfer and will reflect the exact code that was frozen in the original repository.

If you are using Go modules, you can update to the exact point of transfer using the @v0.9.0 tag in your go get command.

go mod edit -replace github.com/codahale/hdrhistogram=github.com/HdrHistogram/hdrhistogram-go@v0.9.0
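
After running the command, go.mod should gain a replace directive along these lines (a sketch of the expected effect):

$ grep replace go.mod
replace github.com/codahale/hdrhistogram => github.com/HdrHistogram/hdrhistogram-go v0.9.0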

Performance Improvements

From the point of transfer up until now (Mon 16 Aug 2021), we've released 3 versions that aim to support the standard HdrHistogram serialization/exposition formats and deeply improve read performance.
We recommend updating to the latest version.

panic: runtime error: integer divide by zero

Trying to run a mixed workload, and it crashes.

root@VM109:~/s3tester-v2.1.0-linux-x64# screen -S s3tester ./s3tester -concurrency=5 -size=1048576 -requests=2000 -endpoint=https://xxxxxxxxxx:443 -prefix=mixedworkload -bucket=bhaskar04022020 -workload test.json 
panic: runtime error: integer divide by zero

goroutine 1 [running]:
main.calcStats(0xc000ba76a8, 0x5, 0x1d59de)
        /home/vlad/go/src/github.com/s3tester/s3tester/s3tester.go:460 +0x18b
main.setupResultStat(0xc000ba76a8)
        /home/vlad/go/src/github.com/s3tester/s3tester/s3tester.go:448 +0x49
main.processTestResult(0xc000ba76a8, 0x5, 0x100000, 0xc00007d800, 0x1, 0x1, 0x957efc, 0x3, 0x7fffcd918204, 0xf, ...)
        /home/vlad/go/src/github.com/s3tester/s3tester/s3tester.go:435 +0x74
main.runtest(0x5, 0x100000, 0xc00007d800, 0x1, 0x1, 0x957efc, 0x3, 0x7fffcd918204, 0xf, 0x7fffcd9181ee, ...)
        /home/vlad/go/src/github.com/s3tester/s3tester/s3tester.go:187 +0x1cd
main.main()
        /home/vlad/go/src/github.com/s3tester/s3tester/s3tester.go:576 +0x90d 
root@VM109:~/s3tester-v2.1.0-linux-x64# cat test.json 
{'mixedWorkload':[{'operation':'put','ratio':25},{'operationType':'get','ratio':25},{'operationType':'updatemeta','ratio':25},{'operationType':'delete','ratio':25}]}

Not sure what I am missing.

s3 restore fails as 'specified key does not exist' with s3tester

  1. Created an S3 bucket on AWS and uploaded a file, say 1.txt.
  2. Enabled versioning for the AWS S3 bucket.
  3. Deleted the object '1.txt' from the AWS console.
  4. Validated from 'Show versions' under the S3 bucket objects that I could see the object '1.txt' with storage class 'Standard'.
  5. Now when I try to restore the object using the s3tester tool, it fails with:
    "2022/06/17 14:14:56 Failed restore on object bucket 'XXXXXXX2022/purest-0': NoSuchKey: The specified key does not exist."

Git requirement

Please add Git to the requirements.

exec: "git": executable file not found in $PATH

This error occurs during go get when Git is not installed.

Mixed workload operation types must be one of {'put','get','delete','updatemeta','head'}, but got

Using the current release on Ubuntu 18.04, I get this incomplete message about my mixedWorkload file:

$ ./s3tester `
-bucket one001 `
-concurrency 1 `
-endpoint "https://111.111.111.111:1443" `
-ratelimit 4 `
-requests 100 `
-size 8888 `
-workload "w.json"

2020/11/02 12:17:37 Mixed workload operation types must be one of {'put','get','delete','updatemeta','head'}, but got

That's all it says.
w.json looks good:

$ cat w.json

{
  "mixedWorkload": [
    {
      "operation": "put",
      "ratio": 50
    },
    {
      "operationType": "get",
      "ratio": 50
    }
  ]
}

Executed without -workload, it works fine.

clean up after test

It would be great if you could implement an option for cleaning up the files from the last test.
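
Until such an option exists, one workaround follows from the delete operation described in the Examples section: rerun with the same bucket, prefix, concurrency, and request count used for the ingest (a sketch reusing the earlier example's parameters):

# parameters must match the ingest run that created the objects
./s3tester -operation=delete -bucket=test -prefix=3 -concurrency=128 -requests=20000 -endpoint="https://10.96.105.5:18443"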

Include an arm64 binary for download in Releases?

Thanks for this project. It would be great if Linux arm64 binaries were available in addition to the x86_64 binary.
Maybe the binaries could be named with the output of $(uname -m).
s3tester-v3.0.1-linux-x86_64
s3tester-v3.0.1-linux-aarch64

go version

The current version of s3tester (1.1.4) won't build with Go 1.6.

Please try with Go version 1.8.x or higher.

s3tester seems to hang

Hi,

I have been using this tool widely recently, and it is great.

I did notice that, in rare situations, the tool runs until it hangs. Usually this happens when using a high number of connections, and it seems to hang close to operation completion.
For example, I was running a PUT workload of 512 connections using an object size of 1MB:

./s3tester -bucket=curve-objectsize-connnum-adjusted16-os1024kb-con512 -prefix=server216-os1024-con512 -operation=put -concurrency=512 -size=1048576  -requests=111489 -endpoint=https://<object-storage-endpoint-url>:443 -region=US -retries=10

Looking at the load balancer, I noticed that for some reason the utility opened more than 512 connections:

2020-04-06_2248 [INFO] - Load Balancer connection list in last 60 seconds:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  ActiveFEIP:https wlc
  -> 10.2.17.46:https             Route   80     202        0         
  -> 10.2.17.47:https             Route   80     202        2         
  -> 10.2.17.48:https             Route   80     202        0         

Any idea why this may happen?

Then after some time the connections were reduced, while some were moved to inactive:

2020-04-06_2300 [INFO] - Load Balancer connection list in last 60 seconds:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  ActiveFEIP:https wlc
  -> 10.2.17.46:https             Route   80     125        89        
  -> 10.2.17.47:https             Route   80     115        93        
  -> 10.2.17.48:https             Route   80     193        51        

Here you can see that although the weights are equal, the connections are not balanced.

Then at a certain point s3tester CPU utilization was 0:

2020-04-06_2302 [INFO] - server216 - s3tester CPU Usage in last 60 seconds:
top - 23:02:02 up 64 days,  9:15,  3 users,  load average: 0.19, 0.95, 1.29
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s): 11.6 us,  1.5 sy,  0.0 ni, 86.0 id,  0.1 wa,  0.0 hi,  0.8 si,  0.0 st
KiB Mem : 32709772 total, 21356216 free,  3833544 used,  7520012 buff/cache
KiB Swap:  2097148 total,  2063568 free,    33580 used. 28415732 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 6950 root      20   0 4290684 1.215g   9892 S   0.0  3.9  27:43.30 s3tester
2020-04-06_2302 [INFO] - server216 - client CPU Usage in last 60 seconds:
Linux 4.15.0-20-generic (server216)     04/06/2020      _x86_64_        (12 CPU)
.
.
.
2020-04-06_2302 [INFO] - Load Balancer connection list in last 60 seconds:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  ActiveFEIP:https wlc
  -> 10.2.17.46:https             Route   80     68         58        
  -> 10.2.17.47:https             Route   80     79         52        
  -> 10.2.17.48:https             Route   80     110        86        

This continued until reaching the following status:

2020-04-06_2318 [INFO] - Load Balancer connection list in last 60 seconds:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  ActiveFEIP:https wlc
  -> 10.2.17.46:https             Route   80     0          0         
  -> 10.2.17.47:https             Route   80     1          0         
  -> 10.2.17.48:https             Route   80     3          0         

From this point on, this remained the connection status; see the status after 12 hours (11:48 AM):

2020-04-07_1148 [INFO] - Load Balancer connection list in last 60 seconds:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  ActiveFEIP:https wlc
  -> 10.2.17.46:https             Route   80     0          0         
  -> 10.2.17.47:https             Route   80     1          0         
  -> 10.2.17.48:https             Route   80     3          0         

s3tester still seems to be running, but with no CPU utilization:

top - 23:20:35 up 64 days,  9:34,  3 users,  load average: 0.07, 0.10, 0.42
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s): 11.6 us,  1.5 sy,  0.0 ni, 86.0 id,  0.1 wa,  0.0 hi,  0.8 si,  0.0 st
KiB Mem : 32709772 total, 21357544 free,  3830384 used,  7521844 buff/cache
KiB Swap:  2097148 total,  2063568 free,    33580 used. 29562024 avail Mem 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 6950 root      20   0 4290684 1.215g   9892 S   0.0  3.9  27:43.50 s3tester

From the object storage side we could not see any failure or issue.

Does s3tester keep any logs during its run that I can refer to?
Does it issue any logging to the server log (I could not find any)?
Will using -logdetail provide more info on s3tester hangs/errors?
In general, any idea how I can further analyze this issue?

Thanks in advance for any assistance you can provide on this issue.

feature request

It would be nice to have more options around mixed workloads, e.g. customizing the PUT/GET ratio with numeric entries like 50/50, 80/20, etc.

Windows Usage

There is an executable for Windows; however, how do we specify AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY? They do not seem to be parameters we can pass on the CLI, and "export" does not work in Windows.

How do you use this to plot S3 performance?

I need to plot various performance metrics, like throughput and latency vs. object size as object size increases. I can't seem to figure out how to use this tool to generate plottable GET performance data.

For PUT performance, I wrapped s3tester in a while loop, running a single-thread, single-request test with a 1MB file and increasing the size by 1MB each loop until 1GB. This gives me 1024 data points to plot.

Something like this:

SIZE_S="$KB_1024"   # start with 1MB file
SIZE_E="$GB_1"      # end when file size is 1GB
SIZE_I="$KB_1024"   # increment by 1MB each loop
C=1                 # 1 thread
R=1                 # 1 request
let SIZE="$SIZE_S"
while [[ $SIZE -lt $SIZE_E ]]; do
    LOGTAG="${TESTNAME}-${SIZE}-${R}"
    echo "running test with concurrency $C, size $SIZE and requests $R"
    ../s3tester -json -prefix="$TESTNAME" -endpoint="$ENDPOINT" -operation=put -bucket "$BUCKET" \
      -concurrency="$C" -size="$SIZE" -requests="$R" \
      -logdetail "${LOGDIR}/${LOGTAG}.log" -loglatency "${LOGDIR}/${LOGTAG}_lat.log" > "${LOGDIR}/${LOGTAG}.json" 2>> "${LOGDIR}/${LOGTAG}.err"
    let SIZE=${SIZE}+${SIZE_I}
done

For GET tests, s3tester seems to want objects to already exist with the name $PREFIX-0, where 0 is the request number. Even if I specify 100 requests, for example, I end up with a single data file and no way to plot.

Am I using the wrong tool for this or using the tool wrong?

Thanks,
CC
