act's Issues

ACT doesn't distinguish between partitions and devices, so setting the scheduler doesn't work

Since the scheduler is set at the device level, providing a list of partitions results in the scheduler not being set at all:

ERROR: couldn't open /sys/block/nvme0n1p2/queue/scheduler errno 2 'No such file or directory'
/dev/nvme0n1p2 size = 1187558457344 bytes, 1132544 large blocks, minimum IO size = 512 bytes
ERROR: couldn't open /sys/block/nvme0n1p3/queue/scheduler errno 2 'No such file or directory'
/dev/nvme0n1p3 size = 1188632199168 bytes, 1133568 large blocks, minimum IO size = 512 bytes
ERROR: couldn't open /sys/block/nvme0n1p4/queue/scheduler errno 2 'No such file or directory'
/dev/nvme0n1p4 size = 1188632199168 bytes, 1133568 large blocks, minimum IO size = 512 bytes
ERROR: couldn't open /sys/block/nvme0n1p5/queue/scheduler errno 2 'No such file or directory'
/dev/nvme0n1p5 size = 1188632199168 bytes, 1133568 large blocks, minimum IO size = 512 bytes
ERROR: couldn't open /sys/block/nvme0n1p6/queue/scheduler errno 2 'No such file or directory'
/dev/nvme0n1p6 size = 1188632199168 bytes, 1133568 large blocks, minimum IO size = 512 bytes
ERROR: couldn't open /sys/block/nvme1n1p2/queue/scheduler errno 2 'No such file or directory'
/dev/nvme1n1p2 size = 1187558457344 bytes, 1132544 large blocks, minimum IO size = 512 bytes
ERROR: couldn't open /sys/block/nvme1n1p3/queue/scheduler errno 2 'No such file or directory'
/dev/nvme1n1p3 size = 1188632199168 bytes, 1133568 large blocks, minimum IO size = 512 bytes
ERROR: couldn't open /sys/block/nvme1n1p4/queue/scheduler errno 2 'No such file or directory'
/dev/nvme1n1p4 size = 1188632199168 bytes, 1133568 large blocks, minimum IO size = 512 bytes
ERROR: couldn't open /sys/block/nvme1n1p5/queue/scheduler errno 2 'No such file or directory'
/dev/nvme1n1p5 size = 1188632199168 bytes, 1133568 large blocks, minimum IO size = 512 bytes
ERROR: couldn't open /sys/block/nvme1n1p6/queue/scheduler errno 2 'No such file or directory'
/dev/nvme1n1p6 size = 1188632199168 bytes, 1133568 large blocks, minimum IO size = 512 bytes
# ls  /sys/block/nvme1n1/queue/scheduler
/sys/block/nvme1n1/queue/scheduler
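
A possible direction, sketched below as my own illustration (not ACT's code; the helper name parent_disk and the exact sysfs handling are assumptions), is to resolve each partition to its parent disk before writing queue/scheduler:

#include <libgen.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

// Given a block-device node name such as "nvme0n1p2", find the parent disk
// ("nvme0n1") whose queue/scheduler file should be written. Relies on the
// Linux sysfs layout where a partition's directory sits inside its disk's.
static int parent_disk(const char* name, char* disk, size_t disk_sz) {
    char link_path[PATH_MAX];
    snprintf(link_path, sizeof(link_path), "/sys/class/block/%s", name);

    char part_flag[PATH_MAX];
    snprintf(part_flag, sizeof(part_flag), "%s/partition", link_path);

    if (access(part_flag, F_OK) != 0) {
        // Not a partition - the name is already a whole disk.
        snprintf(disk, disk_sz, "%s", name);
        return 0;
    }

    char real[PATH_MAX];
    if (realpath(link_path, real) == NULL) {
        return -1;
    }

    // .../devices/.../nvme0n1/nvme0n1p2 -> the parent directory is the disk.
    snprintf(disk, disk_sz, "%s", basename(dirname(real)));
    return 0;
}

int main(int argc, char* argv[]) {
    const char* name = argc > 1 ? argv[1] : "nvme0n1p2"; // example name
    char disk[256];

    if (parent_disk(name, disk, sizeof(disk)) == 0) {
        printf("/sys/block/%s/queue/scheduler\n", disk);
    }
    return 0;
}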

ACT test results are far worse than the official website's

My Samsung 840 Pro 256GB SSD's ACT test results are far worse than those published on the official website.
Is it a config file issue?

My config file:

# mandatory non-zero, ignored if queue-per-device is yes:
num-queues: 8

# mandatory non-zero:
threads-per-queue: 8
test-duration-sec: 86400
report-interval-sec: 1
read-reqs-per-sec: 6000
large-block-ops-per-sec: 71
read-req-num-512-blocks: 3
large-block-op-kbytes: 128

My test system:
CPU: i7-4770, RAM: DDR3 16GB

Test result: > 1 ms (% total / SSD only) = 28.5 / 27.75

Should ACT generate a warning if irqbalance is disabled?

I wasted a bunch of time testing some instances and eventually gave up on figuring out why the results looked weird compared to other, similar instances. The vendor finally came back and said it was because their OS image had "irqbalance" disabled by default, which caused the I/O interrupts to pile up on cpu0 and made the ACT results look bad. After two weeks of back and forth we finally reached that conclusion, and with irqbalance enabled things look sane.

Because having irqbalance disabled can badly hurt I/O performance, can we add a warning to ACT telling the end user when irqbalance is disabled? Obviously this could be intentional, but a warning might be a good idea.
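
A rough sketch of what such a check could look like (my own assumption about the approach, not ACT code; the helper irqbalance_running simply scans /proc/<pid>/comm for the daemon name):

#include <ctype.h>
#include <dirent.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

// Returns true if a process named "irqbalance" is currently running.
static bool irqbalance_running(void) {
    DIR* proc = opendir("/proc");
    if (proc == NULL) {
        return true; // can't tell - don't warn
    }
    struct dirent* ent;
    bool found = false;
    while (! found && (ent = readdir(proc)) != NULL) {
        if (! isdigit((unsigned char)ent->d_name[0])) {
            continue; // not a pid directory
        }
        char path[288];
        snprintf(path, sizeof(path), "/proc/%s/comm", ent->d_name);
        FILE* f = fopen(path, "r");
        if (f == NULL) {
            continue;
        }
        char comm[64] = {0};
        if (fgets(comm, sizeof(comm), f) != NULL) {
            comm[strcspn(comm, "\n")] = 0;
            found = (strcmp(comm, "irqbalance") == 0);
        }
        fclose(f);
    }
    closedir(proc);
    return found;
}

int main(void) {
    if (! irqbalance_running()) {
        printf("WARNING: irqbalance does not appear to be running; "
               "interrupts may pile up on one CPU\n");
    }
    return 0;
}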

Possible stack exhaustion in run_service

In the thread routine run_service, stack space is used for the I/O buffer (see lines 453 and 465). Given that the default number of service threads is 5 times the number of CPU cores on the host and read_size appears to vary, how likely is it to exhaust the stack, so that fewer threads are actually created than specified (and less I/O is executed than specified, resulting in a superficially higher score)? Using the thread stack this way could be dangerous. Could we change it to use heap memory for the buffer?
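
A minimal sketch of the suggested change (not ACT's actual run_service code; the helper name alloc_read_buf and the 4 KB alignment are my assumptions):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

// Allocate the read buffer on the heap instead of e.g. "uint8_t buf[read_size];"
// on the thread stack. posix_memalign keeps the buffer usable for direct I/O
// if that is needed; the 4096-byte alignment is an assumption.
static uint8_t* alloc_read_buf(size_t read_size) {
    void* buf = NULL;

    if (posix_memalign(&buf, 4096, read_size) != 0) {
        return NULL;
    }
    return (uint8_t*)buf;
}

int main(void) {
    size_t read_size = 128 * 1024; // example size only

    uint8_t* buf = alloc_read_buf(read_size);
    if (buf == NULL) {
        fprintf(stderr, "ERROR: couldn't allocate read buffer\n");
        return 1;
    }
    // ... issue reads into buf ...
    free(buf);
    return 0;
}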

Error reporting is too noisy

I was running ACT tests at a cloud provider over the weekend. Today I noticed the machines are out of disk space.

#:~/act/config# grep -c 'Too many open files' act_250x.conf.out
58379616
#:~/act/config# grep -c '^ERROR' act_250x.conf.out
58433002

-rw-r--r-- 1 root root 3.8G Sep 30 16:36 act_250x.conf.out

Okay, I know error logging is important - but this is getting out of hand.

As far as I know, this error isn't helpful. It happens almost every time we push a system to its limits, especially on cloud systems, even when running as root with the nofile limit set to 1MM:

# ulimit -Hn
1048576

This also causes problems with the latency interpreter, making it run very slowly and sometimes give bad results (I've seen it, but I don't have steps to reproduce the issue).

Can we request some changes here? I'm thinking:

  1. Have a flag to ignore 'Too many open files' errors? Or just ignore them by default. Is logging them even valuable?
  2. Possibly write errors to a separate log file?
  3. Could errors be aggregated instead of printing each individual error line?
  4. Maybe limit the number of errors reported, e.g. 'repeated 23457x' rather than a bajillion individual lines.

I know #2/#3 would make it harder to troubleshoot ACT, so I think just having a flag to ignore this kind of error would be a good thing.
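
As a rough illustration of what options 3/4 could look like (my own sketch, not ACT code; the single-message cache and the 10-second window are arbitrary choices):

#include <stdio.h>
#include <string.h>
#include <time.h>

static char g_last_msg[256];
static unsigned long g_suppressed = 0;
static time_t g_last_print = 0;

// Rate-limit repeated errors by message: print the first occurrence, then a
// periodic "repeated Nx" summary instead of one line per occurrence.
static void log_error(const char* msg) {
    time_t now = time(NULL);

    if (strcmp(msg, g_last_msg) == 0) {
        g_suppressed++;

        // Only summarize the repeats every 10 seconds.
        if (now - g_last_print >= 10) {
            fprintf(stderr, "ERROR: %s (repeated %lux)\n", msg, g_suppressed);
            g_suppressed = 0;
            g_last_print = now;
        }
        return;
    }

    // A different message arrived - flush any pending summary first.
    if (g_suppressed > 0) {
        fprintf(stderr, "ERROR: %s (repeated %lux)\n", g_last_msg, g_suppressed);
    }

    snprintf(g_last_msg, sizeof(g_last_msg), "%s", msg);
    g_suppressed = 0;
    g_last_print = now;
    fprintf(stderr, "ERROR: %s\n", msg);
}

int main(void) {
    for (int i = 0; i < 100000; i++) {
        log_error("read-req start: errno 24 'Too many open files'");
    }
    return 0;
}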

[UX] Actprep silently fails if a partition is still mounted from the target device

Running sudo ./actprep /dev/sdN will fail if the disk was previously in use and still has a partition mounted by accident (and therefore dd can't fully zero it). It would improve usability if there were a note hinting to unmount first, or at least a loud, general error message.

Using ubuntu-server 14.04.3 (LTS)
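
A possible shape for that check (purely a sketch of my own, not actprep's code; device_in_use and the prefix match against /proc/mounts are assumptions):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

// Returns true if any mounted filesystem sits on the target device or one of
// its partitions, e.g. dev = "/dev/sdb" also catches /dev/sdb1, /dev/sdb2, ...
static bool device_in_use(const char* dev) {
    FILE* f = fopen("/proc/mounts", "r");
    if (f == NULL) {
        return false; // can't tell
    }
    char line[1024];
    bool in_use = false;
    size_t dev_len = strlen(dev);
    while (! in_use && fgets(line, sizeof(line), f) != NULL) {
        in_use = (strncmp(line, dev, dev_len) == 0);
    }
    fclose(f);
    return in_use;
}

int main(int argc, char* argv[]) {
    const char* dev = argc > 1 ? argv[1] : "/dev/sdb"; // example device

    if (device_in_use(dev)) {
        fprintf(stderr, "ERROR: %s (or a partition on it) is mounted - "
                "unmount it before running actprep\n", dev);
        return 1;
    }
    // ... proceed with zeroing and salting ...
    return 0;
}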

Partitions disappear when running the act_prep script

I followed the instructions at https://github.com/aerospike/act and created partitions on my NVMe drives. As soon as I run the act_prep script, all the partitions disappear.
[root@PCI27 ~]# ls -1 /dev/nvme*
/dev/nvme0
/dev/nvme0n1
/dev/nvme0n1p1
/dev/nvme0n1p10
/dev/nvme0n1p11
/dev/nvme0n1p12
/dev/nvme0n1p13
/dev/nvme0n1p14
/dev/nvme0n1p15
/dev/nvme0n1p16
/dev/nvme0n1p17
/dev/nvme0n1p18
/dev/nvme0n1p19
/dev/nvme0n1p2
/dev/nvme0n1p20
/dev/nvme0n1p21
/dev/nvme0n1p22
/dev/nvme0n1p23
/dev/nvme0n1p24
/dev/nvme0n1p25
/dev/nvme0n1p26
/dev/nvme0n1p27
/dev/nvme0n1p28
/dev/nvme0n1p29
/dev/nvme0n1p3
/dev/nvme0n1p30
/dev/nvme0n1p31
/dev/nvme0n1p32
/dev/nvme0n1p4
/dev/nvme0n1p5
/dev/nvme0n1p6
/dev/nvme0n1p7
/dev/nvme0n1p8
/dev/nvme0n1p9
/dev/nvme1
/dev/nvme1n1
/dev/nvme1n1p1
/dev/nvme1n1p10
/dev/nvme1n1p11
/dev/nvme1n1p12
/dev/nvme1n1p13
/dev/nvme1n1p14
/dev/nvme1n1p15
/dev/nvme1n1p16
/dev/nvme1n1p17
/dev/nvme1n1p18
/dev/nvme1n1p19
/dev/nvme1n1p2
/dev/nvme1n1p20
/dev/nvme1n1p21
/dev/nvme1n1p22
/dev/nvme1n1p23
/dev/nvme1n1p24
/dev/nvme1n1p25
/dev/nvme1n1p26
/dev/nvme1n1p27
/dev/nvme1n1p28
/dev/nvme1n1p29
/dev/nvme1n1p3
/dev/nvme1n1p30
/dev/nvme1n1p31
/dev/nvme1n1p32
/dev/nvme1n1p4
/dev/nvme1n1p5
/dev/nvme1n1p6
/dev/nvme1n1p7
/dev/nvme1n1p8
/dev/nvme1n1p9
/dev/nvme2
/dev/nvme2n1
/dev/nvme2n1p1
/dev/nvme2n1p10
/dev/nvme2n1p11
/dev/nvme2n1p12
/dev/nvme2n1p13
/dev/nvme2n1p14
/dev/nvme2n1p15
/dev/nvme2n1p16
/dev/nvme2n1p17
/dev/nvme2n1p18
/dev/nvme2n1p19
/dev/nvme2n1p2
/dev/nvme2n1p20
/dev/nvme2n1p21
/dev/nvme2n1p22
/dev/nvme2n1p23
/dev/nvme2n1p24
/dev/nvme2n1p25
/dev/nvme2n1p26
/dev/nvme2n1p27
/dev/nvme2n1p28
/dev/nvme2n1p29
/dev/nvme2n1p3
/dev/nvme2n1p30
/dev/nvme2n1p31
/dev/nvme2n1p32
/dev/nvme2n1p4
/dev/nvme2n1p5
/dev/nvme2n1p6
/dev/nvme2n1p7
/dev/nvme2n1p8
/dev/nvme2n1p9
/dev/nvme3
/dev/nvme3n1
/dev/nvme3n1p1
/dev/nvme3n1p10
/dev/nvme3n1p11
/dev/nvme3n1p12
/dev/nvme3n1p13
/dev/nvme3n1p14
/dev/nvme3n1p15
/dev/nvme3n1p16
/dev/nvme3n1p17
/dev/nvme3n1p18
/dev/nvme3n1p19
/dev/nvme3n1p2
/dev/nvme3n1p20
/dev/nvme3n1p21
/dev/nvme3n1p22
/dev/nvme3n1p23
/dev/nvme3n1p24
/dev/nvme3n1p25
/dev/nvme3n1p26
/dev/nvme3n1p27
/dev/nvme3n1p28
/dev/nvme3n1p29
/dev/nvme3n1p3
/dev/nvme3n1p30
/dev/nvme3n1p31
/dev/nvme3n1p32
/dev/nvme3n1p4
/dev/nvme3n1p5
/dev/nvme3n1p6
/dev/nvme3n1p7
/dev/nvme3n1p8
/dev/nvme3n1p9
[root@FM21V213-PCI27 ~]# cd act/target/bin/
[root@FM21V213-PCI27 bin]# sudo ./act_prep /dev/nvme0
nvme0 nvme0n1p13 nvme0n1p19 nvme0n1p24 nvme0n1p3 nvme0n1p6
nvme0n1 nvme0n1p14 nvme0n1p2 nvme0n1p25 nvme0n1p30 nvme0n1p7
nvme0n1p1 nvme0n1p15 nvme0n1p20 nvme0n1p26 nvme0n1p31 nvme0n1p8
nvme0n1p10 nvme0n1p16 nvme0n1p21 nvme0n1p27 nvme0n1p32 nvme0n1p9
nvme0n1p11 nvme0n1p17 nvme0n1p22 nvme0n1p28 nvme0n1p4
nvme0n1p12 nvme0n1p18 nvme0n1p23 nvme0n1p29 nvme0n1p5
[root@FM21V213-PCI27 bin]# sudo ./act_prep /dev/nvme0n1 &
[1] 8590
[root@PCI27 bin]# /dev/nvme0n1 size = 7681501126656 bytes, 58605202 large blocks
cleaning device /dev/nvme0n1
................

I ran the steps below on all 4 of my drives:
sudo ./act_prep /dev/nvme0n1 &
sudo ./act_prep /dev/nvme1n1 &
sudo ./act_prep /dev/nvme2n1 &
sudo ./act_prep /dev/nvme3n1 &

Then I checked for partitions again and this is what I see:

[root@PCI27 bin]# ls -1 /dev/nvme*
/dev/nvme0
/dev/nvme0n1
/dev/nvme1
/dev/nvme1n1
/dev/nvme2
/dev/nvme2n1
/dev/nvme3
/dev/nvme3n1
[root@PCI27 bin]#

act_prep takes 10x longer in 5.0

Hi,
I started using ACT 5.0 recently and noticed that salting in 5.0 (14 MB/sec) takes much longer than in 4.0 (400 MB/sec).
After a bit of digging, the difference seems to be caused by the O_DSYNC flag. I am wondering whether patching out the O_DSYNC flag during salting will invalidate my test results.
Thanks!
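
For reference, a minimal illustration of the flag difference being described (my own sketch, not ACT's salt code; the device path is just an example):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    const char* dev = "/dev/nvme0n1"; // example device path

    // With O_DSYNC, each salting write is acknowledged only once the data is
    // stable on the device, which serializes the writes and lowers throughput.
    int fd_dsync = open(dev, O_RDWR | O_DSYNC);

    // Without O_DSYNC, writes can be acknowledged before they are durable,
    // which is why the patched salting appears roughly 10x faster.
    int fd_plain = open(dev, O_RDWR);

    if (fd_dsync < 0 || fd_plain < 0) {
        perror("open");
        return 1;
    }

    close(fd_dsync);
    close(fd_plain);
    return 0;
}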

Act not quitting properly on Ubuntu 14.04 LTS ('Device can't keep up' message not logged, zombie process)

We have run a few benchmarking rounds by now. Our initial setup used Ubuntu 14.04 (LTS), and now that we have switched to CentOS we noticed that ACT under CentOS fails properly / in a user-friendly way, while ACT under Ubuntu leaves only question marks behind.

We noticed that at some point the act process would simply stop writing to the output file (checked with tail), while the process kept running as normal (using the same CPU resources as before!) without any hint of what had happened. The output didn't contain a "device can't keep up" message (we tried to grep for it). The process couldn't be killed with SIGTERM or SIGKILL (as root), hinting at some kind of waiting-for-I/O deadlock. Of the 4-5 times this happened, it exited smoothly only once. With CentOS we haven't experienced any unclean exit so far.

Only strace would bring down the process and reveal output to stdout with the message "device can't keep up". Unfortunately we didn't save the strace output. Can anybody confirm that this is a general issue? Is it supposed to die like this, or is that unavoidable? If not, we are ready to reinstall Ubuntu and reproduce the situation if further input is necessary.

On both OSes we started act with the & operator.

Certification question

I've been reading the text on the wiki and I have a question about the ACT certification criteria for single drives. In the readme I encountered this line:

Run ACT for 24 hrs using the 3x test (6000 reads/sec and 3000 writes/sec). The drive passes this test if less than 5% of operations fail to complete in 1 ms or less.

Later in the document I encountered these lines:

In any one-hour period of an ACT performance test, we expect that:

fewer than 5% of transactions fail to complete in 1 ms
fewer than 1% of transactions fail to complete in 8 ms
fewer than 0.1% of transactions fail to complete in 64 ms
The max line of the output shows the highest values observed in any single slice (hour) of time and the values on the max line should not exceed the allowable error values specified above.

My question is: which of the above is correct? Does the drive pass if "less than 5% of operations fail to complete in 1 ms or less" over the total number of operations, or is it per one-hour time period as described in the second quote above?

Let's say I have these results from a drive:

       trans                  device
       %>(ms)                 %>(ms)
slice     1      8     64        1      8     64
 1     1.67   0.00   0.00     1.63   0.00   0.00
 2     1.38   0.00   0.00     1.32   0.00   0.00
 3     1.80   0.14   0.00     1.56   0.08   0.00
 4     6.43   0.00   0.00     1.39   0.00   0.00
 5     1.68   0.00   0.00     1.65   0.00   0.00
 6     1.37   0.00   0.00     1.33   0.00   0.00
 7     1.44   0.00   0.00     1.41   0.00   0.00
 8     1.41   0.00   0.00     1.35   0.00   0.00
 9     2.70   0.73   0.00     1.91   0.08   0.00
10     1.54   0.00   0.00     1.51   0.00   0.00
11     1.53   0.00   0.00     1.48   0.00   0.00
12     1.47   0.00   0.00     1.43   0.00   0.00
13     1.67   0.00   0.00     1.63   0.00   0.00
14     1.38   0.00   0.00     1.32   0.00   0.00
15     1.80   0.14   0.00     1.56   0.08   0.00
16     2.43   0.00   0.00     1.39   0.00   0.00
17     1.68   0.00   0.00     1.65   0.00   0.00
18     1.37   0.00   0.00     1.33   0.00   0.00
19     1.44   0.00   0.00     1.41   0.00   0.00
20     1.41   0.00   0.00     1.35   0.00   0.00
21     2.70   0.73   0.00     1.91   0.08   0.00
22     1.54   0.00   0.00     1.51   0.00   0.00
23     1.53   0.00   0.00     1.48   0.00   0.00
24     1.47   0.00   0.00     1.43   0.00   0.00

avg    1.62   0.07   0.00     1.50   0.01   0.00
max    6.43   0.73   0.00     1.91   0.08   0.00

As we can see, in hour 4 the percentage of operations that failed to complete in under 1 ms is greater than 5%, and thus the drive failed for that time period. Since it fails for that one time period, it fails certification at that workload, correct?
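
For what it's worth, here is a tiny sketch of the per-hour reading of the quoted criteria (my own illustration, not an official checker), applied to hours 3 and 4 from the table above:

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int hour;
    double over_1ms, over_8ms, over_64ms; // transaction %>(ms) for the slice
} slice_t;

static bool slice_passes(const slice_t* s) {
    // Thresholds from the quoted criteria: 5% at 1 ms, 1% at 8 ms, 0.1% at 64 ms.
    return s->over_1ms < 5.0 && s->over_8ms < 1.0 && s->over_64ms < 0.1;
}

int main(void) {
    // Hours 3 and 4 from the example results above.
    slice_t slices[] = {
        { 3, 1.80, 0.14, 0.00 },
        { 4, 6.43, 0.00, 0.00 },
    };

    for (int i = 0; i < 2; i++) {
        printf("hour %d: %s\n", slices[i].hour,
               slice_passes(&slices[i]) ? "pass" : "FAIL");
    }
    return 0;
}

Under that per-hour reading, slice 4 alone would fail the 5% / 1 ms threshold.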

ACT profile characteristics

Quoting from https://www.aerospike.com/docs/operations/plan/ssd/ssd_certification.html#pcie-nvme-based-flash: "These devices were tested at the specified speed with a 67% read/33% write ratio of 1.5 KB objects over 24 hours."

How do I determine or specify the read/write ratio in the config file? Does the read/write ratio include both reads and large-block reads?

A 1x load has 2000 reads/sec and 1000 writes/sec per device; what would be the read/write ratio here?
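
One plausible reading (my own arithmetic, not an official definition) is to take the ratio of record-sized operations per device and ignore the large-block transfers:

reads:  2000 / (2000 + 1000) ≈ 67%
writes: 1000 / (2000 + 1000) ≈ 33%

which matches the quoted 67% read / 33% write figure, though whether the large-block reads and writes should also be counted is exactly what is being asked above.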

Thanks.

Act_prep and compression

Hi all!

I want to run the act_storage test with compression 2 (50%). But act_prep cleans and then fills my device with incompressible data. How can I run act_prep with compression?
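
For illustration, one way salting data could be made roughly 50% compressible (purely a sketch under my own assumptions, not act_prep's actual behaviour; fill_half_compressible and the 128 KB block size are examples):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

// Fill half of each block with random bytes and leave the other half zeroed,
// so a roughly 2:1 compression ratio is achievable on average.
static void fill_half_compressible(uint8_t* buf, size_t size) {
    memset(buf, 0, size);

    for (size_t i = 0; i < size / 2; i++) {
        buf[i] = (uint8_t)rand();
    }
}

int main(void) {
    enum { BLOCK = 128 * 1024 }; // example large-block size

    uint8_t* buf = malloc(BLOCK);
    if (buf != NULL) {
        fill_half_compressible(buf, BLOCK);
        // ... write buf to the device as the salting pass would ...
        free(buf);
    }
    return 0;
}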

Incorrect values in analysis histogram

There appears to be a regression in the analysis script after 2cca411 which results in incorrect latency values being reported in the first histogram bucket.

Steps to reproduce:

  1. Run ACT to produce some output to analyze.
  2. Check out the current commit db9961f.
  3. Run /analysis/act_latency.py and notice the large values in the first bucket (the bug):
        reads                  device-reads           /dev/nvme0n1-reads     large-block-writes     large-block-reads
        %>(ms)                 %>(ms)                 %>(ms)                 %>(ms)                 %>(ms)
slice        1      2      4        1      2      4        1      2      4        1      2      4        1      2      4
-----   ------ ------ ------   ------ ------ ------   ------ ------ ------   ------ ------ ------   ------ ------ ------
    1    17.50   0.10   0.00     0.03   0.00   0.00     0.03   0.00   0.00   100.00 100.00 100.00     0.00   0.00   0.00
  4. Check out the earlier commit 2cca411.
  5. Run /analysis/act_latency.py again and notice the correct values in the first bucket:
        reads                  device-reads           /dev/nvme0n1-reads     large-block-writes     large-block-reads
        %>(ms)                 %>(ms)                 %>(ms)                 %>(ms)                 %>(ms)
slice        1      2      4        1      2      4        1      2      4        1      2      4        1      2      4
-----   ------ ------ ------   ------ ------ ------   ------ ------ ------   ------ ------ ------   ------ ------ ------
    1     0.20   0.03   0.01     0.13   0.02   0.01     0.13   0.02   0.01   100.00 100.00 100.00     0.06   0.03   0.01

Latency differences between ACT 5.2 and 6.4 versions

I am running the ACT benchmark on an NVMe SSD and seeing odd results between different versions of ACT. Specifically, after act_prep has run and the drive has reached a steady state, running a 40x scaling load with ACT version 5.2 vs. version 6.4 produces very different latency results, running each on the same system and disk. Here is a snippet of the latency results from the two versions. I am using the default config for the specific version in each run, changing only the target read-reqs-per-sec and write-reqs-per-sec to match the target load.

6.4 ACT Results @40x

        reads
        %>(ms)
slice        1      2      4      8     16     32     64
-----   ------ ------ ------ ------ ------ ------ ------
    1    70.96  28.44   3.76   0.05   0.00   0.00   0.00
    2    71.34  28.79   3.93   0.06   0.00   0.00   0.00
    3    71.18  28.79   3.87   0.05   0.00   0.00   0.00
    4    71.13  28.71   3.84   0.05   0.00   0.00   0.00
    5    71.25  28.90   3.99   0.08   0.00   0.00   0.00
    6    71.16  28.69   3.88   0.08   0.01   0.00   0.00
    7    71.17  28.63   3.76   0.05   0.00   0.00   0.00
    8    71.27  28.79   3.88   0.05   0.00   0.00   0.00
    9    71.26  28.94   4.01   0.06   0.00   0.00   0.00
   10    71.19  28.80   3.94   0.06   0.00   0.00   0.00
-----   ------ ------ ------ ------ ------ ------ ------
  avg    71.19  28.75   3.89   0.06   0.00   0.00   0.00
  max    71.34  28.94   4.01   0.08   0.01   0.00   0.00

5.2 ACT Results @40x

        reads                                              device-reads
        %>(ms)                                             %>(ms)
slice        1      2      4      8     16     32     64        1      2      4      8     16     32     64
-----   ------ ------ ------ ------ ------ ------ ------   ------ ------ ------ ------ ------ ------ ------
    1     0.33   0.02   0.01   0.00   0.00   0.00   0.00     0.07   0.01   0.01   0.00   0.00   0.00   0.00
    2     0.33   0.02   0.01   0.00   0.00   0.00   0.00     0.08   0.02   0.01   0.00   0.00   0.00   0.00
    3     0.26   0.00   0.00   0.00   0.00   0.00   0.00     0.04   0.00   0.00   0.00   0.00   0.00   0.00
    4     0.28   0.02   0.01   0.00   0.00   0.00   0.00     0.06   0.02   0.01   0.00   0.00   0.00   0.00
    5     0.27   0.01   0.01   0.00   0.00   0.00   0.00     0.05   0.01   0.01   0.00   0.00   0.00   0.00
    6     0.29   0.02   0.01   0.00   0.00   0.00   0.00     0.07   0.02   0.01   0.00   0.00   0.00   0.00
    7     0.28   0.00   0.00   0.00   0.00   0.00   0.00     0.05   0.00   0.00   0.00   0.00   0.00   0.00
    8     0.29   0.01   0.01   0.00   0.00   0.00   0.00     0.07   0.01   0.00   0.00   0.00   0.00   0.00
    9     0.30   0.02   0.01   0.00   0.00   0.00   0.00     0.07   0.02   0.01   0.00   0.00   0.00   0.00
   10     0.27   0.01   0.00   0.00   0.00   0.00   0.00     0.05   0.01   0.00   0.00   0.00   0.00   0.00
-----   ------ ------ ------ ------ ------ ------ ------   ------ ------ ------ ------ ------ ------ ------
  avg     0.29   0.01   0.01   0.00   0.00   0.00   0.00     0.06   0.01   0.01   0.00   0.00   0.00   0.00
  max     0.33   0.02   0.01   0.00   0.00   0.00   0.00     0.08   0.02   0.01   0.00   0.00   0.00   0.00

I have attached the full results output for both the 5.2 and 6.4 runs. I am looking for pointers on what might cause the significant difference in reported latency between the two test runs on the versions noted.
ACT-5.2-40x-results.txt
ACT-6.4-40x-results.txt

ACT does not build on Debian 9 (GCE)

On Debian 9 (on a GCE VM instance) I've installed the dependencies and tried make ; make -f Makesalt. I get the following error:

echo "Building act.o"
Building act.o
gcc -g -fno-common -std=gnu99 -Wall -D_REENTRANT -D_FILE_OFFSET_BITS=64 -MMD  -o act.o -c -I. -I/usr/include act.c
echo "Building histogram.o"
Building histogram.o
gcc -g -fno-common -std=gnu99 -Wall -D_REENTRANT -D_FILE_OFFSET_BITS=64 -MMD  -o histogram.o -c -I. -I/usr/include histogram.c
histogram.c: In function ‘histogram_create’:
histogram.c:101:9: warning: passing argument 1 of ‘memset’ discards ‘volatile’ qualifier from pointer target type [-Wdiscarded-array-qualifiers]
  memset(&h->counts, 0, sizeof(h->counts));
         ^
In file included from histogram.c:37:0:
/usr/include/string.h:62:14: note: expected ‘void *’ but argument is of type ‘volatile cf_atomic64 (*)[65] {aka volatile long unsigned int (*)[65]}’
 extern void *memset (void *__s, int __c, size_t __n) __THROW __nonnull ((1));
              ^~~~~~
echo "Building queue.o"
Building queue.o
gcc -g -fno-common -std=gnu99 -Wall -D_REENTRANT -D_FILE_OFFSET_BITS=64 -MMD  -o queue.o -c -I. -I/usr/include queue.c
echo "Building random.o"
Building random.o
gcc -g -fno-common -std=gnu99 -Wall -D_REENTRANT -D_FILE_OFFSET_BITS=64 -MMD  -o random.o -c -I. -I/usr/include random.c
echo "Linking act"
Linking act
gcc -g -fno-common -std=gnu99 -Wall -D_REENTRANT -D_FILE_OFFSET_BITS=64 -MMD -o act  ./act.o ./histogram.o ./queue.o ./random.o -lpthread -lrt -lcrypto -lssl -lz
/usr/bin/ld: cannot find -lz
collect2: error: ld returned 1 exit status
Makefile:29: recipe for target 'act' failed
make: *** [act] Error 1
echo "Building salt.o"
Building salt.o
gcc -g -fno-common -std=gnu99 -Wall -D_REENTRANT -D_FILE_OFFSET_BITS=64 -MMD  -o salt.o -c -I. -I/usr/include salt.c
echo "Linking actprep"
Linking actprep
gcc -g -fno-common -std=gnu99 -Wall -D_REENTRANT -D_FILE_OFFSET_BITS=64 -MMD -o actprep  ./salt.o ./random.o -lpthread -lrt -lcrypto -lssl -lz
/usr/bin/ld: cannot find -lz
collect2: error: ld returned 1 exit status
Makesalt:29: recipe for target 'actprep' failed
make: *** [actprep] Error 1

ACT TEST1 and TEST2

In ACT TEST 1 and TEST 2 you never mention or provide the number of threads per queue. A sample would do. Or is the number of threads irrelevant for ACT?

If I am using 56 cores and 256 GB of RAM, what would be the recommended number of threads?

Also, for certification do I need to run both "storage" and "index", or will just "storage" do?

Thanks

Licence

You've listed this as open source in a blog post, but you don't have a licence included. Which one are you using for this project, and could you include it in the source?

actprep build error

I am receiving the error below when trying to build actprep.

matt@matt-work:~/source/act$ make -f Makesalt
echo "Building salt.o"
Building salt.o
gcc -g -fno-common -std=gnu99 -Wall -D_REENTRANT -D_FILE_OFFSET_BITS=64 -MMD -o salt.o -c -I. -I/usr/include salt.c
echo "Linking actprep"
Linking actprep
gcc -g -fno-common -std=gnu99 -Wall -D_REENTRANT -D_FILE_OFFSET_BITS=64 -MMD -o actprep ./salt.o -lpthread -lrt -lssl
./salt.o: In function `rand_fill':
/home/matt/source/act/salt.c:429: undefined reference to `RAND_bytes'
./salt.o: In function `rand_seed':
/home/matt/source/act/salt.c:458: undefined reference to `RAND_seed'
collect2: error: ld returned 1 exit status
make: *** [actprep] Error 1

ACT cannot set the scheduler mode if the device is partitioned using fdisk

Looks like a bug where ACT cannot set the scheduler mode to noop if the device is a partition.

It would be nice if ACT could find the raw device name that the partition belongs to and set the scheduler mode on that.

Example output of the error:

Aerospike act - device IO test
Copyright 2011 by Aerospike. All rights reserved.

ACT CONFIGURATION
device-names: /dev/sde1
num-devices: 1
queue-per-device: no
num-queues: 2
threads-per-queue: 8
test-duration-sec: 86400
report-interval-sec: 1
read-reqs-per-sec: 2000
large-block-ops-per-sec: 24
read-req-num-512-blocks: 3
large-block-op-kbytes: 128
use-valloc: no
num-write-buffers: 1024
scheduler-mode: noop

ERROR: couldn't open /sys/block/sde1/queue/scheduler
/dev/sde1 size = 101146235904 bytes, 197551104 512-byte blocks, 771684 large blocks
