
lanconnected / enhanceio


This project forked from stec-inc/enhanceio


EnhanceIO Open Source for Linux

License: Other

Makefile 0.34% C 90.88% Shell 3.30% Python 4.75% Roff 0.73%

enhanceio's People

Contributors

ape, bhansaliakhil, cpt-graywolf, deepenmehta85, kcgthb, lanconnected, nikmartin, onlyjob, pfactum, sanoj-stec, schreibikus, xjtuwjp


enhanceio's Issues

EnhanceIO + LVM snapshots = corruption

It looks like using EnhanceIO with LVM snapshots leads to corruption.

I'm using CentOS 6 with Linux 4.9.25 from the Xen4CentOS stack. I've set the EnhanceIO source device to be an LVM PV partition on a hard disk (containing disk images for Xen VMs), and the cache device to be an SSD.

Everything works great until I create a snapshot of an LVM LV within the PV. Even then, things work fine for a while, but after a burst of heavy IO (an apt dist-upgrade of the OS image inside that LV, for instance) EnhanceIO eventually starts dropping kernel messages like this on the host:

[79465.117894] in eio_map bio_idx is 1
[79465.118260] in eio_map bio_idx is 2
[79465.118283] in eio_map bio_idx is 3
[79465.118672] in eio_map bio_idx is 4
[79465.118713] in eio_map bio_idx is 5

Eventually, disk corruption occurs in the LVM LV. The snapshot appears to be fine, however, and the data can be recovered by disabling EnhanceIO, restoring the LVM snapshot, and re-enabling EnhanceIO.

Having not reviewed the source code, I don't understand the scenario in which that error would be emitted, but at a guess it seems to be getting confused while mapping sectors. Since LVM snapshots use copy-on-write, could it be getting confused by the same sector being mapped to multiple LVs?

Cache creation failed: get_device for source device failed.

Hey guys,
I was trying to create a cache using this command:

sudo eio_cli create -d /dev/sdc1 -s /dev/sdb5 -p lru -m wb -c new_cache

But I'm getting the error in the title. Both devices seem to be valid and I'm not sure where the error could arise from. I'd appreciate your help.

Thanks

P.S.: I'm using CentOS 7, if that helps.
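Not an official diagnosis, but in my experience get_device fails either when the path is wrong or when the device already has an exclusive holder (a mount, swap, an LVM/MD member, another cache). A hedged Python sketch of that second check — the EBUSY behaviour of O_EXCL on block devices is kernel semantics, but the connection to this specific error message is my assumption:

```python
import errno
import os

def exclusively_openable(path):
    """Return True if `path` can be opened with O_EXCL.

    On Linux, opening a block device with O_EXCL fails with EBUSY when
    something else (a mount, LVM, MD, another cache) already holds it
    exclusively -- a plausible reason for get_device to fail.
    """
    try:
        fd = os.open(path, os.O_RDONLY | os.O_EXCL)
    except OSError as e:
        if e.errno == errno.EBUSY:
            return False  # device is held by someone else
        raise  # ENOENT and friends: the path itself is the problem
    os.close(fd)
    return True

# Hypothetical usage on the devices from the command above:
#   exclusively_openable("/dev/sdc1")  # False would explain the error
```

If this returns False for either device, unmounting it (or removing it from any volume group or RAID set) before running eio_cli create would be the first thing to try.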

Makefile uses the current kernel source which it shouldn't

The current Makefile uses uname -r to locate the kernel source and kernel module directory.
This is very unfortunate: when you have just installed a new kernel (something I do about once a week),
you are still running the "old" kernel but want to install modules for the new one.
All (Gentoo) packages that need the kernel sources access them via the symbolic link /usr/src/linux,
so it is customary to point this symlink at the new kernel source before reinstalling kernel modules.

With the current Makefile you have to boot into the new kernel before you can reinstall the
EnhanceIO kernel module. I have lost data this way: I had a cache in write-back mode,
but the new kernel could not load the EnhanceIO kernel module because it had not been built for it yet.

I've attached a patch that makes the Makefile refer to /usr/src/linux, which may differ from the
currently running kernel.
Makefile.patch.zip

building on MANJARO linux

I am a Manjaro Linux user (sorry, not a developer - no patches), kernel 4.11. I tried to compile the master branch:

cd EnhanceIO/Driver/enhanceio
make # no errors
sudo make install
make[1]: Entering directory '/usr/lib/modules/4.11.7-1-MANJARO/build'
scripts/Makefile.build:44: arch/x86/entry/syscalls/Makefile: No such file or directory
make[2]: *** No rule to make target 'arch/x86/entry/syscalls/Makefile'.  Stop.

and indeed, there is no 'entry' subdir in: /usr/lib/modules/4.11.7-1-MANJARO/build/arch/x86/

I am not sure; maybe I need to install something?

eio_cli delete - why are you so slow?

Granted, the underlying hard disk is not that fast:
fio reports about 36 MB/s for sequential writes.

But "eio_cli delete" writes back at only 8 MB/s.
I ran a fio test with 10G on the cached FS, which showed about 225 MB/s.
Then I tried to delete the cache.
It wrote only 1024 blocks (of 8K each) per second.
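For what it's worth, the two figures in the report agree with each other; a quick sanity check of the arithmetic (numbers taken from the text above):

```python
# Figures quoted in the report above
fio_seq_mb_s = 36      # sequential write speed of the bare disk
blocks_per_s = 1024    # blocks written back per second during delete
block_kib = 8          # each cleaned block is 8 KiB

clean_mb_s = blocks_per_s * block_kib / 1024
print(clean_mb_s)      # 8.0, matching the observed 8 MB/s
```

So the two observations are the same measurement; the open question is why cleaning proceeds at roughly a quarter of the disk's sequential speed (dirty blocks being written back in scattered rather than sequential order would be one explanation).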

/proc interface to manage caches

I happen to have a nearly ready pull request that allows managing caches directly through a new procfs file, /proc/enhanceio/control.
This makes it possible to enable caches directly from initrd scripts, because the Python CLI is no longer required.
If this is of any interest, I can open a PR.

For example this creates a cache:

echo "  create   policy = lru mode=wb   bs=4096 cache=test ssd=/dev/zram0 dev=$(readlink -f /dev/disk/by-id/ata-Micron_1100_SATA_256GB_1720171DB8D8-part1)  " > /proc/enhanceio/control

(this is a test line where the spacing is intentionally inconsistent)
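For reviewers of the eventual PR, here is a userspace sketch (my own illustration, not the code from the pull request) of how such a control line can be tokenized so that inconsistent spacing, including around `=`, is accepted:

```python
import re

def parse_control(line):
    """Split a control line into a command plus a dict of key=value
    options, tolerating stray whitespace anywhere, including around '='."""
    # Collapse whitespace around '=' so "policy = lru" becomes "policy=lru"
    line = re.sub(r"\s*=\s*", "=", line.strip())
    parts = line.split()
    command, options = parts[0], {}
    for part in parts[1:]:
        key, _, value = part.partition("=")
        options[key] = value
    return command, options

cmd, opts = parse_control(
    "  create   policy = lru mode=wb   bs=4096 cache=test ssd=/dev/zram0  "
)
# cmd == "create", opts["policy"] == "lru", opts["bs"] == "4096"
```

Whatever the kernel-side implementation looks like, normalizing the spacing first keeps the actual option parsing trivial.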

checksum error with btrfs (lru+wb, fifo+ro too)

Hello,
On Ubuntu 16.04, kernel 4.10, with 32 GiB RAM,
a 64 GB SSD partition,
a btrfs filesystem on a spinning hard drive,
and bonnie++ (creating 64 GiB of data) to check performance, I get these errors:

[ 1004.048598] BTRFS warning (device sdb3): csum failed ino 258 off 66056749056 csum 1995617934 expected csum 3912343800
[ 1055.395836] BTRFS warning (device sdb3): csum failed ino 258 off 1221169152 csum 1152196041 expected csum 3153844041
[ 1073.145194] BTRFS warning (device sdb3): csum failed ino 258 off 1963663360 csum 2858193683 expected csum 1022787776
[ 1074.487867] BTRFS warning (device sdb3): csum failed ino 258 off 2022420480 csum 3031311257 expected csum 355173908
[ 1075.927695] BTRFS warning (device sdb3): csum failed ino 258 off 2076913664 csum 1105526511 expected csum 2993740284
...
[ 1295.325269] BTRFS warning (device sdb3): csum failed ino 258 off 10913980416 csum 2406927829 expected csum 1599545349
[ 1300.186300] __readpage_endio_check: 8419 callbacks suppressed
[ 1300.186303] BTRFS warning (device sdb3): csum failed ino 258 off 10913980416 csum 2406927829 expected csum 1599545349
...

...

# dmesg | grep 'sdb3.*csum failed' | wc -l
230

and bonnie complains about corrupted files.

attempt to access beyond end of device

Data corruption with a newly formatted partition:

Oct 09 22:26:26 homepc kernel: sdb5: rw=2049, want=3585370432, limit=102400000
Oct 09 22:26:26 homepc kernel: attempt to access beyond end of device
Oct 09 22:26:26 homepc kernel: sdb5: rw=2049, want=3585373296, limit=102400000
Oct 09 22:26:26 homepc kernel: io_callback: io error -5 block 1354872 action 4 prev_count 2
Oct 09 22:26:26 homepc kernel: io_callback: io error -5 block 1368384 action 4 prev_count 1
Oct 09 22:26:26 homepc kernel: EXT4-fs warning (device sdb5): ext4_end_bio:323: I/O error 10 writing to inode 657155 (offset 4096 size 83845
Oct 09 22:26:26 homepc kernel: buffer_io_error: 1413 callbacks suppressed
Oct 09 22:26:26 homepc kernel: Buffer I/O error on device sdb5, logical block 169359
Oct 09 22:26:26 homepc kernel: Buffer I/O error on device sdb5, logical block 169360
Oct 09 22:26:26 homepc kernel: attempt to access beyond end of device
There is a 16 GB SSD partition and a 50 GB home partition; kernel 5.2, amd64.
Command: eio_cli create -d /dev/sdb5 -s /dev/sda4 -p lru -m wt -c sdb5_cache
The behavior is the same with wt and with ro.
A similar error seems to have been fixed upstream.

Problem with eio_clean_thr process

Hi,

I compiled and installed the master version on kernel 4.14.34.
When writing sequentially to the disk, the cache device fills up and cannot evict data from the cache.
eio_clean_thr gets stuck in D state, and the eviction (flushing/cleaning) process cannot continue even after the sequential-write IO is stopped.
This is the stack trace of the process:

eio_clean_thr D 0 28023 2 0x80000080
Dec 7 12:08:46 srv01 kernel: Call Trace:
Dec 7 12:08:46 srv01 kernel: ? __schedule+0x1ad/0x6a0
Dec 7 12:08:46 srv01 kernel: schedule+0x32/0x80
Dec 7 12:08:46 srv01 kernel: rwsem_down_write_failed+0x1fe/0x380
Dec 7 12:08:46 srv01 kernel: call_rwsem_down_write_failed+0x13/0x20
Dec 7 12:08:46 srv01 kernel: down_write+0x29/0x40
Dec 7 12:08:46 srv01 kernel: eio_clean_set+0x14c/0x9f0 [enhanceio]
Dec 7 12:08:46 srv01 kernel: ? del_timer_sync+0x35/0x40
Dec 7 12:08:46 srv01 kernel: ? call_timer_fn+0x130/0x130
Dec 7 12:08:46 srv01 kernel: eio_clean_thread_proc+0x1bc/0x360 [enhanceio]
Dec 7 12:08:46 srv01 kernel: ? __schedule+0x1b5/0x6a0
Dec 7 12:08:46 srv01 kernel: kthread+0xfc/0x130
Dec 7 12:08:46 srv01 kernel: ? eio_clean_all+0xd0/0xd0 [enhanceio]
Dec 7 12:08:46 srv01 kernel: ? __kthread_parkme+0x70/0x70
Dec 7 12:08:46 srv01 kernel: ret_from_fork+0x35/0x40

Thanks,
Saba

Cache creation failed: get_device for cache device failed.

After finally finding the offending copy of eio_cli that was causing the max int issue, I've run into a new problem. When running this command:

eio_cli create -d /dev/sdc -s /dev/sda1 -p lru -m wb -c btrfsCache

I'm presented with the following output:

Cache Name : btrfsCache
Source Device : /dev/sdc
SSD Device : /dev/sda1
Policy : lru
Mode : Write Back
Block Size : 4096
Associativity : 256
Cache creation failed (dmesg can provide you more info)

dmesg provides this line:

[1245882.920180] enhanceio: Cache creation failed: get_device for cache device failed.

Error building with kernel 4.8.8.2 in Arch Linux.

I'm seeing this error:

[screenshot of the build error]

It seems REQ_OP_MASK was added in kernel 4.10.

As I know nothing about kernel coding, could someone recommend a patch for this? It seems to be related to debugging, and since I want to test it for production use (against ZFS, instead of L2ARC), I can live without debugging for now.

Unfortunately I can't upgrade the kernel at this point, since it's a production server... :(

eio_cli does not work with Python3

Python 3 is very different from Python 2 when it comes to the handling of strings and bytes,
so a simple invocation of 2to3 does not suffice: byte strings have to be decoded and strings encoded.
In my attached patch I use "latin-1". This might be changed to "utf-8" if one uses file names
containing characters only representable in UTF-8.
eio_cli_Python3.patch.zip
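One reason "latin-1" is a safe default here (my reading, not something stated in the patch itself): it maps every byte value 0–255 to a code point, so decoding never fails and encode(decode(b)) round-trips arbitrary bytes, while "utf-8" rejects byte sequences that are not valid UTF-8. A small illustration:

```python
raw = bytes(range(256))  # every possible byte value, e.g. from a device name

# latin-1 round-trips arbitrary bytes losslessly
assert raw.decode("latin-1").encode("latin-1") == raw

# utf-8 refuses byte sequences that are not valid UTF-8
try:
    raw.decode("utf-8")
except UnicodeDecodeError:
    pass  # this is why a blind 2to3 conversion is not enough
```

The trade-off is that latin-1 silently misinterprets genuinely UTF-8 names, which is why the patch note above mentions switching codecs if such names are in use.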

Cannot compile on kernel > 4.3

make

make -C /lib/modules/4.4.5-15.26.amzn1.x86_64/build M=/root/EnhanceIO/Driver/enhanceio modules V=0
make[1]: Entering directory `/usr/src/kernels/4.4.5-15.26.amzn1.x86_64'
CC [M] /root/EnhanceIO/Driver/enhanceio/eio_conf.o
/root/EnhanceIO/Driver/enhanceio/eio_conf.c: In function ‘eio_init_ssddev_props’:
/root/EnhanceIO/Driver/enhanceio/eio_conf.c:1436:2: error: implicit declaration of function ‘bio_get_nr_vecs’ [-Werror=implicit-function-declaration]
max_nr_pages = (u_int32_t)bio_get_nr_vecs(dmc->cache_dev->bdev);

It appears the bio_get_nr_vecs function no longer exists in the kernel source.

compilation fails with kernel 4.13.11

There must have been some changes in the kernel recently.
Are there any patches yet?
Many thanks,
Helmut.

Trying to compile EnhanceIO I get

In file included from /Temp/enhanceio/eio.h:64:0,
from /Temp/enhanceio/eio_main.c:37:
/Temp/enhanceio/eio_main.c: In function 'bc_put':
/Temp/enhanceio/compat.h:288:38: error: 'struct bio' has no member named 'bi_error'; did you mean 'bi_iter'?
#define EIO_BIO_ENDIO(B,E) do { (B)->bi_error = E; bio_endio(B); } while (0)
^
/Temp/enhanceio/eio_main.c:110:3: note: in expansion of macro 'EIO_BIO_ENDIO'
EIO_BIO_ENDIO(bc->bc_bio, bc->bc_error);
^~~~~~~~~~~~~
/Temp/enhanceio/eio_main.c: In function 'eio_map':
/Temp/enhanceio/compat.h:288:38: error: 'struct bio' has no member named 'bi_error'; did you mean 'bi_iter'?
#define EIO_BIO_ENDIO(B,E) do { (B)->bi_error = E; bio_endio(B); } while (0)
^
/Temp/enhanceio/eio_main.c:2395:3: note: in expansion of macro 'EIO_BIO_ENDIO'
EIO_BIO_ENDIO(bio, 0);
^~~~~~~~~~~~~
/Temp/enhanceio/compat.h:288:38: error: 'struct bio' has no member named 'bi_error'; did you mean 'bi_iter'?
#define EIO_BIO_ENDIO(B,E) do { (B)->bi_error = E; bio_endio(B); } while (0)
^
/Temp/enhanceio/eio_main.c:2403:4: note: in expansion of macro 'EIO_BIO_ENDIO'
EIO_BIO_ENDIO(bio, -EPERM);
^~~~~~~~~~~~~
/Temp/enhanceio/compat.h:288:38: error: 'struct bio' has no member named 'bi_error'; did you mean 'bi_iter'?
#define EIO_BIO_ENDIO(B,E) do { (B)->bi_error = E; bio_endio(B); } while (0)
^
/Temp/enhanceio/eio_main.c:2430:3: note: in expansion of macro 'EIO_BIO_ENDIO'
EIO_BIO_ENDIO(bio, -ENODEV);
^~~~~~~~~~~~~
/Temp/enhanceio/compat.h:288:38: error: 'struct bio' has no member named 'bi_error'; did you mean 'bi_iter'?
#define EIO_BIO_ENDIO(B,E) do { (B)->bi_error = E; bio_endio(B); } while (0)
^
/Temp/enhanceio/eio_main.c:2460:3: note: in expansion of macro 'EIO_BIO_ENDIO'
EIO_BIO_ENDIO(bio, -ENOMEM);
^~~~~~~~~~~~~
/Temp/enhanceio/compat.h:288:38: error: 'struct bio' has no member named 'bi_error'; did you mean 'bi_iter'?
#define EIO_BIO_ENDIO(B,E) do { (B)->bi_error = E; bio_endio(B); } while (0)
^
/Temp/enhanceio/eio_main.c:2486:4: note: in expansion of macro 'EIO_BIO_ENDIO'
EIO_BIO_ENDIO(bio, ret);
^~~~~~~~~~~~~

Is writeback mode stable or not

I have read a claim somewhere on the internet (I don't remember exactly where) that EnhanceIO may lose data under some circumstances. Hence my questions:

  1. Does someone here ever use enhanceio writeback cache mode in production, and how about reliability?
  2. For how long?

I am planning to use enhanceio as the cache solution below Gluster bricks.

configuring problem

Setting up enhanceio-dkms (0+git20190417.5815670-1)...
Removing old enhanceio-0+git20190417.5815670 DKMS files...


Deleting module version: 0+git20190417.5815670
completely from the DKMS tree.

Done.
Loading new enhanceio-0+git20190417.5815670 DKMS files...
Building for 5.13.0-trunk-amd64
Building initial module for 5.13.0-trunk-amd64
Error! Bad return status for module build on kernel: 5.13.0-trunk-amd64 (x86_64)
Consult /var/lib/dkms/enhanceio/0+git20190417.5815670/build/make.log for more information.
Errors were encountered while processing:
enhanceio-dkms
enhanceio
E: Sub-process /usr/bin/dpkg returned an error code (1)

EnhanceIO doesn't compile with kernel 4.14.0

Kernel 4.14.0 has changed things again; this time it looks like a bigger change:
struct bio no longer has a field called bi_bdev.
Caused by commits

74d46992e0d9 ("block: replace bi_bdev with a gendisk pointer and partitions
index")

827180946edf ("zram: write incompressible pages to backing device")
83ff0ec8b13f ("zram: read page from backing device")

interacting with commit

74d46992e0d9 ("block: replace bi_bdev with a gendisk pointer and partitions
index")

Support for Kernel mainline 5.6.x

It does not build:

extern struct proc_dir_entry *proc_create_data(const char *, umode_t,
                              ^~~~~~~~~~~~~~~~
/mnt/rdata/work/linuxwork/EnhanceIO/Driver/enhanceio/eio_procfs.c:1417:39: error: passing argument 4 of ‘proc_create_data’ from incompatible pointer type [-Werror=incompatible-pointer-types]
  entry = proc_create_data(s, 0, NULL, &eio_config_operations, dmc);

SSD_ADD event triggered on partition scan

I believe opening gparted results in a few calls that trigger this known kernel error:

[ 1501.669967] enhanceio: eio_handle_ssd_message: SSD_ADD event called for ACTIVE cache "7tbcache", ignoring!!!

Running Pop_OS with a self-compiled eio layer. Kernel:

uname -a 
Linux chibe 5.4.0-7634-generic #38~1591219791~20.04~6b1c5de-Ubuntu SMP Thu Jun 4 02:56:10 UTC 2 x86_64 x86_64 x86_64 GNU/Linux

new project

Hi,

Since EnhanceIO is considered abandonware and these three repositories are making an effort to evolve the project:
https://github.com/lanconnected/EnhanceIO
https://github.com/Frontier314/EnhanceIO
https://github.com/d4fseeker/EnhanceIO

I would suggest that all three of you communicate and fork EnhanceIO into a new project where the three repositories are merged into one EnhanceIO-ng, and do it properly by creating an organization that invites everyone who wants to be part of the project, so that it does not rely on one person's interest to continue the development of EnhanceIO.

Then look at all the pull requests that the original project has and try to merge them into EnhanceIO-ng.

That way the community will perhaps notice that there is a new project that is more up to date and still alive.

Cryptoluks device point

Hi there! I want to know: can EnhanceIO improve read speed on CryptoLUKS devices? If yes, what should I do: attach the SSD cache to the physical device (e.g. /dev/sda) or to the mapper (crypto) device (e.g. /dev/mapper/test_luks)?

Sorry for opening an issue, I need help :) Thank you very much!

./Install-EIO install-cli does not work, and eio_cli when installed manually does not work either.

Running ./Install-EIO install-cli just produces:

INSTALL-EIO: Failed to install eio_cli!
INSTALL-EIO: Argument not understood "install-cli"
INSTALL-EIO: Try running "Install-EIO --help"

Even when installing manually via the instructions provided in install.txt, running it only produces:

  File "/usr/bin/eio_cli", line 293
    print "Cache Name       : " + self.name 
                              ^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print("Cache Name       : " + self.name)?
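The second failure is independent of Install-EIO: the traceback shows that eio_cli is Python 2 code being executed by a Python 3 interpreter, where print is a function and requires parentheses. A minimal illustration of the difference (`name` is a hypothetical stand-in for the script's self.name):

```python
# Python 2 statement form -- a SyntaxError under Python 3:
#     print "Cache Name       : " + self.name

# Python 3 function form (also valid under Python 2.6+ when the file
# starts with `from __future__ import print_function`):
name = "btrfsCache"  # hypothetical value standing in for self.name
print("Cache Name       : " + name)
```

Until the script is ported, running it explicitly under a python2 interpreter, where one is still available, is a workaround.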

"signed integer is greater than maximum" when creating cache

Creating a cache on any device using my 1TB SSD results in the following. There is no information in dmesg. I have tried applying the patch described here stec-inc#77 to no avail.

root@pve:~/git/EIO/EnhanceIO/Driver/enhanceio#
eio_cli create -d /dev/sdc -s /dev/sda1 -p lru -m wb -c btrfsCache
Cache Name : btrfsCache
Source Device : /dev/sdc
SSD Device : /dev/sda1
Policy : lru
Mode : Write Back
Block Size : 4096
Associativity : 256
signed integer is greater than maximum
Cache creation failed (dmesg can provide you more info)

enhanceio: Cache creation failed: Cache already exists.

Hi guys.
I tried to use EnhanceIO on CentOS 7.5. I found that after creating a cache and restarting the machine, EnhanceIO could not load. Do you have any suggestions?
centos7.5 (kvm)
kernel 3.10.0-957.1.3.el7.x86_64

after reboot
[root@enhanceio-test1 ~]# eio_cli info
No caches Found
[root@enhanceio-test1 ~]# eio_cli create -d /dev/vdc -s /dev/vdb1 -m wb -c eio1
Cache Name : eio1
Source Device : /dev/vdc
SSD Device : /dev/vdb1
Policy : lru
Mode : Write Back
Block Size : 4096
Associativity : 256
Cache creation failed (dmesg can provide you more info)
[root@enhanceio-test1 ~]# dmesg
[ 4540.642825] enhanceio: Cache creation failed: Cache already exists.

do_gettimeofday deprecated in kernel 5.0

"do_gettimeofday" is deprecated, and causes errors when compiling on kernel versions higher than 4.x.

I was able to compile with "ktime_get_real_ts64" instead, after also changing "struct timeval" values to "struct timespec64", but I have not yet tested this change. Changing things blindly may cause unforeseen bugs.

I'll see what happens. I don't expect I know what I'm doing.

Edit: Seems to work. I was able to load all modules and use the command line utility without issue on fedora 29 with kernel 5.0.3.

I have submitted a pull request.

read-only cache caused data corruption

I am writing to share my experience with this fork of EnhanceIO. After searching for available solutions around SSD caching on Linux, I decided to at least try it out because it seemed like a good match for my needs. My OS is Ubuntu 20.04.1, kernel 5.4.0-48-lowlatency, user home is on an iSCSI target and the SSD cache is an NVMe on USB.

Compilation went fine, the modules were loaded and the cache was created successfully. I thought I was being careful by using the read-only cache mode and specifying block devices with IDs. I executed a few test runs, the SSD was blinking and /proc had statistics showing that the cache was in use.

I then rebooted the system to see how that works out. It didn't. The source disk array was marked dirty and the filesystem check revealed extensive data corruption and non-recoverable errors. I actually then repeated the process once more for good measure. Clean reinstall and reinitialisation of the cache device. The results were the same. Comparing with my backups, the errors occurred around the areas of the filesystem I touched during my tests.

How a read-only cache could damage the source device is beyond me. Perhaps someone more familiar with the project and the underlying code could shed some light. Time permitting, I am willing to assist in debugging.

problem compile on kernel 4.15.x

System PROXMOX VE 5.1.
kernel version 4.15.17-1-pve
Please, help compile EnhanceIO on this system

make -C /lib/modules/4.15.17-1-pve/build M=/root/src/111/EnhanceIO/Driver/enhanceio modules V=0
make[1]: Entering directory '/usr/src/linux-headers-4.15.17-1-pve'
make[2]: *** No rule to make target '/root/src/111/EnhanceIO/Driver/enhanceio/eio_conf.o', needed by '/root/src/111/EnhanceIO/Driver/enhanceio/enhanceio.o'. Stop.
Makefile:1552: recipe for target 'module/root/src/111/EnhanceIO/Driver/enhanceio' failed
make[1]: *** [module/root/src/111/EnhanceIO/Driver/enhanceio] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-4.15.17-1-pve'
Makefile:28: recipe for target 'modules' failed
make: *** [modules] Error 2

Apparent deadlock in writeback mode

Hi,

I'm testing out EnhanceIO in my lab but I'm experiencing what appears to be a deadlock in the writeback locking code; I see EIO processes stuck at down_write() and down_read(). The issue appears to occur somewhat randomly but I believe only after writeback has started and while data is being read from the cached device. In my testing the deadlock always eventually occurs within a few hours to a day or two. Details follow. Please let me know what else I can do to help troubleshoot this.

Thanks,
Carl

Here is the system state at one deadlock:

[root@tantor data]# uname -a
Linux tantor 4.14.33.cet #1 SMP Thu Apr 12 01:46:41 UTC 2018 x86_64 GNU/Linux

[root@tantor data]# free -h
              total        used        free      shared  buff/cache   available
Mem:           165G        1.7G        143G        360M         19G        161G
Swap:            0B          0B          0B

[root@tantor data]# df -h
Filesystem                  Size  Used Avail Use% Mounted on
tmpfs                       800M  361M  440M  46% /
overlay                     800M  361M  440M  46% /etc
overlay                     800M  361M  440M  46% /var/lib
overlay                     800M  361M  440M  46% /root
overlay                     800M  361M  440M  46% /data
overlay                     800M  361M  440M  46% /mnt
overlay                     800M  361M  440M  46% /usr/local
tmpfs                        17G  436K   17G   1% /run
dev                          10M     0   10M   0% /dev
shm                          83G     0   83G   0% /dev/shm
cgroup_root                  10M     0   10M   0% /sys/fs/cgroup
/dev/sdj1                   380M  2.4M  353M   1% /var/mnt/data
tmpfs                        83G  4.0K   83G   1% /tmp
/dev/mapper/tantor-vmware1   16T  344G   16T   3% /data/vmware1


[root@tantor data]# pwd
/proc/enhanceio/data

[root@tantor data]# cat config
src_name   /dev/md/r10
ssd_name   /dev/disk/by-id/ata-INTEL_SSDSC2BA400G4_BTHV51930A69400NGN-part2
src_size   54685646848
ssd_size   88780544
set_size          256
block_size       4096
mode                1
eviction            2
num_sets       346799
num_blocks   88780544
metadata        small
state        normal
flags      0x00000000

[root@tantor data]# cat errors
disk_read_errors       0
disk_write_errors      0
ssd_read_errors        0
ssd_write_errors       0
memory_alloc_errors    0
no_cache_dev           0
no_source_dev          0

[root@tantor data]# cat io_hist
512              1
4096       1323758
8192        341080
12288       198223
16384       223120
20480       128309
24576       118333
28672       108896
32768       109278
36864        95388
40960        93576
45056        96451
49152       106226
53248        89696
57344        86880
61440        84751
65536       221431

[root@tantor data]# cat stats
reads                        1523238529
writes                        121480088
read_hits                      22576641
read_hit_pct                          1
write_hits                     71195384
write_hit_pct                        58
dirty_write_hits                4732437
dirty_write_hit_pct                   3
cached_blocks                   9643087
rd_replace                       391320
wr_replace                       437070
noroom                            52260
cleanings                             0
md_write_dirty                 10207060
md_write_clean                        0
md_ssd_writes                    313640
do_clean                              0
nr_blocks                      88780544
nr_dirty                        1745079
nr_sets                          346799
clean_index                           0
uncached_reads                  4459936
uncached_writes                    3884
uncached_map_size                     0
uncached_map_uncacheable              0
disk_reads                   1500660872
disk_writes                    69662360
ssd_reads                      90370881
ssd_writes                    155635815
ssd_readfills                  35450928
ssd_readfill_unplugs             227564
readdisk                        4459936
writedisk                       4459936
readcache                       2834380
readfill                        4431366
writecache                     19373963
readcount                       4570648
writecount                      1735523
kb_reads                      761619264
kb_writes                      60740044
rdtime_ms                      39336574
wrtime_ms                       1690530
unaligned_ios                         0

[root@tantor data]# grep "" /proc/sys/dev/enhanceio/data/*
/proc/sys/dev/enhanceio/data/autoclean_threshold:128
/proc/sys/dev/enhanceio/data/cache_wronly:0
/proc/sys/dev/enhanceio/data/control:0
/proc/sys/dev/enhanceio/data/dirty_high_threshold:30
/proc/sys/dev/enhanceio/data/dirty_low_threshold:10
/proc/sys/dev/enhanceio/data/dirty_set_high_threshold:100
/proc/sys/dev/enhanceio/data/dirty_set_low_threshold:30
/proc/sys/dev/enhanceio/data/do_clean:0
/proc/sys/dev/enhanceio/data/mem_limit_pct:75
/proc/sys/dev/enhanceio/data/time_based_clean_interval:0
/proc/sys/dev/enhanceio/data/zero_stats:0

[root@tantor ~]# ps aux |grep D
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root      2787  0.0  0.0      0     0 ?        D    May11   0:02 [eio_clean_threa]
root      3019  0.0  0.0      0     0 ?        D    May11   0:15 [xfsaild/dm-4]
root      3502  0.0  0.0      0     0 ?        D    02:15   0:00 [kworker/u49:1]
root      3514 15.3  0.0      0     0 ?        D    02:18   4:10 [kworker/u48:2]
root      3529  0.0  0.0      0     0 ?        D    02:28   0:00 [kworker/14:2]
root      3538  0.0  0.0      0     0 ?        D    02:33   0:00 [kworker/14:0]
root      8553  0.0  0.0   3360   584 pts/3    S+   02:45   0:00 grep D

[root@tantor ~]# cat /proc/2787/stack
[<ffffffff946b3b03>] call_rwsem_down_write_failed+0x13/0x20
[<ffffffffc02a48e9>] eio_clean_set+0xe9/0xa60 [enhanceio]
[<ffffffffc02a5511>] eio_clean_thread_proc+0x1f1/0x3a0 [enhanceio]
[<ffffffff9407d362>] kthread+0xf2/0x130
[<ffffffff9480017f>] ret_from_fork+0x1f/0x30
[<ffffffffffffffff>] 0xffffffffffffffff

[root@tantor ~]# cat /proc/3019/stack
[<ffffffffc09111f4>] _xfs_log_force+0x144/0x280 [xfs]
[<ffffffffc091d80a>] xfsaild+0x16a/0x5b0 [xfs]
[<ffffffff9407d362>] kthread+0xf2/0x130
[<ffffffff9480017f>] ret_from_fork+0x1f/0x30
[<ffffffffffffffff>] 0xffffffffffffffff

[root@tantor ~]# cat /proc/3502/stack
[<ffffffff946b3ad4>] call_rwsem_down_read_failed+0x14/0x30
[<ffffffffc02a380c>] eio_map+0x7cc/0x15a0 [enhanceio]
[<ffffffffc02a939d>] eio_make_request_fn+0x33d/0x480 [enhanceio]
[<ffffffff942c4f49>] generic_make_request+0x119/0x2a0
[<ffffffff942c5137>] submit_bio+0x67/0x140
[<ffffffffc08e881b>] xfs_submit_ioend+0x7b/0x190 [xfs]
[<ffffffffc08e8998>] xfs_vm_writepages+0x68/0x90 [xfs]
[<ffffffff941199b2>] do_writepages+0x12/0x60
[<ffffffff941afaea>] __writeback_single_inode+0x2a/0x160
[<ffffffff941b00ba>] writeback_sb_inodes+0x20a/0x4c0
[<ffffffff941b03f2>] __writeback_inodes_wb+0x82/0xb0
[<ffffffff941b06e7>] wb_writeback+0x1f7/0x210
[<ffffffff941b0cbd>] wb_workfn+0x15d/0x260
[<ffffffff94077a01>] process_one_work+0x141/0x2e0
[<ffffffff94077be8>] worker_thread+0x48/0x3a0
[<ffffffff9407d362>] kthread+0xf2/0x130
[<ffffffff9480017f>] ret_from_fork+0x1f/0x30
[<ffffffffffffffff>] 0xffffffffffffffff

[root@tantor ~]# cat /proc/3514/stack
[<ffffffff946b3ad4>] call_rwsem_down_read_failed+0x14/0x30
[<ffffffffc02a380c>] eio_map+0x7cc/0x15a0 [enhanceio]
[<ffffffffc02a939d>] eio_make_request_fn+0x33d/0x480 [enhanceio]
[<ffffffff942c4f49>] generic_make_request+0x119/0x2a0
[<ffffffffc04fbcce>] process_cell+0x24e/0x4e0 [dm_thin_pool]
[<ffffffffc04fb2ce>] do_worker+0x2ae/0x980 [dm_thin_pool]
[<ffffffff94077a01>] process_one_work+0x141/0x2e0
[<ffffffff94077be8>] worker_thread+0x48/0x3a0
[<ffffffff9407d362>] kthread+0xf2/0x130
[<ffffffff9480017f>] ret_from_fork+0x1f/0x30
[<ffffffffffffffff>] 0xffffffffffffffff

[root@tantor ~]# cat /proc/3529/stack
[<ffffffffc09103c4>] xlog_state_get_iclog_space+0x104/0x2d0 [xfs]
[<ffffffffc09109db>] xlog_write+0x18b/0x770 [xfs]
[<ffffffffc0912586>] xlog_cil_push+0x2a6/0x450 [xfs]
[<ffffffff94077a01>] process_one_work+0x141/0x2e0
[<ffffffff94077d6c>] worker_thread+0x1cc/0x3a0
[<ffffffff9407d362>] kthread+0xf2/0x130
[<ffffffff9480017f>] ret_from_fork+0x1f/0x30
[<ffffffffffffffff>] 0xffffffffffffffff

[root@tantor ~]# cat /proc/3529/stack
[<ffffffffc09103c4>] xlog_state_get_iclog_space+0x104/0x2d0 [xfs]
[<ffffffffc09109db>] xlog_write+0x18b/0x770 [xfs]
[<ffffffffc0912586>] xlog_cil_push+0x2a6/0x450 [xfs]
[<ffffffff94077a01>] process_one_work+0x141/0x2e0
[<ffffffff94077d6c>] worker_thread+0x1cc/0x3a0
[<ffffffff9407d362>] kthread+0xf2/0x130
[<ffffffff9480017f>] ret_from_fork+0x1f/0x30
[<ffffffffffffffff>] 0xffffffffffffffff

could not insert 'enhanceio'

Hi. I am new to EnhanceIO. When I run sudo ./Install-EIO on Ubuntu 16.04 x86 LTS I get this error: modprobe: ERROR: could not insert 'enhanceio': Unknown symbol in module, or unknown parameter (see dmesg). When I run it on Ubuntu 20.04 x64 LTS I get several errors in /Driver/enhanceio/eio_procfs and eio.h. Can you please help me? I had some problems with the main EnhanceIO project, but when I contacted its team they suggested using this active fork.

New Cache Scheme

Hi,

This is not an issue report. Since I didn't have your email address, I wasn't able to write to you directly. I'm currently working on EnhanceIO as my bachelor project; the goal is to propose a better method for caching blocks on an SSD device.
As a starting point for experiments, I want to cache only blocks that have been accessed at least three times during a period of time.
Frankly speaking, it is hard to build a new project to do this, and I was wondering if you could help with where I should start and how I can solve my problem.
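Not the project's code, but as a starting point the admission policy you describe (cache a block only after it has been accessed at least three times within a time window) can be prototyped in userspace and replayed against a block trace before touching the kernel module. A minimal sketch:

```python
import time
from collections import defaultdict

class AdmissionFilter:
    """Admit a block into the cache only after `threshold` accesses
    within the last `window` seconds."""

    def __init__(self, threshold=3, window=60.0):
        self.threshold = threshold
        self.window = window
        self.accesses = defaultdict(list)  # block -> recent access times

    def should_cache(self, block, now=None):
        now = time.monotonic() if now is None else now
        # Drop accesses that have aged out of the window, record this one
        hits = [t for t in self.accesses[block] if now - t <= self.window]
        hits.append(now)
        self.accesses[block] = hits
        return len(hits) >= self.threshold

f = AdmissionFilter(threshold=3, window=60.0)
# The third access to block 7 inside the window is the first one admitted:
results = [f.should_cache(7, now=t) for t in (0.0, 1.0, 2.0)]
# results == [False, False, True]
```

Replaying a real trace (e.g. from blktrace) through such a filter lets you measure hit rates and admission counts for different thresholds and windows before committing to a kernel implementation.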

EnhanceIO vs. zram

I'm trying to use enhanceio with zram. It looks like enhanceio can't write to the zram block device. dmesg shows the following:
Mar 29 09:03:05 localhost kernel: [ 107.807802] zram: Added device: zram0
Mar 29 09:03:31 localhost kernel: [ 133.503113] zram0: detected capacity change from 0 to 300003328
Mar 29 09:03:53 localhost kernel: [ 155.963158] register_policy: policy 1 added
Mar 29 09:03:53 localhost kernel: [ 155.968492] register_policy: policy 2 added
Mar 29 09:03:53 localhost kernel: [ 155.973663] register_policy: policy 3 added
Mar 29 09:03:53 localhost kernel: [ 156.002035] enhanceio: Setting mode to write through
Mar 29 09:03:53 localhost kernel: [ 156.002039] get_policy: policy 2 found
Mar 29 09:03:53 localhost kernel: [ 156.002042] enhanceio_lru: eio_lru_instance_init: created new instance of LRU
Mar 29 09:03:53 localhost kernel: [ 156.002044] enhanceio: Setting replacement policy to lru (2)
Mar 29 09:03:53 localhost kernel: [ 156.002058] Not enough sets to use small metadata
Mar 29 09:03:53 localhost kernel: [ 156.002061] enhanceio: Allocate 4432KB (8B per) mem for 567296-entry cache (capacity:285MB, associativity:256, block size:512 bytes)
Mar 29 09:03:53 localhost kernel: [ 156.015349] enhanceio_lru: Initialized 2216 sets in LRU
Mar 29 09:04:15 localhost kernel: [ 177.677515] io_callback: io error -5 block 0 action 5
Mar 29 09:04:15 localhost kernel: [ 177.677520] io_callback: io error -5 block 1 action 5
[flood messages io_callback]

I suspect it is somehow related to block size: enhanceio logged "block size:512 bytes", but zram reports a 4k sector size:
# blockdev --report /dev/zram0
RO RA SSZ BSZ StartSec Size Device
rw 256 4096 4096 0 300003328 /dev/zram0
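One hedged way to act on that suspicion: recreate the cache with a cache block size that matches zram's 4096-byte sectors. The -b option and its byte units are assumptions to verify against eio_cli create --help; the device and cache names below are placeholders:

```shell
# Pick the larger of the two sector sizes as the cache block size.
src_ss=512      # sector size the cache was logged with
zram_ss=4096    # zram0's logical sector size (from blockdev --report)
blk=$src_ss
[ "$zram_ss" -gt "$blk" ] && blk=$zram_ss
# Hypothetical recreation command; <hdd> and <name> are placeholders:
echo "eio_cli create -d <hdd> -s /dev/zram0 -m wt -b $blk -c <name>"
```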

Disabling cache without flushing data

Is there any way to disable the cache without flushing its data?
I tried changing the eio_ttc_deactivate call to pass 1 as the second parameter. The cache is disabled, but it is not removed from the list of caches.

The other option I've found is the fast_remove option. Can I delete the cache with this option and then enable it again later to reuse the data in the cache?

kernel-4.10, error: implicit declaration of function ‘bio_flags’

If you have some spare time... :)

make[1]: Entering directory '/usr/src/linux-4.10.1-gentoo'
CC [M] /usr/src/EnhanceIO/Driver/enhanceio/eio_main.o
In file included from /usr/src/EnhanceIO/Driver/enhanceio/eio_main.c:37:0:
/usr/src/EnhanceIO/Driver/enhanceio/eio_main.c: In function ‘eio_disk_io’:
/usr/src/EnhanceIO/Driver/enhanceio/eio.h:837:41: error: implicit declaration of function ‘bio_flags’ [-Werror=implicit-function-declaration]
#define GET_BIO_FLAGS(ebio) bio_flags((ebio)->eb_bc->bc_bio)
^
/usr/src/EnhanceIO/Driver/enhanceio/eio_main.c:2147:25: note: in expansion of macro ‘GET_BIO_FLAGS’
GET_BIO_OP(ebio), GET_BIO_FLAGS(ebio),
^
cc1: some warnings being treated as errors
make[2]: *** [scripts/Makefile.build:295: /usr/src/EnhanceIO/Driver/enhanceio/eio_main.o] Error 1
make[1]: *** [Makefile:1490: module/usr/src/EnhanceIO/Driver/enhanceio] Error 2
make[1]: Leaving directory '/usr/src/linux-4.10.1-gentoo'
make: *** [Makefile:28: modules] Error 2
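bio_flags() evidently no longer exists in the 4.10 headers (the error above shows the call site in eio.h's GET_BIO_FLAGS macro). A quick way to confirm against the tree being built for; the source path is taken from the build log:

```shell
# If bio_flags() is absent from the headers, GET_BIO_FLAGS in eio.h needs
# a >=4.10 compat path that derives the flags from bio->bi_opf instead.
hdrs=/usr/src/linux-4.10.1-gentoo/include/linux
if grep -rqs "bio_flags" "$hdrs"; then
    msg="bio_flags still defined: GET_BIO_FLAGS should build as-is"
else
    msg="bio_flags missing: GET_BIO_FLAGS needs a compat path via bi_opf"
fi
echo "$msg"
```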

'kernel BUG at /home/adam/code/EnhanceIO/Driver/enhanceio/eio_ttc.c:1168!'

This assertion keeps firing on a 4.18 kernel:
EIO_ASSERT(op != REQ_OP_READ);

It always fires once during startup and then typically fires again when doing 'sync'. Subsequent IO operations on the source disk also usually (but not always) silently start to hang.

Example trace:

[ 2850.000007] Call Trace:
[ 2850.000015] eio_map+0x280/0x1780 [enhanceio]
[ 2850.000020] ? iget5_locked+0x2e/0x90
[ 2850.000024] ? bdev_test+0x20/0x20
[ 2850.000031] eio_make_request_fn+0x36d/0x4d0 [enhanceio]
[ 2850.000036] ? generic_make_request_checks+0x387/0x630
[ 2850.000041] ? wait_woken+0x80/0x80
[ 2850.000045] ? mempool_alloc_slab+0x15/0x20
[ 2850.000048] generic_make_request+0x1a4/0x400
[ 2850.000052] submit_bio+0x73/0x140
[ 2850.000054] ? submit_bio+0x73/0x140
[ 2850.000058] ? _cond_resched+0x19/0x40
[ 2850.000117] vdev_disk_io_start+0x61e/0x760 [zfs]
[ 2850.000126] ? spl_kmem_cache_alloc+0x64/0x140 [spl]
[ 2850.000129] ? _cond_resched+0x19/0x40
[ 2850.000132] ? mutex_lock+0x12/0x40
[ 2850.000139] ? taskq_init_ent+0x3c/0x80 [spl]
[ 2850.000142] ? _cond_resched+0x19/0x40
[ 2850.000194] zio_vdev_io_start+0xa8/0x340 [zfs]
[ 2850.000241] ? vdev_disk_open+0x540/0x540 [zfs]
[ 2850.000290] ? zio_vdev_io_start+0xa8/0x340 [zfs]
[ 2850.000337] zio_nowait+0xb1/0x150 [zfs]
[ 2850.000381] zio_flush+0x2d/0x30 [zfs]
[ 2850.000426] zil_commit_writer+0x769/0x7c0 [zfs]
[ 2850.000471] zil_commit.part.15+0x93/0x100 [zfs]
[ 2850.000514] zil_commit+0x17/0x20 [zfs]
[ 2850.000558] zfs_sync+0x6e/0xb0 [zfs]
[ 2850.000603] zpl_sync_fs+0x50/0x80 [zfs]
[ 2850.000607] sync_fs_one_sb+0x23/0x30
[ 2850.000611] iterate_supers+0xa2/0x110
[ 2850.000614] ? __ia32_sys_fdatasync+0x20/0x20
[ 2850.000618] ksys_sync+0x62/0xb0
[ 2850.000622] __ia32_sys_sync+0xe/0x20
[ 2850.000626] do_syscall_64+0x5a/0x120
[ 2850.000630] entry_SYSCALL_64_after_hwframe+0x44/0xa9

edit cache mode, CPU stuck

When I edit the cache mode in our production environment like this:
eio_cli edit -c enhanceio1 -m ro

The CPU gets stuck like this:
Jun 26 15:56:42 storage27 kernel: BUG: soft lockup - CPU#3 stuck for 22s! [eio_cli:29929]
Jun 26 15:56:42 storage27 kernel: Modules linked in: iptable_filter xt_multiport ip_tables enhanceio_rand(OF) enhanceio_lru(OF) enhanceio_fifo(OF) enhanceio(OF) 8021q garp stp mrp llc bonding nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul iTCO_wdt crc32_pclmul crc32c_intel iTCO_vendor_support ghash_clmulni_intel aesni_intel wmi lrw i2c_i801 gf128mul glue_helper ablk_helper cryptd pcspkr sb_edac edac_core ioatdma lpc_ich mfd_core ses enclosure mei_me mei shpchp ipmi_si ipmi_msghandler xfs libcrc32c sd_mod crc_t10dif crct10dif_common ast syscopyarea sysfillrect sysimgblt isci drm_kms_helper ttm libsas ixgbe drm ahci scsi_transport_sas libahci igb libata mdio ptp pps_core megaraid_sas i2c_algo_bit i2c_core dca
Jun 26 15:56:42 storage27 kernel: CPU: 3 PID: 29929 Comm: eio_cli Tainted: GF O-------------- 3.10.0-229.el7.x86_64 #1
Jun 26 15:56:42 storage27 kernel: Hardware name: Inspur SA5212H2/SA5212H2, BIOS 2.0.4 03/25/2014
Jun 26 15:56:42 storage27 kernel: task: ffff880ee8d5b8e0 ti: ffff880917994000 task.ti: ffff880917994000
Jun 26 15:56:42 storage27 kernel: RIP: 0010:[] [] eio_finish_nrdirty+0xd8/0x1a0 [enhanceio]
Jun 26 15:56:42 storage27 kernel: RSP: 0018:ffff880917997e18 EFLAGS: 00000246
Jun 26 15:56:42 storage27 kernel: RAX: 0000000000000200 RBX: 000000000000ac0e RCX: 0000000000009eda
Jun 26 15:56:42 storage27 kernel: RDX: 0000000000000000 RSI: 0000000000000246 RDI: 0000000000000246
Jun 26 15:56:42 storage27 kernel: RBP: ffff880917997e28 R08: 0000000000000246 R09: 00000000eb1c33c8
Jun 26 15:56:42 storage27 kernel: R10: 000000000150e700 R11: 0000000000000007 R12: 0000000000000246
Jun 26 15:56:42 storage27 kernel: R13: 00000000eb1c33c8 R14: 000000000150e700 R15: 0000000000000007
Jun 26 15:56:42 storage27 kernel: FS: 00007f52aa343740(0000) GS:ffff88085fc60000(0000) knlGS:0000000000000000
Jun 26 15:56:42 storage27 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jun 26 15:56:42 storage27 kernel: CR2: 0000000015cb3000 CR3: 0000000922584000 CR4: 00000000001407e0
Jun 26 15:56:42 storage27 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Jun 26 15:56:42 storage27 kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Jun 26 15:56:42 storage27 kernel: Stack:
Jun 26 15:56:42 storage27 kernel: 0000000000000002 ffff880854080124 ffff880917997e78 ffffffffa03df81b
Jun 26 15:56:42 storage27 kernel: ffff880079b48da0 0000000000d396d0 0000000500000000 0000000000d396d0
Jun 26 15:56:42 storage27 kernel: ffffc90016af6000 0000000000d396d0 0000000000d396d0 0000000000000000
Jun 26 15:56:42 storage27 kernel: Call Trace:
Jun 26 15:56:42 storage27 kernel: [] eio_cache_edit+0x3ab/0x650 [enhanceio]
Jun 26 15:56:42 storage27 kernel: [] eio_ioctl+0x265/0x310 [enhanceio]
Jun 26 15:56:42 storage27 kernel: [] do_vfs_ioctl+0x2e5/0x4c0
Jun 26 15:56:42 storage27 kernel: [] ? SYSC_newfstat+0x34/0x60
Jun 26 15:56:42 storage27 kernel: [] SyS_ioctl+0xa1/0xc0
Jun 26 15:56:42 storage27 kernel: [] system_call_fastpath+0x16/0x1b

Jun 26 16:03:02 storage27 kernel: BUG: soft lockup - CPU#3 stuck for 22s! [eio_cli:29929]
Jun 26 16:03:02 storage27 kernel: Modules linked in: iptable_filter xt_multiport ip_tables enhanceio_rand(OF) enhanceio_lru(OF) enhanceio_fifo(OF) enhanceio(OF) 8021q garp stp mrp llc bonding nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul iTCO_wdt crc32_pclmul crc32c_intel iTCO_vendor_support ghash_clmulni_intel aesni_intel wmi lrw i2c_i801 gf128mul glue_helper ablk_helper cryptd pcspkr sb_edac edac_core ioatdma lpc_ich mfd_core ses enclosure mei_me mei shpchp ipmi_si ipmi_msghandler xfs libcrc32c sd_mod crc_t10dif crct10dif_common ast syscopyarea sysfillrect sysimgblt isci drm_kms_helper ttm libsas ixgbe drm ahci scsi_transport_sas libahci igb libata mdio ptp pps_core megaraid_sas i2c_algo_bit i2c_core dca
Jun 26 16:03:02 storage27 kernel: CPU: 3 PID: 29929 Comm: eio_cli Tainted: GF O-------------- 3.10.0-229.el7.x86_64 #1
Jun 26 16:03:02 storage27 kernel: Hardware name: Inspur SA5212H2/SA5212H2, BIOS 2.0.4 03/25/2014
Jun 26 16:03:02 storage27 kernel: task: ffff880ee8d5b8e0 ti: ffff880917994000 task.ti: ffff880917994000
Jun 26 16:03:02 storage27 kernel: RIP: 0010:[] [] _raw_spin_unlock_irqrestore+0x1b/0x40
Jun 26 16:03:02 storage27 kernel: RSP: 0018:ffff880917997de0 EFLAGS: 00000246
Jun 26 16:03:02 storage27 kernel: RAX: 0000000000000246 RBX: 00000000000151ac RCX: 0000000000005c5a
Jun 26 16:03:02 storage27 kernel: RDX: 000000005c5a5c5a RSI: 0000000000000246 RDI: 0000000000000246
Jun 26 16:03:02 storage27 kernel: RBP: ffff880917997de8 R08: 0000000000000246 R09: 00000000eb1c33c8
Jun 26 16:03:02 storage27 kernel: R10: 000000000150e700 R11: 0000000000000007 R12: 00000000005ef860
Jun 26 16:03:02 storage27 kernel: R13: 00000000005ef860 R14: 000000000150e600 R15: ffff8808541f1bf0
Jun 26 16:03:02 storage27 kernel: FS: 00007f52aa343740(0000) GS:ffff88085fc60000(0000) knlGS:0000000000000000
Jun 26 16:03:02 storage27 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jun 26 16:03:02 storage27 kernel: CR2: 0000000015cb3000 CR3: 0000000922584000 CR4: 00000000001407e0
Jun 26 16:03:02 storage27 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Jun 26 16:03:02 storage27 kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Jun 26 16:03:02 storage27 kernel: Stack:
Jun 26 16:03:02 storage27 kernel: ffff880854080000 ffff880917997e08 ffffffffa03d9ef9 ffff880854080000
Jun 26 16:03:02 storage27 kernel: ffffffffa03f30a0 ffff880917997e28 ffffffffa03de1f0 0000000000000002
Jun 26 16:03:02 storage27 kernel: ffff880854080124 ffff880917997e78 ffffffffa03df81b ffff880079b48da0
Jun 26 16:03:02 storage27 kernel: Call Trace:
Jun 26 16:03:02 storage27 kernel: [] eio_clean_all+0xb9/0xe0 [enhanceio]
Jun 26 16:03:02 storage27 kernel: [] eio_finish_nrdirty+0xe0/0x1a0 [enhanceio]
Jun 26 16:03:02 storage27 kernel: [] eio_cache_edit+0x3ab/0x650 [enhanceio]
Jun 26 16:03:02 storage27 kernel: [] eio_ioctl+0x265/0x310 [enhanceio]
Jun 26 16:03:02 storage27 kernel: [] do_vfs_ioctl+0x2e5/0x4c0
Jun 26 16:03:02 storage27 kernel: [] ? SYSC_newfstat+0x34/0x60
Jun 26 16:03:02 storage27 kernel: [] SyS_ioctl+0xa1/0xc0
Jun 26 16:03:02 storage27 kernel: [] system_call_fastpath+0x16/0x1b

Jun 26 16:01:38 storage27 kernel: BUG: soft lockup - CPU#3 stuck for 22s! [eio_cli:29929]
Jun 26 16:01:38 storage27 kernel: Modules linked in: iptable_filter xt_multiport ip_tables enhanceio_rand(OF) enhanceio_lru(OF) enhanceio_fifo(OF) enhanceio(OF) 8021q garp stp mrp llc bonding nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul iTCO_wdt crc32_pclmul crc32c_intel iTCO_vendor_support ghash_clmulni_intel aesni_intel wmi lrw i2c_i801 gf128mul glue_helper ablk_helper cryptd pcspkr sb_edac edac_core ioatdma lpc_ich mfd_core ses enclosure mei_me mei shpchp ipmi_si ipmi_msghandler xfs libcrc32c sd_mod crc_t10dif crct10dif_common ast syscopyarea sysfillrect sysimgblt isci drm_kms_helper ttm libsas ixgbe drm ahci scsi_transport_sas libahci igb libata mdio ptp pps_core megaraid_sas i2c_algo_bit i2c_core dca
Jun 26 16:01:38 storage27 kernel: CPU: 3 PID: 29929 Comm: eio_cli Tainted: GF O-------------- 3.10.0-229.el7.x86_64 #1
Jun 26 16:01:38 storage27 kernel: Hardware name: Inspur SA5212H2/SA5212H2, BIOS 2.0.4 03/25/2014
Jun 26 16:01:38 storage27 kernel: task: ffff880ee8d5b8e0 ti: ffff880917994000 task.ti: ffff880917994000
Jun 26 16:01:38 storage27 kernel: RIP: 0010:[] [] _raw_spin_lock_irqsave+0x9/0x60
Jun 26 16:01:38 storage27 kernel: RSP: 0018:ffff880917997de8 EFLAGS: 00000246
Jun 26 16:01:38 storage27 kernel: RAX: 00000000000151ad RBX: 00000000000151ac RCX: 0000000000000008
Jun 26 16:01:38 storage27 kernel: RDX: 0000000000000000 RSI: 0000000000000246 RDI: ffff880854080124
Jun 26 16:01:38 storage27 kernel: RBP: ffff880917997de8 R08: 0000000000000246 R09: 00000000eb1c33c8
Jun 26 16:01:38 storage27 kernel: R10: 000000000150e700 R11: 0000000000000007 R12: 00000000005ef860
Jun 26 16:01:38 storage27 kernel: R13: 00000000005ef860 R14: 000000000150e600 R15: ffff8808541f1bf0
Jun 26 16:01:38 storage27 kernel: FS: 00007f52aa343740(0000) GS:ffff88085fc60000(0000) knlGS:0000000000000000
Jun 26 16:01:38 storage27 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jun 26 16:01:38 storage27 kernel: CR2: 0000000015cb3000 CR3: 0000000922584000 CR4: 00000000001407e0
Jun 26 16:01:38 storage27 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Jun 26 16:01:38 storage27 kernel: DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Jun 26 16:01:38 storage27 kernel: Stack:
Jun 26 16:01:38 storage27 kernel: ffff880917997e08 ffffffffa03d9ee7 ffff880854080000 ffffffffa03f30a0
Jun 26 16:01:38 storage27 kernel: ffff880917997e28 ffffffffa03de1f0 0000000000000002 ffff880854080124
Jun 26 16:01:38 storage27 kernel: ffff880917997e78 ffffffffa03df81b ffff880079b48da0 0000000000d396d0
Jun 26 16:01:38 storage27 kernel: Call Trace:
Jun 26 16:01:38 storage27 kernel: [] eio_clean_all+0xa7/0xe0 [enhanceio]
Jun 26 16:01:38 storage27 kernel: [] eio_finish_nrdirty+0xe0/0x1a0 [enhanceio]
Jun 26 16:01:38 storage27 kernel: [] eio_cache_edit+0x3ab/0x650 [enhanceio]
Jun 26 16:01:38 storage27 kernel: [] eio_ioctl+0x265/0x310 [enhanceio]
Jun 26 16:01:38 storage27 kernel: [] do_vfs_ioctl+0x2e5/0x4c0
Jun 26 16:01:38 storage27 kernel: [] ? SYSC_newfstat+0x34/0x60
Jun 26 16:01:38 storage27 kernel: [] SyS_ioctl+0xa1/0xc0
Jun 26 16:01:38 storage27 kernel: [] system_call_fastpath+0x16/0x1b

Before and during the edit, there were some errors on the SATA disk:

Jun 26 13:17:36 storage27 kernel: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Jun 26 13:17:36 storage27 kernel: sd 0:2:1:0: [sdb] CDB:
Jun 26 13:17:36 storage27 kernel: Write(16): 8a 00 00 00 00 00 02 73 72 a8 00 00 00 08 00 00
Jun 26 13:17:36 storage27 kernel: end_request: I/O error, dev sdb, sector 41120424
Jun 26 13:17:36 storage27 kernel: sd 0:2:1:0: [sdb]
Jun 26 13:17:36 storage27 kernel: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Jun 26 13:17:36 storage27 kernel: sd 0:2:1:0: [sdb] CDB:
Jun 26 13:17:36 storage27 kernel: Write(16): 8a 00 00 00 00 00 02 73 72 90 00 00 00 08 00 00
Jun 26 13:17:36 storage27 kernel: end_request: I/O error, dev sdb, sector 41120400

Jun 26 15:56:16 storage27 kernel: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Jun 26 15:56:16 storage27 kernel: sd 0:2:1:0: [sdb] CDB:
Jun 26 15:56:16 storage27 kernel: Write(16): 8a 00 00 00 00 00 00 95 02 c8 00 00 00 08 00 00
Jun 26 15:56:16 storage27 kernel: end_request: I/O error, dev sdb, sector 9765576
Jun 26 15:56:16 storage27 kernel: sd 0:2:1:0: [sdb]
Jun 26 15:56:16 storage27 kernel: Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Jun 26 15:56:16 storage27 kernel: sd 0:2:1:0: [sdb] CDB:
Jun 26 15:56:16 storage27 kernel: Write(16): 8a 00 00 00 00 00 00 95 00 68 00 00 00 08 00 00
Jun 26 15:56:16 storage27 kernel: end_request: I/O error, dev sdb, sector 9764968
Jun 26 15:56:42 storage27 kernel: BUG: soft lockup - CPU#3 stuck for 22s! [eio_cli:29929]

Maybe the I/O errors on the SATA disk caused the CPU to get stuck. Any ideas?

Sudo make install fails with missing Makefile

EnhanceIO compiled correctly, but it gave an error when I tried to install it using sudo:

$ sudo make install
make -C /lib/modules/4.9.0-3-amd64/build M= modules V=0
make[1]: Entering directory '/usr/src/linux-headers-4.9.0-3-amd64'
/usr/src/linux-headers-4.9.0-3-common/scripts/Makefile.build:44: /usr/src/linux-headers-4.9.0-3-common/scripts/basic/Makefile: No such file or directory
make[5]: *** No rule to make target '/usr/src/linux-headers-4.9.0-3-common/scripts/basic/Makefile'.  Stop.
/usr/src/linux-headers-4.9.0-3-common/Makefile:450: recipe for target 'scripts_basic' failed
make[4]: *** [scripts_basic] Error 2
/usr/src/linux-headers-4.9.0-3-common/scripts/Makefile.build:44: /usr/src/linux-headers-4.9.0-3-common/arch/x86/entry/syscalls/Makefile: No such file or directory
make[4]: *** No rule to make target '/usr/src/linux-headers-4.9.0-3-common/arch/x86/entry/syscalls/Makefile'.  Stop.
arch/x86/Makefile:192: recipe for target 'archheaders' failed
make[3]: *** [archheaders] Error 2
Makefile:150: recipe for target 'sub-make' failed
make[2]: *** [sub-make] Error 2
Makefile:8: recipe for target 'all' failed
make[1]: *** [all] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-4.9.0-3-amd64'
Makefile:28: recipe for target 'modules' failed
make: *** [modules] Error 2

I also found the solution:

$ sudo bash
# make install
make -C /lib/modules/4.9.0-3-amd64/build M=/home/sjoerd/dev/EnhanceIO/Driver/enhanceio modules V=0
make[1]: Entering directory '/usr/src/linux-headers-4.9.0-3-amd64'
  CC [M]  /home/sjoerd/dev/EnhanceIO/Driver/enhanceio/eio_mem.o
  CC [M]  /home/sjoerd/dev/EnhanceIO/Driver/enhanceio/eio_policy.o
  CC [M]  /home/sjoerd/dev/EnhanceIO/Driver/enhanceio/eio_procfs.o
...

Is this weirdness on my system, or weirdness in EnhanceIO? Anyway, I wanted to file an issue in case others would encounter the same error.
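One clue is the `M= modules` in the failing log versus `M=/home/sjoerd/dev/EnhanceIO/Driver/enhanceio modules` in the working one: if the Makefile builds M from the PWD environment variable, a sudo policy that scrubs PWD would explain the difference. A small demonstration of the mechanism (the makefile below is a throwaway created here, not EnhanceIO's):

```shell
# make's $(PWD) comes from the environment, so if PWD is scrubbed,
# M=$(PWD) expands to nothing. /tmp/pwd-demo.mk is a throwaway file.
printf 'show:\n\t@echo "M=$(PWD)"\n' > /tmp/pwd-demo.mk
with_pwd=$(make -s -f /tmp/pwd-demo.mk show)
without_pwd=$(env -u PWD make -s -f /tmp/pwd-demo.mk show)
echo "PWD set:   $with_pwd"
echo "PWD unset: $without_pwd"
```

If that is the cause, passing the directory explicitly (e.g. sudo make PWD="$(pwd)" install) or running make inside sudo bash, as you found, would both work around it.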

cache creation failed

Here is the error message the system shows when I try to create a cache:
cache creation failed: get_device for source device failed.

And here is the command I used to create the cache:

sudo eio_cli create -d /home/ubuntu/Downloads/story1 -s /home/ubuntu/Downloads/story2 -m wb -c my_cache

How should I solve this problem?
Many thanks.
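A guess based on the error text: get_device usually fails because the given paths are regular files rather than block devices, and both paths above live under a home directory. A sketch of a quick check, plus a possible loop-device workaround (for experimentation only, not production):

```shell
# Check whether a path can be handed to eio_cli directly; "get_device for
# source device failed" typically means the path is not a block device.
check_dev() {
    if [ -b "$1" ]; then
        echo "$1: block device, usable with eio_cli"
    else
        echo "$1: not a block device; wrap it first: sudo losetup -f --show $1"
    fi
}
check_dev /home/ubuntu/Downloads/story1
check_dev /home/ubuntu/Downloads/story2
# Once both are block devices (losetup prints e.g. /dev/loop0):
#   sudo eio_cli create -d /dev/loop0 -s /dev/loop1 -m wb -c my_cache
```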

Test 0

I'm pretty sure I did something wrong... I somehow got worse performance after enabling the cache.

HDD Info

Model Family: Fujitsu MHZ BK
Device Model: FUJITSU MHZ2080BK G2
Serial Number: K85CT8A255YM
LU WWN Device Id: 5 00000e 042fa9a72
Firmware Version: 8A1F
User Capacity: 80,026,361,856 bytes [80.0 GB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS T13/1699-D revision 3f
SATA Version is: SATA 2.6, 3.0 Gb/s
Local Time is: Tue Aug 22 16:05:16 2017 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

SSD Info

Model Family: SandForce Driven SSDs
Device Model: SanDisk SDSSDX120GG25
Serial Number: 120947401598
LU WWN Device Id: 5 001b44 7349bdf7e
Firmware Version: R211
User Capacity: 120,034,123,776 bytes [120 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS, ACS-2 T13/2015-D revision 3
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Tue Aug 22 16:09:47 2017 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

File System on HDD

FS: btrfs, no cache

Label: none uuid: 7d2a05fc-6c1b-40e0-afd3-1aafb7bdfcf1
Total devices 1 FS bytes used 112.00KiB
devid 1 size 74.53GiB used 2.02GiB path /dev/sdi101

Benchmark test

The benchmark was done with gnome-disk-utility 3.24.1.

HDD with BTRFS

read: 55.7 MB/s
write: 51.3 MB/s
access: 13.6 ms

SSD

read: 372.1 MB/s
write: 299.1 MB/s
access: 0.2 ms

HDD with BTRFS + cache on SSD

This is the final result

read: 24.8 MB/s
write: 54.4 MB/s
access: 18.26 ms

How the cache was created

sudo python2 CLI/eio_cli create -d /dev/sdi101 -s /dev/sdh2 -mwb -c CACHE

Will the writeback module cause data inconsistencies?

While studying EnhanceIO, I have a question about status changes.

As I understand it, when an IO (A) performs a cached write, eio_write_peek() marks the target cache block VALID|CACHEWRITEINPROG.
If, at the same time, another IO (B) hits the same cache block, it takes the uncached_write path, which only writes to the disk.

When IO (B) finishes, its callback updates the block status. But if IO (A) has already finished by then, the block has moved from VALID|CACHEWRITEINPROG to ALREADY_DIRTY, so IO (B) does not invalidate it even though the cache block now holds stale data.

Will this lead to data inconsistency?
Maybe my wording is not very clear, so I tried to draw a flowchart.

[flowchart image attachment]

Could you please explain?
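To make the scenario concrete, the sequence in the question can be written out step by step. This is only a restatement of the reporter's reasoning with state labels, not a claim about what the driver actually does:

```shell
# Step-by-step replay of the suspected race; states are labels only.
state="INVALID"

# IO(A): cached write reserves the block in eio_write_peek().
state="VALID|CACHEWRITEINPROG"

# IO(B): hits the same block while A is in flight, takes the uncached
# path, and writes only to the disk; the cache block is left untouched.

# IO(A) completes first: the block holds A's data and is marked dirty.
state="ALREADY_DIRTY"

# IO(B)'s completion callback runs last; an ALREADY_DIRTY block is not
# invalidated, so the cache keeps A's data while the disk holds B's
# newer data.
echo "final cache state: $state (cache stale relative to disk)"
```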
