
CompressionType · rust-rocksdb (116 comments, open)

tikv commented on July 26, 2024
CompressionType

from rust-rocksdb.

Comments (116)

siddontang commented on July 26, 2024

Hi @zengqingfu1442

The compression type must already be supported in RocksDB itself, so you should add support there first.

zengqingfu1442 commented on July 26, 2024

Yes, I know. It is supported in RocksDB now, and I have installed it on my machine. But do I need to rewrite it in Rust?

siddontang commented on July 26, 2024

You just need to add it in https://github.com/pingcap/rust-rocksdb/blob/master/librocksdb_sys/crocksdb/crocksdb/c.h#L949 and https://github.com/pingcap/rust-rocksdb/blob/master/librocksdb_sys/src/lib.rs#L87, provided RocksDB supports it.
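For illustration, here is a self-contained sketch of the enum side of that change. This mocks, rather than reproduces, the `DBCompressionType` enum in librocksdb_sys/src/lib.rs; the `CSSZlib` variant and its discriminant 10 are taken from the test output later in this thread. The crucial point is that the Rust discriminants must mirror the C enum in crocksdb/c.h value-for-value:

```rust
// Illustrative mock of librocksdb_sys's DBCompressionType, not the real
// source. The explicit discriminants must match the C enum in
// crocksdb/c.h exactly, because the raw value crosses the FFI boundary.
#[allow(dead_code)]
#[derive(Copy, Clone, Debug, PartialEq)]
#[repr(i32)]
pub enum DBCompressionType {
    No = 0,
    Snappy = 1,
    Zlib = 2,
    Bz2 = 3,
    Lz4 = 4,
    Lz4hc = 5,
    Zstd = 7,
    // Hypothetical new entry: 10 is the value this thread's user
    // registered for CSSZlib in their patched RocksDB.
    CSSZlib = 10,
}

fn main() {
    // The discriminant is what RocksDB sees on the other side of the FFI.
    assert_eq!(DBCompressionType::CSSZlib as i32, 10);
    assert_eq!(DBCompressionType::Zstd as i32, 7);
    println!("{:?}", DBCompressionType::CSSZlib);
}
```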

zengqingfu1442 commented on July 26, 2024

Thanks, but how can I build TiKV against a RocksDB storage engine that supports the new compression type?

siddontang commented on July 26, 2024

You can build RocksDB dynamically, point https://github.com/pingcap/tikv/blob/master/Cargo.toml#L78 at your own rust-rocksdb, then run make release.

zengqingfu1442 commented on July 26, 2024

Thanks, I see. Should I also put the source code of the RocksDB I modified into my own rust-rocksdb repository? If so, where?

siddontang commented on July 26, 2024

TiKV only needs to link your RocksDB library dynamically; it doesn't care about the source.

BTW, if you want to link your own RocksDB statically, you can update https://github.com/pingcap/rust-rocksdb/blob/master/librocksdb_sys/build.sh#L127

zengqingfu1442 commented on July 26, 2024

Thanks, I see. I have now used tidb-ansible to deploy a TiDB cluster (TiDB server, PD server, and TiKV server). Should I install my own RocksDB on each node?

siddontang commented on July 26, 2024
  1. If TiKV links RocksDB dynamically, you must install it on each node.
  2. If RocksDB is built statically into TiKV, you don't need to.

zengqingfu1442 commented on July 26, 2024

Thanks, but why doesn't the RocksDB built for TiKV use jemalloc when compiling?
https://github.com/pingcap/rust-rocksdb/blob/master/librocksdb_sys/build.sh#L138

siddontang commented on July 26, 2024

/cc @BusyJay

BusyJay commented on July 26, 2024

Allocation is taken over by rustc when building a binary, so linking to jemalloc is unnecessary.

zengqingfu1442 commented on July 26, 2024

Thanks, Jay Lee. Can I use librocksdb_sys/build.sh instead of the build.rs declared in librocksdb_sys/Cargo.toml when compiling rust-rocksdb? @BusyJay

BusyJay commented on July 26, 2024

I don't get it. Do you mean you want to use librocksdb_sys/build.sh to build a native RocksDB library? I think it should work, but it's recommended to follow the instructions in the RocksDB repository instead.

zengqingfu1442 commented on July 26, 2024

Thanks, I have now built TiKV linking dynamically against a RocksDB that supports a new compression type. I used RocksDB 5.7.0 even though the code in https://github.com/pingcap/rust-rocksdb/blob/master/librocksdb_sys/crocksdb/c.cc#L40 only supports 5.7.3+; I changed the 3 to 0 and TiKV built successfully. What problems might I run into using RocksDB 5.7.0? Thanks.

siddontang commented on July 26, 2024

I remember that 5.7.3+ fixes some bugs, but I think it doesn't matter for your test.

If you want to use it in production, though, I suggest a newer version.

zengqingfu1442 commented on July 26, 2024

Thanks. I have integrated a new compression supported by RocksDB 5.7.0 into tikv-v1.0.1, but now I am using tikv-v1.0.0 and it hits this error during compilation, in the file "tikv/src/config.rs":

Compiling tikv v1.0.0 (file:///home/dzeng/tidb/build-tidb/deps/tikv)
error[E0061]: this function takes 4 parameters but 1 parameter was supplied
   --> src/config.rs:92:39
    |
92  |         block_base_opts.set_lru_cache($opt.block_cache_size.0 as usize);
    |                                       ^ expected 4 parameters
93  |         block_base_opts.set_cache_index_and_filter_blocks($opt.cache_index_and_filter_blocks);
94  |         block_base_opts.set_pin_l0_filter_and_index_blocks_in_cache(
95  |             $opt.pin_l0_filter_and_index_blocks);
...
157 |     pub fn build_opt(&self) -> ColumnFamilyOptions {
158 |         let mut cf_opts = build_cf_opt!(self);
    |                           ------------------- in this macro invocation

siddontang commented on July 26, 2024

@zengqingfu1442

TiKV doesn't use the latest rust-rocksdb; we will update it later. For now you can supply the missing arguments yourself.
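As a sketch of what "supplying the missing arguments" might look like: in C++ RocksDB, NewLRUCache takes (capacity, num_shard_bits, strict_capacity_limit, high_pri_pool_ratio), and a four-parameter set_lru_cache presumably mirrors that. The struct below is only a stand-in for rust-rocksdb's BlockBasedOptions, so treat the exact parameter names and defaults as assumptions to check against the rust-rocksdb version in use:

```rust
// Stand-in for rust-rocksdb's BlockBasedOptions, used only to show the
// assumed four-argument call shape of the newer set_lru_cache.
struct BlockBasedOptions {
    cache_capacity: usize,
}

impl BlockBasedOptions {
    fn set_lru_cache(
        &mut self,
        capacity: usize,
        _num_shard_bits: i32,         // -1 lets RocksDB pick the shard count
        _strict_capacity_limit: bool, // fail inserts once the cache is full
        _high_pri_pool_ratio: f64,    // share reserved for high-priority blocks
    ) {
        self.cache_capacity = capacity;
    }
}

fn main() {
    let mut block_base_opts = BlockBasedOptions { cache_capacity: 0 };
    // The old one-argument call site from src/config.rs gains three
    // conservative defaults:
    block_base_opts.set_lru_cache(64 << 20, -1, false, 0.0);
    assert_eq!(block_base_opts.cache_capacity, 64 << 20);
    println!("cache capacity: {}", block_base_opts.cache_capacity);
}
```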

zengqingfu1442 commented on July 26, 2024

The default compression of TiDB is snappy, right? So if I want to use another compression type, should I set the compression-related parameters in tidb-ansible/conf/tikv.yml before deploying TiDB?

siddontang commented on July 26, 2024

No, the defaults are lz4 and zstd for different levels.

You must set your own type before deploying.

zengqingfu1442 commented on July 26, 2024

Thanks, is tidb-ansible/conf/tikv.yml the place to update it?

siddontang commented on July 26, 2024

Yes, and you must also put your tikv-server in resource/bin.

zengqingfu1442 commented on July 26, 2024

If I put my built tikv-server into tidb-ansible/downloads, will it work? Or must I put it under tidb-ansible/resource/bin? Thanks.

siddontang commented on July 26, 2024

Put it in tidb-ansible/resource/bin, then run a rolling update.

zengqingfu1442 commented on July 26, 2024

I tried putting my built tikv-server, tikv-ctl, and tikv-fail into tidb-v1.0.1.tar.gz, replacing the originals, and then deployed. In the end I got the same tikv-* binaries under the deploy directory (I just needed to copy the shared-library *.so files to the proper directory).

zengqingfu1442 commented on July 26, 2024

I will also put the built TiKV binaries into tidb-ansible/resources/bin to replace the original binaries. Thanks.

zengqingfu1442 commented on July 26, 2024

What is the meaning of compression-per-level? How can I use a specific compression type in the rocksdb defaultcf?

siddontang commented on July 26, 2024

You must first make sure RocksDB knows your compression type; then you can set its name for the levels you want to compress.

zengqingfu1442 commented on July 26, 2024

Yes, I have made RocksDB aware of it, and tikv-server and tikv-ctl are also linked against the compression's dynamic library.

zengqingfu1442 commented on July 26, 2024

Now RocksDB knows the new compression, but TiKV doesn't.
I put the new TiKV binary under resources/bin and ran rolling_update. When I use tidb-ansible to start the TiDB cluster, TiKV can't start up. tikv_stderr.log says:
invalid configuration file "conf/tikv.toml": invalid value: string "csszlib", expected invalid compression type for key rocksdb.defaultcf.compression-per-level

The new TiKV binary is linked against the new compression's dynamic library:
[dzeng@dzeng bin]$ ldd tikv-server
linux-vdso.so.1 => (0x00007ffc89bf0000)
librocksdb.so.5.7 => /usr/local/lib/librocksdb.so.5.7 (0x00007f8960065000)
libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007f895fd5c000)
libdl.so.2 => /usr/lib64/libdl.so.2 (0x00007f895fb58000)
librt.so.1 => /usr/lib64/librt.so.1 (0x00007f895f950000)
libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x00007f895f733000)
libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x00007f895f51d000)
libc.so.6 => /usr/lib64/libc.so.6 (0x00007f895f15a000)
/lib64/ld-linux-x86-64.so.2 (0x000055a81029f000)
libm.so.6 => /usr/lib64/libm.so.6 (0x00007f895ee57000)
libsnappy.so.1 => /usr/lib64/libsnappy.so.1 (0x00007f895ec51000)
libgflags.so.2.1 => /usr/lib64/libgflags.so.2.1 (0x00007f895ea30000)
libz.so.1 => /usr/lib64/libz.so.1 (0x00007f895e819000)
libcssz.so => /usr/lib64/libcssz.so (0x00007f895e571000)
libbz2.so.1 => /usr/lib64/libbz2.so.1 (0x00007f895e361000)
libnuma.so.1 => /usr/lib64/libnuma.so.1 (0x00007f895e154000)
libtbb.so.2 => /usr/lib64/libtbb.so.2 (0x00007f895df1f000)

How can I make TiKV aware of it when using tidb-ansible to deploy the cluster? Should I also modify some code in tidb-ansible? Thanks.

siddontang commented on July 26, 2024

Did you add your compression type here https://github.com/pingcap/rust-rocksdb/blob/master/librocksdb_sys/src/lib.rs#L88 and here https://github.com/pingcap/rust-rocksdb/blob/master/librocksdb_sys/crocksdb/crocksdb/c.h#L956?

Does your RocksDB already know your compression type?

zengqingfu1442 commented on July 26, 2024

Yes, I added it and used the modified rust-rocksdb to compile TiKV successfully. RocksDB already knows the compression.

zengqingfu1442 commented on July 26, 2024

Now I think maybe I should change the rocksdb defaultcf compression type in tikv/src/config.rs from lz4/zstd to the new compression before compiling TiKV:

impl Default for DefaultCfConfig {
    fn default() -> DefaultCfConfig {
        DefaultCfConfig {
            block_size: ReadableSize::kb(64),
            block_cache_size: ReadableSize::mb(memory_mb_for_cf(false, CF_DEFAULT) as u64),
            cache_index_and_filter_blocks: true,
            pin_l0_filter_and_index_blocks: true,
            use_bloom_filter: true,
            whole_key_filtering: true,
            bloom_filter_bits_per_key: 10,
            block_based_bloom_filter: false,
            read_amp_bytes_per_bit: 0,
            compression_per_level: [
                DBCompressionType::No,
                DBCompressionType::No,
                DBCompressionType::Lz4,
                DBCompressionType::Lz4,
                DBCompressionType::Lz4,
                DBCompressionType::Zstd,
                DBCompressionType::Zstd,
            ],

siddontang commented on July 26, 2024

You should add your compression type here too: https://github.com/pingcap/tikv/blob/master/src/util/config.rs#L52
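The config-parsing side can be sketched as below. This is a self-contained illustration of the string-to-enum mapping that src/util/config.rs performs (the names here are made up; the real function returns rocksdb's DBCompressionType). Without the extra match arm, TiKV rejects the name with an error of exactly the "invalid value: string \"csszlib\"" shape seen earlier in this thread:

```rust
// Illustrative sketch of TiKV's compression-name parsing, not the real
// code: each string accepted in compression-per-level must map onto an
// enum variant, or config validation fails.
#[derive(Debug, PartialEq)]
enum CompressionType {
    No,
    Snappy,
    Zlib,
    Bz2,
    Lz4,
    Lz4hc,
    Zstd,
    CSSZlib, // hypothetical new type from this thread
}

fn parse_compression_type(s: &str) -> Result<CompressionType, String> {
    match s.trim().to_lowercase().as_str() {
        "no" => Ok(CompressionType::No),
        "snappy" => Ok(CompressionType::Snappy),
        "zlib" => Ok(CompressionType::Zlib),
        "bzip2" => Ok(CompressionType::Bz2),
        "lz4" => Ok(CompressionType::Lz4),
        "lz4hc" => Ok(CompressionType::Lz4hc),
        "zstd" => Ok(CompressionType::Zstd),
        // Without this arm, "csszlib" in tikv.yml is rejected as an
        // invalid compression type.
        "csszlib" => Ok(CompressionType::CSSZlib),
        other => Err(format!("invalid compression type {}", other)),
    }
}

fn main() {
    assert_eq!(parse_compression_type("lz4"), Ok(CompressionType::Lz4));
    assert_eq!(parse_compression_type("csszlib"), Ok(CompressionType::CSSZlib));
    assert!(parse_compression_type("brotli").is_err());
    println!("parser ok");
}
```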

zengqingfu1442 commented on July 26, 2024

I have done that; otherwise the compilation would have raised an error.

zengqingfu1442 commented on July 26, 2024

As well as here: https://github.com/pingcap/tikv/blob/master/src/util/rocksdb/mod.rs#L40

zengqingfu1442 commented on July 26, 2024

How can we check which compression types the TiKV binary supports? In RocksDB, after make release, we can use the db_bench or db_sanity_test binaries to run a sanity check of which compressions RocksDB supports:
./db_sanity_test /mnt/sfx-card-root/rocksdb570/ create
./db_sanity_test /mnt/sfx-card-root/rocksdb570/ verify

siddontang commented on July 26, 2024

You can add your compression type to the fast-compression list directly, like:

const COMPRESSION_PRIORITY: [DBCompressionType; 4] = [
    DBCompressionType::Lz4,
    DBCompressionType::Snappy,
    DBCompressionType::Zstd,
    DBCompressionType::CSSZlib, // your own type
];

zengqingfu1442 commented on July 26, 2024

Yes, I had already done that.

siddontang commented on July 26, 2024

What is the compilation error now?

zengqingfu1442 commented on July 26, 2024

Can I change the number of supported compression types from 7 to 10, covering the 7 defaults plus csszlib (my new compression type), ZstdNotFinal, and Disable? I think that should avoid errors in tikv.log or tikv_stderr.log when I set the new compression type in tidb-ansible/conf/tikv.yml and then deploy and start.

zengqingfu1442 commented on July 26, 2024

No compilation error now. I have built TiKV with the new compression successfully.

zengqingfu1442 commented on July 26, 2024

But if I specify the new compression type in tidb-ansible/conf/tikv.yml,
rocksdb.defaultcf: compression-per-level: ["no", "no", "csszlib", "csszlib", "csszlib", "snappy", "snappy"]
rocksdb.writecf: compression-per-level: ["no", "no", "csszlib", "csszlib", "csszlib", "snappy", "snappy"]
raftdb.defaultcf: compression-per-level: ["no", "no", "csszlib", "csszlib", "csszlib", "snappy", "snappy"]
then ansible-playbook start.yml won't start TiKV; tikv_stderr.log says "invalid string value csszlib".

But when I don't specify it in tidb-ansible/conf/tikv.yml and leave it at the default, TiKV starts up, and tikv.log shows that TiKV recognized the new compression.

zhangjinpeng87 commented on July 26, 2024

Please use zlib, not csszlib.

siddontang commented on July 26, 2024

@zengqingfu1442

You said you had already fixed the problem, so why do you still hit the "invalid string value" error? Did you rebuild your TiKV and put it into resource/bin again?

zengqingfu1442 commented on July 26, 2024

Yes, I have fixed the problem. I hadn't changed the default number of compaction levels, which is 7, in tikv/src/config.rs, so when I started TiKV with ansible-playbook start.yml the binary still recognized only the default 7 compressions (no, snappy, zlib, bzip2, lz4, lz4hc, zstd), even though the default compression-per-level had been changed to [no,no,csszlib,csszlib,csszlib,snappy,snappy].

Now I have changed the number of compression levels from 7 to 10: no, snappy, zlib, bzip2, csszlib, lz4, lz4hc, zstd, zstdnotfinal, disable. After building, I put the new tikv-* binaries under tidb-ansible/resources/bin, then uncommented compression-per-level in tidb-ansible/conf/tikv.yml and changed it from 7 compressions for 7 levels to 10 compressions for 10 levels. If I don't uncomment compression-per-level, the default compression for the 10 levels is decided by the code in tikv/src/config.rs, which I changed to [no,no,csszlib,csszlib,csszlib,csszlib,csszlib,csszlib,snappy,snappy].
Thanks.

zengqingfu1442 commented on July 26, 2024

Is there any way or tool to run a sanity test checking which compression types the built tikv-server supports?
In RocksDB, we can use ./db_sanity_test to check.

siddontang commented on July 26, 2024

@zengqingfu1442

We have no such tool now; maybe you can help us write one in Rust.

zengqingfu1442 commented on July 26, 2024

One of the three TiKV instances is down, and tikv.log shows these messages (there are no errors in tikv_stderr.log). Can you help me analyze the errors? Thanks.

2017/11/28 13:10:03.476 tikv-server.rs:122: [INFO] start prometheus client
2017/11/28 13:10:03.476 mod.rs:209: [INFO] starting working thread: split check worker
2017/11/28 13:10:03.476 tikv-server.rs:234: [INFO] start storage
2017/11/28 13:10:03.477 mod.rs:209: [INFO] starting working thread: snapshot worker
2017/11/28 13:10:03.478 mod.rs:209: [INFO] starting working thread: raft gc worker
2017/11/28 13:10:03.480 mod.rs:209: [INFO] starting working thread: compact worker
2017/11/28 13:10:03.485 future.rs:115: [INFO] starting working thread: pd worker
2017/11/28 13:10:03.485 region.rs:142: [INFO] [region 2] begin apply snap data
2017/11/28 13:10:03.486 mod.rs:209: [INFO] starting working thread: consistency check worker
2017/11/28 13:10:03.488 mod.rs:209: [INFO] starting working thread: apply worker
2017/11/28 13:10:03.497 region.rs:237: [ERROR] failed to apply snap: Other(StringError("Corruption: external file have corrupted keys"))!!!
2017/11/28 13:10:03.508 mod.rs:209: [INFO] starting working thread: end-point-worker
2017/11/28 13:10:03.508 mod.rs:209: [INFO] starting working thread: snap-handler
2017/11/28 13:10:03.513 server.rs:155: [INFO] TiKV is ready to serve
2017/11/28 13:10:04.678 panic_hook.rs:99: [ERROR] thread 'raftstore-4' panicked '[region 2] 6 applying snapshot failed' at "src/raftstore/store/peer_storage.rs:1009"
stack backtrace:
0: 0x7f0c1f42495e - backtrace::backtrace::libunwind::trace
at /home/dzeng/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.2.3/src/backtrace/libunwind.rs:54
- backtrace::backtrace::trace
at /home/dzeng/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.2.3/src/backtrace/mod.rs:70
1: 0x7f0c1f4250c3 - backtrace::capture::{{impl}}::new
at /home/dzeng/tidb/integrate-csszlib-into-tidb-1.0.1/deps/tikv/target/release/build/backtrace-f2668981c6548436/out/capture.rs:79
2: 0x7f0c1f2b8766 - tikv::util::panic_hook::set_exit_hook::{{closure}}


siddontang commented on July 26, 2024

Does your disk have any errors? Did you add a TiKV instance and then see the panic?

zengqingfu1442 commented on July 26, 2024

I'm not sure whether the disk has any errors. I didn't add a TiKV instance.

zengqingfu1442 commented on July 26, 2024

I just started the cluster, and soon afterwards one of the TiKV instances went down.

zengqingfu1442 commented on July 26, 2024

What can cause a TiKV panic?

siddontang commented on July 26, 2024

failed to apply snap: Other(StringError("Corruption: external file have corrupted keys"))

The panic log shows clearly that an SST file has a corrupted key. The SST file is generated by RocksDB's SSTWriter, and we check the SST file's CRC32 before ingesting it, so I guess the key was already corrupted when the SST file was generated. Since you use your own compression library, I suspect something goes wrong when that library is used to generate the SST.

Maybe you can add a simple test like https://github.com/pingcap/rust-rocksdb/blob/master/tests/test_ingest_external_file.rs#L97 with your own compression lib to verify it.
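Independent of TiKV, a quick way to gain confidence in a compression library is a round-trip self-test: anything compressed must decompress back to the original bytes, which is exactly the property a "corrupted keys" error suggests is violated. The sketch below uses a trivial run-length coder as a stand-in for the real library's compress/decompress entry points; the point is the harness shape, not the codec:

```rust
// Round-trip self-test harness. `compress`/`decompress` here are a toy
// run-length coder standing in for the real compression library's calls.
fn compress(data: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    let mut i = 0;
    while i < data.len() {
        let b = data[i];
        let mut run = 1u8;
        // Count up to 255 repeats of the current byte.
        while i + (run as usize) < data.len() && data[i + run as usize] == b && run < 255 {
            run += 1;
        }
        out.push(run);
        out.push(b);
        i += run as usize;
    }
    out
}

fn decompress(data: &[u8]) -> Vec<u8> {
    data.chunks(2)
        .flat_map(|c| std::iter::repeat(c[1]).take(c[0] as usize))
        .collect()
}

fn main() {
    // Every sample must survive a compress/decompress round trip unchanged.
    let samples: Vec<Vec<u8>> = vec![
        b"aaaabbbccd".to_vec(),
        vec![0u8; 1024],
        (0..=255u8).collect(),
    ];
    for s in &samples {
        assert_eq!(decompress(&compress(s)), *s);
    }
    println!("round trip ok");
}
```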

zengqingfu1442 commented on July 26, 2024

I tried putting TiKV on other disks and got the same errors, so there is no disk error.

zengqingfu1442 commented on July 26, 2024

I also tried the default compression [no,no,lz4,lz4,lz4,zstd,zstd]; it's the same error.

zengqingfu1442 commented on July 26, 2024

I ran dmesg | grep -i oom and there is no OOM information.

siddontang commented on July 26, 2024

You can use the RocksDB tool sst_dump to dump the bad SST and show which compression it uses. This is not related to your current default compression configuration, but to the compression type that was used earlier to generate the SST.

/cc @huachaohuang

zengqingfu1442 commented on July 26, 2024

Thanks, I will try it.

zengqingfu1442 commented on July 26, 2024

After the TiDB cluster started up, before I ran the sysbench benchmark, one of the TiKV instances went down.
The two SST files seem to be OK:
[tcn@sfx-008 rocksdb-master]$ ./sst_dump --file=/mnt/sfx-card-root/tikv1_3/data/db --command=check
from [] to []
Process /mnt/sfx-card-root/tikv1_3/data/db/000012.sst
Sst file format: block-based
Process /mnt/sfx-card-root/tikv1_3/data/db/000014.sst
Sst file format: block-based

[tcn@sfx-008 rocksdb-master]$ ./sst_dump --file=/mnt/sfx-card-root/tikv1_3/data/db --show_compression_sizes --command=check --verify_checksum
from [] to []
Process /mnt/sfx-card-root/tikv1_3/data/db/000012.sst
Sst file format: block-based
Block Size: 16384
Compression: kNoCompression Size: 852
Compression: kSnappyCompression Size: 845
Compression: kZlibCompression Size: 827
Compression: kCSSZlibCompression Size: 846
Unsupported compression type: kBZip2Compression.
Unsupported compression type: kLZ4Compression.
Unsupported compression type: kLZ4HCCompression.
Unsupported compression type: kXpressCompression.
Unsupported compression type: kZSTD.


siddontang commented on July 26, 2024

Can you use sst_dump to dump all the KVs?

zengqingfu1442 commented on July 26, 2024

Yes:
[tcn@sfx-008 rocksdb-master]$ ./sst_dump --file=/mnt/sfx-card-root/tikv1_3/data/db --command=scan --output_hex
from [] to []
Process /mnt/sfx-card-root/tikv1_3/data/db/000012.sst
Sst file format: block-based
'0101' seq:1, type:1 => 08F3A4F6879DBFC68E5A1001
'0102' seq:5, type:0 =>
Process /mnt/sfx-card-root/tikv1_3/data/db/000014.sst
Sst file format: block-based
'0102000000000000000403' seq:7, type:0 =>
'0102000000000000000503' seq:10, type:1 => 08AF01120508AF011006
'0102000000000000000504' seq:9, type:1 => 0A070806100018AF0110AF01
'0103000000000000000401' seq:6, type:0 =>
'0103000000000000000501' seq:8, type:1 => 08011218080512001A002204080210012A04080810022A04080A1001


siddontang commented on July 26, 2024

@zengqingfu1442

/mnt/sfx-card-root/tikv1_3/data/db only contains the valid SST files; you should check /mnt/sfx-card-root/tikv1_3/data/snap instead.

zengqingfu1442 commented on July 26, 2024

Now the compression-per-level is [no,no,lz4,lz4,lz4,zstd,zstd]:

[tcn@sfx-008 sfx-card-root]$ ./sst_dump --file=/mnt/sfx-card-root/tikv1_3/data/snap --command=scan --output_hex
from [] to []
Process /mnt/sfx-card-root/tikv1_3/data/snap/rev_5_6_175_write.sst
Sst file format: block-based
/mnt/sfx-card-root/tikv1_3/data/snap/rev_5_6_175_write.sst: Corruption: LZ4 not supported or corrupted LZ4 compressed block contents
Process /mnt/sfx-card-root/tikv1_3/data/snap/rev_5_6_175_default.sst
Sst file format: block-based
/mnt/sfx-card-root/tikv1_3/data/snap/rev_5_6_175_default.sst: Corruption: LZ4 not supported or corrupted LZ4 compressed block contents
Process /mnt/sfx-card-root/tikv1_3/data/snap/rev_5_6_175_lock.sst
/mnt/sfx-card-root/tikv1_3/data/snap/rev_5_6_175_lock.sst: Corruption: file is too short (1 bytes) to be an sstable: /mnt/sfx-card-root/tikv1_3/data/snap/rev_5_6_175_lock.sst


zengqingfu1442 commented on July 26, 2024

But after deployment and start, the tikv-server binary on the machine running the 3 TiKV instances is linked against liblz4.so:

[tcn@sfx-008 bin]$ ls
node_exporter tikv-server
[tcn@sfx-008 bin]$ ldd tikv-server
linux-vdso.so.1 => (0x00007fff7f2f5000)
librocksdb.so.5.7 => /lib64/librocksdb.so.5.7 (0x00007f1b96db2000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f1b96a36000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f1b96832000)
librt.so.1 => /lib64/librt.so.1 (0x00007f1b9662a000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f1b9640d000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f1b961f7000)
libc.so.6 => /lib64/libc.so.6 (0x00007f1b95e36000)
/lib64/ld-linux-x86-64.so.2 (0x00007f1b98586000)
libm.so.6 => /lib64/libm.so.6 (0x00007f1b95b33000)
libsnappy.so.1 => /lib64/libsnappy.so.1 (0x00007f1b9592d000)
libgflags.so.2.1 => /lib64/libgflags.so.2.1 (0x00007f1b9570c000)
libz.so.1 => /lib64/libz.so.1 (0x00007f1b954f5000)
libcssz.so => /lib64/libcssz.so (0x00007f1b9524d000)
libbz2.so.1 => /lib64/libbz2.so.1 (0x00007f1b9503d000)
liblz4.so.1 => /lib64/liblz4.so.1 (0x00007f1b94e29000)
libzstd.so.1 => /lib64/libzstd.so.1 (0x00007f1b94bbe000)
libnuma.so.1 => /lib64/libnuma.so.1 (0x00007f1b949b2000)
libtbb.so.2 => /lib64/libtbb.so.2 (0x00007f1b9477c000)


siddontang commented on July 26, 2024

/mnt/sfx-card-root/tikv1_3/data/snap/rev_5_6_175_write.sst: Corruption: LZ4 not supported or corrupted LZ4 compressed block contents

It seems the SST file is corrupted. I still suspect your compression library, because we have used LZ4 and zstd for a long time and both work well.

What was the original compression-per-level in your TiKV? Do you use the same name, like lz4, but in fact call your own compression library?

zengqingfu1442 commented on July 26, 2024

1. CSSZlib compression has been used in Hadoop, HBase, Spark, MySQL, MongoDB, and so on, and it works well.
2. I changed the default compression-per-level from [no,no,lz4,lz4,lz4,zstd,zstd] to [no,no,csszlib,csszlib,csszlib,csszlib,csszlib] in tikv/src/config.rs.
3. CSSZlib is completely different from LZ4; the CSSZlib compression library is similar to Zlib.

zengqingfu1442 commented on July 26, 2024

I found that the modified RocksDB used by TiKV can't support LZ4 compression:

[dzeng@dzeng rocksdb-master]$ ./db_sanity_test /mnt/sfx-card-root/rocksdb570/ create
Creating...
Basic -- OK
SpecialComparator -- OK
ZlibCompression -- OK
ZlibCompressionVersion2 -- OK
CSSZlibCompression -- OK
CSSZlibCompressionVersion2 -- OK
LZ4Compression -- Corruption: LZ4 not supported or corrupted LZ4 compressed block contents
FAIL
LZ4HCCompression -- Corruption: LZ4HC not supported or corrupted LZ4HC compressed block contents
FAIL
ZSTDCompression -- OK
PlainTable -- OK
BloomFilter -- OK


zengqingfu1442 commented on July 26, 2024

But the RocksDB log in /mnt/sfx-card-root/tikv1_3/data/db/LOG shows that TiKV supports it:
...
2017/11/29-15:57:02.651205 7f76e2b62e40 Compression algorithms supported:
2017/11/29-15:57:02.651206 7f76e2b62e40 Snappy supported: 1
2017/11/29-15:57:02.651207 7f76e2b62e40 Zlib supported: 1
2017/11/29-15:57:02.651208 7f76e2b62e40 Bzip supported: 1
2017/11/29-15:57:02.651210 7f76e2b62e40 LZ4 supported: 1
2017/11/29-15:57:02.651216 7f76e2b62e40 ZSTD supported: 1
2017/11/29-15:57:02.651220 7f76e2b62e40 Fast CRC32 supported: 0
...


siddontang commented on July 26, 2024

@zengqingfu1442

How did you build the LZ4?

I think "LZ4 supported" only means that RocksDB has linked LZ4 and can call it; it doesn't tell you whether the library actually works correctly.

BusyJay commented on July 26, 2024

Did you build db_sanity_test or sst_dump with LZ4 enabled?

zengqingfu1442 commented on July 26, 2024

The RocksDB 5.7.0 I used for TiKV didn't support LZ4. In the Beijing local TiDB cluster it works well even without LZ4 support, because I don't use it.

[dzeng@dzeng rocksdb-master]$ ./db_sanity_test /mnt/sfx-card-root/rocksdb570/ create
Creating...
Basic -- OK
SpecialComparator -- OK
ZlibCompression -- OK
ZlibCompressionVersion2 -- OK
CSSZlibCompression -- OK
CSSZlibCompressionVersion2 -- OK
LZ4Compression -- Corruption: LZ4 not supported or corrupted LZ4 compressed block contents
FAIL
LZ4HCCompression -- Corruption: LZ4HC not supported or corrupted LZ4HC compressed block contents
FAIL
ZSTDCompression -- OK
PlainTable -- OK
BloomFilter -- OK

[dzeng@centos7-2-cdh rocksdb-master]$ ./sst_dump --file=/mnt/sfx-card-root/tikv1_1/data/raft --command=check --show_compression_sizes
from [] to []
Process /mnt/sfx-card-root/tikv1_1/data/raft/001682.sst
Sst file format: block-based
Block Size: 16384
Compression: kNoCompression Size: 26299543
Compression: kSnappyCompression Size: 5771933
Compression: kZlibCompression Size: 3617874
mypid = 19772, dev_name = /dev/sfx0, dev_num = 0
Compression: kCSSZlibCompression Size: 4536131
Compression: kBZip2Compression Size: 2808291
Unsupported compression type: kLZ4Compression.
Unsupported compression type: kLZ4HCCompression.
Unsupported compression type: kXpressCompression.
Unsupported compression type: kZSTD.


zengqingfu1442 commented on July 26, 2024

I just ran yum install lz4-devel before running make release for RocksDB.

siddontang commented on July 26, 2024

@zengqingfu1442

Because we haven't used your compression lib before, we can't tell whether this is caused by your compiled RocksDB or not. I suggest starting a new TiKV cluster that disables all compression, and checking whether it panics.

If not, switch everything to LZ4 and check again. Finally, use your own compression library.

Every check should clean up all data and use a fresh cluster.

zengqingfu1442 commented on July 26, 2024

I destroyed the cluster, set compression-per-level to [no,no,no,no,no,no,no], and started it again. After a while one of the TiKV instances is still down, and tikv.log shows the same error.

[tcn@sfx-008 rocksdb-master]$ ./sst_dump --file=/mnt/sfx-card-root/tikv1_3/data/snap --command=scan --output_hex
from [] to []
Process /mnt/sfx-card-root/tikv1_3/data/snap/rev_3_6_200_default.sst
Sst file format: block-based
/mnt/sfx-card-root/tikv1_3/data/snap/rev_3_6_200_default.sst: Corruption: LZ4 not supported or corrupted LZ4 compressed block contents
Process /mnt/sfx-card-root/tikv1_3/data/snap/rev_3_6_200_lock.sst
/mnt/sfx-card-root/tikv1_3/data/snap/rev_3_6_200_lock.sst: Corruption: file is too short (1 bytes) to be an sstable: /mnt/sfx-card-root/tikv1_3/data/snap/rev_3_6_200_lock.sst
Process /mnt/sfx-card-root/tikv1_3/data/snap/rev_3_6_200_write.sst
Sst file format: block-based
/mnt/sfx-card-root/tikv1_3/data/snap/rev_3_6_200_write.sst: Corruption: LZ4 not supported or corrupted LZ4 compressed block contents

[tcn@sfx-008 rocksdb-master]$ ./db_sanity_test /mnt/sfx-card-root/rocksdb570/ create
Creating...
Basic -- OK
SpecialComparator -- OK
ZlibCompression -- OK
ZlibCompressionVersion2 -- OK
CSSZlibCompression -- OK
CSSZlibCompressionVersion2 -- OK
LZ4Compression -- Corruption: LZ4 not supported or corrupted LZ4 compressed block contents
FAIL
LZ4HCCompression -- Corruption: LZ4HC not supported or corrupted LZ4HC compressed block contents
FAIL
ZSTDCompression -- OK
PlainTable -- OK
BloomFilter -- OK


siddontang commented on July 26, 2024

@huachaohuang

Why do we still use LZ4 here?

zengqingfu1442 commented on July 26, 2024

No, I set compression-per-level to [no,no,no,no,no,no,no]. But because the RocksDB I use doesn't support LZ4, sst_dump still raises that error when analyzing the SST files.

siddontang commented on July 26, 2024

@zengqingfu1442

Can you run cargo test directly to check whether your RocksDB works well?

zengqingfu1442 commented on July 26, 2024

I deployed the original TiDB cluster successfully and ran the sysbench benchmark. Then I replaced tidb-ansible/resources/bin/tikv-* with my own tikv-* binaries and did a rolling_update, but TiKV couldn't start up; tikv.log said "2017/11/30 15:30:44.985 tikv-server.rs:230: [ERROR] failed to start node: RaftServer(RocksDb("Corruption: LZ4 not supported or corrupted LZ4 compressed block contents"))".
But I didn't use LZ4 compression in tikv.yml.
So it seems that when TiKV starts, it checks whether RocksDB supports LZ4 compression, and if not, TiKV won't start up. BTW, my TiKV links librocksdb.so dynamically.

zengqingfu1442 avatar zengqingfu1442 commented on July 26, 2024

You mean cargo test in rust-rocksdb?

[dzeng@dzeng rust-rocksdb]$ make test
Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
Running target/debug/deps/rocksdb-7707f1ef9082226d

running 18 tests
test rocksdb::test::snapshot_test ... ok
No
0
Snappy
1
Zlib
2
CSSZlib
10
ZstdNotFinal
64
Lz4hc
5
Bz2
3
Lz4
4
Zstd
7
test rocksdb::test::test_supported_compression ... ok
retrieved utf8 value: abcdefgh
test merge_operator::test::mergetest ... ok
Hello k1: v1111
Hello k2: v2222
Hello k3: v3333
test rocksdb::test::log_dir_test ... ok
test rocksdb::test::errors_do_stuff ... ok
test rocksdb::test::external ... ok
test rocksdb::test::iterator_test ... ok
test rocksdb::test::test_get_approximate_memtable_stats ... ok
test rocksdb::test::single_delete_test ... ok
test rocksdb::test::writebatch_works ... ok
test rocksdb::test::test_pause_bg_work ... ok
test rocksdb::test::list_column_families_test ... ok
test rocksdb::test::test_get_all_key_versions ... ok
test rocksdb::test::block_cache_usage ... ok
test rocksdb::test::backup_db_test ... ok
test rocksdb::test::approximate_size_test ... ok
test rocksdb::test::flush_cf ... ok
test rocksdb::test::property_test ... ok

test result: ok. 18 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

 Running target/debug/deps/rocksdb-d816a6ab89f53acf

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

 Running target/debug/deps/test-c3d4eb64fe9a3ee5

running 82 tests
test test_delete_range::test_delete_range ... ok
cf1 created successfully
successfully opened db with column family
test test_delete_range::test_delete_range_ingest_file ... ok
test test_delete_range::test_delete_range_case_2 ... ok
test test_delete_range::test_delete_range_prefix_bloom_case_2 ... ok
successfully opened db with column family
test test_compaction_filter::test_compaction_filter ... ok
cf1 successfully dropped.
test test_column_family::test_column_family ... ok
test test_delete_range::test_delete_range_case_1 ... ok
test test_delete_range::test_delete_range_prefix_bloom_case_1 ... ok
test test_delete_range::test_delete_range_case_4 ... ok
test test_event_listener::test_event_listener_ingestion ... ok
test test_delete_range::test_delete_range_case_6 ... ok
test test_delete_range::test_delete_range_compact ... ok
test test_delete_range::test_delete_range_case_5 ... ok
test test_ingest_external_file::test_ingest_external_file ... ok
test test_delete_range::test_delete_range_case_3 ... ok
test test_ingest_external_file::test_ingest_external_file_new ... ok
test test_iterator::read_with_upper_bound ... ok
test test_compact_range::test_compact_range ... ok
test test_ingest_external_file::test_ingest_external_file_new_cf ... ok
test test_iterator::test_iterator ... ok
test test_iterator::test_seek_for_prev ... ok
test test_rate_limiter::test_rate_limiter ... ok
test test_rate_limiter::test_rate_limiter_sendable ... ok
test test_delete_range::test_delete_range_prefix_bloom_case_3 ... ok
test test_read_only::test_open_for_read_only ... ok
test test_delete_range::test_delete_range_prefix_bloom_case_4 ... ok
test test_rocksdb_options::test_allow_concurrent_memtable_write ... ok
test test_multithreaded::test_multithreaded ... ok
test test_delete_range::test_delete_range_prefix_bloom_case_5 ... ok
test test_rocksdb_options::test_clone_options ... ok
test test_rocksdb_options::test_auto_roll_max_size_info_log ... ok
test test_rocksdb_options::test_bottommost_compression ... ok
test test_read_only::test_open_cf_for_read_only ... ok
test test_delete_range::test_delete_range_prefix_bloom_case_6 ... ok
test test_rocksdb_options::test_compaction_readahead_size ... ok
test test_rocksdb_options::test_db_paths ... ok
test test_rocksdb_options::test_enable_statistics ... ok
test test_rocksdb_options::test_direct_read_write ... ok
test test_rocksdb_options::test_enable_pipelined_write ... ok
test test_rocksdb_options::test_fifo_compaction_options ... ok
test test_rocksdb_options::test_get_compression ... ok
test test_rocksdb_options::test_get_compression_per_level ... ok
test test_rocksdb_options::test_flush_wal ... ok
test test_delete_range::test_delete_range_prefix_bloom_compact_case ... ok
test test_rocksdb_options::test_log_file_opt ... ok
test test_rocksdb_options::test_manual_wal_flush ... ok
test test_delete_files_in_range::test_delete_files_in_range_with_snap ... ok
test test_rocksdb_options::test_memtable_insert_hint_prefix_extractor ... ok
test test_iterator::test_fixed_suffix_seek ... ok
test test_rocksdb_options::test_pending_compaction_bytes_limit ... ok
test test_rocksdb_options::test_set_bytes_per_sync ... ok
test test_rocksdb_options::test_read_options ... ok
test test_rocksdb_options::test_set_cache_index_and_filter_blocks_with_high_priority ... ok
test test_delete_files_in_range::test_delete_files_in_range_with_iter ... ok
test test_rocksdb_options::test_set_max_manifest_file_size ... ok
test test_rocksdb_options::test_set_compaction_pri ... ok
test test_rocksdb_options::test_set_delayed_write_rate ... ok
test test_rocksdb_options::test_set_level_compaction_dynamic_level_bytes ... ok
test test_rocksdb_options::test_set_lru_cache ... ok
test test_rocksdb_options::test_set_max_background_jobs ... ok
test test_rocksdb_options::test_set_max_subcompactions ... ok
test test_rocksdb_options::test_set_num_levels ... ok
test test_rocksdb_options::test_set_optimize_filters_for_hits ... ok
test test_rocksdb_options::test_set_pin_l0_filter_and_index_blocks_in_cache ... ok
test test_rocksdb_options::test_set_ratelimiter ... ok
test test_rocksdb_options::test_set_wal_opt ... ok
test test_iterator::test_send_iterator ... ok
test test_rocksdb_options::test_sync_wal ... ok
test test_rocksdb_options::test_writable_file_max_buffer_size ... ok
test test_slice_transform::test_slice_transform ... ok
test test_rocksdb_options::test_block_based_options ... ok
test test_rocksdb_options::test_compact_options ... ok
test test_delete_range::test_delete_range_sst_files ... ok
test test_rocksdb_options::test_get_block_cache_usage ... ok
test test_event_listener::test_event_listener_basic ... ok
test test_iterator::test_total_order_seek ... ok
test test_prefix_extractor::test_prefix_extractor_compatibility ... ok
test test_rocksdb_options::test_write_options ... ok
test test_statistics::test_db_statistics ... ok
test test_ingest_external_file::test_ingest_simulate_real_world ... ok
test test_table_properties::test_table_properties_collector_factory ... ok
test test_rocksdb_options::test_create_info_log ... ok

test result: ok. 82 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

Doc-tests rocksdb

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out

from rust-rocksdb.

siddontang avatar siddontang commented on July 26, 2024

PTAL @huachaohuang

from rust-rocksdb.

huachaohuang avatar huachaohuang commented on July 26, 2024

@zengqingfu1442
Hi, I see a long conversation here and I'm trying to figure out what happened; can you summarize the situation for me?
Also, I see that you are using modified versions of tikv/rust-rocksdb/rocksdb. It would be more efficient if you could show us your changes so we can review them easily, thanks.

from rust-rocksdb.

zengqingfu1442 avatar zengqingfu1442 commented on July 26, 2024

I integrated a new compression type named CSSZlib into RocksDB and TiKV, and built TiKV dynamically linked against librocksdb.so. But that RocksDB build doesn't support LZ4 compression, so when I replaced the official tikv binary in tidb-ansible/resources/bin with my own and ran rolling_update, the TiKV instances couldn't start up, and tikv.log says "2017/11/30 18:52:47.982 tikv-server.rs:230: [ERROR] failed to start node: RaftServer(RocksDb("Corruption: LZ4 not supported or corrupted LZ4 compressed block contents"))".
BTW, there aren't any messages in tikv_stderr.log.

Thanks.

from rust-rocksdb.

huachaohuang avatar huachaohuang commented on July 26, 2024

You mean your modified rocksdb doesn't support LZ4, right?
When you first bootstrapped the cluster with the official tikv, did that tikv use LZ4 compression?

I guess this can happen if:

  1. Bootstrap a new cluster with the official tikv using LZ4 compression
  2. The official tikv writes some data with LZ4 compression
  3. Replace the official tikv with the modified tikv and restart
  4. The modified tikv tries to read data written with LZ4 compression by the official tikv, but it fails because the modified tikv doesn't support LZ4 compression

Am I missing something?

from rust-rocksdb.

zengqingfu1442 avatar zengqingfu1442 commented on July 26, 2024

Yes, you're right.
However, I tried unsafe_cleanup on the official TiDB cluster and then replaced the tikv binary, deployed, and started it, but the TiKV instance goes down soon after.

Part of the tikv.log:
....
....
2017/11/30 20:24:29.714 tikv-server.rs:122: [INFO] start prometheus client
2017/11/30 20:24:29.714 mod.rs:209: [INFO] starting working thread: split check worker
2017/11/30 20:24:29.715 tikv-server.rs:234: [INFO] start storage
2017/11/30 20:24:29.716 mod.rs:209: [INFO] starting working thread: snapshot worker
2017/11/30 20:24:29.717 mod.rs:209: [INFO] starting working thread: raft gc worker
2017/11/30 20:24:29.718 mod.rs:209: [INFO] starting working thread: compact worker
2017/11/30 20:24:29.724 future.rs:115: [INFO] starting working thread: pd worker
2017/11/30 20:24:29.724 region.rs:142: [INFO] [region 4] begin apply snap data
2017/11/30 20:24:29.724 mod.rs:209: [INFO] starting working thread: consistency check worker
2017/11/30 20:24:29.726 mod.rs:209: [INFO] starting working thread: apply worker
2017/11/30 20:24:29.739 region.rs:237: [ERROR] failed to apply snap: Other(StringError("Corruption: external file have corrupted keys"))!!!
2017/11/30 20:24:29.755 mod.rs:209: [INFO] starting working thread: end-point-worker
2017/11/30 20:24:29.757 mod.rs:209: [INFO] starting working thread: snap-handler
2017/11/30 20:24:29.762 server.rs:155: [INFO] TiKV is ready to serve
2017/11/30 20:24:30.936 panic_hook.rs:99: [ERROR] thread 'raftstore-2' panicked '[region 4] 8 applying snapshot failed' at "src/raftstore/store/peer_storage.rs:1009"
stack backtrace:
0: 0x7f5df786f95e - backtrace::backtrace::libunwind::trace
at /home/dzeng/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.2.3/src/backtrace/libunwind.rs:54
- backtrace::backtrace::trace
at /home/dzeng/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.2.3/src/backtrace/mod.rs:70
....
....

from rust-rocksdb.

zengqingfu1442 avatar zengqingfu1442 commented on July 26, 2024

sst_dump messages:

[tcn@sfx-008 rocksdb-master]$ ./sst_dump --file=/mnt/sfx-card-root/tikv1_1/data/snap --show_compression_sizes
from [] to []
Process /mnt/sfx-card-root/tikv1_1/data/snap/rev_4_6_141_write.sst
Sst file format: block-based
Block Size: 16384
Compression: kNoCompression Size: 788
Compression: kSnappyCompression Size: 781
Compression: kZlibCompression Size: 779
Compression: kCSSZlibCompression Size: 782
Compression: kBZip2Compression Size: 780
*** Error in `./sst_dump': free(): invalid next size (fast): 0x00000000023632b0 ***
======= Backtrace: =========
/usr/lib64/libc.so.6(+0x7c503)[0x7f3c686a7503]
/usr/lib64/libstdc++.so.6(_ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEE9_M_mutateEmmPKcm+0xed)[0x7f3c6901c6ad]
/usr/lib64/libstdc++.so.6(_ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEE14_M_replace_auxEmmmc+0xc8)[0x7f3c6901ce08]
./sst_dump[0x645bd6]
./sst_dump[0x646897]
./sst_dump[0x64846d]
./sst_dump[0x78a34e]
./sst_dump[0x78b141]
./sst_dump[0x78e5db]
./sst_dump[0x40a636]
/usr/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f3c6864cb35]
./sst_dump[0x4d102e]
======= Memory map: ========
00400000-0083a000 r-xp 00000000 103:00 10225499 /nvme0n1/RocksDB-5.7.0/rocksdb-master/sst_dump
00a3a000-00a3b000 r--p 0043a000 103:00 10225499 /nvme0n1/RocksDB-5.7.0/rocksdb-master/sst_dump
00a3b000-00a3c000 rw-p 0043b000 103:00 10225499 /nvme0n1/RocksDB-5.7.0/rocksdb-master/sst_dump
00a3c000-00a4c000 rw-p 00000000 00:00 0
02158000-0239e000 rw-p 00000000 00:00 0 [heap]
7f3c64000000-7f3c64021000 rw-p 00000000 00:00 0
7f3c64021000-7f3c68000000 ---p 00000000 00:00 0
7f3c6862b000-7f3c687e1000 r-xp 00000000 fd:00 201330474 /usr/lib64/libc-2.17.so
7f3c687e1000-7f3c689e1000 ---p 001b6000 fd:00 201330474 /usr/lib64/libc-2.17.so
7f3c689e1000-7f3c689e5000 r--p 001b6000 fd:00 201330474 /usr/lib64/libc-2.17.so
7f3c689e5000-7f3c689e7000 rw-p 001ba000 fd:00 201330474 /usr/lib64/libc-2.17.so
7f3c689e7000-7f3c689ec000 rw-p 00000000 00:00 0
7f3c689ec000-7f3c68a01000 r-xp 00000000 fd:00 207220294 /usr/lib64/libgcc_s-4.8.5-20150702.so.1
7f3c68a01000-7f3c68c00000 ---p 00015000 fd:00 207220294 /usr/lib64/libgcc_s-4.8.5-20150702.so.1
7f3c68c00000-7f3c68c01000 r--p 00014000 fd:00 207220294 /usr/lib64/libgcc_s-4.8.5-20150702.so.1
7f3c68c01000-7f3c68c02000 rw-p 00015000 fd:00 207220294 /usr/lib64/libgcc_s-4.8.5-20150702.so.1
7f3c68c02000-7f3c68d02000 r-xp 00000000 fd:00 201330482 /usr/lib64/libm-2.17.so
7f3c68d02000-7f3c68f02000 ---p 00100000 fd:00 201330482 /usr/lib64/libm-2.17.so
7f3c68f02000-7f3c68f03000 r--p 00100000 fd:00 201330482 /usr/lib64/libm-2.17.so
7f3c68f03000-7f3c68f04000 rw-p 00101000 fd:00 201330482 /usr/lib64/libm-2.17.so
7f3c68f04000-7f3c6906f000 r-xp 00000000 fd:00 220075613 /usr/lib64/libstdc++.so.6.0.21
7f3c6906f000-7f3c6926f000 ---p 0016b000 fd:00 220075613 /usr/lib64/libstdc++.so.6.0.21
7f3c6926f000-7f3c69279000 r--p 0016b000 fd:00 220075613 /usr/lib64/libstdc++.so.6.0.21
7f3c69279000-7f3c6927b000 rw-p 00175000 fd:00 220075613 /usr/lib64/libstdc++.so.6.0.21
7f3c6927b000-7f3c6927f000 rw-p 00000000 00:00 0
7f3c6927f000-7f3c692e9000 r-xp 00000000 fd:00 214257086 /usr/lib64/libzstd.so.1.3.2
7f3c692e9000-7f3c694e8000 ---p 0006a000 fd:00 214257086 /usr/lib64/libzstd.so.1.3.2
7f3c694e8000-7f3c694e9000 r--p 00069000 fd:00 214257086 /usr/lib64/libzstd.so.1.3.2
7f3c694e9000-7f3c694ea000 rw-p 0006a000 fd:00 214257086 /usr/lib64/libzstd.so.1.3.2
7f3c694ea000-7f3c694fc000 r-xp 00000000 fd:00 214257074 /usr/lib64/liblz4.so.1.7.3
7f3c694fc000-7f3c696fb000 ---p 00012000 fd:00 214257074 /usr/lib64/liblz4.so.1.7.3
7f3c696fb000-7f3c696fc000 r--p 00011000 fd:00 214257074 /usr/lib64/liblz4.so.1.7.3
7f3c696fc000-7f3c696fd000 rw-p 00012000 fd:00 214257074 /usr/lib64/liblz4.so.1.7.3
7f3c696fd000-7f3c6970c000 r-xp 00000000 fd:00 201330716 /usr/lib64/libbz2.so.1.0.6
7f3c6970c000-7f3c6990b000 ---p 0000f000 fd:00 201330716 /usr/lib64/libbz2.so.1.0.6
7f3c6990b000-7f3c6990c000 r--p 0000e000 fd:00 201330716 /usr/lib64/libbz2.so.1.0.6
7f3c6990c000-7f3c6990d000 rw-p 0000f000 fd:00 201330716 /usr/lib64/libbz2.so.1.0.6
7f3c6990d000-7f3c6995d000 r-xp 00000000 fd:00 202956129 /usr/lib64/libcssz.so
7f3c6995d000-7f3c69b5c000 ---p 00050000 fd:00 202956129 /usr/lib64/libcssz.so
7f3c69b5c000-7f3c69b5d000 r--p 0004f000 fd:00 202956129 /usr/lib64/libcssz.so
7f3c69b5d000-7f3c69b5f000 rw-p 00050000 fd:00 202956129 /usr/lib64/libcssz.so
7f3c69b5f000-7f3c69bb5000 rw-p 00000000 00:00 0
7f3c69bb5000-7f3c69bca000 r-xp 00000000 fd:00 201330632 /usr/lib64/libz.so.1.2.7
7f3c69bca000-7f3c69dc9000 ---p 00015000 fd:00 201330632 /usr/lib64/libz.so.1.2.7
7f3c69dc9000-7f3c69dca000 r--p 00014000 fd:00 201330632 /usr/lib64/libz.so.1.2.7
7f3c69dca000-7f3c69dcb000 rw-p 00015000 fd:00 201330632 /usr/lib64/libz.so.1.2.7
7f3c69dcb000-7f3c69deb000 r-xp 00000000 fd:00 217465304 /usr/lib64/libgflags.so.2.1
7f3c69deb000-7f3c69fea000 ---p 00020000 fd:00 217465304 /usr/lib64/libgflags.so.2.1
7f3c69fea000-7f3c69feb000 r--p 0001f000 fd:00 217465304 /usr/lib64/libgflags.so.2.1
7f3c69feb000-7f3c69fec000 rw-p 00020000 fd:00 217465304 /usr/lib64/libgflags.so.2.1
7f3c69fec000-7f3c69ff1000 r-xp 00000000 fd:00 201750197 /usr/lib64/libsnappy.so.1.1.4
7f3c69ff1000-7f3c6a1f0000 ---p 00005000 fd:00 201750197 /usr/lib64/libsnappy.so.1.1.4
7f3c6a1f0000-7f3c6a1f1000 r--p 00004000 fd:00 201750197 /usr/lib64/libsnappy.so.1.1.4
7f3c6a1f1000-7f3c6a1f2000 rw-p 00005000 fd:00 201750197 /usr/lib64/libsnappy.so.1.1.4
7f3c6a1f2000-7f3c6a1f9000 r-xp 00000000 fd:00 201330504 /usr/lib64/librt-2.17.so
7f3c6a1f9000-7f3c6a3f8000 ---p 00007000 fd:00 201330504 /usr/lib64/librt-2.17.so
7f3c6a3f8000-7f3c6a3f9000 r--p 00006000 fd:00 201330504 /usr/lib64/librt-2.17.so
7f3c6a3f9000-7f3c6a3fa000 rw-p 00007000 fd:00 201330504 /usr/lib64/librt-2.17.so
7f3c6a3fa000-7f3c6a411000 r-xp 00000000 fd:00 201330500 /usr/lib64/libpthread-2.17.so
7f3c6a411000-7f3c6a610000 ---p 00017000 fd:00 201330500 /usr/lib64/libpthread-2.17.so
7f3c6a610000-7f3c6a611000 r--p 00016000 fd:00 201330500 /usr/lib64/libpthread-2.17.so
7f3c6a611000-7f3c6a612000 rw-p 00017000 fd:00 201330500 /usr/lib64/libpthread-2.17.so
7f3c6a612000-7f3c6a616000 rw-p 00000000 00:00 0
7f3c6a616000-7f3c6a636000 r-xp 00000000 fd:00 201330467 /usr/lib64/ld-2.17.so
7f3c6a828000-7f3c6a835000 rw-p 00000000 00:00 0
7f3c6a835000-7f3c6a836000 r--p 0001f000 fd:00 201330467 /usr/lib64/ld-2.17.so
7f3c6a836000-7f3c6a837000 rw-p 00020000 fd:00 201330467 /usr/lib64/ld-2.17.so
7f3c6a837000-7f3c6a838000 rw-p 00000000 00:00 0
7ffe8001f000-7ffe80040000 rw-p 00000000 00:00 0 [stack]
7ffe80170000-7ffe80172000 r-xp 00000000 00:00 0 [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
Aborted (core dumped)

from rust-rocksdb.

zengqingfu1442 avatar zengqingfu1442 commented on July 26, 2024

[tcn@sfx-008 rocksdb-master]$ ldd sst_dump
linux-vdso.so.1 => (0x00007ffed15f1000)
libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x00007f4ce3313000)
librt.so.1 => /usr/lib64/librt.so.1 (0x00007f4ce310a000)
libsnappy.so.1 => /usr/lib64/libsnappy.so.1 (0x00007f4ce2f04000)
libgflags.so.2.1 => /usr/lib64/libgflags.so.2.1 (0x00007f4ce2ce3000)
libz.so.1 => /usr/lib64/libz.so.1 (0x00007f4ce2acc000)
libcssz.so => /usr/lib64/libcssz.so (0x00007f4ce2824000)
libbz2.so.1 => /usr/lib64/libbz2.so.1 (0x00007f4ce2614000)
liblz4.so.1 => /usr/lib64/liblz4.so.1 (0x00007f4ce2400000)
libzstd.so.1 => /usr/lib64/libzstd.so.1 (0x00007f4ce2195000)
libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007f4ce1e1a000)
libm.so.6 => /usr/lib64/libm.so.6 (0x00007f4ce1b17000)
libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x00007f4ce1901000)
libc.so.6 => /usr/lib64/libc.so.6 (0x00007f4ce1540000)
/lib64/ld-linux-x86-64.so.2 (0x00007f4ce3530000)

from rust-rocksdb.

siddontang avatar siddontang commented on July 26, 2024

I guess this may be caused by https://github.com/pingcap/tikv/blob/master/src/raftstore/store/snap.rs#L514
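That code path appears to choose the snapshot SST compression by walking a COMPRESSION_PRIORITY list and taking the first type the linked RocksDB reports as supported. A self-contained sketch of that fallback behavior; `fastest_supported` and the `supported` slice are stand-ins for the real bindings' runtime capability check, not the actual API:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum DBCompressionType { No, Snappy, Zstd, CSSZlib, Lz4 }

// Mirrors the 4-element priority list quoted later in this thread.
const COMPRESSION_PRIORITY: [DBCompressionType; 4] = [
    DBCompressionType::Lz4,
    DBCompressionType::Snappy,
    DBCompressionType::Zstd,
    DBCompressionType::CSSZlib,
];

// Stand-in for "which compression types is this RocksDB compiled with?".
fn fastest_supported(supported: &[DBCompressionType]) -> DBCompressionType {
    COMPRESSION_PRIORITY
        .iter()
        .copied()
        .find(|t| supported.contains(t))
        .unwrap_or(DBCompressionType::No)
}

fn main() {
    // A RocksDB build without LZ4 (this thread's situation) should fall
    // through to Snappy instead of requesting LZ4 and corrupting reads.
    let no_lz4 = [DBCompressionType::Snappy, DBCompressionType::Zstd,
                  DBCompressionType::CSSZlib];
    assert_eq!(fastest_supported(&no_lz4), DBCompressionType::Snappy);
    // With nothing supported, fall back to no compression.
    assert_eq!(fastest_supported(&[]), DBCompressionType::No);
}
```

If the priority list is honored against the runtime support check, removing LZ4 from the build should simply demote it rather than break snapshot application.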

/cc @huachaohuang

from rust-rocksdb.

huachaohuang avatar huachaohuang commented on July 26, 2024

Is your enum value of kCSSZlibCompression (and others) consistent between rocksdb and the modified rust-rocksdb?
What's your DBCompressionType here? Maybe you can just replace them with only DBCompressionType::CSSZlib in your test.
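The values must match because the compression type crosses the FFI boundary as a plain integer. A minimal sketch of the consistency requirement, using the numeric values printed by test_supported_compression earlier in this thread; CSSZlib = 10 is this thread's custom value, not part of upstream RocksDB:

```rust
// Sketch only: a repr(C) enum mirroring what the Rust bindings declare.
// The discriminants must equal the C enum values in crocksdb/c.h, or
// RocksDB will misinterpret the compression type requested from Rust.
#[repr(C)]
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum DBCompressionType {
    No = 0,
    Snappy = 1,
    Zlib = 2,
    Bz2 = 3,
    Lz4 = 4,
    Lz4hc = 5,
    Zstd = 7,
    CSSZlib = 10,      // custom value from this thread, not upstream
    ZstdNotFinal = 64,
}

fn main() {
    // These are the values printed by `test_supported_compression` above.
    assert_eq!(DBCompressionType::CSSZlib as i32, 10);
    assert_eq!(DBCompressionType::Zstd as i32, 7);
    assert_eq!(DBCompressionType::ZstdNotFinal as i32, 64);
    println!("enum values consistent");
}
```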

from rust-rocksdb.

zengqingfu1442 avatar zengqingfu1442 commented on July 26, 2024

Yes, I have changed them, and they are consistent between rocksdb and the modified rust-rocksdb.
It's
const COMPRESSION_PRIORITY: [DBCompressionType; 4] = [
DBCompressionType::Lz4,
DBCompressionType::Snappy,
DBCompressionType::Zstd,
DBCompressionType::CSSZlib,
];

You mean I should leave only CSSZlib, like this?
const COMPRESSION_PRIORITY: [DBCompressionType; 1] = [
// DBCompressionType::Lz4,
// DBCompressionType::Snappy,
// DBCompressionType::Zstd,
DBCompressionType::CSSZlib,
];

Thanks.

from rust-rocksdb.

huachaohuang avatar huachaohuang commented on July 26, 2024

Yep, you can give it a try. If it crashes again, I guess maybe something is wrong with the modified rocksdb, you can run sst_dump with gdb to see what's wrong.

from rust-rocksdb.

siddontang avatar siddontang commented on July 26, 2024

@huachaohuang

Maybe this is a bug on our side. If the user doesn't want to use compression, we shouldn't use it for the snapshot SST either.

from rust-rocksdb.

zengqingfu1442 avatar zengqingfu1442 commented on July 26, 2024

Thanks,

  1. I changed the COMPRESSION_PRIORITY:
    tikv/src/util/rocksdb/mod.rs
    // Zlib and bzip2 are too slow.
    const COMPRESSION_PRIORITY: [DBCompressionType; 3] = [
    // DBCompressionType::Lz4,
    DBCompressionType::Snappy,
    DBCompressionType::Zstd,
    DBCompressionType::CSSZlib,
    ];
  2. Changed the default compression-per-level for rocksdb-defaultcf, rocksdb-writecf and raftrocksdb-defaultcf from [no,no,lz4,lz4,lz4,zstd,zstd] to [no,no,csszlib,csszlib,csszlib,zstd,zstd].
    tikv/src/config.rs
    compression_per_level: [
    DBCompressionType::No,
    DBCompressionType::No,
    DBCompressionType::CSSZlib,
    DBCompressionType::CSSZlib,
    DBCompressionType::CSSZlib,
    DBCompressionType::Zstd,
    DBCompressionType::Zstd,
    ],

Then I rebuilt TiKV.
Now I have set up the TiDB cluster with my tikv binary that supports CSSZlib, and the TiKV instances don't crash this time.
I'm now running prepare.sh in tidb-bench/sysbench/ to generate test data, and it appears to work.

from rust-rocksdb.

zengqingfu1442 avatar zengqingfu1442 commented on July 26, 2024

The default value of RocksDB's max_bytes_for_level_multiplier is 10; what value does TiKV use for it?

from rust-rocksdb.

huachaohuang avatar huachaohuang commented on July 26, 2024

We use the same default as rocksdb for tikv.

from rust-rocksdb.

zengqingfu1442 avatar zengqingfu1442 commented on July 26, 2024

Can you please tell me how you run the sysbench benchmark?

  1. ./prepare.sh to generate data first
  2. ./oltp.sh
  3. ./select.sh
  4. ./insert.sh

Is this the order in which you ran sysbench?
Thanks; I'm now running sysbench to evaluate the performance of CSSZlib as well as our CSS card.

from rust-rocksdb.

huachaohuang avatar huachaohuang commented on July 26, 2024

Sorry, I don't know exactly how that benchmark is run; feel free to open an issue there, and I think you will get an answer.
By the way, since the compression problem is fixed now, can we close this issue?

from rust-rocksdb.

zengqingfu1442 avatar zengqingfu1442 commented on July 26, 2024

Can you please tell me how to check that the data is really compressed by lz4, zstd, or another compression library in TiDB?

from rust-rocksdb.

zhangjinpeng87 avatar zhangjinpeng87 commented on July 26, 2024

You can check it in rocksdb's LOG file.
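If you'd rather check programmatically than eyeball the LOG, here is a small sketch that filters LOG text for the effective compression settings, matching lines like `Options.compression[2]: CSSZlib`. In a real check you would read the LOG file from the store's db directory with std::fs::read_to_string; the string here is a trimmed stand-in:

```rust
// Collect the LOG lines that report compression support or the
// per-level compression options actually in effect.
fn compression_settings(log: &str) -> Vec<&str> {
    log.lines()
        .filter(|l| l.contains("compression") || l.contains("supported:"))
        .map(str::trim)
        .collect()
}

fn main() {
    let log = "\
2017/12/13-11:21:38 LZ4 supported: 1
2017/12/13-11:21:38 Options.compression[2]: CSSZlib
2017/12/13-11:21:38 Options.bottommost_compression: Disabled";
    let hits = compression_settings(log);
    assert_eq!(hits.len(), 3);
    assert!(hits[1].ends_with("CSSZlib"));
}
```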

from rust-rocksdb.

zengqingfu1442 avatar zengqingfu1442 commented on July 26, 2024

In the log of rocksdb used in tikv, there are some following messages,
2017/12/13-11:21:38.935983 7f4f220c0e40 RocksDB version: 5.7.0
2017/12/13-11:21:38.936317 7f4f220c0e40 Git sha rocksdb_build_git_sha:9f6165a2eb71fc32f74518a2884fb5811fbea693
2017/12/13-11:21:38.936324 7f4f220c0e40 Compile date Dec 1 2017
2017/12/13-11:21:38.936331 7f4f220c0e40 DB SUMMARY
2017/12/13-11:21:38.936391 7f4f220c0e40 SST files in /mnt/sfx-card-root/tikv3_1/data/db dir, Total Num: 0, files:
...
...
2017/12/13-11:21:38.936573 7f4f220c0e40 Compression algorithms supported:
2017/12/13-11:21:38.936578 7f4f220c0e40 Snappy supported: 1
2017/12/13-11:21:38.936581 7f4f220c0e40 Zlib supported: 1
2017/12/13-11:21:38.936583 7f4f220c0e40 Bzip supported: 1
2017/12/13-11:21:38.936585 7f4f220c0e40 LZ4 supported: 1
2017/12/13-11:21:38.936595 7f4f220c0e40 ZSTD supported: 1
2017/12/13-11:21:38.936599 7f4f220c0e40 Fast CRC32 supported: 0
2017/12/13-11:21:38.936683 7f4f220c0e40 [db/db_impl_open.cc:216] Creating manifest 1
...
...
2017/12/13-11:21:38.938269 7f4f220c0e40 Options.write_buffer_size: 134217728
2017/12/13-11:21:38.938271 7f4f220c0e40 Options.max_write_buffer_number: 5
2017/12/13-11:21:38.938273 7f4f220c0e40 Options.compression[0]: NoCompression
2017/12/13-11:21:38.938275 7f4f220c0e40 Options.compression[1]: NoCompression
2017/12/13-11:21:38.938277 7f4f220c0e40 Options.compression[2]: CSSZlib
2017/12/13-11:21:38.938279 7f4f220c0e40 Options.compression[3]: CSSZlib
2017/12/13-11:21:38.938280 7f4f220c0e40 Options.compression[4]: CSSZlib
2017/12/13-11:21:38.938282 7f4f220c0e40 Options.compression[5]: CSSZlib
2017/12/13-11:21:38.938283 7f4f220c0e40 Options.compression[6]: CSSZlib
2017/12/13-11:21:38.938285 7f4f220c0e40 Options.bottommost_compression: Disabled

So can I conclude from this that my compression has been integrated into TiKV?

from rust-rocksdb.
