Comments (3)
If you look at https://github.com/facebook/rocksdb/wiki/Performance-Benchmarks#2-bulk-load-of-keys-in-random-order, you will find the command-line options for loading data in bulk-load fashion.
echo "Bulk load database into L0...."
bpl=10485760; overlap=10; mcz=2; del=300000000; levels=2; ctrig=10000000
delay=10000000; stop=10000000; wbn=30; mbc=20; mb=1073741824; wbs=268435456
dds=1; sync=0; r=1000000000; t=1; vs=800; bs=65536; cs=1048576; of=500000; si=1000000
./db_bench --benchmarks=fillrandom --disable_seek_compaction=1 --mmap_read=0 \
  --statistics=1 --histogram=1 --num=$r --threads=$t --value_size=$vs \
  --block_size=$bs --cache_size=$cs --bloom_bits=10 --cache_numshardbits=4 \
  --open_files=$of --verify_checksum=1 --db=/data/mysql/leveldb/test \
  --sync=$sync --disable_wal=1 --compression_type=zlib --stats_interval=$si \
  --compression_ratio=50 --disable_data_sync=$dds --write_buffer_size=$wbs \
  --target_file_size_base=$mb --max_write_buffer_number=$wbn \
  --max_background_compactions=$mbc --level0_file_num_compaction_trigger=$ctrig \
  --level0_slowdown_writes_trigger=$delay --level0_stop_writes_trigger=$stop \
  --num_levels=$levels --delete_obsolete_files_period_micros=$del \
  --min_level_to_compress=$mcz --max_grandparent_overlap_factor=$overlap \
  --stats_per_interval=1 --max_bytes_for_level_base=$bpl --memtablerep=vector \
  --use_existing_db=0 --disable_auto_compactions=1 \
  --source_compaction_factor=10000000
echo "Running manual compaction to do a global sort map-reduce style...."
bpl=10485760; overlap=10; mcz=2; del=300000000; levels=2; ctrig=10000000
delay=10000000; stop=10000000; wbn=30; mbc=20; mb=1073741824; wbs=268435456
dds=1; sync=0; r=1000000000; t=1; vs=800; bs=65536; cs=1048576; of=500000; si=1000000
./db_bench --benchmarks=compact --disable_seek_compaction=1 --mmap_read=0 \
  --statistics=1 --histogram=1 --num=$r --threads=$t --value_size=$vs \
  --block_size=$bs --cache_size=$cs --bloom_bits=10 --cache_numshardbits=4 \
  --open_files=$of --verify_checksum=1 --db=/data/mysql/leveldb/test \
  --sync=$sync --disable_wal=1 --compression_type=zlib --stats_interval=$si \
  --compression_ratio=50 --disable_data_sync=$dds --write_buffer_size=$wbs \
  --target_file_size_base=$mb --max_write_buffer_number=$wbn \
  --max_background_compactions=$mbc --level0_file_num_compaction_trigger=$ctrig \
  --level0_slowdown_writes_trigger=$delay --level0_stop_writes_trigger=$stop \
  --num_levels=$levels --delete_obsolete_files_period_micros=$del \
  --min_level_to_compress=$mcz --max_grandparent_overlap_factor=$overlap \
  --stats_per_interval=1 --max_bytes_for_level_base=$bpl --memtablerep=vector \
  --use_existing_db=1 --disable_auto_compactions=1 \
  --source_compaction_factor=10000000
du -s -k test
504730832 test
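As a rough sanity check on that du output, the logical data volume can be computed from the benchmark parameters. This sketch assumes db_bench's default 16-byte keys (an assumption, not stated in the commands above):

```shell
# Rough size check for the load above (16-byte keys are an assumed default).
r=1000000000; vs=800; key=16
raw_bytes=$(( r * (vs + key) ))   # logical bytes written: keys + values
disk_kb=504730832                 # reported by "du -s -k test" above
disk_bytes=$(( disk_kb * 1024 ))
echo "raw GB:  $(( raw_bytes / 1000000000 ))"
echo "disk GB: $(( disk_bytes / 1000000000 ))"
```

So roughly 816 GB of logical data ends up as about 517 GB on disk after the zlib-compressed compaction.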
My guess is that these are the key values, as they prevent compaction from L0 to L1 during the load:
ctrig=10000000; delay=10000000; stop=10000000
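The effect of those three settings can be illustrated with a toy check (not RocksDB internals; the default stop trigger of 12 used here is an assumed historical value): writes stall once the L0 file count reaches level0_stop_writes_trigger, so raising all three triggers to 10 million means the threshold is effectively never hit during the load.

```shell
# Toy illustration: a bulk load accumulates thousands of L0 files, which
# would stall writes under a default-sized trigger but not under 10M.
l0_files=5000   # plausible L0 file count partway through a 1B-key load
for stop in 12 10000000; do
  if [ "$l0_files" -ge "$stop" ]; then
    echo "stop=$stop: writes stalled"
  else
    echo "stop=$stop: writes proceed"
  fi
done
```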
The above comments show how to do a bulk load, which should solve your problem.
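Since the two db_bench invocations above differ only in --benchmarks and --use_existing_db, the two-phase workflow can be factored into one helper. This sketch echoes the commands rather than running them and shows only a representative subset of the flags; fill in the rest from the full commands above:

```shell
# Two-phase bulk load sketch: phase 1 appends everything into L0 with
# compactions disabled, phase 2 runs one manual compaction to sort globally.
common="--disable_wal=1 --disable_auto_compactions=1 --memtablerep=vector --compression_type=zlib"
phase() {
  # $1 = benchmark name, $2 = use_existing_db; echoed, not executed
  echo "./db_bench --benchmarks=$1 --use_existing_db=$2 $common"
}
phase fillrandom 0   # phase 1: bulk load into L0
phase compact 1      # phase 2: global sort, map-reduce style
```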