Comments (13)
Title: Where can I find the changes to how TTL information is stored in BlobIndex, mentioned in the new engine floyd design?
I saw that in the new engine floyd design the TTL information is stored in the BlobIndex, but I looked through the code on the main branch and could not find it.
Does this mean Pika maintains a private fork of RocksDB? Where can I find the relevant implementation? Thank you!
from pika.
@AlexStocks Hi Alex, could you help me resolve this? Thanks!

Can you post the original document?
Can you post the original document?

#2052 I saw it here:
BlobDB usage optimization
RocksDB supports key-value separation: large values are stored in blob files, and the SST files store index information pointing into the blob files, which reduces write amplification and effectively improves write performance for large-value workloads. Pika relies on a custom CompactionFilter to handle expired data, and the TTL is stored in the value, so extra blob file IO is unavoidable during compaction. One approach is to modify the BlobIndex stored in the SST file and store the value's TTL at a fixed offset within the BlobIndex, so that the cleanup logic for expired data during compaction no longer needs to read the blob file, reducing extra disk IO. For non-string data types, the extra blob file IO can be eliminated by implementing FilterByBlobKey.
Hello, considering that RocksDB may be upgraded later, that PR was not merged after it was submitted. Here is the PR: https://github.com/pikiwidb/rocksdb/pull/2/files
Okay, thanks a lot!
@guoxiangCN Have you joined the pikiwidb WeChat group?
I've applied.
Related Issues (20)
- abnormal log output when exec slaveof no one
- please push docker image v3.5.4 and v4.0.0 HOT 4
- some binlog maybe re-applied when slave and master connecting by bgsave's dump
- using rocky linux instead of centos in github ci workflow HOT 1
- An existing key cannot be modified by multi
- INFO, WARNING, and ERROR logs should be separated and written to different directories; currently some ERROR logs are written three times HOT 3
- keep the info keyspace output format consistent HOT 1
- binlog_sender tool should support continuous pulling HOT 1
- floyd save ttl with seconds
- pika cluster built on codis: proxy reports errors when importing data via pipeline, and the import fails HOT 4
- Read BadRecord record, will decode failed, this record may dbsync padded record, not processed here HOT 2
- version 4.0.0: running hincrBy on a hash key via PipeLine crashes pika directly! HOT 5
- Pika intelligent parameter tuning HOT 3
- dbsize command not supported in pika:3.5.3 HOT 4
- using jemalloc as default malloc lib HOT 2
- bug: execute hincrby cmd more than one times after delete a field which is existing.
- feat: Optimize the logic of the HIncrby command with MergeOperator (Merge) HOT 16
- how to migrate pika to redis?
- some Pika commands lose expiry-time precision when transmitting binlog or cache HOT 1
- pika fails to start with error [pika_conf.cc:135] db-instance-num load error HOT 2