fjall-rs / fjall

LSM-based embeddable key-value storage engine written in safe Rust

Home Page: https://fjall-rs.github.io/

License: Apache License 2.0

Rust 99.36% JavaScript 0.64%
database lsm lsm-tree lsmt rust rust-lang storage-engine mit-license embedded-database embedded-kv

fjall's Introduction


Fjall is an LSM-based embeddable key-value storage engine written in Rust. It features:

  • Thread-safe BTreeMap-like API
  • 100% safe & stable Rust
  • Range & prefix searching with forward and reverse iteration
  • Cross-partition snapshots (MVCC)
  • Automatic background maintenance
  • Single-writer transactions (optional)

Each Keyspace is a single logical database and is split into partitions (a.k.a. column families) - you should probably only use a single keyspace for your application. Each partition is physically a single LSM-tree and its own logical collection; however, write operations across partitions are atomic as they are persisted in a single keyspace-level journal, which will be recovered on restart.

It is not:

  • a standalone server
  • a relational database
  • a wide-column database: it has no notion of columns

Keys are limited to 65536 bytes, and values are limited to 65536 bytes as well. As with any kind of storage engine, larger keys and values have a greater performance impact.

Like any typical key-value store, keys are stored in lexicographic order. If you are storing integer keys (e.g. timeseries data), you should use the big-endian encoding so that byte order matches numeric order and related keys stay close together.
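
For example (a minimal sketch, assuming items is a partition handle like the one opened in the Basic usage snippet below and the surrounding function returns fjall::Result<()>):

// Encode u64 timestamps as big-endian so lexicographic byte order
// equals numeric order, keeping adjacent timestamps adjacent on disk.
let ts: u64 = 1_700_000_000;
items.insert(ts.to_be_bytes(), "event payload")?;

// Range scans over a time window then use the same encoding for the bounds.
let start = 1_600_000_000u64.to_be_bytes();
let end = 1_800_000_000u64.to_be_bytes();
for kv in items.range(start..=end) {
    let (key, _value) = kv?;
    // `key` holds the 8 big-endian bytes of the timestamp
}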

Basic usage

cargo add fjall

use fjall::{Config, PersistMode, Keyspace, PartitionCreateOptions};

let keyspace = Config::new(folder).open()?;

// Each partition is its own physical LSM-tree
let items = keyspace.open_partition("my_items", PartitionCreateOptions::default())?;

// Write some data
items.insert("a", "hello")?;

// And retrieve it
let bytes = items.get("a")?;

// Or remove it again
items.remove("a")?;

// Search by prefix
for kv in items.prefix("prefix") {
  // ...
}

// Search by range
for kv in items.range("a"..="z") {
  // ...
}

// Iterators implement DoubleEndedIterator, so you can search backwards, too!
for kv in items.prefix("prefix").rev() {
  // ...
}

// Sync the journal to disk to make sure data is definitely durable
// When the keyspace is dropped, it will try to persist
// Also, by default every second the keyspace will be persisted asynchronously
keyspace.persist(PersistMode::SyncAll)?;

Details

  • Partitions (a.k.a. column families) with cross-partition atomic semantics (atomic write batches)
  • Sharded journal for concurrent writes
  • Cross-partition snapshots (MVCC)
  • anything else implemented in lsm-tree

For the underlying LSM-tree implementation, see: https://crates.io/crates/lsm-tree.

Durability

To support different kinds of workloads, Fjall is agnostic about the type of durability your application needs. After writing data (insert, remove, or committing a write batch), you can choose to call Keyspace::persist, which takes a PersistMode parameter. By default, data is fsynced asynchronously every 1000 ms. When dropped, the keyspace will also try to persist the journal synchronously.
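
For instance (a rough sketch, reusing the keyspace and partition from the Basic usage snippet above; PersistMode::SyncAll is the mode shown in the README):

// Choose durability per write: sync explicitly for data that must survive
// power loss, otherwise rely on the periodic asynchronous fsync.
items.insert("critical-key", "must not be lost")?;
keyspace.persist(PersistMode::SyncAll)?; // durably sync the journal now

items.insert("cache-entry", "fine to lose on crash")?;
// no explicit persist: the periodic fsync (and drop-time persist) cover this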

Multithreading, Async and Multiprocess

Fjall is internally synchronized for multi-threaded access, so you can clone the Keyspace and Partitions around as needed without wrapping them in a lock yourself. Common operations like inserting and reading are generally lock-free.

For an async example, see the tokio example.

However, a single keyspace may not be opened from separate processes in parallel.
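
A minimal sketch of multi-threaded use (assuming the partition handle is cheap to clone and thread-safe, as described above):

use std::thread;
use fjall::{Config, PartitionCreateOptions};

let keyspace = Config::new("my_keyspace").open()?;
let items = keyspace.open_partition("items", PartitionCreateOptions::default())?;

let workers: Vec<_> = (0..4)
    .map(|t| {
        let items = items.clone(); // cheap handle clone, no external lock needed
        thread::spawn(move || {
            for i in 0..100u32 {
                items.insert(format!("thread-{t}-key-{i}"), "value").unwrap();
            }
        })
    })
    .collect();

for worker in workers {
    worker.join().unwrap();
}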

Feature flags

bloom

Uses bloom filters to reduce disk I/O for non-existing keys. Improves point read performance, but increases memory usage.

Disabled by default.

single_writer_tx

Allows opening a transactional Keyspace for single-writer (serialized) transactions, allowing RYOW (read-your-own-write), fetch-and-update and other atomic operations.

Enabled by default.
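
A rough sketch of what using it looks like (method names such as open_transactional and write_tx follow my reading of the docs and may differ between versions):

use fjall::{Config, PartitionCreateOptions};

let keyspace = Config::new("my_tx_keyspace").open_transactional()?;
let accounts = keyspace.open_partition("accounts", PartitionCreateOptions::default())?;

let mut tx = keyspace.write_tx();
tx.insert(&accounts, "alice", "100");
// Read-your-own-write: the uncommitted value is visible within the transaction
let balance = tx.get(&accounts, "alice")?;
assert!(balance.is_some());
tx.commit()?;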

Stable disk format

The disk format is stable as of 1.0.0. Future breaking changes will result in a major version bump and a migration path.

Examples

See here for practical examples.

Also check out Smoltable, a standalone Bigtable-inspired mini wide-column database that uses fjall as its storage engine.

Contributing

How can you help?

License

All source code is licensed under MIT OR Apache-2.0.

All contributions are to be licensed as MIT OR Apache-2.0.

fjall's People

Contributors

beckend, dharanad, marvin-j97, renovate[bot], stephanvanschaik


fjall's Issues

Database reached a seemingly irrecoverable state

Hey,

Since today, I have been experiencing an issue with my application that uses fjall 1.1.3: it can no longer resume operation on startup and fails while decoding one of the UTF-8 strings in the journal, as shown by the backtrace below:

thread 'main' panicked at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.1.3/src/journal/marker.rs:126:65:
should be utf-8: Utf8Error { valid_up_to: 26, error_len: Some(1) }
stack backtrace:
   0: rust_begin_unwind
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/std/src/panicking.rs:645:5
   1: core::panicking::panic_fmt
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/panicking.rs:72:14
   2: core::result::unwrap_failed
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/result.rs:1653:5
   3: core::result::Result<T,E>::expect
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/result.rs:1034:23
   4: <fjall::journal::marker::Marker as lsm_tree::serde::Deserializable>::deserialize
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.1.3/src/journal/marker.rs:126:33
   5: <fjall::journal::reader::JournalShardReader as core::iter::traits::iterator::Iterator>::next
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.1.3/src/journal/reader.rs:46:15
   6: fjall::journal::shard::JournalShard::recover_and_repair
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.1.3/src/journal/shard.rs:101:25
   7: fjall::journal::Journal::recover_memtables
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.1.3/src/journal/mod.rs:44:17
   8: fjall::journal::Journal::recover
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.1.3/src/journal/mod.rs:66:25
   9: fjall::keyspace::Keyspace::find_active_journal
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.1.3/src/keyspace.rs:508:32
  10: fjall::keyspace::Keyspace::recover
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.1.3/src/keyspace.rs:528:13
  11: fjall::keyspace::Keyspace::create_or_recover
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.1.3/src/keyspace.rs:279:13
  12: fjall::keyspace::Keyspace::open
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.1.3/src/keyspace.rs:264:24
  13: fjall::config::Config::open
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.1.3/src/config.rs:199:9
  14: my_project::db::Database::open
             at ./src/db.rs:39:24
  15: my_project::main::{{closure}}
             at ./src/main.rs:380:14
  16: tokio::runtime::park::CachedParkThread::block_on::{{closure}}
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/park.rs:282:63
  17: tokio::runtime::coop::with_budget
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/coop.rs:107:5
  18: tokio::runtime::coop::budget
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/coop.rs:73:5
  19: tokio::runtime::park::CachedParkThread::block_on
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/park.rs:282:31
  20: tokio::runtime::context::blocking::BlockingRegionGuard::block_on
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/context/blocking.rs:66:9
  21: tokio::runtime::scheduler::multi_thread::MultiThread::block_on::{{closure}}
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/scheduler/multi_thread/mod.rs:87:13
  22: tokio::runtime::context::runtime::enter_runtime
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/context/runtime.rs:65:16
  23: tokio::runtime::scheduler::multi_thread::MultiThread::block_on
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/scheduler/multi_thread/mod.rs:86:9
  24: tokio::runtime::runtime::Runtime::block_on
             at /home/stephan/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.35.0/src/runtime/runtime.rs:350:45
  25: my_project::main
             at ./src/main.rs:522:5
  26: core::ops::function::FnOnce::call_once
             at /rustc/82e1608dfa6e0b5569232559e3d385fea5a93112/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

Unfortunately, due to a license agreement I have for some of the data used in this database, I cannot share the files, but I would like to debug this myself and I am looking for some pointers on how to approach this since I am not familiar with the underlying disk format used by lsm-tree/fjall. Any tips would be appreciated.

Find MSRV

cargo msrv said 1.68.2, but it's not building on Mac.


Search for TODO: #8

[Tracking] Breaking changes in V2

API

  • Remove FlushMode alias
  • Enable bloom feature by default

Data format

  • Start journal markers at 1, to prevent the zeroed pre-allocated bytes from matching the start marker tag, causing unnecessary logging of an unfinished batch at journal tail #53
  • #58 PartitionCreateOptions need to be stored to be recovered
  • Fix journal length of values #68
  • Set max value length to u32
  • SingleDelete #56
  • Key-Value separation #34
  • Correctly track lowest closed instant/snapshot seqno #61

Optimistic concurrency control through sequence number

Is your feature request related to a problem? Please describe.
Currently it's not possible to ensure you are working with the latest version of a key.

Describe the solution you'd like
Make it possible for keys to carry a sequence number, which can be used to reject modifying operations if the requested sequence number does not match the latest one for that key/stream.

thinking through: persistence if keyspace is dropped before opened partitions

I'm noticing if I call keyspace.open_partition, it's possible to keep the partition around, and drop the keyspace.

e.g. something like:

pub struct Store {
    partition: PartitionHandle,
}

impl Store {
    pub fn new(path: &str) -> Store {
        let config = Config::new(path);
        let keyspace = config.open().unwrap();
        let partition = keyspace
            .open_partition("main", PartitionCreateOptions::default())
            .unwrap();
        Store {
            partition,
        }
    }

    pub fn put(&mut self) -> Frame {
        let frame = Frame {
            id: scru128::new(),
        };
        let encoded: Vec<u8> = bincode::serialize(&frame).unwrap();
        self.partition.insert(frame.id.to_bytes(), encoded).unwrap();
        frame
    }
}

let mut store = Store::new("./store");
store.put();

This caught me off guard, as I was expecting an fsync when the process ended. But the fsync only occurs automatically when the keyspace is dropped (and in this experiment I'm not keeping the process around for a full second).

I've updated to keep the keyspace on the Store too, and that works as expected.
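
The fix described above looks roughly like this (a sketch):

use fjall::{Config, Keyspace, PartitionCreateOptions, PartitionHandle};

pub struct Store {
    // Keeping the keyspace alive ensures its drop-time persist (and the
    // background fsync) still run while the partition handle is in use.
    keyspace: Keyspace,
    partition: PartitionHandle,
}

impl Store {
    pub fn new(path: &str) -> Store {
        let keyspace = Config::new(path).open().unwrap();
        let partition = keyspace
            .open_partition("main", PartitionCreateOptions::default())
            .unwrap();
        Store { keyspace, partition }
    }
}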

unexpected warning: shard.rs:106: Invalid batch: found batch start inside batch

I'm writing a command-line application. When my program exits, I call keyspace.persist(FlushMode::SyncAll)?;
I do not close the database (is there a shutdown function?)

The following warnings occasionally appear when I start my CLI a second time (video: https://www.loom.com/share/c1c1a71d06fb495bad85bacda73b219e):

Is there a way to suppress these warnings?

  WARN fjall::journal::shard: /Users/z/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.0.3/src/journal/shard.rs:106: Invalid batch: found batch start inside batch
  WARN fjall::journal::shard: /Users/z/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.0.3/src/journal/shard.rs:70: Truncating shard to 88
  WARN fjall::journal::shard: /Users/z/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.0.3/src/journal/shard.rs:213: Invalid batch: missing terminator, but last batch, so probably incomplete, discarding to keep atomicity
  WARN fjall::journal::shard: /Users/z/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.0.3/src/journal/shard.rs:70: Truncating shard to 88
  WARN fjall::journal::shard: /Users/z/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.0.3/src/journal/shard.rs:106: Invalid batch: found batch start inside batch
  WARN fjall::journal::shard: /Users/z/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.0.3/src/journal/shard.rs:70: Truncating shard to 0
  WARN fjall::journal::shard: /Users/z/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.0.3/src/journal/shard.rs:213: Invalid batch: missing terminator, but last batch, so probably incomplete, discarding to keep atomicity
  WARN fjall::journal::shard: /Users/z/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.0.3/src/journal/shard.rs:70: Truncating shard to 0
  WARN fjall::journal::shard: /Users/z/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.0.3/src/journal/shard.rs:106: Invalid batch: found batch start inside batch
  WARN fjall::journal::shard: /Users/z/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.0.3/src/journal/shard.rs:70: Truncating shard to 0
  WARN fjall::journal::shard: /Users/z/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.0.3/src/journal/shard.rs:213: Invalid batch: missing terminator, but last batch, so probably incomplete, discarding to keep atomicity
  WARN fjall::journal::shard: /Users/z/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.0.3/src/journal/shard.rs:70: Truncating shard to 0
  WARN fjall::journal::shard: /Users/z/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.0.3/src/journal/shard.rs:106: Invalid batch: found batch start inside batch
  WARN fjall::journal::shard: /Users/z/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.0.3/src/journal/shard.rs:70: Truncating shard to 0
  WARN fjall::journal::shard: /Users/z/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.0.3/src/journal/shard.rs:213: Invalid batch: missing terminator, but last batch, so probably incomplete, discarding to keep atomicity
  WARN fjall::journal::shard: /Users/z/.cargo/registry/src/index.crates.io-6f17d22bba15001f/fjall-1.0.3/src/journal/shard.rs:70: Truncating shard to 0

`tree.get` function returned the unexpected value

The tree.get function seems to use prefix matching on the key.

Here is my code:

let folder = "";

// A tree is a single physical keyspace/index/...
// and supports a BTreeMap-like API
// but all data is persisted to disk.
let tree = Config::new(folder).open().unwrap();

// Note compared to the BTreeMap API, operations return a Result<T>
// So you can handle I/O errors if they occur
tree.insert("hello-key-999991", "hello-value-999991")
    .unwrap();

let item = tree.get("hello-key-99999").unwrap();
match item {
    Some(value) => {
        println!("value: {}", std::str::from_utf8(&value).unwrap());
    }
    None => println!("Not Found"),
}

// Flush to definitely make sure data is persisted
tree.flush().unwrap();

Expected Not Found, but it printed value: hello-value-999991

Think about backup strategies

Discussed in #50

Originally posted by jeromegn May 19, 2024
If one wanted to do a backup of the database, what's the best practice here? Is there a way to do an "online" (not shutting down the process using the database) backup?

Since there are many directories and files, I assume the safest way is to persist from the process, exit and then create a tarball of the whole directory.

With SQLite, for example, it's possible to run a VACUUM INTO that will create a space-efficient snapshot of the database. SQLite can support multiple processes though, so it's a different ball game.

SingleDelete

fjall-rs/lsm-tree#38

Update queue example


SingleDeletes are a very useful operation if any given key is only ever inserted once and never updated (like a Queue)... With standard tombstones, the tombstones would fill up one side of the keyspace, making operations like first_key_value become scan operations until the tombstones are cleaned up by arriving in the last level.

A SingleDelete would vanish when paired with an insert during compaction, causing no tombstone problem, the caveat being that older data is resurrected, so SingleDeletes are only useful for very specific workloads.
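
To illustrate the queue pattern described above (a sketch; it assumes first_key_value and remove are exposed on the partition handle as in the docs):

// Pop from the front of a queue partition whose keys are monotonically increasing.
// With regular removes, every pop leaves a tombstone at the low end of the
// keyspace, so first_key_value degrades into a scan until compaction catches up;
// a SingleDelete would instead cancel out against the original insert.
fn pop_front(queue: &fjall::PartitionHandle) -> fjall::Result<Option<Vec<u8>>> {
    if let Some((key, value)) = queue.first_key_value()? {
        queue.remove(key)?; // leaves a tombstone per popped item
        return Ok(Some(value.to_vec()));
    }
    Ok(None)
}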

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

This repository currently has no open or pending branches.

Detected dependencies

cargo
Cargo.toml
  • byteorder 1.5.0
  • crc32fast 1.4.2
  • lsm-tree 1.5.0
  • log 0.4.21
  • std-semaphore 0.1.0
  • tempfile 3.10.1
  • fs_extra 1.3.0
  • path-absolutize 3.1.1
  • criterion 0.5.1
  • nanoid 0.4.0
  • test-log 0.2.16
  • rand 0.8.5
github-actions
.github/workflows/release.yml
  • actions/checkout v4
  • katyo/publish-crates v2
.github/workflows/test.yml
  • actions/checkout v4
  • Swatinem/rust-cache v2
  • actions/checkout v4

  • Check this box to trigger a request for Renovate to run again on this repository

Periodic (maintenance) compaction

On demand and/or at a regular but infrequent interval, check each level and maybe compact it into itself (this needs a SingleLevelCompactor), or ideally pull it down into the last level (to remove tombstones).

Data partitioning/Column families

Column families/partitions are the missing piece to having a great storage engine. Each column family is its own partition inside the LSM, but all share the same journal, enabling atomic writes, while each column family is stored physically separately from the others. Compaction then looks at each partition individually, never mixing tables of different partitions. It is like one big meta-LSM-tree containing the partitions, rather than multiple independent trees, which do not have atomic semantics unless another super-structure is put on top; that super-structure is effectively the new LSM-tree that contains the partitions. Compaction may even be configured differently for each column family.


When no column family is created, everything goes into the "default" column family.
Creating a column family is pretty simple: add a new memtable. Dropping a column family is simple, too: delete its memtable and all the segments inside the partition. The journal needs to be handled accordingly, because flushing one column family doesn't necessarily mean the others are flushed too. And if the column family was deleted, the journal should not flush its parts to the partition at all.

There needs to be new semantics for writing to a column family:

  • insert(col_family, key, value)
  • remove(col_family, key)
  • batch() ... needs col_family support

https://github.com/facebook/rocksdb/wiki/Column-Families

Example use cases

  • One partition could be an index in a relational database for example, so writing a row into "default" also atomically inserts/updates rows inside "index-1" (https://github.com/facebook/mysql-5.6/wiki/MyRocks-data-dictionary-format)
  • Dropping the index just drops the column family "index-1", which is a O(1) operation
  • another example would be locality groups in Bigtable and column families in RocksDB and Cassandra
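
The atomic cross-partition write described above maps onto the write-batch API the crate ended up exposing, roughly like this sketch (names follow the docs; details may differ):

use fjall::{Config, PartitionCreateOptions};

let keyspace = Config::new("my_db").open()?;
let rows = keyspace.open_partition("default", PartitionCreateOptions::default())?;
let index = keyspace.open_partition("index-1", PartitionCreateOptions::default())?;

// Insert the row and its secondary-index entry atomically:
let mut batch = keyspace.batch();
batch.insert(&rows, "user#42", "alice");
batch.insert(&index, "alice", "user#42");
batch.commit()?; // both writes are journaled and become visible together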

Optional TTL for FIFO strategy

  1. Look at each segment's metadata
  2. Compare with TTL setting
  3. If now > (segment.created_at + ttl_seconds), drop that segment (use DropSegments)
  4. Then check the max L0 size setting, and use that to maybe drop segments as well
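
A minimal sketch of the expiry check from the steps above (SegmentMeta and expired_segments are hypothetical names, not the crate's actual types):

// Collect the IDs of segments whose TTL has elapsed; these would then be
// handed to a DropSegments-style compaction choice.
struct SegmentMeta {
    id: u64,
    created_at: u64, // unix seconds
}

fn expired_segments(segments: &[SegmentMeta], now: u64, ttl_seconds: u64) -> Vec<u64> {
    segments
        .iter()
        .filter(|s| now > s.created_at + ttl_seconds)
        .map(|s| s.id)
        .collect()
}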

Insert-during-iterate deadlock

Discussed in #73

Originally posted by dbbnrl August 3, 2024
In my application I sometimes generate DB entries from existing entries. The pattern is that I iterate over a range of existing keys, occasionally creating new entries (outside the iterator range) as I go.

It looks like Fjall is deadlocking in this scenario after some number of insertions. I thought perhaps using the snapshot feature would help, but it makes no difference.

Glancing at the code, I can see that create_iter takes a bunch of Read locks which it holds until the iterator is dropped. Presumably the insert code is attempting to grab a Write lock at some point (memtable flush?) so this isn't a surprising outcome...

I thought about reporting this as an issue but if this kind of usage is an explicit non-goal of Fjall I didn't want to do that.

Is this supported? Supportable?

Create axum-kv

The examples are supposed to show you best practices when using fjall inside a web server, but it turns out I'm absolutely terrible at axum!

If an axum mastermind comes by, this is your chance.

See actix-kv for inspiration, it has:

  • a custom error responder for ? syntax
  • unit testing
  • web::block calls to prevent blocking the tokio event loop

Recreating a deleted partition may be undefined behaviour

Reproduce:

  1. Delete a partition
  2. A partition may not be cleaned up until all handles to it are dropped; the partition folder is marked with a .deleted marker
  3. At this point, the keyspace has forgotten about the partition; now recreate the partition
  4. No idea what happens: need to check whether the partition folder exists AND a .deleted marker exists
  5. When the handles of the "old" partition are dropped, the partition folder will be deleted, even after we "recreated" it; see step 4 for the solution

Correctly track lowest closed instant/snapshot seqno

Currently, MVCC versions of items may be GC'd if there is no open snapshot and the level is deep enough.
However, if there's always some snapshot hanging around, it may completely prevent GC of old items.
It would be better to keep track of open snapshots, store the lowest seqno that can definitely be dropped without affecting existing snapshots, and use that number in compaction filtering.

Example:

LowestSeqno = 0 // means nothing can be cleaned up

OpenSnapshot(1)
OpenSnapshot(2)

CloseSnapshot(2)
LowestSeqno = 0 // because Snapshot=1 is still hanging around, the counter was not changed

CloseSnapshot(1)
LowestSeqno = 2 // means we can remove any value that has seqno < 2

Note: There can be multiple snapshots with the same seqno

Possible API

SnapshotTracker::default() => starts at 0
SnapshotTracker::register(seqno)
SnapshotTracker::release(seqno)
SnapshotTracker::get_lowest() -> Option => gets the highest seqno we can safely free
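
One possible realization of that API (a sketch, not the actual implementation; here get_lowest returns the smallest seqno still held open, so everything strictly below it can be dropped, which is a slightly different framing than the walkthrough above):

use std::collections::BTreeMap;

#[derive(Default)]
struct SnapshotTracker {
    // seqno -> number of open snapshots registered at that seqno
    open: BTreeMap<u64, usize>,
}

impl SnapshotTracker {
    fn register(&mut self, seqno: u64) {
        *self.open.entry(seqno).or_insert(0) += 1;
    }

    fn release(&mut self, seqno: u64) {
        if let Some(count) = self.open.get_mut(&seqno) {
            *count -= 1;
            if *count == 0 {
                self.open.remove(&seqno);
            }
        }
    }

    // Smallest seqno still referenced by an open snapshot, if any;
    // compaction may drop MVCC versions with a strictly smaller seqno.
    fn get_lowest(&self) -> Option<u64> {
        self.open.keys().next().copied()
    }
}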

Bloom Filters

Need a bloom (or XOR or ribbon or cuckoo) filter, that:

  1. can be restored from disk/raw bytes

This is obvious. On recovery, load all bloom filters back into memory from disk. So the bloom filter needs to be able to give us its internal bytes and needs a constructor to recreate it from raw bytes.

  2. can be resized while writing to it

RocksDB used to store a bloom filter per data block, so filter construction is simple; however, its read path is much worse because you need to traverse the SST file. The new full-filter format instead stores one big bloom filter per SST file. That requires much more memory when flushing and compacting: all keys (or their hashes) need to be buffered until the SST is fully written, because the number of items is unknown until then, and only then can the bloom filter be built from the buffer.

Using a scalable filter may solve this memory issue.

  3. Also...

M O N K E Y may maximize efficiency for a given amount of memory: http://daslab.seas.harvard.edu/monkey/.

Allow more characters in partition name.

motivation

I am seeking an LSM-tree-based database as my in-memory database backend. First, I found the lsm_tree crate, but it doesn't have auto-flushing or partitions; the only thing I could do was store different tables in separate folders, which was fine. Then I found this crate, and it is exactly the wheel I was looking for. After a few minutes of migration, a big problem occurred.

the problem

I was using the reflected name of a type as the name of the lsm_tree file, in order to store different types in different trees. Say the dictionary-like generic API for my storage is Dense<K, V, D>, where D stands for delta. When a library user wants to store a type like <u64, u64, ()>, the reflected name of this type is something like Dense<u64, u64, ()>.
As you can see, the name contains the characters <> and blank spaces, which cannot be used in a partition name.

solution

remove the limit on partition names
or use AsRef<[u8]> as the partition name API.

Transactions

We should have all the pieces to get transactions working:

  • Snapshots achieved by MVCC
  • Atomicity achieved by Batch
