libvmemcache: buffer-based LRU cache

⚠️ Discontinuation of the project

The vmemcache project will no longer be maintained by Intel.

  • Intel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.
  • Intel no longer accepts patches to this project.
  • If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.
  • You will find more information here.

Introduction

libvmemcache is an embeddable and lightweight in-memory caching solution. It's designed to fully take advantage of large capacity memory, such as Persistent Memory with DAX, through memory mapping in an efficient and scalable way.

The things that make it unique are:

  • Extent-based memory allocator which sidesteps the fragmentation problem that affects most in-memory databases and allows the cache to achieve very high space utilization for most workloads.
  • Buffered LRU, which combines a traditional LRU doubly-linked list with a non-blocking ring buffer to deliver a high degree of scalability on modern multi-core CPUs.
  • A unique indexing structure, critnib, which delivers high performance while being very space-efficient.

The cache is tuned to work optimally with relatively large value sizes. The smallest possible size is 256 bytes, but libvmemcache works best if the expected value sizes are above 1 kilobyte.
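
For a quick start, here is a minimal usage sketch. It assumes the vmemcache_new() signature quoted in the issues section below; the VMEMCACHE_REPLACEMENT_LRU constant and the put/get/delete signatures are assumptions based on the 0.8-era header, so check libvmemcache.h and the man page before relying on them:

#include <libvmemcache.h>
#include <stdio.h>
#include <string.h>

#define CACHE_MAX_SIZE (1024ULL * 1024 * 1024) /* 1 GiB pool */
#define CACHE_SEGMENT_SIZE 256                 /* matches the README's smallest value size */

int main(void)
{
	/* "/mnt/pmem0" is an example path on a DAX-mounted filesystem */
	VMEMcache *cache = vmemcache_new("/mnt/pmem0", CACHE_MAX_SIZE,
			CACHE_SEGMENT_SIZE, VMEMCACHE_REPLACEMENT_LRU);
	if (cache == NULL)
		return 1;

	const char *key = "foo";
	const char *value = "bar";
	vmemcache_put(cache, key, strlen(key), value, strlen(value) + 1);

	char buf[32];
	size_t vsize = 0;
	if (vmemcache_get(cache, key, strlen(key), buf, sizeof(buf), 0, &vsize) > 0)
		printf("%s = %s\n", key, buf);

	vmemcache_delete(cache);
	return 0;
}

With the library installed system-wide, this can be built with something like cc example.c -lvmemcache.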

Building The Source

Requirements:

  • cmake >= 3.3

Optional:

  • valgrind (for tests)
  • pandoc (for documentation)

For all systems:

$ git clone https://github.com/pmem/vmemcache.git
$ cd vmemcache
$ mkdir build
$ cd build

And then:

On RPM-based Linux distros (Fedora, openSUSE, RHEL, SLES)

$ cmake .. -DCMAKE_INSTALL_PREFIX=/usr -DCPACK_GENERATOR=rpm
$ make package
$ sudo rpm -i libvmemcache*.rpm

On DEB-based Linux distros (Debian, Ubuntu)

$ cmake .. -DCMAKE_INSTALL_PREFIX=/usr -DCPACK_GENERATOR=deb
$ make package
$ sudo dpkg -i libvmemcache*.deb

On other Linux distros

$ cmake .. -DCMAKE_INSTALL_PREFIX=~/libvmemcache-bin
$ make
$ make install

Statistics

Statistics are enabled by default. They can be disabled at compile time by setting the STATS_ENABLED CMake option to OFF when building the libvmemcache library.

See the man page for more information about statistics.
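
As an illustration, a small sketch of reading two of the counters through vmemcache_get_stat(); the statistic names and the 64-bit counter type used here are assumptions based on the 0.8-era API, so consult the man page:

#include <libvmemcache.h>
#include <stdio.h>

/* Print hit/miss counters for an existing cache handle. */
static void print_hit_miss(VMEMcache *cache)
{
	unsigned long long hits = 0, misses = 0;

	vmemcache_get_stat(cache, VMEMCACHE_STAT_HIT, &hits, sizeof(hits));
	vmemcache_get_stat(cache, VMEMCACHE_STAT_MISS, &misses, sizeof(misses));

	printf("hits: %llu, misses: %llu\n", hits, misses);
}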

Contributors

djbw, kilobyte, ldorau, llugin, lplewa, marcinslusarz, pbalcer, szadam, wlemkows

vmemcache's Issues

FS-test-heap-usage fails: negative heap usage

full log, Fedora bug

Found during Fedora 33 mass rebuild. Doesn't reproduce on current Debian unstable.

FATAL ERROR at /builddir/build/BUILD/vmemcache-0.8/tests/vmemcache_test_heap_usage.c:251 in test_heap_usage(): Final heap usage is different than 0 (-735776): possible memory leak
CMake Error at helpers.cmake:78 (message):
  
  /builddir/build/BUILD/vmemcache-0.8/tests/../tests//vmemcache_test_heap_usage
  /builddir/build/BUILD/vmemcache-0.8/tests/test exit code Child aborted
  doesn't match expectation 0

Inconsistent naming: 'fragment' vs 'segment'

In libvmemcache.h:

VMEMcache *vmemcache_new(const char *path, size_t max_size, size_t segment_size,
		enum vmemcache_replacement_policy replacement_policy);

However, the macro defining the minimum segment size calls it a fragment, not a segment:

#define VMEMCACHE_MIN_FRAG ((size_t)8) /* minimum fragment size: 8B */

test-heap-usage fails with gcc-9

There's an allocation (40 bytes usable, 32 bytes real) that doesn't get freed.

Reproduces with gcc-9 on multiple machines; doesn't reproduce with gcc-8 or clang.

Missing Coverity scans

We're missing Coverity scans in the vmemcache repo. It would probably be a good idea to add them.

As part of this addition, the usage of the "COVERITY" variable should also be fixed (see the similar change in, e.g., the libpmemobj-cpp repo); see the attachment: coverity_variable_usage.patch.txt

Guarantee get after on_miss?

There are two recommended ways to use the cache, one of them being supplying an on_miss callback. Alas, if there are a lot of puts going on (likely if a "preload" operation is running on a live server), or if the callback is very slow after the put (it writes a log, etc., which stalls), it's possible for the entry to have already been evicted before get gets around to retrying the request.

And, knowing people, no one is going to check for that. A rare unexpected failure will crash the program.

One way to fix this would be a TLS variable that says we're in an on_miss callback. The put done by that thread would have its refcount bumped, decreased only once the get completes (there might be more than one put, but only one successful for that specific key).

Thoughts...?
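
For illustration, a sketch of the on_miss pattern under discussion. The callback and registration signatures are assumptions based on the 0.8-era API, and fetch_from_backing_store() is a hypothetical application function:

#include <libvmemcache.h>
#include <stdlib.h>

/* Hypothetical application helper: look the key up in slower storage. */
static void *fetch_from_backing_store(const void *key, size_t key_size,
		size_t *val_size)
{
	(void)key; (void)key_size; *val_size = 0;
	return NULL; /* stub for the sketch */
}

static int on_miss(VMEMcache *cache, const void *key, size_t key_size, void *arg)
{
	(void)arg;
	size_t val_size = 0;
	void *val = fetch_from_backing_store(key, key_size, &val_size);
	if (val == NULL)
		return 0; /* nothing inserted; the original get stays a miss */

	vmemcache_put(cache, key, key_size, val, val_size);
	free(val);

	/* The race described above: between this put and the retried lookup
	 * inside vmemcache_get(), a burst of other puts (e.g. a preload on a
	 * live server) may already have evicted the fresh entry, so the get
	 * can still come back empty. */
	return 1; /* assumed: non-zero tells the get to retry the lookup */
}

/* registered once, right after creating the cache: */
/* vmemcache_callback_on_miss(cache, on_miss, NULL); */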

warning: ‘get_granular_rand_size’ defined but not used [-Wunused-function]

Scanning dependencies of target vmemcache_test_mt
[ 60%] Building C object tests/CMakeFiles/vmemcache_test_mt.dir/vmemcache_test_mt.c.o
In file included from /home/ldorau/work/vmemcache/tests/vmemcache_test_mt.c:43:
/home/ldorau/work/vmemcache/tests/test_helpers.h:128:1: warning: ‘get_granular_rand_size’ defined but not used [-Wunused-function]
 get_granular_rand_size(size_t val_max, size_t granularity)
 ^~~~~~~~~~~~~~~~~~~~~~
[ 62%] Linking C executable vmemcache_test_mt
[ 67%] Built target vmemcache_test_mt
Scanning dependencies of target bench_simul
[ 70%] Building C object benchmarks/CMakeFiles/bench_simul.dir/bench_simul.c.o
In file included from /home/ldorau/work/vmemcache/benchmarks/bench_simul.c:49:
/home/ldorau/work/vmemcache/tests/test_helpers.h:128:1: warning: ‘get_granular_rand_size’ defined but not used [-Wunused-function]
 get_granular_rand_size(size_t val_max, size_t granularity)
 ^~~~~~~~~~~~~~~~~~~~~~
[ 72%] Linking C executable bench_simul
[ 82%] Built target bench_simul
Scanning dependencies of target bench_micro
[ 85%] Building C object benchmarks/CMakeFiles/bench_micro.dir/bench_micro.c.o
In file included from /home/ldorau/work/vmemcache/benchmarks/bench_micro.c:45:
/home/ldorau/work/vmemcache/tests/test_helpers.h:128:1: warning: ‘get_granular_rand_size’ defined but not used [-Wunused-function]
 get_granular_rand_size(size_t val_max, size_t granularity)
 ^~~~~~~~~~~~~~~~~~~~~~
[ 87%] Linking C executable bench_micro

FEAT: Atomic Put If Absent

vmemcache_put_if_absent()

If the key is not present in the cache, put it into the cache atomically.

Value existingValue = vmemcache_get(key);
if (existingValue == null) {
      vmemcache_put(key, value);
      return null;
} else {
      return existingValue;
}

But combine the two steps atomically.

From: Xie, Qi

How is LRU policy implemented?

Specifically, how does the LRU policy code make use of the ring buffer? I'm assuming the doubly-linked list is used to keep track of the value access pattern.
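
Based only on the "Buffered LRU" bullet in the README above, the general idea can be illustrated with a toy, self-contained example; none of these names or data structures are libvmemcache's actual internals:

/* Toy illustration of a "buffered LRU": hits are recorded in a ring buffer
 * without touching the LRU list; the list is updated in batches under one
 * lock. Not libvmemcache's actual code. */
#include <pthread.h>
#include <stdio.h>

#define RING_CAP 8
#define NENTRIES 4

static int ring[RING_CAP];       /* entry ids recorded on hit */
static int ring_head, ring_tail; /* single-producer toy version */

static int lru_order[NENTRIES] = {0, 1, 2, 3}; /* front = most recently used */
static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;

/* On a hit, just record the entry id; no LRU lock taken. */
static void on_hit(int id)
{
	ring[ring_head++ % RING_CAP] = id;
}

/* Drain the ring and update the LRU order under the lock, in one batch. */
static void drain_ring(void)
{
	pthread_mutex_lock(&lru_lock);
	while (ring_tail < ring_head) {
		int id = ring[ring_tail++ % RING_CAP];
		int i = 0;
		while (lru_order[i] != id) /* find id and move it to the front */
			i++;
		for (; i > 0; i--)
			lru_order[i] = lru_order[i - 1];
		lru_order[0] = id;
	}
	pthread_mutex_unlock(&lru_lock);
}

int main(void)
{
	on_hit(2);
	on_hit(1);
	on_hit(2);
	drain_ring();
	printf("eviction candidate (LRU): entry %d\n", lru_order[NENTRIES - 1]);
	return 0;
}

The point is that a hit only performs a cheap ring-buffer append; the LRU ordering (here just an array for brevity) is updated in batches under a single lock, which is what keeps hits scalable on multi-core CPUs.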

vmemcache performance can't scale with multiple threads

Because of the mutex, multi-threaded performance scaling is not good. With 4 threads, we see that most of the time is spent in mutex lock and unlock.
We run the benchmark:
[root@wilson03 benchmarks]# taskset -c 0-3 ./bench_micro /mnt/pmem0 put 4 10000000 104857600 4096 10 1024 1024

callback_on_evict doesn't provide value_size

I expect that most of the time a user writes an on_evict callback, it's to do something with the entry that's being evicted. That usually requires knowing the value's size.

With our current API, this requires two gets, going through the index and repl twice. That's a waste of time — we already know value_size.

It'd be nice to pass it to the callback. Alas, that'd be an API break...

I'm thus saving this as a note to consider in case we want to break the API before 1.0. Another option would be to add a callback2, but that's ugly.
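
For illustration, a sketch of the workaround described above; the on_evict callback and registration signatures, and passing a NULL buffer to vmemcache_get() to query only the size, are assumptions based on the 0.8-era API:

#include <libvmemcache.h>

/* An on_evict callback that needs the value size currently has to query the
 * cache again, walking the index (and replacement policy) a second time. */
static void on_evict(VMEMcache *cache, const void *key, size_t key_size, void *arg)
{
	(void)arg;
	size_t value_size = 0;

	/* extra lookup just to learn the size of the entry being evicted */
	if (vmemcache_get(cache, key, key_size, NULL, 0, 0, &value_size) >= 0) {
		/* ... do something with key and value_size ... */
	}
}

/* registered once, right after creating the cache: */
/* vmemcache_callback_on_evict(cache, on_evict, NULL); */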
