elastic / go-freelru

GC-less, fast and generic LRU hashmap library for Go

License: Apache License 2.0
Other languages could also use a shared-mmap version of the LRU cache. Is this on the roadmap?
We are hitting a panic in setHead (excerpt below) in one of our apps.
panic({0x18b5fc0?, 0x400301d47c?})
runtime/panic.go:770 +0x124
github.com/elastic/go-freelru.(*LRU[...]).setHead(...)
	github.com/elastic/[email protected]/lru.go:178
github.com/elastic/go-freelru.(*LRU[...]).get(0x173feb4, 0x4606100?, {0x40000b65a0?, 0x4005e1ef00?})
github.com/elastic/[email protected]/lru.go:423 +0x214
github.com/elastic/go-freelru.(*LRU[...]).Get(0x211f040, {0x40000b65a0, 0xa0})
github.com/elastic/[email protected]/lru.go:416 +0x58
I was wondering under which conditions we could reach that point, and why the library panics there.
Right now this is causing a problem in our service: we wrapped the call to freelru.Get
in a mutex lock/unlock block, and after the panic happens, the lock is never released.
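Independent of the panic's root cause, deferring the unlock avoids the stuck-lock symptom: a deferred Unlock still runs while a panic unwinds the stack, so other goroutines are not blocked forever afterwards. A minimal sketch of the pattern (the `getLocked` wrapper and the `f` callback are hypothetical stand-ins for the wrapped freelru.Get call):

```go
package main

import (
	"fmt"
	"sync"
)

var mu sync.Mutex

// getLocked wraps a call that may panic. Because Unlock is deferred,
// the mutex is released even when f panics, and the recover converts
// the panic into an ordinary error for the caller.
func getLocked(f func() int) (v int, err error) {
	mu.Lock()
	defer mu.Unlock()
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("cache call panicked: %v", r)
		}
	}()
	return f(), nil
}

func main() {
	_, err := getLocked(func() int { panic("boom") })
	fmt.Println(err) // cache call panicked: boom

	// The lock is free again — this does not deadlock.
	mu.Lock()
	mu.Unlock()
	fmt.Println("lock released")
}
```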
Currently, we use the realtime clock, so lifetimes may be skewed by NTP adjustments.
We use this library for caching a few different things in Fleet Server.
It would be great to add this library to the list of benchmarks run at https://github.com/dgraph-io/ristretto to help us understand whether it would improve Fleet Server performance.
This LRU implementation is very impressive, and I believe its performance should be among the best; thanks for this brilliant work!
But we really need a sharded version instead of the synced version to perform well under high parallelism.
For example, a "shardedlru.go" like:
type ShardedLRU[K comparable, V any] struct {
mu []sync.RWMutex
lru []*LRU[K, V]
}
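To make the sharding idea concrete, here is a self-contained sketch of the pattern: keys are hashed to a shard, and each shard has its own lock, so operations on different shards never contend. All names here (`ShardedCache`, `NewShardedCache`) are hypothetical, and eviction is omitted for brevity; in a real sharded LRU each shard would wrap an `LRU[K, V]` instead of a plain map:

```go
package main

import (
	"fmt"
	"sync"
)

// shard pairs one lock with one backing store, so contention is
// limited to keys that hash to the same shard.
type shard[K comparable, V any] struct {
	mu sync.RWMutex
	m  map[K]V
}

// ShardedCache spreads keys across nShards independent shards.
// The hash function is caller-supplied, mirroring freelru's approach
// of taking a hash callback for the generic key type.
type ShardedCache[K comparable, V any] struct {
	shards []shard[K, V]
	hash   func(K) uint32
}

func NewShardedCache[K comparable, V any](nShards int, hash func(K) uint32) *ShardedCache[K, V] {
	c := &ShardedCache[K, V]{shards: make([]shard[K, V], nShards), hash: hash}
	for i := range c.shards {
		c.shards[i].m = make(map[K]V)
	}
	return c
}

func (c *ShardedCache[K, V]) shardFor(k K) *shard[K, V] {
	return &c.shards[c.hash(k)%uint32(len(c.shards))]
}

func (c *ShardedCache[K, V]) Set(k K, v V) {
	sh := c.shardFor(k)
	sh.mu.Lock()
	sh.m[k] = v
	sh.mu.Unlock()
}

func (c *ShardedCache[K, V]) Get(k K) (V, bool) {
	sh := c.shardFor(k)
	sh.mu.RLock()
	v, ok := sh.m[k]
	sh.mu.RUnlock()
	return v, ok
}

func main() {
	// Trivial FNV-1a hash, just for the demo.
	hash := func(s string) uint32 {
		h := uint32(2166136261)
		for i := 0; i < len(s); i++ {
			h ^= uint32(s[i])
			h *= 16777619
		}
		return h
	}
	c := NewShardedCache[string, int](8, hash)
	c.Set("a", 1)
	v, ok := c.Get("a")
	fmt.Println(v, ok) // 1 true
}
```

With, say, 8 or 16 shards, readers and writers touching different shards proceed fully in parallel, which is what a single global mutex prevents.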
Background:
I happen to be currently implementing a simple LRU cache as well, and would like to know how its performance compares to popular libraries. Here are my results with
const (
keysize = 16
cachesize = 16384
parallelism = 1000
)
goos: linux
goarch: amd64
cpu: Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz
BenchmarkCloudflareGet
BenchmarkCloudflareGet-8 100000000 60.48 ns/op 16 B/op 1 allocs/op
BenchmarkElasticGet
BenchmarkElasticGet-8 15093165 424.4 ns/op 0 B/op 0 allocs/op
BenchmarkHashicorpGet
BenchmarkHashicorpGet-8 27390493 349.8 ns/op 0 B/op 0 allocs/op
BenchmarkPhusluGet
BenchmarkPhusluGet-8 199958748 30.45 ns/op 0 B/op 0 allocs/op
PASS
ok command-line-arguments 32.350s
Obviously freelru (and hashicorp) do not perform as well as they should (I think freelru should be top-notch) due to the use of a single lock.
Please refer to https://github.com/phuslu/lru?tab=readme-ov-file#benchmarks for the code, thanks.
For single-threaded "mode", do you still need a hash function?
Is it possible to disable hashing for single-threaded mode?
And how do you use single-threaded mode, by the way?