baotonglu / apex
High Performance Learned Index on Persistent Memory
License: MIT License
Hi,
I tried to skip the bulk load and insert some KV pairs into an empty index, but the program seems to enter an endless loop.
The benchmark flags and the resulting output are as follows.
found flag keys_file = longitudes-200M.bin.data
found flag keys_file_type = binary
found flag keys_type = double
found flag init_num_keys = 0
found flag workload_keys = 10000
found flag total_num_keys = 10000
found flag operation = insert
found flag insert_frac = 0
found flag lookup_distribution = uniform
found flag theta = 0.99
found flag using_epoch = 1
found flag thread_num = 1
found flag index = apex
found flag random_shuffle
found flag sort_bulkload = 1
The key type is double
The epoch is used
creating a new pool
pool opened at: 0x7f4480000000
Intial allocator: 1
Recover/Initialize time (ms) = 162.361
I tried inserting data into an empty ALEX and it works there.
I hope the author can fix this problem. Thanks a lot!
There may be a bug in the bulk_load function when the bulk-load size is small enough that the root node is a leaf node.
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <utility>

int main(int argc, char** argv) {
  size_t bulk_cnt = std::stol(argv[1]);
  Tree<uint64_t, uint64_t>* index = generate_index<uint64_t, uint64_t>();
  // Bulk-load the sequential keys 0..bulk_cnt-1.
  auto values = new std::pair<uint64_t, uint64_t>[bulk_cnt];
  for (size_t i = 0; i < bulk_cnt; i++) {
    values[i].first = i;
    values[i].second = i;
  }
  index->bulk_load(values, bulk_cnt);
  // Verify that every bulk-loaded key is found with the correct value.
  for (size_t i = 0; i < bulk_cnt; i++) {
    size_t value;
    if (index->search(i, &value)) {
      if (value != i) std::cout << "Error find: " << i << " " << value << std::endl;
    } else {
      std::cout << "Fail find: " << i << std::endl;
    }
  }
  my_alloc::BasePMPool::ClosePool();
  return 0;
}
Hi @baotonglu,
Thanks for your great work.
I would like to run your code on a normal PC without persistent-memory hardware.
How can I set it up to run?
Thank you!
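Without real persistent-memory hardware, PMDK-based code can usually be run against an ordinary file by forcing libpmem's PM detection. This is a sketch, not verified against this repo's build scripts; the pool directory below is a hypothetical example that would need to match wherever APEX creates its pool:

```shell
# Place the pool file on a regular (ideally DRAM-backed) filesystem;
# the path here is only an example.
mkdir -p /tmp/apex_pool_demo
# Documented PMDK variable: treat any memory-mapped file as persistent
# memory, skipping the real-PM capability check.
export PMEM_IS_PMEM_FORCE=1
echo "PMEM_IS_PMEM_FORCE=$PMEM_IS_PMEM_FORCE"
```

Note that this only exercises functionality; crash consistency cannot be meaningfully tested on DRAM emulation.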
Hi.
I'm currently running your code and have run into an issue.
When running one of my workloads, it exits with the following error message.
arr_idx = 2805; num_keys = 2707
data capacity = 3674
Count errror!!!
I'm working with a workload that incurs out-of-bound inserts.
I've tracked the error down to the call to the get_max_key() function inside expand_root().
Can you give any suggestions for this situation?
Thanks a lot.
Ju Young
Hi, I am trying to change the parameter PROBING_LENGTH to 128; however, when I change it, something goes wrong:
arr_idx = 11065; num_keys = 12425
data capacity = 13115
Count errror!!!
Does anything else need to be changed? Thanks!