
Comments (9)

kanekoshoyu commented on May 28, 2024

Hi @jeromegn
Frankly, we haven't figured out the root cause. Instead, I came up with two workarounds I can share with you.

  1. For any real-time data analysis I just use the in-memory DB; I only use the persistent DB for storing and querying data, like a more traditional use of a database.

This is because I ran it under flamegraph and saw roughly a quarter of the runtime spent on file IO. I also found that the stock sled storage does not handle concurrency well: between the start and end of a transaction it locks the file, so each transaction has to be "atomic". I have a modded version wrapped in Arc<RwLock> that you can run concurrently without the database being locked. (You need to gracefully terminate the program, though.)

  2. Instead of having 100 symbols in a single table, I split them into 100 individual tables.

Although the table storage structure might be a hashmap, filtering and sorting still make a query O(n) in the number of rows by nature, so a query over a table 100 times smaller is at least 100 times faster.

from gluesql.

devgony commented on May 28, 2024

Did you mean that selecting 360,000 (100 * 60 * 60) rows from sled storage took 2.7 seconds, or selecting 1 row out of 360,000 rows?
Did that include the client latency to fetch all the rows, or just the execution time of the query?

And I wonder what the full query text was.

JakkuSakura commented on May 28, 2024

https://gist.github.com/JakkuSakura/4bb9678501dbabf56c1b6d95269740aa
This is the source code used for the benchmark.
It is not the exact version, but it still shows something.

persistent database insertion for 1s data: 7ms
volatile database insertion for 1s data: 1ms
volatile shared database selection with 1 symbols 24hr data: 1581ms, 1hr=false
volatile shared database selection with 1 symbols 24hr data: 1587ms, 1hr=true
persistent database selection with 1 symbols 1hr data: 93ms, 1hr=false
volatile shared database selection with 1 symbols 1hr data: 75ms, 1hr=false
persistent database selection with 1 symbols 24hr data: 2067ms, 1hr=false
persistent database selection with 1 symbols 24hr data: 2071ms, 1hr=true
It's completely useless, and there's no simple way we can improve it.
(1hr=true means that we select one hour's data from the whole dataset.)

kanekoshoyu commented on May 28, 2024

> Did you mean that selecting 360,000 (100 * 60 * 60) rows from sled storage took 2.7 seconds, or selecting 1 row out of 360,000 rows? Did that include the client latency to fetch all the rows, or just the execution time of the query?
>
> And I wonder what the full query text was.

It is 3,600 rows out of 360,000, i.e. getting one symbol out of 100 symbols for 1 hour of data.

kanekoshoyu commented on May 28, 2024

Jakku set up a test with a slight modification to the above, changing the symbol string to a symbol ID. It helped quite a lot, so my next optimization guess is to change from Decimal to f64.

persistent database insertion for 1s data: 2ms
volatile database insertion for 1s data: 1ms
volatile shared database selection with 1 symbols 24hr data: 380ms, 1hr=false
volatile shared database selection with 1 symbols 24hr data: 385ms, 1hr=true
persistent database selection with 1 symbols 1hr data: 36ms, 1hr=false
volatile shared database selection with 1 symbols 1hr data: 17ms, 1hr=false
persistent database selection with 1 symbols 24hr data: 826ms, 1hr=true
persistent database selection with 1 symbols 24hr data: 838ms, 1hr=false

But again, a query selecting 86,400 rows out of 8,640,000 in the persistent table taking 800+ ms is quite slow; we are aiming for somewhere below 50 ms.

kanekoshoyu commented on May 28, 2024

I'm wondering what the limiting factors on performance are here. I get that changing the symbol string to an ID improves performance, because it reduces the time spent comparing symbols. But does the size of each row, say, make a big difference to selection performance in GlueSQL?

Roughly speaking, sled is structured as a BTreeMap, so the best possible query should be O(logN). However, 1hr vs 24hr data currently takes 36ms vs 838ms, which is roughly the 24x you would expect from O(N) rather than the ~4.5x expected from O(logN).

The same applies to SharedMemoryStorage: 17ms vs 385ms is around that 24x.

The per-row work is taking so much time that the O(logN) lookup is dominated and the query effectively behaves as O(N).

jeromegn commented on May 28, 2024

@kanekoshoyu I'm interested in knowing whether you've figured this out.

Since SharedMemoryStorage also shows degradation, I wonder if you could find the cause with a CPU profile. I suspect this is not a sled problem, and is possibly a GlueSQL-internal thing?

kanekoshoyu commented on May 28, 2024

https://github.com/kanekoshoyu/gluesql_shared_sled_storage

This is the link to the modded sled storage.

kanekoshoyu commented on May 28, 2024

Also, a few micro-optimisations here and there:

Do not sort or filter on strings; it is too slow. Use an index instead.

Try using the AST directly instead of the query string; each query string is converted to an AST at run time.
