Adding a single key noticeably increases load time: what was ~2 seconds becomes ~4 seconds. What is happening?
Time to load/open grows fairly rapidly with the number of keys/entries; it does not take many adds to gain a second. I have a feeling the growth is more than linear.
Worst of all, cached data shows similar increases even though it is locally available. What is happening here? I would assume loading from cache should be fast. Is signature verification slow? Maybe we could cache that verification per client.
When rapidly adding keys, all of them show up at first when fetching all keys, but after a page reload (even with cache) only some come back, somewhere around 1 in 10. The pinning server shows the same afterwards, sometimes even fewer, even though the adds were reported as successful. Will have to find tighter bounds on the timing here. What if the adds are spaced a second apart?
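A quick way to run the spaced-adds experiment, assuming a hypothetical `store.set(key, value)` in place of the real add call:

```javascript
// Add entries one at a time with a fixed gap between writes, instead of a
// tight loop, to test whether pacing the adds avoids the silent key loss.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function addKeysSpaced(store, entries, gapMs = 1000) {
  for (const [key, value] of entries) {
    await store.set(key, value); // placeholder for the real add call
    await sleep(gapMs);
  }
}
```

Running the same reload check after this, versus after a tight loop, should tell us whether the loss is timing-dependent.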
There is some interdependence between stores/boxes. Opening two for the first time (loading from the server) slows each down: both take ~8 seconds. On the second load, from cache, one takes ~2 seconds and the other ~4-5.
There is also interdependence with onSyncDone: when opening two different boxes, only one will fire onSyncDone.
Boxes with more keys seem to open less reliably, sometimes not opening at all, while another box with fewer keys opens fine at the same time. This often happens on the first reload after adds and then works again on subsequent ones, though that is not reliably true.
Eventually got one box to stop opening entirely by adding many keys over and over and stopping mid-process.
Data gets written to IndexedDB quite slowly relative to its size (MBs)? Some processing, rather than the raw data, must be the slow part.
Storage (MBs) grows when adding keys, and even though many keys are lost as mentioned above, the storage still increases on the next open.
^ until you delete and reload; then more keys and entries are lost. I also don't understand why the underlying log is not longer than the number of keys if I am constantly changing the same keys. Once stuff was lost, the two were often equal in length.
When using the cache, openBox time increases quite a bit with keys/data: 500+ keys (or entries) and 30 MB of data push load time to about 15-20 seconds. Concerning, given this is using the local cache.
I have not simulated the exact values here yet, but Kenzo had close to 500 entries and 100 MB and was seeing load times of 30 seconds or more.
So this would align roughly with what @michaelsena and @oznekenzo have reported.
I also don't know where all the data overhead is coming from (have to look at the log entry format), since we added KBs worth of string characters and end up with 30 MB in IndexedDB.
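To narrow down the overhead, one could serialize a sample entry the way it might be stored and compare its size against the raw payload. A sketch with a made-up entry shape (the real log entry format still needs to be checked):

```javascript
// Compare the bytes an entry occupies when JSON-serialized against the bytes
// of the payload alone; a large ratio points at per-entry metadata
// (signatures, clocks, hashes) rather than the data itself.
function overheadRatio(entry, payloadString) {
  const storedBytes = Buffer.byteLength(JSON.stringify(entry), 'utf8');
  const payloadBytes = Buffer.byteLength(payloadString, 'utf8');
  return storedBytes / payloadBytes;
}
```

If per-entry metadata dominates, a small payload with a full signature and clock could easily be 100x the payload size, which would make KBs of strings ballooning to 30 MB less mysterious.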