mchidk / binaryrage
The binary serializer fails when dealing with very large objects.
What is the purpose of the try/catch in Storage.cs? It seems to have a performance impact.
Is there a way to get all the collection keys?
Currently BinaryRage uses a hard-coded directory separator ('\'). It would be better to use Path.DirectorySeparatorChar so it is compatible with Linux and Mono.
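For illustration, here is a minimal sketch of how such a concatenation could be made portable (the helper name and signature are hypothetical, not BinaryRage's actual code):

```csharp
// Hypothetical helper: builds a file location without a hard-coded '\'.
// Path.Combine inserts Path.DirectorySeparatorChar for the current platform,
// so the same code works on Windows and on Linux/Mono.
using System.IO;

static class PortablePaths
{
    public static string GetFileLocation(string baseDir, string folder, string key)
    {
        // Instead of: baseDir + "\\" + folder + "\\" + key
        return Path.Combine(baseDir, folder, key);
    }
}
```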
It would be great if the NuGet package could support .NET Core and UAP/UWA
ThreadPool uses background threads, so uncommitted inserts are discarded when the application exits.
There is currently no way to wait for pending work items.
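One way to address this (an illustrative pattern, not BinaryRage's implementation; all names here are hypothetical) is to count in-flight work items and block until the count reaches zero:

```csharp
// Hypothetical sketch: track queued writes so callers can wait for them
// before the process exits (ThreadPool threads are background threads).
using System;
using System.Threading;

static class PendingWrites
{
    static readonly object gate = new object();
    static int pending;

    public static void Queue(Action work)
    {
        lock (gate) pending++;
        ThreadPool.QueueUserWorkItem(_ =>
        {
            try { work(); }
            finally
            {
                lock (gate)
                {
                    pending--;
                    if (pending == 0) Monitor.PulseAll(gate);
                }
            }
        });
    }

    // Blocks until every queued work item has completed.
    public static void WaitForCompletion()
    {
        lock (gate)
            while (pending > 0) Monitor.Wait(gate);
    }
}
```

Calling a wait like this before application exit would prevent uncommitted inserts from being silently discarded.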
Is anyone maintaining the NuGet package?
It's unfortunately way out of date...
Hello, I want to implement some queues and other structures that can be sensitive to the order of operations. Is there a locking mechanism for each path?
Thanks.
Hi, while browsing the source I stumbled over the Storage.WritetoStorage function. It swallows all exceptions, which means my data is lost without any notice when the disk runs out of space.
Lingering file handles (from fast successive reads/writes to the same file) on a Windows system can cause file updates to fail. In this case an exception is thrown, and the cache and the data on disk are left in an inconsistent state.
It would be great to support null values (for example, Insert<double?>(null, ...)).
Could you please update BinaryRage on NuGet? The last version there is from January 2013.
Thank you for this project!
I am stripping some code because I only need parts of it, but now there are some things I do not understand. Given the following part:
Interlocked.Increment(ref Cache.counter);
SimpleObject simpleObject = new SimpleObject { Key = key, Value = value, FileLocation = filelocation };
sendQueue.Add(simpleObject);
var data = sendQueue.Take(); // this blocks if there are no items in the queue

// Add to cache
Cache.CacheDictionary[filelocation + key] = simpleObject;

ThreadPool.QueueUserWorkItem(state =>
{
    Storage.WriteToStorage(data.Key, Compress.CompressGZip(ConvertHelper.ObjectToByteArray(value)), data.FileLocation);
});
My questions:
Best regards,
Evert
When I get the ZIP and load this project into another project, I get this error
Version 3.10 of Mono doesn't support ConcurrentDictionary. I suggest using locks and a regular Dictionary instead, so that BinaryRage becomes more compatible with Mono. (Version 3.12 of Mono supports ConcurrentDictionary, but the platform I'm on is hard to upgrade to 3.12 at the moment.)
In Key.Splitkey the key length is divided by 4, and this value is used as the substring length when creating the file path. If the key has length 3 or less, the division yields 0, which turns into an infinite loop in the SplitByLength method.
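A minimal guard could look like this (a hypothetical reimplementation based on the behavior described above, not BinaryRage's actual code):

```csharp
// Hypothetical sketch of SplitByLength with a guard against a zero chunk length.
using System;
using System.Collections.Generic;

static class KeySplitter
{
    public static IEnumerable<string> SplitByLength(string key, int chunkLength)
    {
        // key.Length / 4 is 0 for keys of length 3 or less; a chunk length of 0
        // would mean the loop never advances. Clamp to at least 1.
        chunkLength = Math.Max(1, chunkLength);
        for (int i = 0; i < key.Length; i += chunkLength)
            yield return key.Substring(i, Math.Min(chunkLength, key.Length - i));
    }
}
```

With this guard, splitting the key "abc" with chunk length "abc".Length / 4 yields "a", "b", "c" instead of looping forever.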
Didn't find any information about which license the software is created for. Can you please clarify?
Thanks.
This code doesn't work for me unless I comment out the WaitForCompletion call. Maybe it's a race condition: the data has not (yet) been written to disk, and the Exists method only checks on disk?
BinaryRage.DB.Insert(key, document, StoreDirectory);
//BinaryRage.DB.WaitForCompletion();
bool exists = BinaryRage.DB.Exists(key, StoreDirectory);
Console.WriteLine(exists); // False!
I had a key that contained ':' and it caused a path error when BinaryRage tried to create the subfolders with that character. I suggest filtering the key before the folders are created.
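A sketch of such a filter (the helper name is hypothetical, and this assumes the goal is to strip characters that are invalid in file or directory names):

```csharp
// Hypothetical sketch: strip characters that cannot appear in file/directory
// names before the key is used to build subfolder paths.
using System.IO;
using System.Linq;

static class KeyFilter
{
    public static string Sanitize(string key)
    {
        // On Windows this set includes ':', '\\', '/', '*', '?' and others.
        char[] invalid = Path.GetInvalidFileNameChars();
        return new string(key.Where(c => !invalid.Contains(c)).ToArray());
    }
}
```

Note that Path.GetInvalidFileNameChars is platform-dependent, so a store shared across operating systems may want a fixed character whitelist instead.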
Hi,
first of all, thanks for a nice piece of code.
Unfortunately, I discovered a race condition issue:
I used BinaryRage with two different file locations and value types, but after a while I ended up with a SerializationException when doing a DB.Get. This happened during heavy concurrency between the different DB calls.
The culprit seems to be in DB.Insert:
The "sendQueue" is static and shared among all Inserts.
The "add" and the "take" can act on objects with different file locations and value types.
The ObjectToByteArray acts on "value" and not on "data.Value", which could be two widely different types.
After removing the sendQueue and using "simpleObject" instead of "data", I was unable to reproduce the bug.
Lines 55 and 56 in WritetoStorage() in Storage.cs read:
//Calculate pause based on amount of bytes
Thread.Sleep(value.Length / 100);
I am trying to use BinaryRage for writing blobs of about 1-100 MB. The code then sleeps for 10-1000 seconds, which is clearly suboptimal when SSDs today can sustain several hundred megabytes per second.
The discussion for the commit mentions improved performance in some use cases. What would a generic sleep time that works for both small and largish values look like? Would something like Math.Log(1 + value.Length, 2) + value.Length / 1e8 be better? See graph below.
Another alternative might be to either have an option to disable the sleep or just making the sleep constant...
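For comparison, here is a sketch of the current pause versus the formula proposed above (the class and method names are illustrative, not BinaryRage's code):

```csharp
// Illustrative comparison of the current pause and the proposed formula.
using System;

static class WritePause
{
    // Current behavior: Thread.Sleep(value.Length / 100)
    public static long CurrentSleepMs(long length) => length / 100;

    // Proposed: Math.Log(1 + value.Length, 2) + value.Length / 1e8
    public static long ProposedSleepMs(long length) =>
        (long)(Math.Log(1 + length, 2) + length / 1e8);
}
```

For a 100 MB (100,000,000-byte) value the current code sleeps 1,000,000 ms (about 17 minutes), while the proposed formula gives roughly 27 ms.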