jdataview / jbinary
High-level API for working with binary data.
Home Page: jdataview.github.io/jBinary/
License: MIT License
My blob starts with a string0 followed by an object whose structure depends on that previously read string0. So I figured something like this:
var typeSet = {
Data: {
name: 'string0',
data: [ 'object', function(context) {
return context.name;
}]
},
Dimensions: {
width: 'int32',
height: 'int32'
}
};
For example, if the string0 was "Dimensions", Data.data would become an object of type Dimensions. I've spent hours reading through the wiki and getting a grasp on how jBinary works. I've also tried changing the jBinary.Type example from array to object, without success.
Any help or suggestions would be appreciated.
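One approach that may work here (a sketch, not verified against every jBinary version) is a custom type whose read method re-reads using the just-parsed name; `Type` below is a stand-in for `jBinary.Type` so the snippet runs standalone:

```javascript
// Stand-in for jBinary.Type so this sketch runs without the library;
// in a real project: var jBinary = require('jbinary'); and use jBinary.Type.
function Type(descriptor) { return descriptor; }

var typeSet = {
  Data: {
    name: 'string0',
    // Custom type: `context` is the partially-read Data structure,
    // so context.name holds the type name that was just read.
    data: Type({
      read: function (context) {
        return this.binary.read(context.name);
      }
    })
  },
  Dimensions: {
    width: 'int32',
    height: 'int32'
  }
};
```

If the blob started with "Dimensions", data would then be read as a Dimensions structure.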
Create a common builder that both jBinary and jDataView (jDataView/jDataView#59) could use.
Trying to read a file with a specification like
var typeSet = {
count: 'uint8',
arr : ['array', 'uint8','count']
}
fails with the following error (in Chrome):
Uncaught TypeError: Cannot read property 'count' of undefined    jbinary.js:174
jBinary.Type.toValue    jbinary.js:174
proto.typeSet.array.jBinary.Template.read    jbinary.js:294
(anonymous)    jbinary.js:581
proto._action    jbinary.js:574
proto.read    jbinary.js:578
and in Firefox:
TypeError: this.binary.getContext(...) is undefined @ http://jdataview.github.io/dist/jbinary.js:1
You can check out this plnkr to reproduce it: http://www.plnkr.co/edit/ZQ6tnsrWQ10hAC9ck2E8?p=preview. Check the console for results and errors. Setting a hard boundary (['array', 'uint8', 10]) works like a charm.
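The reference error appears because 'count' is resolved against a containing context, and the top-level fields here aren't wrapped in one. A workaround sketch (assuming jBinary's documented context rules) is to wrap both fields in a named structure; a getter callback is an equivalent, more explicit form:

```javascript
var typeSet = {
  'jBinary.all': 'Data',
  Data: {
    count: 'uint8',
    // String reference, resolved against the Data context being read:
    arr: ['array', 'uint8', 'count']
    // Equivalent explicit getter callback:
    // arr: ['array', 'uint8', function (context) { return context.count; }]
  }
};
```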
Is there a way to dispatch based on context? Say I have:
var typedef = {
Message: {
header: "Header",
payload: "Shape",
},
Header: {
category: [ "enum", "uint8", [ "Shape", "Meal" ]],
id: "uint8"
},
Shape: {
color: "uint8",
shape: [ "enum", "uint8", [ "circle", "square" ]]
},
Meal: {
name: [ "string", 20 ],
prepTime: "uint8"
}
}
new jBinary(arrayBuffer, typedef).read("Message")
// => {
// header: {
// category: "Shape",
// id: 209
// },
// payload: {
// color: 13,
// shape: "square",
// }
// }
Here, I've hard-coded the payload type to "Shape", but could I get this value based on the context parsed so far? Something along the lines of:
var typedef = {
Message: {
header: "Header",
payload: function (context) {
return context.header.category
}
},
Header: {
category: [ "enum", "uint8", [ "Shape", "Meal" ]],
id: "uint8"
},
Shape: {
color: "uint8",
shape: [ "enum", "uint8", [ "circle", "square" ]]
},
Meal: {
name: [ "string", 20 ],
prepTime: "uint8"
}
}
I know there are other ways of accomplishing this, such as splitting my calling code into two parts (parsing the header, then parsing the payload) or writing one huge typeset with my own reading/writing methods, but I wanted to ask whether there is an idiomatic way of doing this. Happy to add an example to the docs if there is, and happy to write a type if there isn't.
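One shape this could take (a sketch, not verified against every jBinary version) is a custom Message type that reads the header first and then dispatches on its category label; `Type` below is a stand-in for `jBinary.Type` so the snippet runs standalone:

```javascript
// Stand-in for jBinary.Type so this sketch runs without the library;
// in a real project use jBinary.Type from require('jbinary').
function Type(descriptor) { return descriptor; }

var typedef = {
  Message: Type({
    read: function () {
      // Read the header first, then use its category enum label
      // (which matches a type name in this typeset) as the payload type.
      var header = this.binary.read('Header');
      var payload = this.binary.read(header.category);
      return { header: header, payload: payload };
    }
  }),
  Header: {
    category: ['enum', 'uint8', ['Shape', 'Meal']],
    id: 'uint8'
  },
  Shape: {
    color: 'uint8',
    shape: ['enum', 'uint8', ['circle', 'square']]
  },
  Meal: {
    name: ['string', 20],
    prepTime: 'uint8'
  }
};
```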
I have binary data that looks like the following:
Originally, my typeSet looked like this:
var typeSet = {
'jBinary.all': 'BBoxes',
'jBinary.littleEndian': true,
BBox: {
character: ['char'],
x0: ['float32'],
y0: ['float32'],
x1: ['float32'],
y1: ['float32'],
x2: ['float32'],
y2: ['float32']
},
BBoxes: ['array', 'BBox']
};
This works beautifully for the majority of my data. However, some of the characters are not a single byte (for example, the copyright symbol).
Is there a supported way to read a UTF-8 character, consuming as many bytes as needed to read that character?
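As far as I know jBinary doesn't ship a variable-length UTF-8 char type, but the decoding logic is small enough to build into a custom type. A sketch of that logic, kept separate from jBinary so it is easy to test (readByte is a callback; inside a jBinary.Type read method it would be something like function () { return this.binary.read('uint8'); }, which is an assumption about the wiring):

```javascript
// Decode one UTF-8 encoded character, consuming as many bytes as needed.
function readUtf8Char(readByte) {
  var b0 = readByte();
  if (b0 < 0x80) return String.fromCharCode(b0); // plain ASCII, 1 byte
  // Number of continuation bytes, derived from the lead byte.
  var extra = b0 >= 0xF0 ? 3 : b0 >= 0xE0 ? 2 : 1;
  var cp = b0 & (0x3F >> extra); // payload bits of the lead byte
  for (var i = 0; i < extra; i++) {
    cp = (cp << 6) | (readByte() & 0x3F); // 6 payload bits per continuation byte
  }
  return String.fromCodePoint(cp);
}
```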
Slightly odd bug to report that might be me simply misusing this (great!) library. In nodejs 0.10.33 everything works great, but in the latest nodejs (0.12.4) I am seeing an error when using jBinary.loadData in conjunction with a ReadableStream
buffer.js:201
length += list[i].length;
^
TypeError: Cannot read property 'length' of null
at Function.Buffer.concat (buffer.js:201:24)
at PassThrough.<anonymous> (.....\node_modules\jbinary\dist\node\jbinary.js:451:39)
at PassThrough.emit (events.js:129:20)
at _stream_readable.js:908:16
at process._tickDomainCallback (node.js:381:11)
I'm using nodejs v0.12.4 and the 'async' library. Here is the offending code:
async.waterfall([
  function openFile(next) {
    var stream = require('fs').createReadStream('path/to/file');
    next(null, stream);
  },
  function parseFile(stream, next) {
    jBinary.loadData(stream, MY_TYPESET_HERE, function (err, binary) {
      // execution does not get this far
      var myData = binary.readAll();
      next(err, myData);
    });
  }
]);
jBinary.loadData basically blows up when I pass a ReadableStream to it. I'm very new to Node.js so I'm not sure what the issue is, but presumably Node's ReadableStream has changed from 0.10.x to 0.12.x, and maybe jBinary does not officially support ReadableStream as a data source? Is this even an issue in jBinary to start with?
This code fragment is actually part of a much larger project that uses AWS's S3 storage API. I'm retrieving the file from an S3 bucket and not simply using a static file. I can work around this issue either by downloading the file to temp storage and loading it that way (avoiding the ReadableStream), or by avoiding storing the S3 file to disk entirely and keeping everything in an in-memory Buffer.
As an aside, what is jBinary's preferred approach to data sources? When loading a file from disk, does it load it all into memory anyway, or does it read the file chunk by chunk? If the file is always loaded into RAM then I guess I can just use Buffers and be done with it.
cheers & thanks for a really nice library
@vjeux @RReverser
I use this example of tar archive reading:
jBinary = require('jbinary');
TAR = require('../tar.js'); // just like the tar.js in jBinary.Repo
jBinary.load('sample.tar', TAR, function (jb) {
  // read everything using the type aliased in TAR['jBinary.all']
  var files = jb.readAll();
  files.forEach(function (file) {
    if (file.mode == 509) {
      // This is a folder; create it using fs.mkdirSync. This works well!
    } else {
      // Trying to save file.content.view.buffer as a file, but it doesn't work using fs
    }
  });
});
I'm trying to save each file by using various methods of saving a buffer to a file in Node.js, but none of them work.
The question is: how can I save a jBinary-generated buffer to a file in Node.js?
And sorry for my English :)
Hi,
I need to write the contents of an image returned from an XHR call with an arraybuffer response type. On Android this works, but on iOS it is a nightmare, mainly writing the blob to a file.
Can this library help me solve this?
Thanks
Hi,
I'm trying to port a working app from Electron (Node-based) to Cordova. This app makes intensive use of jBinary and works really well on Electron.
What I found is that the BROWSER version does not generate the same jBinary view type as the NODE version. For the same code:
var writeBuffer = new Buffer(2048);
var writeFrame = new jBinary(writeBuffer, bdFrameStruct);
NODE version ->
var jBinary = require('jbinary'); is generating view as 'jDataView'
writeFrame
jBinary {view: jDataView, contexts: Array[0], typeSet: Object, cacheKey: "jBinary.Cache.1"}
BROWSER version ->
<script type="text/javascript" src="js/jdataview.js"></script>
<script type="text/javascript" src="js/jbinary.js"></script> is generating view as 'e'
writeFrame
h {view: e, contexts: Array[0], typeSet: Object, cacheKey: "jBinary.Cache.1"}
How can I get the same result as in NODE when using the BROWSER build?
Or maybe there is a better way to use it on Cordova?
Thanks.
How can I load a simple text file of unknown length?
My use case is pretty simple: I have a single large gzipped CSV file and I need to access its contents as a string.
So far I understand that I need to start with:
jBinary.load(zipfile, GZIP).then(function (jb) {
var data = jb.readAll();
console.log(data);
// future code for parsing csv
console.log('Unzipping finished');
});
I'm stuck after that: what object is in data now? How do I access the actual data compressed inside the archive?
Hello! Can you help me, please?
I want to add some additional data to a file from an external source. I load the file with the jBinary.load method. Then how can I add some additional bytes to it?
I tried to create a new buffer of size binary.view.byteLength + neededSize, then iterate through binary.view.buffer and copy its bytes. But I found that it is an empty structure.
Also I tried this:
var newBinary = new jBinary(binary.view.byteLength + 128);
for (var i = 0; i < binary.view.byteLength; i++) {
  newBinary.write('uint8', binary.read('uint8'));
}
It's really terrible for the browser.
So is it possible to change the buffer size?
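As far as I can tell jBinary views are fixed-size, so "growing" means allocating a larger buffer, bulk-copying the old bytes, and wrapping the result in a new jBinary. A sketch of the copy step, which is much cheaper than the per-byte read/write loop above:

```javascript
// Allocate a larger buffer and copy the old bytes across in one call.
// oldBytes can be e.g. new Uint8Array(binary.view.buffer) (an assumption
// about reaching the underlying ArrayBuffer through the jDataView).
function growBuffer(oldBytes, extraBytes) {
  var bigger = new Uint8Array(oldBytes.length + extraBytes);
  bigger.set(oldBytes, 0); // single bulk copy instead of a per-byte loop
  return bigger;
}
```

The grown Uint8Array can then be passed to new jBinary(bigger, typeSet).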
Is this possible?
var typeSet = {
  'jBinary.all': 'msh',
  msh: {
    artworks: ['string0', 9],
    version: ['array', 'uint8', 8],
    indices: ['array', 'uint8', 8],
    versionName: ['string0', 'indices[4]']
  }
};
I want to use 'indices[4]' as a variable length reference.
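String references resolve whole property names, so 'indices[4]' won't work as written, but a getter callback can index into the already-read array (a sketch, assuming jBinary's documented callback support for @-marked arguments):

```javascript
var typeSet = {
  'jBinary.all': 'msh',
  msh: {
    artworks: ['string0', 9],
    version: ['array', 'uint8', 8],
    indices: ['array', 'uint8', 8],
    // Getter callback instead of a string reference, so any expression
    // (like indexing into a previously read array) is possible:
    versionName: ['string0', function (context) {
      return context.indices[4];
    }]
  }
};
```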
The wiki page "Loading and saving data" only contains information about loading data. I want to serialize a JSON structure to a binary type, but I can't find any documentation or sample code for this. Here is a small sample of what I basically want to do:
var types = {
'jBinary.littleEndian': true,
MyType: jBinary.Type({
prop1: 'uint8',
prop2: 'string0'
})
};
var data = {
prop1: 7,
prop2: 'Hello World',
};
var bytes = jBinary.write(types, data, 'MyType');
// bytes = [7, 72, 101, 108, 108, 111, 32, 87, 111, 114, 108, 100, 0]
But jBinary seems not to support dynamic growing buffers so I tried something like this:
var data = new Uint8Array(1 + data.prop2.length + 1);
var binary = new jBinary(data, types);
binary.write('MyType', data);
// expected: data = [7, 72, 101, 108, 108, 111, 32, 87, 111, 114, 108, 100, 0]
// actual: data = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Am I doing something wrong or is there possibly a bug?
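One likely cause (an assumption; not verified against every jBinary version): jBinary.Type expects a read/write descriptor, so wrapping a plain field map in it yields a type that writes nothing, which would explain the untouched zero-filled buffer. Structures are declared as plain object literals:

```javascript
var types = {
  'jBinary.littleEndian': true,
  // A structure is a plain object literal, not jBinary.Type({...}).
  MyType: {
    prop1: 'uint8',
    prop2: 'string0'
  }
};

// Usage sketch (assumes the jbinary package):
// var binary = new jBinary(new Uint8Array(13), types);
// binary.write('MyType', { prop1: 7, prop2: 'Hello World' });
// binary.tell() now points just past the written bytes.
```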
Looking for as lightweight a solution as possible for implementing Promises for methods like loadData, load, the future saveAs method, etc.
Also considering a solution that attaches to an external Promise framework (which would then need to be included by the user).
May be needed for prefilling arrays.
Implement getting the size of a jBinary.Type automatically, so that lazy could get it directly from the type instead of requiring an additional argument.
I'm getting my input as a sequence of ArrayBuffers. I can merge them all into a single ArrayBuffer for jBinary, but I'd save time and memory if I could feed the individual ArrayBuffers in one at a time. Is this possible with jBinary/jDataView?
Implement lazy type support for dynamic property reading/writing.
RequireJS or other solutions may be used, but the compiled JS should be loadable via Node.js and <script> tags as well as via AMD.
Hi, I'm pretty new to jBinary.
I'm looking for a way to read binary blobs that were created using the C# BinaryWriter class. Everything is working nicely, except that I need to handle the special encoding that BinaryWriter uses to store strings.
It stores strings by writing a dynamic length encoded int32 in front of the actual string data ... see BinaryWriter.Write7BitEncodedInt(int value) http://referencesource.microsoft.com/#mscorlib/system/io/binarywriter.cs#2daa1d14ff1877bd#references
Would it be possible to define a custom type in a jBinary typeset (like shown here https://github.com/jDataView/jBinary/wiki/Typesets) to handle this special kind of string serialization ? or do I need to write some external custom boilerplate code to handle it ?
Thanks
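That 7-bit varint can be handled in a few lines. A sketch of the decoding logic, kept separate from jBinary so it is easy to test (inside a jBinary.Type read method, readByte would be something like function () { return this.binary.read('uint8'); }, which is my assumption about the wiring):

```javascript
// Decode BinaryWriter's 7-bit-encoded int32: seven payload bits per byte,
// high bit set on every byte except the last.
function read7BitEncodedInt(readByte) {
  var result = 0, shift = 0, b;
  do {
    b = readByte();
    result |= (b & 0x7F) << shift;
    shift += 7;
  } while (b & 0x80);
  return result;
}
```

The string type itself would then read this length followed by that many bytes of UTF-8 data.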
I needed to implement a saveAsSync because I was using a jbin in a module that couldn't work well with a callback. I can open a PR to implement saveAsSync if there is any interest.
return proto.saveAsSync = promising(function (dest, mimeType, callback) {
  if (typeof dest === 'string') {
    var buffer = this.read('blob', 0);
    if (!is(buffer, Buffer)) {
      buffer = new Buffer(buffer);
    }
    require('fs').writeFileSync(dest, buffer);
  } else {
    callback(new TypeError('Unsupported storage type.'));
  }
}), jBinary;
Prevent a second asynchronous module load call while the first one is in progress.
I am using jBinary in a webapp that uses require.js.
jDataView is loaded dynamically at https://github.com/jDataView/jBinary/blob/master/src/jBinary.js#L922
Before jDataView can load, there is a TypeError thrown due to the race condition at https://github.com/jDataView/jBinary/blob/master/src/jBinary.js#L932
jDataView.prototype.toBinary = function (typeSet) {
return new jBinary(this, typeSet);
};
Functions returned by lazy should be Knockout observables when Knockout is included along with jBinary.
Hi, thanks for making such a great tool in jBinary. I used it to write a parser for mat files and have around 400 tests showing it works. However, when I try to run it on a very simple 9.3 MB mat file, parsing takes up to 60 seconds. I am running the test on the node console, but even in interpreted mode I would hope that it would be faster. I plan for this to run in-browser primarily, though.
I profiled and found that a lot of time is spent in Type.read and garbage collection. This makes sense since what happens is that the data type (e.g. int16) and byte length are read (the 'tag'), then items of that type are read and pushed to an array (see this line) for the provided length (the 'tagData'). At that step, the parser is reading hundreds of thousands of ints or doubles or whatever and pushing them to a regular array, which is then returned.
Is there a smarter way to do this? Preallocation provided some improvement, but not enough. I also tried typed arrays to avoid reading and pushing 100k times, but never got them to work successfully (typed array newb here). I'm hoping to tap into the jbinary experience here to make the parser faster in a smart way. Any tips much appreciated, thanks!
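One pattern that may help, sketched below: once the tag tells you the element type and byte length, grab the whole block of raw bytes at once and view it as a typed array, instead of pushing items one by one. slice() copies the bytes, which also sidesteps typed-array alignment constraints; this assumes little-endian data on a little-endian machine, since typed arrays use platform byte order:

```javascript
// Read `count` int16 values starting at byteOffset as one typed array,
// instead of `count` individual reads pushed into a plain array.
function readInt16Block(arrayBuffer, byteOffset, count) {
  var bytes = arrayBuffer.slice(byteOffset, byteOffset + count * 2);
  return new Int16Array(bytes);
}
```

Inside a custom jBinary type, the underlying ArrayBuffer might be reachable as this.binary.view.buffer; that property path is an assumption about jDataView internals, so verify it against your version.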
Specification: http://wiki.ecmascript.org/doku.php?id=harmony:typed_objects
Pros: ArrayBuffer slices.
Cons:
Hello,
In my project, I need to unpack a binary file where there are a few of the half-precision float (i.e. "float16") values. As of this writing, I read it from the binary stream by using the specification API "this.binary.read('uint16')".
Could you please teach me how to convert the "uint16" number back to the "float16" using JavaScript?
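The uint16 holds the raw IEEE 754 half-precision bit pattern, which can be unpacked by hand. A sketch of the conversion (plain JavaScript, independent of jBinary):

```javascript
// Convert a raw uint16 holding IEEE 754 half-precision ("float16") bits
// into a JavaScript number.
function float16ToNumber(bits) {
  var sign = (bits & 0x8000) ? -1 : 1;
  var exponent = (bits >> 10) & 0x1F; // 5 exponent bits
  var fraction = bits & 0x03FF;       // 10 fraction bits
  if (exponent === 0) {
    // Subnormal numbers and zero.
    return sign * Math.pow(2, -14) * (fraction / 1024);
  }
  if (exponent === 0x1F) {
    // Infinities and NaN.
    return fraction ? NaN : sign * Infinity;
  }
  return sign * Math.pow(2, exponent - 15) * (1 + fraction / 1024);
}
```

So after this.binary.read('uint16'), pass the result through float16ToNumber.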
I am using jBinary to construct PDUs for a networking application. I send out many requests and my memory usage climbs over time. It appears that this is a result of the typeset being cached each time I call jBinary. I'm having trouble locating in the source exactly where the references to types are stored; privately in the module, it seems.
It's pretty easy to reconstruct:
for (var i = 0; i < 1000000; i++) {
jBinary(new Buffer(200), {test: 'int32'});
};
This results in memory that does not get GCd.
Is there a way of disabling the typeset caching?
Or maybe I'm fooling myself altogether here; I can't find where the typesets are referenced in the module, so I'm not sure why they are not released. I will keep reading the source in the meantime.
Hi,
Reading through the API, it seems the writeAll method is a little short on documentation, and I had a couple of questions. I am writing a header of various typed values (a couple hundred bytes), then a large float32 array (3-100 MB). Currently I'm writing element by element with bin.write('float32', array[counter]). I am experiencing slow performance.
Thanks,
Karl
I hope this is an easy question to answer. Where do the BROWSER and NODE variables get set and how do I change them? I am compiling with browserify but it keeps including NODE code and excluding BROWSER code, which is the opposite of what I want. When I try to load a user-selected file in the browser, I get that the type is unsupported because File objects aren't handled in the NODE code (from load.js). I'm sure it's an easy question to answer but the parameters of the question make searching for it online very difficult.
Unlike .read(), .readAll() does not advance your current position in the file. So if you want to read a sequence of TypeSets, there's no hope.
One way would be to make the top-level type in my TypeSet an array, but I'd prefer not to do this as I may be able to avoid reading a large chunk of my buffer (i.e. I want to early out).
Given:
let typeSet = {
'jBinary.all': 'myAll',
myAll: ['string0', 30],
};
let filePathToLoad = process.argv[1];
jBinary.load(filePathToLoad, typeSet).then(bin => {
bin.readAll();
console.log(bin.tell());
}).catch((err) => {
console.info(err);
});
If you do bin.read('myAll'); console.log(bin.tell()); the result will be 30, as expected.
But bin.readAll(); does not yield a position of 30; it gives you back 0.
This can also be observed if myAll is, for example, changed to 'uint8' or ['array', 'uint8', N].
I believe this should be noted in the documentation (if I missed this please correct me and close this).
This makes sense, if done intentionally, so that data may be read, mutated, then written back to the data source.
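Given that behaviour, a sequence can still be consumed with read(), which does advance the position. A sketch (bin stands for a jBinary instance, and shouldStop is a hypothetical early-out predicate; bin.view.byteLength as the end marker is an assumption about the jDataView API):

```javascript
// Read records one at a time until the cursor reaches the end of the
// view, optionally stopping early.
function readSequence(bin, typeName, shouldStop) {
  var items = [];
  while (bin.tell() < bin.view.byteLength) {
    var item = bin.read(typeName); // read() advances tell(), unlike readAll()
    items.push(item);
    if (shouldStop && shouldStop(item)) break; // early out
  }
  return items;
}
```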
Seems like the service provided by http://www.corsproxy.com/ has been discontinued, which breaks the HLS demo.
Would allow including only the jBinary script; the dependency would be added and resolved automatically (useful for most needs).
There is the following statement in the wiki:
"All the arguments marked with @(references) can be passed not only as direct values, but also as getter functions callback(context) or string property names inside current context chain."
Could you give a little example of how to use the callbacks?
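A hedged sketch with made-up field names (Record, nameLength, itemCount are hypothetical): any @-marked argument can be either a string property name or a getter callback that receives the current context object.

```javascript
var typeSet = {
  Record: {
    nameLength: 'uint8',
    itemCount: 'uint8',
    // String property name, looked up in the current context chain:
    name: ['string', 'nameLength'],
    // Getter callback: receives the context (the partially-read Record),
    // so arbitrary expressions are possible:
    items: ['array', 'uint16', function (context) {
      return context.itemCount * 2; // e.g. two uint16 values per item
    }]
  }
};
```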
I'm using require.js within a webapp, and it evaluates to true for the hasRequire conditional.
At https://github.com/jDataView/jBinary/blob/master/src/jBinary.js#L691 the browser attempts to execute require('stream').Readable and fails to find the stream module.
The same thing happens in the block of code at https://github.com/jDataView/jBinary/blob/master/src/jBinary.js#L762
I was able to correct this by adding a clause to the if statements to check for the window global object:
if (typeof window === 'undefined' && ...
But I'm not sure if this is the idiomatic way to separate browser code from server code?
Hi,
I am new to jBinary and have not used it before. Sorry if this issue is more like a question than a real issue or bug, but I really could not figure it out from the wiki documents.
I have an array of a struct in a file, and the code below does not work. Am I using jBinary incorrectly? Maybe someone can correct me or point me to an example.
var fs = require('fs');
var jBinary = require('jbinary');
var TIME_T = {
'u32': 'uint32',
'time': {
hour: 'int16',
min: 'int8',
type: 'int8'
}
};
var DATE_T = {
u32: 'uint32',
date: {
day: 'int8',
month: 'int8',
year: 'int16'
}
};
var LOG_T = {
time: TIME_T, // time of log
date: DATE_T, // date of log
temp: 'float', // chamber temperature
tempSp: 'float', // temperature set point
tmpRamp: 'float', // temperature ramp
hum: 'float', // chamber humidity
humSp: 'float', // humidity set point
light: 'uint8', // chamber light
fan: 'uint8', // chamber fan
tmpMax: 'float', // maximum temperature allowed
tmpMin: 'float', // minimum temperature allowed
humMax: 'float', // maximum humidity allowed
humMin: 'float', // minimum humidity allowed
roomTemp: 'uint8', // room temperature
alarmStatus: 'uint16' // alarm status (refer to alarm mask defines)
};
For reading it as an array, the below code does not do the job:
FLGTypeSet = {
'jBinary.all': 'All',
Log: LOG_T,
All : ['array', 'Log']
}
jBinary.load('10.FLG', FLGTypeSet, function(err, binary) {
console.log(binary.readAll());
});
Or maybe I should define another type, but this does not work either:
FLGTypeSet = {
'jBinary.all': 'All',
// declaring custom type by wrapping structure
Log: jBinary.Template({
setParams: function (itemType) {
this.baseType = {
length: LOG_T,
values: LOG_T
};
},
read: function () {
return this.baseRead().values;
},
write: function (values) {
this.baseWrite({
length: values.length,
values: values
});
}
}),
All : ['array', 'Log']
}
jBinary.load('10.FLG', FLGTypeSet, function(err, binary) {
console.log(binary.readAll());
});
Thanks
While creating custom types, it's sometimes helpful to know the size (in bytes) of a base type. In general this isn't possible to compute (because of dynamically-sized elements), but for simple types like int32 and objects composed of these simple types, it is.
It would be helpful if there were a computeSize(type) method on the jBinary object, or perhaps a computeSize method on type descriptors, which would produce this information when it's available.
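Until something like that exists, a userland sketch can cover the statically-sized subset. The SIZES table and the descriptor walk below are assumptions about typical typeset shapes, not jBinary API; null signals "dynamically sized, not computable":

```javascript
// Byte sizes of jBinary's fixed-size primitive type names.
var SIZES = {
  uint8: 1, int8: 1, uint16: 2, int16: 2,
  uint32: 4, int32: 4, float32: 4, float64: 8
};

// Compute the static size of a type descriptor, or null if it depends
// on the data (strings, dynamic-length arrays, unknown descriptors).
function computeSize(type, typeSet) {
  if (typeof type === 'string') {
    if (type in SIZES) return SIZES[type];
    return type in typeSet ? computeSize(typeSet[type], typeSet) : null;
  }
  if (Array.isArray(type) && type[0] === 'array' && typeof type[2] === 'number') {
    var itemSize = computeSize(type[1], typeSet);
    return itemSize === null ? null : type[2] * itemSize;
  }
  if (type && typeof type === 'object' && !Array.isArray(type)) {
    var total = 0;
    var keys = Object.keys(type);
    for (var i = 0; i < keys.length; i++) {
      var fieldSize = computeSize(type[keys[i]], typeSet);
      if (fieldSize === null) return null; // dynamically sized field
      total += fieldSize;
    }
    return total;
  }
  return null; // dynamic or unsupported descriptor
}
```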
The types I define for jBinary often contain fields which are only relevant for parsing & can be disposed of afterwards. For example:
var TYPE_SET = {
'jBinary.littleEndian': true,
'Header': {
_magic: ['const', 'uint32', 0x8789F2EB, true],
version: ['const', 'uint16', 4, true],
zoomLevels: 'uint16',
zoomHeaders: ['array', 'ZoomHeader', 'zoomLevels'],
}
}
In this example, _magic, version and zoomLevels must be read to parse the file, but they're useless in the resulting data structure.
It would be nice if there were a way to tell jBinary to strip them out, perhaps adopting the convention that fields beginning with _ or __ don't make it into the output Object.
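Pending built-in support, a post-processing pass over the result of readAll() can apply that underscore convention (a sketch; purely plain JavaScript, no jBinary API involved):

```javascript
// Recursively drop underscore-prefixed, parse-only fields from parsed data.
function stripInternal(value) {
  if (Array.isArray(value)) return value.map(stripInternal);
  if (value && typeof value === 'object') {
    var out = {};
    Object.keys(value).forEach(function (key) {
      if (key.charAt(0) !== '_') out[key] = stripInternal(value[key]);
    });
    return out;
  }
  return value;
}
```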
I have tried to use jDataView in my project: I wrote a Double followed by Integers into my document, then used getFloat64() and, starting at byte offset 8, getInt32(). Firefox returns the correct value, but Chrome and IE10 do not. Are there any potential bugs foreseen here?
By the way, the document was written with the C# BinaryWriter, therefore littleEndian is set to true.
Thanks
Bernad
Type instances are created less often than they are used, so we can achieve some performance gain by moving properties from prototypes to constructors, getting the same hidden classes while not making the engine look through the prototype chain each time (inherit utility function).
Generate read/write functions using new Function(...).
Implement asynchronous typeSet definition (i.e., for cross-dependency support).
Probably a Linux-only issue; I can't check now since I don't have any Linux installed.