mtth / avsc

Avro for JavaScript :zap:

License: MIT License

JavaScript 95.42% Python 2.48% Shell 0.06% Java 1.80% Ruby 0.25%
avro serialization javascript typescript schema-evolution big-data binary-format encoding

avsc's Introduction

Avsc

Pure JavaScript implementation of the Avro specification.

Features

Installation

$ npm install avsc

Documentation

Examples

const avro = require('avsc');
  • Encode and decode values from a known schema:

    const type = avro.Type.forSchema({
      type: 'record',
      name: 'Pet',
      fields: [
        {
          name: 'kind',
          type: {type: 'enum', name: 'PetKind', symbols: ['CAT', 'DOG']}
        },
        {name: 'name', type: 'string'}
      ]
    });
    
    const buf = type.toBuffer({kind: 'CAT', name: 'Albert'}); // Encoded buffer.
    const val = type.fromBuffer(buf); // = {kind: 'CAT', name: 'Albert'}
  • Infer a value's schema and encode similar values:

    const type = avro.Type.forValue({
      city: 'Cambridge',
      zipCodes: ['02138', '02139'],
      visits: 2
    });
    
    // We can use `type` to encode any values with the same structure:
    const bufs = [
      type.toBuffer({city: 'Seattle', zipCodes: ['98101'], visits: 3}),
      type.toBuffer({city: 'NYC', zipCodes: [], visits: 0})
    ];
  • Get a readable stream of decoded values from an Avro container file (see the BlockDecoder API for an example compressed using Snappy):

    avro.createFileDecoder('./values.avro')
      .on('metadata', function (type) { /* `type` is the writer's type. */ })
      .on('data', function (val) { /* Do something with the decoded value. */ });

avsc's People

Contributors

alexander-alvarez, amilajack, andrew8er, arminhaghi, artyom-88, brianc, brianfitzgerald, collado-mike, dependabot[bot], diogodoreto, fmsy, grisu118, gurpreetatwal, jaforbes, joey-kendall, kmahoney, luma, mtth, nelsonsilva, ppershing, rdelvallej32, rektide, reuzel, sankethkatta, simonbuchan, tboothman, thomastoye, tysonandre, valadaptive, yehonatanz


avsc's Issues

Enhance RPC message API

  • timeout option for MessageEmitters. It's a common use case and would make the API more robust (letting us also automatically clean up failed requests). The timeout could be specified either as a map keyed by message or as a number (applying to all messages); see the sketch after this list.
  • getPendingCallbacks method for MessageEmitter and MessageListener. Or maybe just return the count?
  • useful return value for emit. Something similar to how write returns false for writable streams.
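A hypothetical sketch of what the proposed timeout option could look like when creating an emitter (createEmitter itself exists and is used elsewhere on this page; the timeout option is only the proposal above, not an existing API):

var avro = require('avsc');

// `protocolDefinition` and `transport` are assumed to be defined as in the RPC examples below.
var protocol = avro.parse(protocolDefinition);

// Hypothetical: a single number applying to all messages...
var ee = protocol.createEmitter(transport, {timeout: 5000});
// ...or a map keyed by message name.
// var ee = protocol.createEmitter(transport, {timeout: {test: 10000}});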

toBuffer throws TypeError: Object.keys called on non-object when the schema was generated from an IDL field declared as union {null, long} with a null default.

See attached schema.

{
  "type" : "record",
  "name" : "id_tran",
  "namespace" : "com.store",
  "fields" : [ {
    "name" : "hdr",
    "type" : {
      "type" : "record",
      "name" : "header",
      "fields" : [ {
        "name" : "chn",
        "type" : "int",
        "default" : 0
      }, {
        "name" : "str",
        "type" : "int",
        "default" : 0
      }, {
        "name" : "lne",
        "type" : "int",
        "default" : 0
      }, {
        "name" : "ctry",
        "type" : "int",
        "default" : 0
      }, {
        "name" : "pos",
        "type" : "int",
        "default" : 0
      }, {
        "name" : "seq",
        "type" : "long",
        "default" : 0
      }, {
        "name" : "ts",
        "type" : "int",
        "default" : 0
      }, {
        "name" : "ses_id",
        "type" : [ "null", "long" ],
        "default" : null
      } ]
    }
  } ]
}

optionally throw Exception when checking isValid

It's difficult to tell why an object may fail schema validation at runtime.

It would be great if avsc could have isValid() optionally throw an Exception that provides detail about which field failed validation, and perhaps why.

An example interface:

var avroSchema = avsc.parse(mySchema);
avroSchema.isValid(myObj, {throw:true});

or, à la node-avro-io:

var avroSchema = avsc.parse(mySchema);
avroSchema.isValidAndThrow(myObj);
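For reference, a related mechanism already surfaces validation failures without throwing: the errorHook option to isValid (the same option used in the "Issues with union in validation" report further down). A minimal sketch, with a made-up schema:

var avsc = require('avsc');

var avroSchema = avsc.parse({
  type: 'record',
  name: 'Example',
  fields: [{name: 'count', type: 'int'}]
});

// `errorHook` is invoked for each invalid field, with the path to that field.
avroSchema.isValid({count: 'not a number'}, {
  errorHook: function (path, value) {
    console.error('invalid value %j at path %s', value, path.join('.'));
  }
});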

Record type

I'm attempting to serialize a javascript object into avro that contains a nested record type. However, it seems to fail. Are nested record types not supported, or am I doing something wrong?

Here is my schema:

{
  "namespace": "com.myorg.email",
  "name": "Spam",
  "type": "record",
  "fields": [
    {"name": "body", "type": "string"},
    {"name": "raw", "type": "string"},
    {"name": "received", "type": {"type": "array", "items": "string"}},
    {"name": "additionalheaders", "type": {"type": "array", "items": {
        "name": "Header",
        "type": "record",
        "fields": [
            {"name": "key",   "type": "string"},
            {"name": "value", "type": "string"}
        ]
    }}}
  ]
}

and some test code:

var Avro = require('avsc');
var spam_schema = Avro.parse('./spam.avsc');
test_spam = {'body':'test', 'raw':'test', 
                     'received':'test', 
                     'additionalheaders':[{'testkey1':'testval1'}]}
avro_object = spam_schema.toBuffer(test_spam)

and the error:

node_modules/avsc/lib/types.js:2233
throw new Error(f('invalid %s: %j', type, val));
          ^
Error: invalid {"type":"array","items":"string"}: "test"

Questions about the future scope of this library

Hi,

I was unfortunately working on a fork of node-avro-io these days to make it work in the browser.

My primary goal was to support long numbers. The best solution I've found to do so is to use https://github.com/dcodeIO/Long.js. Would you (or did you) consider supporting long numbers, using int64-native for node and long.js (or another one) for the browser?

At a higher level, is the work on this library supported by LinkedIn? If so, I would expect this lib to become the de facto standard Avro JavaScript implementation.

Unrelated question: I'm curious about the use of Avro in the browser at LinkedIn: is it already in production? Are browser-generated events used as is later on in Samza?

Thanks

enum values not compatible with Typescript

When TypeScript generates JavaScript code for enums, it generates code that stores values as integers (the zero-based index of the string label), which corresponds with the Avro spec for serializing enums.

Avro IDL source:
enum Fruit { APPLE, BANANA, PEAR }
record Snack {
    Fruit fruit
}

Typescript source

 export enum Fruit { APPLE, BANANA, PEAR }
 export class Snack {
     private fruit: Fruit;
     public getFruit():Fruit { return this.fruit; }
     public setFruit( value: Fruit ) { this.fruit = value; }
 }

After this compiles to JavaScript, I can call setFruit(Fruit.APPLE) and it stores an int value for the fruit member (in JavaScript, any reference to ENUM_TYPE.VALUE is an integer). However, if you try to serialize it with avsc, it fails, because

EnumType.prototype.isValid() 

assumes that enum fields are stored as strings.

The following patch works for me

--- types.js    2015-10-25 09:29:38.000000000 -0700
***************
*** 819,821 ****
  EnumType.prototype.isValid = function (s) {
!   return typeof s == 'string' && this._indices[s] !== undefined;
  };
--- 819,825 ----
  EnumType.prototype.isValid = function (s) {
!   if (typeof(s) === 'number' && Number.isInteger(s) && s >= 0 && s < Object.keys(this._indices).length)
!     return true;
!   if (typeof(s) === 'string' && typeof(this._indices[s]) !== 'undefined')
!     return true;
!   return false;
  };

I am using Typescript compiler version 1.5.3.

StatefulEmitter handshake does not conform to spec

StatefulEmitter's handshake request is not attached to a call. See https://avro.apache.org/docs/1.7.7/spec.html#Call+Format
"When the empty string is used as a message name a server should ignore the parameters and return an empty response. A client may use this to ping a server or to perform a handshake without sending a protocol message."

On the listener side on the Java implementation, you can see that the listener always looks for the request metadata and message name on every request: https://github.com/apache/avro/blob/master/lang/java/ipc/src/main/java/org/apache/avro/ipc/Responder.java#L124.

META_READER.read() will then throw an exception because there's no metadata in avsc's handshake request. This exception is caught and returned to the client as an error response. But this error response includes the handshake response as a prefix so everything looks fine from StatefulEmitter's side. The impact is simply noisy warnings on the Java listener's logs from the exception.

Add clarity to UnwrappedUnionType errors?

The Avro schema I'm using contains a field whose type is an array of possible types. See below for a simplified version:

var type = avro.parse({
  "type": "record",
  "fields": [
    {
      "name": "myFieldName",
      "type": ["null", "long"]
    }
  ]
});

var buf = type.toBuffer({
  myFieldName: '' // incorrectly passing string
});

I was passing the wrong type (a string instead of null or a long) and avsc threw the following error:

Uncaught Error: invalid ["null","long"]: ""

Although useful as the empty string value I was passing was output, I still had trouble finding which field was incorrectly passed (as I am passing many other fields as well, some that also have ["null", "long"] types).

I was wondering if there was a way to add the field's name ("myFieldName") in throwInvalidError's error message? I saw that the UnwrappedUnionType instance had an empty _name (since this is a "natural" union type..?), so I didn't know how to proceed to extract the field's name from which this type instance was created.

Thanks!

UnionType Items in Array Type

Hi there! I've hit an issue where encoding an ArrayType that has a UnionType for its items fails.

{
  "namespace": "example.avro",
  "type": "record",
  "name": "Example",
  "fields": [
    {
        "name": "values",
        "type":
            {
                "type": "array",
                "items": ["int", "string"]
            }
    }
  ]
}

Here's the example running this

var Avro = require('avsc');
var schema = Avro.parse('./example.avsc');
var data = {values: ["test"]}
schema.toBuffer(data);
TypeError: Object.keys called on non-object
    at Function.keys (native)
    at UnionType._write (avsc/lib/types.js:826:19)
    at ArrayType._write (avsc/lib/types.js:1367:19)
    at RecordType.writeExample [as _write] (eval at <anonymous> (avsc/lib/types.js:1678:10), <anonymous>:3:6)
    at RecordType.Type.toBuffer (avsc/lib/types.js:264:8)

Interestingly enough, this succeeds when passing in an empty array for values.

I've tested the same schema with the official python avro implementation to confirm that my schema is valid. Please let me know if there is any more information that I should provide!

OSX does not work

I have a simple sample code taken from the documentation:
test.js

var avsc = require('avsc');
    var type = avsc.parse('./schemas/pet.avsc')
    var pet = {kind: 'CAT', name: 'my cat'}
    var buf = type.toBuffer(pet); // Serialized object.
    var obj = type.fromBuffer(buf); // {kind: 'CAT', name: 'Albert'}

    console.log(buf)
    console.dir(obj)

And a simple avsc file in schemas/pet.avsc

{
  "name": "Pet",
  "type": "record",
  "fields": [
    {"name": "kind", "type": {"name": "Kind", "type": "enum", "symbols": ["CAT", "DOG"]}},
    {"name": "name", "type": "string"}
  ]
}

on ubuntu, running node test.js I get:

cromestant@tweb-lab-web-01:~/test$ node test.js 
<Buffer 00 0c 6d 79 20 63 61 74>
{ kind: 'CAT', name: 'my cat' }

however on El Capitán:

Charles-Romestant-MacBook-Air:activity cromestant$ node test.js 
/Users/cromestant/Codigos/node/activity/test.js:4
var buf = type.toBuffer(pet); // Serialized object.
               ^

TypeError: type.toBuffer is not a function
    at Object. (/Users/cromestant/Codigos/node/activity/test.js:4:16)
    at Module._compile (module.js:425:26)
    at Object.Module._extensions..js (module.js:432:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:313:12)
    at Function.Module.runMain (module.js:457:10)
    at startup (node.js:138:18)
    at node.js:974:3

important version & stuff

Charles-Romestant-MacBook-Air:activity cromestant$ node --version
v5.1.1
Charles-Romestant-MacBook-Air:tigo-activity cromestant$ cat node_modules/avsc/package.json 
{
  "_args": [
    [
      "avsc@^1.0.2",
      "/Users/cromestant/Codigos/node/tigo-activity"
    ]
  ],
  "_from": "avsc@>=1.0.2 <2.0.0",
  "_id": "[email protected]",
  "_inCache": true,
  "_installable": true,
  "_location": "/avsc",
  "_nodeVersion": "4.1.0",
  "_npmUser": {
    "email": "[email protected]",
    "name": "mtth"
  },
  "_npmVersion": "2.14.3",
  "_phantomChildren": {},
  "_requested": {
    "name": "avsc",
    "raw": "avsc@^1.0.2",
    "rawSpec": "^1.0.2",
    "scope": null,
    "spec": ">=1.0.2 <2.0.0",
    "type": "range"
  },
  "_requiredBy": [
    "/"
  ],
  "_resolved": "https://registry.npmjs.org/avsc/-/avsc-1.0.2.tgz",
  "_shasum": "aa9369896e0d1ea5e63fd74b3857df21fc61f86a",
  "_shrinkwrap": null,
  "_spec": "avsc@^1.0.2",
  "_where": "/Users/cromestant/Codigos/node/tigo-activity",
  "author": {
    "name": "Matthieu Monsch"
  },
  "bugs": {
    "url": "https://github.com/mtth/avsc/issues"
  },
  "dependencies": {},
  "description": "A serialization API to make you smile",
  "devDependencies": {
    "coveralls": "^2.11.4",
    "istanbul": "^0.3.19",
    "mocha": "^2.3.2"
  },
  "directories": {},
  "dist": {
    "shasum": "aa9369896e0d1ea5e63fd74b3857df21fc61f86a",
    "tarball": "http://registry.npmjs.org/avsc/-/avsc-1.0.2.tgz"
  },
  "engines": {
    "node": ">=0.11"
  },
  "files": [
    "lib"
  ],
  "gitHead": "75c09c68b5dc5a0510a5ebede5ce80f00e3e6aec",
  "homepage": "https://github.com/mtth/avsc",
  "keywords": [
    "avro",
    "avsc",
    "decoding",
    "encoding",
    "json",
    "schema"
  ],
  "license": "MIT",
  "main": "./lib",
  "maintainers": [
    {
      "name": "mtth",
      "email": "[email protected]"
    }
  ],
  "name": "avsc",
  "optionalDependencies": {},
  "readme": "ERROR: No README data found!",
  "repository": {
    "type": "git",
    "url": "git://github.com/mtth/avsc.git"
  },
  "scripts": {
    "bench": "python benchmarks >timings.json",
    "cover": "istanbul cover _mocha -- --ui tdd",
    "test": "mocha --ui tdd"
  },
  "version": "1.0.2"
}

Allow unwrapped unions

The current "wrapping" of union values is required to correctly support all Avro schemas but is overkill in most cases and less intuitive than exposing values directly (cf. #39, #37, #20, #16). Adding this option would also make using Avro much simpler for existing APIs.

Infer schemas

Adding a function (or stream) to infer Avro types from values.

For example:

avro.infer([123, 3.5]); // <FloatType>
avro.infer(['hi', ['hey']]); // <UnionType ["string", {"type": "array", "items": "string"}]>

Exposing a stream interface would be helpful as well. That would open up possibilities like compressing arbitrarily long collections of records into an Avro file, without requiring a schema upfront (e.g. for logs); see the sketch after the signature below.

Possible signature:

infer(vals, [opts])

  • vals {Array} Array of values to infer the type for.
  • opts {Object} Options:
    • objectHook {Function} Hook called each time an unrecognized object is encountered. It can for example be used to give explicit names to generated records or infer schemas which include logical types.
    • thresholds {Object} Map of thresholds used to determine when to switch from records to maps, enums to strings.
    • noNullDefault {Boolean} By default missing record fields will be coerced to null default values. This option suppresses this behavior. Setting this option might lead to infeasible inference.
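A minimal sketch of the log-compression use case, under the assumption that the inference API ends up looking like the Type.forValue shown in the README examples above, combined with createFileEncoder:

const avro = require('avsc');

const records = [
  {level: 'INFO', message: 'started'},
  {level: 'WARN', message: 'retrying'}
];

// Infer a type from the first record, then write all records to a container file.
const type = avro.Type.forValue(records[0]);
const encoder = avro.createFileEncoder('./logs.avro', type);
records.forEach((record) => { encoder.write(record); });
encoder.end();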

Decode base64 string

I have binary Avro data from an Amazon Kinesis stream, however, when I save the data to a file using Node, and then create a fileDecoder object, it says that the magic bytes are invalid.

I am able to convert the data to a base64 string, which, when decoded from a generic online base64 decoder, is able to be run through the file decoder no problem.

What's the best way to parse a binary Avro data object (part of a JSON object) without knowing the fields ahead of time, and without having the data saved to a file?

Thanks!
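A minimal sketch of one approach, assuming the base64 string holds a complete Avro container (header included): decode it into a Buffer and feed it directly to a BlockDecoder instead of going through a file.

const avro = require('avsc');

// `base64Data` is a placeholder for the base64 string pulled off the Kinesis record.
const buf = Buffer.from(base64Data, 'base64');

const decoder = new avro.streams.BlockDecoder();
decoder
  .on('metadata', (type) => { /* The writer's type, recovered from the container header. */ })
  .on('data', (val) => { console.log(val); });
decoder.end(buf); // BlockDecoder is a duplex stream, so the buffer can be written directly.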

What's up with the benchmarks? :(

Here is code:


var avro = require('avsc');
var os = require('os');

console.log(" OS: " + os.type() + " " + os.release() + " (" + os.arch() + ")");
console.log("RAM: " + os.totalmem()/1048576 + " MB (total), " + os.freemem()/1048576 + " MB (free)");
console.log("CPU: " + os.cpus()[0].speed + " MHz " + os.cpus()[0].model);

for(var r = 1; r < 2; r++) {
  console.log("\nRun #" + r + ":");
  // var obj = {'abcdef' : 1, 'qqq' : 13, '19' : [1, 2, 3, 4]};

  var obj = {
    name: 'Pet',
    type: 'record',
    fields: [
      {name: 'kind', type: {name: 'Kind', type: 'enum', symbols: ['CAT', 'DOG']}},
      {name: 'name', type: 'string'}
    ]
  };

  var start = Date.now();
  for(var i = 0; i < 500000; i++) {
    (JSON.stringify(obj));
  }
  var stop = Date.now();
  console.log("\t      JSON: " + (stop - start) + "ms");

  start = Date.now();
  for(var i = 0; i < 500000; i++) {
    avro.parse(obj);
  }
  stop = Date.now();
  console.log("\t      Avro: " + (stop - start) + "ms");
}

Results:

Run #1:
JSON: 1216ms
Avro: 30162ms

I thought Avro was going to perform faster than JSON?

Error with logicalType

I tried running this code:

var avsc = require('avsc');
var utils = require('./node_modules/avsc/lib/utils.js');

var schema = {
  name: 'Transaction',
  type: 'record',
  fields: [
    {name: 'amount', type: 'int'},
    {name: 'time', type: 'long', logicalType: 'timestamp-millis'}
  ]
};

var LogicalType = avsc.types.LogicalType;

function DateType(attrs, opts) {
  LogicalType.call(this, attrs, opts, [LongType]); // Require underlying long.
}
util.inherits(DateType, LogicalType);

DateType.prototype._fromValue = function (val) { return new Date(val); };
DateType.prototype._toValue = function (date) { return +date; };

var type = avsc.parse(schema, {logicalTypes: {'timestamp-millis': DateType}});

var transaction = {
  amount: 32,
  time: new Date('Thu Nov 05 2015 11:38:05 GMT-0800 (PST)')
};

var buf = type.toBuffer(transaction);
console.log("Buffer", buf);
var date = type.fromBuffer(buf).time; // Date object.
console.log("Oject", date);

and get:

C:\Program Files\Emscripten\emscripten\1.34.1\tests\may\test>node testLogicalType.js
C:\Program Files\Emscripten\emscripten\1.34.1\tests\may\test\node_modules\avsc\lib\schemas.js:2166
    throw new Error(f('invalid %s: %j', type, val));
          ^
Error: invalid "long": "2015-11-05T19:38:05.000Z"
    at throwInvalidError (C:\Program Files\Emscripten\emscripten\1.34.1\tests\may\test\node_modules\avsc\lib\schemas.js:2166:9)
    at LongType._write (C:\Program Files\Emscripten\emscripten\1.34.1\tests\may\test\node_modules\avsc\lib\schemas.js:477:5)
    at RecordType.writeTransaction [as _write] (eval at <anonymous> (C:\Program Files\Emscripten\emscripten\1.34.1\tests\may\test\node_modules\avsc\lib\schemas.js:1611:10), <anonymous>:5:6)
    at RecordType.Type.toBuffer (C:\Program Files\Emscripten\emscripten\1.34.1\tests\may\test\node_modules\avsc\lib\schemas.js:247:8)
    at Object.<anonymous> (C:\Program Files\Emscripten\emscripten\1.34.1\tests\may\test\testLogicalType.js:31:16)
    at Module._compile (module.js:460:26)
    at Object.Module._extensions..js (module.js:478:10)
    at Module.load (module.js:355:32)
    at Function.Module._load (module.js:310:12)
    at Function.Module.runMain (module.js:501:10)

What am I doing wrong?

Combined schema and encoded message

Hi,

Thanks for creating this module :)

I would like to know if it's possible to combine the schema and the encoded data into one message to be sent over kafka.
I understand this is not ideal, but it's how we're using this while in a testing/sketch phase.

Thanks in advance for your help,
Cameron
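A minimal sketch of one way to do this, assuming it's acceptable to use the Avro container format (whose header embeds the writer's schema): run the records through a BlockEncoder and collect its output into a single buffer to publish.

const avro = require('avsc');

const type = avro.parse({
  type: 'record',
  name: 'Example',
  fields: [{name: 'id', type: 'string'}]
});

const encoder = new avro.streams.BlockEncoder(type);
const chunks = [];
encoder.on('data', (chunk) => { chunks.push(chunk); });
encoder.on('end', () => {
  const message = Buffer.concat(chunks); // Schema header plus encoded records, in one buffer.
  // Publish `message` to Kafka; consumers can recover the schema from the header.
});
encoder.write({id: 'abc'});
encoder.end();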

ambiguous unwrapped union

I get an error that says "ambiguous unwrapped union" when I try process one of my company's Avro log files.

Is there any way to somehow support this?

Thanks!

Questions about de-serialization of nested classes and custom record constructors

Thank you for the great avsc library and for your support. I've been stuck on two issues and would like to ask for your kind guidance.

  1. I haven't been able to get the type.fromBuffer() method to return an instance of my class - it always returns a generic class. I've even set the _constructor field of the Type object to my class constructor before calling fromBuffer(), and verified with the browser debugger that the function signature is the same as with the dynamically-generated constructor. I'm not sure if this is the reason for Resolver, since my reader and writer classes are the same, but I tried creating a resolver with type.createResolver(type) and passing that in to parse, but it created a different error.
  2. The second related issue is that my classes have other embedded classes, and pre-registering them results in thrown exceptions. For example, the record definition has a structure like this:
    { name: "Person",
      type: "record",
      fields: [ {
        name: "id",
        type: {
          name: "EmployeeRecord",
          type: "record",
          fields: [
            { name: "employeeId", type: long },
            { name: "startDate", type: long }
          ]
        } },
        { name: "name", type: "string" }
      ]
    }

I want to pre-register all classes before calling fromBuffer, because I know they will be needed for deserialization. When registering classes, with either avsc.parse() or avsc.types.Type.fromSchema(), there is an Error thrown in the Type constructor because of duplicates. If "Person" has EmployeeRecord and "Manager" has EmployeeRecord, you get this exception. One thing I haven't tried is editing the schema definition to use references but not full nested record definitions, but these schemas are automatically generated by apache-avro (for java), so I would have to change that schema-generator code or hand edit the schemas, neither of which seems like the right thing to do. I don't know why registering twice should be a problem - imho it should be idempotent. What I want to do is loop through all my classes at startup time and parse/register all of them, in preparation for receiving complex nested objects over the wire. If instead of throwing an Error in line 72 of types.js, it returns (before adding the type to the registry), I can at least pre-register all my classes. When it returns, I also set the _constructor to the correct class constructor.
I'm probably missing something here but I think it would be helpful if
(a) either duplicate registration is harmless, or there's a flag in opts that can be set to make it not throw an exception
(b) there's an API to specify the record constructor - a setter to complement getRecordConstructor().
(c) another variation on (a)&(b) would be to have a pre-register API function that takes a list of (schema,constructor) pairs and registers them, perhaps returning a registry dictionary mapping schema name to Type. Any duplicate registrations within this set caused by nested record definitions would be ignored.
Thanks again for your help and for avsc!

ArrayBuffer to Buffer not working

const avro = require('avsc');

const type = avro.parse({
  name: 'Pet',
  type: 'record',
  fields: [
    {name: 'kind', type: {name: 'Kind', type: 'enum', symbols: ['CAT', 'DOG']}},
    {name: 'name', type: 'string'},
  ]
});

const buf = type.toBuffer({
  kind: 'CAT',
  name: 'Bob'
});

const arrayBuffer = buf.buffer;
const newBuf = new Buffer(arrayBuffer);

const val = type.fromBuffer(newBuf);

console.log(val);

output

/Users/mota/Sandbox/avro-demo/node_modules/avsc/lib/types.js:1337
    throw new Error(f('invalid %s enum index: %s', this._name, index));
    ^

Error: invalid Kind enum index: -50
    at EnumType._read (/Users/mota/Sandbox/avro-demo/node_modules/avsc/lib/types.js:1337:11)
    at RecordType.readPet [as _read] (eval at <anonymous> (/Users/mota/Sandbox/avro-demo/node_modules/avsc/lib/types.js:1957:10), <anonymous>:5:8)
    at readValue (/Users/mota/Sandbox/avro-demo/node_modules/avsc/lib/types.js:2471:17)
    at RecordType.Type.fromBuffer (/Users/mota/Sandbox/avro-demo/node_modules/avsc/lib/types.js:305:13)
    at Object.<anonymous> (/Users/mota/Sandbox/avro-demo/index.js:20:18)
    at Module._compile (module.js:541:32)
    at Object.Module._extensions..js (module.js:550:10)
    at Module.load (module.js:456:32)
    at tryModuleLoad (module.js:415:12)
    at Function.Module._load (module.js:407:3)

Record object with `fixed` field throws error

Hello,

I recently ran into an error and was wondering if this was expected behavior.

var schema = {
  type: 'record',
  name: 'something',
  fields: [
    { name: 'payload', type: 'fixed', size: 10 }
  ]
};

avsc.parse(schema)

Fails with:

Error: undefined type name: fixed
    at createType (node_modules/avsc/lib/types.js:92:11)
    at new Field (node_modules/avsc/lib/types.js:2371:16)
    at RecordType.<anonymous> (node_modules/avsc/lib/types.js:1816:17)
    at Array.map (native)
    at new RecordType (node_modules/avsc/lib/types.js:1815:31)
    at node_modules/avsc/lib/types.js:135:14
    at Object.createType (node_modules/avsc/lib/types.js:136:7)
    at Object.parse (node_modules/avsc/lib/index.js:31:11)

node v4.42, avsc 4.1.0

adding a field with a default value breaks the protocol

Here is a rough test case:

var avro = require('avsc');
var PassThrough = require('stream').PassThrough;
var pt1 = PassThrough();
var pt2 = PassThrough();
var writer_transport = {readable: pt1, writable: pt2};
var reader_transport = {readable: pt2, writable: pt1};


var protocol_definition = {
  types: [{
    type: 'record',
    name: 'Request',
    fields: [{
      type: 'string',
      name: 'name'
    }, {
      type: 'string',
      name: 'id'
    }]
  }, {
    type: 'record',
    name: 'Response',
    fields: [{
      type: 'string',
      name: 'result'
    }]
  }],
  messages: {
    test: {
      response: 'Response',
      request: [{
        type: 'Request',
        name: 'params'
      }]
    }
  },
  namespace: 'MyNamespace',
  protocol: 'MyProto'
};

var writer_protocol = avro.parse(protocol_definition);

writer_protocol.on('test', function(req, ee, cb) {
  console.log('got a request');
  cb(null, {
    result: 'OK'
  });
});

writer_protocol.createListener(writer_transport);


//change the Response message to include a new new_optional_field with default value to "123"
protocol_definition.types[1].fields.push({
  type: 'string',
  name: 'new_optional_field',
  'default': '123'
});

var reader_protocol = avro.parse(protocol_definition);

var ee = reader_protocol.createEmitter(reader_transport);
reader_protocol.emit('test', {
  params: {
    name: 'jose',
    id: '123'
  }
}, ee, function(err, res) {
  console.dir(arguments); // 9!
  process.exit(0);
});

It throws this error:

Error: no alias for test in MyNamespace.test
    at StatefulEmitter.MessageEmitter._finalizeHandshake (/Users/jose/Projects/auth0/auth0-vagrant/src/auth0-anomaly-detector/node_modules/avsc/lib/protocols.js:319:11)
    at MessageDecoder.onHandshakeData (/Users/jose/Projects/auth0/auth0-vagrant/src/auth0-anomaly-detector/node_modules/avsc/lib/protocols.js:548:23)
    at emitOne (events.js:77:13)

It works as expected if I remove the namespace field from the protocol.

Add snappy codec

It would be extremely nice to have built into this library. Was there a conscious choice to leave out snappy and just have deflate?
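For reference, custom codecs can be plugged in through the codecs option when decoding; a minimal sketch assuming the snappy npm package (Avro appends a 4-byte CRC32 of the uncompressed data to each snappy block, which this sketch simply strips):

const avro = require('avsc');
const snappy = require('snappy'); // Assumed dependency, not bundled with avsc.

avro.createFileDecoder('./values.avro', {
  codecs: {
    snappy: function (buf, cb) {
      // Drop the trailing 4-byte checksum rather than verifying it.
      snappy.uncompress(buf.slice(0, buf.length - 4), cb);
    }
  }
}).on('data', function (val) { console.log(val); });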

performance and benchmarks

First of all, your benchmark looks very solid; you have done a great job.

I am curious about the JSON comparison in your benchmark, doing a simple test like this:

var avsc = require('avsc');

var type = avsc.parse({
  name: 'Request',
  type: 'record',
  fields: [
    {name: 'id', type: 'string' },
    {name: 'method', type: 'string'},
  ]
});

var start = new Date();

for (var i = 0; i <= 1000000; i++) {
  type.fromBuffer(type.toBuffer({ id: i.toString(), method: 'PING' }));
}

console.log('finished in', new Date() - start);

versus

var start = new Date();

for (var i = 0; i <= 1000000; i++) {
  JSON.parse(JSON.stringify({ id: i.toString(), method: 'PING' }));
}

console.log('finished in', new Date() - start);

I do not get the "twice as fast as JSON" highlighted in your README, but rather this:

/tmp/test-avro » node test-json.js
finished in 1264
/tmp/test-avro » node test-avsc.js
finished in 1942

Parsing (registry) multiple schemas

Hi,
I am trying to parse schemas that are split across multiple files (my understanding is that I need to register these during the parse). However, the documentation is a bit sparse and I am having difficulty using the registry; can you give an example of how to use it?
Thanks,
x10ba
Pseudo code (Node.js):

var first_type = avsc.parse(first.avsc); // how do I use the registry?
var second_type = avsc.parse(second.avsc);

// parse avro encoded message fromBuffer
console.log(type.fromBuffer());

Two schema files:

// first.avsc
{
  "namespace": "x10ba",
  "type": "record",
  "name": "first",
  "fields": [
    { "name": "stuff", "type": [ "long", "null" ] },
    { "name": "moreStuff", "type": [ "x10ba.second", "null" ] }
  ]
}

// second.avsc
{
  "namespace": "x10ba",
  "type": "record",
  "name": "second",
  "fields": [
    { "name": "someStuff", "type": [ "string", "null" ] }
  ]
}
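For reference, a minimal sketch of one way to share a registry between the two parses (the registry option also appears in the "Newlines" issue below). Parsing second.avsc first lets first.avsc resolve the "x10ba.second" reference by name:

var avsc = require('avsc');
var fs = require('fs');

var registry = {}; // Shared registry; parsed named types are added to it.
var second_type = avsc.parse(JSON.parse(fs.readFileSync('second.avsc', 'utf8')), {registry: registry});
var first_type = avsc.parse(JSON.parse(fs.readFileSync('first.avsc', 'utf8')), {registry: registry});

// `first_type` can now encode/decode values whose moreStuff field uses x10ba.second.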

BlockEncoder produces invalid avro format?

Hi. I'm trying to encode a collection of data into a buffer and send it off to S3. Here's my code:

var avsc = require('avsc'),
  avroBuffers = [],
  avroType,
  avroEncoder

avroType = avsc.parse(__dirname + '/schemas/logSchema.avsc')
avroEncoder = new avsc.streams.BlockEncoder(avroType)

// Source data is an array of JSON objects that conform to my schema
sourceData.forEach(function (data, index) {
  if(avroType.isValid(data)) {
    encodedBuffer = avroType.toBuffer(data)
    avroBuffers.push(encodedBuffer)
  }
})

dataBuffer = Buffer.concat(avroBuffers)

// Send to S3.. but for testing, just write to the file system
var fs = require('fs')
fs.writeFileSync('/Users/cmiller/Desktop/foo.avro', dataBuffer)

In this case, I'm just writing to my local file system. I could use createFileEncoder, but I need to actually have a buffer to send to S3 instead of a file on the file system (although I'm just writing the buffer here).

When I take the resulting avro file and try to decode it with the CLI tools, I get this:

Exception in thread "main" java.io.IOException: Not an Avro data file
    at org.apache.avro.file.DataFileReader.openReader(DataFileReader.java:63)
    at org.apache.avro.tool.DataFileReadTool.run(DataFileReadTool.java:71)
    at org.apache.avro.tool.Main.run(Main.java:84)
    at org.apache.avro.tool.Main.main(Main.java:73)

The data in the file looks reasonable. It's binary data and has all of the data I added.

Am I doing something wrong?

Newlines

Hi,
I am parsing an avroEncoded message and I am getting a bit of awkwardness in my result.
//result of decoded object
// issue being, "firstAVSC" is seemingly inserted in front of the object
// second issue, \n and whitespaces are seemingly inserted in front/ after each k:v
// even if I parse/trim the firstAVSC and secondAVSC of newlines and whitepaces in-line, I get the same result ("\n" and what I am calling the "firstAVSC" header Object)

// Getting this result
{"decodedMessage": "firstAVSC {\n stuff:1233455,\n someStuff:'example'"}}

// expecting this result
{"decodedMessage":{stuff:1233455,someStuff:'example'}}

Thanks,
x10ba

//using this script example
var avsc = require('avsc');
var secondAVSC = avsc.parse({
  "namespace": "x10ba",
  "type": "record",
  "name": "second",
  "fields": [
    { "name": "someStuff", "type": [ "string", "null" ] }
  ]
}, {registry: registry});

var firstAVSC = avsc.parse({
  "namespace": "x10ba",
  "type": "record",
  "name": "first",
  "fields": [
    { "name": "stuff", "type": [ "long", "null" ] },
    { "name": "moreStuff", "type": [ "x10ba.second", "null" ] }
  ]
}, {registry: registry});

var message = avroEncodedString // this is what I am parsing
var object = {
"decodedMessage": firstAVSC.fromBuffer(message)
}

console.log(object)

Error type.fromBuffer for string of json data

Hello!
I have an issue with avsc for you.

I have an avro schema for data:

{
    "namespace": "mention.avro",
    "type": "record",
    "name": "Mention",
    "fields": [
        {"name": "type",  "type": "string"},
        {"name": "users_mention",  "type": "string"},
        {"name": "from_user",  "type": "string"},
        {"name": "rq",  "type": "string"},
        {"name": "object_id",  "type": "string"},
        {"name": "object_type",  "type": "string"},
        {"name": "action", "type": "string"},
        {"name": "comment_id", "type": "string"}
    ]   
}

If I send a data:

{
  type: 'mention',
  users_mention: '[{"user_id": "2", "user_name": "test2"}]',
  from_user: '1',
  rq: 'b7581b8c-846d-11e5-86bf-56847afe9799',
  object_id: '65c6074e-b83c-4379-837b-315c190acd35',
  object_type: 'object_type',
  action: 'update',
  comment_id: '4c3448cd-c74b-43fa-85d2-b6693bfb0c21' 
}

And decode it by:

try {
        var buffer = new Buffer(message.value);
        var type = avsc.parse(schema);
        var obj = type.fromBuffer(buffer);
        return obj;
} catch (e) {
        console.log('can not decode avro data', e); 
        return false;
}

It is working properly.

But, If I sent data with new data of "users_mention" field:

{
  type: 'mention',
  users_mention: '[{"user_id": "2", "user_name": "test2"}, {"user_id": "3", "user_name": "test3"}]',
  from_user: '1',
  rq: 'b7581b8c-846d-11e5-86bf-56847afe9799',
  object_id: '65c6074e-b83c-4379-837b-315c190acd35',
  object_type: 'object_type',
  action: 'update',
  comment_id: '4c3448cd-c74b-43fa-85d2-b6693bfb0c21' 
}

It does not work and throws a RangeError: out of range index at:

var obj = type.fromBuffer(buffer);

I have defined the users_mention field as a string type. Please help me solve it as soon as possible.

Thanks so much and best regards,
Cuong Ba

multiplex protocol

Is there a way to define a multiplexed protocol? For instance, having the server reply to a newer request before an older one, i.e. unordered.

This is done usually with something like a "request id".

The AVRO specs mention this:

The mechanism of correspondance is transport-specific. For example, in HTTP it is implicit, since HTTP directly supports requests and responses. But a transport that multiplexes many client threads over a single socket would need to tag messages with unique identifiers.

https://avro.apache.org/docs/1.7.7/spec.html

But I am not sure if is supported in this library.

buffer corruption during serialization

During serialization of an object, the buffer write position is reset and new fields overwrite previous ones. The returned buffer is corrupt.

A simplified version of the stack trace is

Type.toBuffer       <- (3)
RecordType._createWriter  <- (2)
RecordType 
(anonymous function)
Type.fromSchema 
(anonymous function)
...
Type.fromSchema 
UnionType._write 
writeEvent
Type.toBuffer 
(anonymous function) <-(1)

in (1), my app calls toBuffer() to serialize an object
in (2), toBuffer is called again to serialize a default value
in (3) the global buffer TAP is re-used for the default value and pos gets set to zero.

There's a comment inside createWriter that mentions the possibility of a buffer pool. Using a buffer pool to allocate a new buffer for saving the default value, instead of re-using the global one, would have avoided this issue.

The original call toBuffer() from my app returns without throwing any exceptions, but the data in the buffer has been corrupted and the total buffer length is shorter than expected. In my avro schema, I have several nested unions (unions aren't nested immediately, there is a record between). If that is relevant, it might explain why people haven't run into this before.

avsc.parse() lost some configuration of type

I use the avro-1.7.7 Java API to create test.avsc like:

{
  "type" : "record",
  "name" : "SimplePOJO",
  "namespace" : "avro.test",
  "fields" : [
    {
      "name" : "timestamp",
      "type" : {
        "type" : "long",
        "CustomEncoding" : "DateAsLongEncoding"
      }
    }
  ]
}

When I use avsc in Node.js:

var type = avsc.parse('./test.avsc');

it loses the "CustomEncoding" information. Can I get the original (full) schema, as generated by Java, back from this project?

union type error

given this AVSC:

{
  "type" : "record",
  "name" : "event_root",
  "namespace" : "com.millicom.digital.events.schema",
  "fields" : [ {
    "name" : "apiproxy",
    "type" : {
      "type" : "record",
      "name" : "apiproxy_type",
      "fields" : [ {
        "name" : "name",
        "type" : "string"
      }, {
        "name" : "revision",
        "type" : "string"
      } ]
    }
  }, {
    "name" : "client",
    "type" : {
      "type" : "record",
      "name" : "client_type",
      "fields" : [ {
        "name" : "cn",
        "type" : [ "null", "string" ]
      }, {
        "name" : "country",
        "type" : [ "null", "string" ]
      } ]
    }
  } ]
}

and this JSON:

{
  "schema": "tigo_mobile_gt_upselling_v1_Subscriber_Balances",
  "apiproxy": {
    "name": "tigo_mobile_gt_upselling_v1",
    "revision": "4"
  },
  "application": {
    "basepath": null
  },
  "client": {
    "cn": null,
    "country": null
  }
}

and using permutations between

  • tag country in the JSON
  • data type for country in the AVSC
| AVSC Type | JSON Value | Result | Status | Comment |
| --- | --- | --- | --- | --- |
| "type" : [ "null", "string" ] | null | parsing ok | expected | |
| "type" : [ "null", "string" ] | "US" | parsing error (see below) | unexpected | the union type is supposed to allow either null or a string value |
| "type" : "string" | "US" | parsing ok | expected | |
| "type" : "string" | null | parsing error | expected | |

** Parsing Error **

/home/asarubbi/Development/repo/avrotest/node_modules/avsc/lib/schemas.js:809
    keys = Object.keys(val);
                  ^
TypeError: Object.keys called on non-object
    at Function.keys (native)
    at UnionType._write (/home/asarubbi/Development/repo/avrotest/node_modules/avsc/lib/schemas.js:809:19)
    at RecordType.writeclient_type [as _write] (eval at <anonymous> (/home/asarubbi/Development/repo/avrotest/node_modules/avsc/lib/schemas.js:1620:10), <anonymous>:4:6)
    at RecordType.writeevent_root [as _write] (eval at <anonymous> (/home/asarubbi/Development/repo/avrotest/node_modules/avsc/lib/schemas.js:1620:10), <anonymous>:4:6)
    at RecordType.Type.toBuffer (/home/asarubbi/Development/repo/avrotest/node_modules/avsc/lib/schemas.js:247:8)
    at Object.<anonymous> (/home/asarubbi/Development/repo/avrotest/index.js:9:18)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)

my code:

var avsc = require('avsc');

var parser = avsc.parse("./schemas/event_root.avsc");

var evt = require("./input.json");

var buf = parser.toBuffer(evt);

console.log(buf);

var obj = parser.fromBuffer(buf);

console.log(obj);

Issues with union in validation

I have the following simple example:

var test = require('tape');
var avsc = require('avsc');

test('should be valid', function(assert) {
  assert.plan(1);
  var schema = avsc.parse({
    type: "record",
    name: "nullable",
    fields: [
      { name: "foo", type: [ "null", "string" ] }
    ]
  });
  var valid = schema.isValid({ foo: "bar" }, { errorHook: assert.comment });
  assert.ok(valid);
});

And I was expecting {"foo":"bar"} to be valid based on the schema defined above, but it seems it is not: the test fails, with valid === false as the result of the isValid call.

I am using node v4.2.3 and avsc ^3.3.11; to get the above running, npm install tape.

Accessing the Avro.Schema from a reader

Hi,
I feel like I'm struggling to understand a core concept in using this package and was hoping that I could be given a nudge in the right direction about what I may be missing. Apologies if this is not the correct place for this question.

I have a "Writer" which ultimately is responsible for putting an avro encoded message onto a Kafka message bus, here is a contrived example:

/**********
 * WRITER *
 **********/
let schema = {
    type: "record",
    name: "User",
    fields: [
        {
            name: "firstname",
            type: "string"
        },
        {
            name: "surname",
            type: "string"
        }
    ],
}

let message = {
    'firstname': 'John',
    'surname': 'Smith'
}

let type = avro.parse(schema);

let sentBuffer = type.toBuffer(message);

/**********
 * KAFKA *
 **********/
// Send to Kafka
console.log("=====SENT BUFFER=====");
console.log(sentBuffer);

And on the other side of the Kafka cluster I have a "Reader" which needs to decode the message:

/**********
 * READER *
 **********/
// Consume Kafka message
let receivedBuffer = new Buffer(sentBuffer);

console.log("=====RECEIVED BUFFER=====");
console.log(receivedBuffer.toString());

The above code outputs:

=====SENT BUFFER=====
<Buffer 08 4a 6f 68 6e 0a 53 6d 69 74 68>
=====RECEIVED BUFFER=====
John
Smith

As you can see, the message buffer that is sent doesn't (seemingly) contain any Avro header/meta information (such as a schema id/string etc.). I would expect these to be included, from my limited understanding of the Avro docs.

What am I missing? How do I decode a message using the schema that's meant to be inside my message (rather than depending on it again)?

Any help is greatly appreciated.

Kind regards,

Matt
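A minimal sketch of the reader side, under the assumption that the schema is shared out of band: type.toBuffer produces only the raw Avro encoding (no header or schema), so the reader must already know the writer's schema, or the container format (BlockEncoder) must be used instead.

/**********
 * READER *
 **********/
// `schema` and `receivedBuffer` are the values from the writer snippet above.
let readerType = avro.parse(schema);
let decoded = readerType.fromBuffer(receivedBuffer);

console.log(decoded); // { firstname: 'John', surname: 'Smith' }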

Interoperability with confluent kafka + schema registry + container header

Hello, I'm working with a Confluent Kafka deployment, using avsc as the publisher in many cases. However, I am running into some problems when trying to use schema validation and the schema registry.
I am wondering if I have not found the correct way to do this in avsc or if it is not yet compatible, but in essence, if you look at this you will see that, along with the serialized data, they include the schema in the payload, so that the client can then query the schema registry for the latest version of the schema to deserialize.

Also, looking at this, it mentions that:

When Avro data is read, the schema used when writing it is always present. This permits each datum to be written with no per-value overheads, making serialization both fast and small. This also facilitates use with dynamic, scripting languages, since data, together with its schema, is fully self-describing.

If I run using the Java samples included in the distribution, I get no problems; however, when serializing with avsc I only get the serialized payload and not the schema (thus getting a "magic byte missing" error).

Is this something avsc is compatible with? Or is there a way to have avsc include the schema in the header as described here and here?

in advance, thank you for your reply
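For what it's worth, the "magic byte" framing comes from Confluent's wire format rather than from Avro itself: a 0x00 magic byte, a 4-byte big-endian schema registry ID, then the raw Avro payload. A minimal sketch of adding that framing around toBuffer output, assuming the schema has already been registered and its ID is known:

const avro = require('avsc');

// Assumed inputs: `type` parsed from the registered schema, `schemaId` returned by the registry.
function encodeConfluent(type, schemaId, record) {
  const payload = type.toBuffer(record);
  const message = Buffer.alloc(5 + payload.length);
  message.writeUInt8(0, 0);          // Magic byte.
  message.writeInt32BE(schemaId, 1); // Schema registry ID, big-endian.
  payload.copy(message, 5);
  return message;
}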

Node 0.11 requirement

Please explain why specifically Node 0.11 is required. I'm curious if Node v4.2 will work alternatively?

add support for logicalType

It would already be pretty useful to preserve logicalType as well as extra attributes when parsing the schema and when writing it back in a container header.

Test for array failing

Hi
Having embedded this library in a project, I'm getting errors calling the simple cat/dog example:

/Users/****/node-red/node_modules/avsc/lib/types.js:1478
    throw new Error(f('non-array %s fields', this._name));
    ^
24 Feb 11:24:45 - [error] [function:f0a19131.642a9] Error: non-array Pet fields

It seems that instanceof is not a great way to test for an array, e.g. http://stackoverflow.com/questions/22289727/difference-between-using-array-isarray-and-instanceof-array
Would you consider moving to Array.isArray instead?
E.g. in types.js, make line 966:

if (!Array.isArray(attrs.symbols) || !attrs.symbols.length) {
  throw new Error(f('invalid %j enum symbols: %j', attrs.name, attrs));
}

and line 1473:

if (!Array.isArray(attrs.fields)) {
  throw new Error(f('non-array %s fields', this._name));
}

StatelessEmitter issue when serverHash != clientHash

Hey there. First off thanks for writing this library, it is immensely useful. In using it however I'm coming across an issue that seems difficult to address with the current implementation. Pardon me if this is a bit wandering-- I'm still working to fully grok the avro handshake process, and want to make sure any errors in my reasoning are clear to you.

Using the StatelessEmitter you can emit an RPC request, which will contain the protocol hash and the message. When the server (a general Avro server, not necessarily using avsc) receives the message it tries to retrieve a protocol from its cache using the hash. If it does not find a protocol in its cache, then it sends back a handshake response with match == 'NONE' and its own protocol and hash. Then, the client receives the server's protocol/hash and must use that to make a new request. So, in this scenario there are two HTTP request/response cycles.

The problem is that StatelessEmitter is only set up for one request/response cycle. It has a writable that it uses to send out its initial request, which it closes with a .end() (https://github.com/mtth/avsc/blob/master/lib/protocols.js#L460). But, if there is a match == 'NONE' (https://github.com/mtth/avsc/blob/master/lib/protocols.js#L442) then the same writable stream is used again after .end() has been called, resulting in a write-after-end error.

It seems to me that the StatelessEmitter must somehow request a new writable object to send out a new request containing its full protocol, and not just a protocol hash.

If the hashes match then this is never an issue, because there is only a single request needed to get a proper response.

You can see in the python implementation what I mean as well-- if the requestor received a full response then it reads it, otherwise there was not an acceptable handshake match and a new request must be made: https://github.com/apache/avro/blob/master/lang/py/src/avro/ipc.py#L235

Handling unions from JSON

Hi,

Thanks for this great library!

Quick question: is there a simple way to handle "unions" without having to wrap elements, maybe using Type Hooks or LogicalTypes or something else?

For example:

My schema:

{"name":"foo", "type": ["null", "string"]}

I have to wrap the element with its type:

{"foo": {"string": "my foo value"}}

What i'd like is to be able to simply encode my records without having to wrap them, like this:

{"foo": "my foo value"}

Thanks !

Add RPC support.

  • Protocol request and response types.
  • Stateful transports (e.g. WebSockets, node Duplex streams).
  • Stateless transports.
  • IDL (probably only in node).

Question/Issue

I have the following schema

{
  "type" : "record",
  "name" : "foo",
  "namespace" : "example",
  "fields" : [ {
    "name" : "hdr",
    "type" : {
      "type" : "record",
      "name" : "header",
      "fields" : [ {
        "name" : "chn",
        "type" : "int"
      }, {
        "name" : "str",
        "type" : "int"
      }, {
        "name" : "lne",
        "type" : "int"
      }, {
        "name" : "ctry",
        "type" : "int",
        "default" : 0
      }, {
        "name" : "pos",
        "type" : "int",
        "default" : 0
      }, {
        "name" : "seq",
        "type" : "int",
        "default" : 0
      }, {
        "name" : "ts",
        "type" : "int",
        "default" : 0
      } ]
    }
  } ]
}

I write a test in mocha

        // read entire avro file as JSON and parse using name of file as registry name
        var full_filename = path.join(root, file);
        var data = fs.readFileSync(full_filename, "utf8");
        var schema_name = file.substring(0, file.indexOf('.'));
        var schema = avro.parse(JSON.parse(data));
        var bits = schema.toBuffer({
            hdr: {
              ctry: 1,
              chn: 523,
              str: 321,
              lne: 1,
              pos: 1,
              seq: 1,
              ts: 1
            }
          });
          console.log(schema.fromBuffer(bits), 'DOC');

Expected results are a doc that equals the doc put in.

Actual results:

...
     chn: -8892408,
     str: -5746680,
     lne: 1,
     ctry: 1,
     pos: 1,
     seq: 1,
     ts: 1
...

Any time an int >= 64 appears, it looks like some weird encoding is going on. The issue is probably between me and the screen at this point. But I checked the docs on long (I'm not using long, so no need to do any custom long-library parsing).

schema not validating multi-type field

Schemas with fields which support more than 1 type don't seem to be passing validation

Coded as a mocha test below: avsc isn't finding a string value valid for the {name: "optional", type: ["null","string"]} field.

var avsc = require('avsc');

describe('avsc', () => {
    before(() => {
        this.logMsg = {
            name:"Susan McDonald",
            value: 7,
            optional: "French Fries"
        };
    });

    it('works with single-type fields', () => {
        var schema = avsc.parse({
            namespace: "my.test",
            type: "record",
            name: "OptionalRequired",
            fields: [
                {name: "name", type: "string"},
                {name: "value", type: "int"},
                {name: "optional", type: "string"}
            ]
        });
        // works ok
        chai.assert.isTrue(schema.isValid(this.logMsg));
    });

    it('should work with multi-type fields', () => {
        var schema = avsc.parse({
            namespace: "my.test",
            type: "record",
            name: "OptionalOptional",
            fields: [
                {name: "name", type: "string"},
                {name: "value", type: "int"},
                {name: "optional", type: ["null","string"]}
            ]
        });
        // fails
        chai.assert.isTrue(schema.isValid(this.logMsg));
    });
});
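For reference, with the wrapped-union representation discussed in the "Handling unions from JSON" issue above, the union value has to be tagged with its branch before isValid will accept it; a minimal sketch under that assumption, reusing the schema from the failing test:

// Same record, but with the union branch wrapped explicitly:
var wrappedMsg = {
  name: "Susan McDonald",
  value: 7,
  optional: {string: "French Fries"}
};
// chai.assert.isTrue(schema.isValid(wrappedMsg)); // Passes when unions are wrapped.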
