liquidcarrot / carrot

🥕 Evolutionary Neural Networks in JavaScript

Home Page: https://liquidcarrot.io/carrot/

License: MIT License

JavaScript 100.00%
browser easy-to-use javascript lstm machine-learning neat neural-networks neuro-evolution nodejs recurrent-neural-networks

carrot's People

Contributors

akashsamlal, allcontributors[bot], christianechevarria, delvinroque, dependabot[bot], guzuligo, iviiemtz, luiscarbonell, nicoszerman, raimannma


carrot's Issues

Moved: Shared weights

This suggestion was moved from Neataptic.
wagenaartje/neataptic#54

To improve the results of neural networks on larger datasets (e.g., MNIST), we need to implement shared weights. This will also allow the creation of convolutional networks.
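
For illustration, a minimal sketch of the idea in plain JavaScript; every name here is hypothetical, and none of this is current carrot API:

// Hypothetical sketch: several connections referencing one weight object,
// so a single update affects every connection in the group. This is the
// mechanism convolutional kernels rely on.
function SharedWeight(value) {
  this.value = value;
}

function SharedConnection(from, to, sharedWeight) {
  this.from = from;
  this.to = to;
  this.shared = sharedWeight; // many connections can point at the same object
}

Object.defineProperty(SharedConnection.prototype, 'weight', {
  get: function () { return this.shared.value; },
  set: function (v) { this.shared.value = v; } // updating one updates all
});

// A three-connection "kernel" sharing a single weight:
var w = new SharedWeight(0.5);
var conns = [
  new SharedConnection(0, 3, w),
  new SharedConnection(1, 4, w),
  new SharedConnection(2, 5, w)
];
conns[0].weight = 0.7;        // a gradient step on one connection...
console.log(conns[2].weight); // ...is visible on all of them: 0.7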

Moved: Consistent return with train and evolve methods

This issue has been moved from Neataptic.
wagenaartje/neataptic#78

If I run network.evolve, the function returns a promise with the result, but if I run network.train I get the result directly.

Shouldn't they be consistent? Shouldn't both of them return a promise?

Owner's reply:

That is because network.evolve is an async function and network.train is a sync function. There is not a lot I can do about it, except create two separate functions for network.evolve, e.g.:

network.evolveSync();
network.evolve()

But network.evolveSync wouldn't be able to use multithreading capabilities then. However, it seems a good idea to add a network.trainAsync() function.

Personally, I think every long-running task should be async by default and there should be a "sync" method for each one.
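
A minimal sketch of the suggested network.trainAsync(), wrapping the existing synchronous train() in a promise so both call sites look alike; this is hypothetical and does not add real multithreading, it only defers execution:

// Hypothetical sketch: promise wrapper around the synchronous train().
Network.prototype.trainAsync = function (set, options) {
  var self = this;
  return new Promise(function (resolve, reject) {
    setImmediate(function () { // defer so the current tick isn't blocked immediately
      try {
        resolve(self.train(set, options)); // resolves with train()'s results object
      } catch (err) {
        reject(err);
      }
    });
  });
};

// Usage: both long-running tasks now return promises.
// await network.trainAsync(trainingSet, { error: 0.05 });
// await network.evolve(trainingSet, { error: 0.05 });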

maxNodes/maxConns doesn't seem to always work

This is an old Neataptic bug that is still happening: "maxNodes" and "maxConns" don't always seem to be respected. After a while, node and connection counts grow past the configured maximums.

My settings:

    Methods.mutation.MOD_ACTIVATION.mutateOutput = false;
    Methods.mutation.MOD_ACTIVATION.allowed = [
        Methods.activation.LOGISTIC,
        Methods.activation.TANH,
        Methods.activation.STEP,
        Methods.activation.SOFTSIGN,
        Methods.activation.SINUSOID,
        Methods.activation.GAUSSIAN,
        Methods.activation.BIPOLAR,
        Methods.activation.BIPOLAR_SIGMOID,
        Methods.activation.HARD_TANH,
        Methods.activation.INVERSE,
        Methods.activation.SELU,
        Methods.activation.RELU,
        Methods.activation.BENT_IDENTITY,
        Methods.activation.IDENTITY,
    ];

    neat = new Neat(4, 1, null,
        {
            mutation: Methods.mutation.ALL,
            popsize: 100,
            mutationRate: 0.2,
            elitism: 10,
            equal: true,
            network: new Architect.Random(4, 5, 1),
            provenance: 2,
            maxNodes: 7, 
            maxConns: 10,
            maxGates: 5,
        }
    );
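
One plausible way to enforce these caps would be to filter growth mutations out before one is selected, rather than trusting each mutation to check; a sketch (the predicates are illustrative — only the mutation names come from the library):

// Illustrative sketch: drop growth mutations once a genome is at its limit.
function selectMutationMethod(genome, allowedMethods, options) {
  var possible = allowedMethods.filter(function (method) {
    if (method.name === 'ADD_NODE' && genome.nodes.length >= options.maxNodes) return false;
    if (method.name === 'ADD_CONN' && genome.connections.length >= options.maxConns) return false;
    if (method.name === 'ADD_GATE' && genome.gates.length >= options.maxGates) return false;
    return true;
  });
  if (possible.length === 0) return null; // nothing applicable; skip this mutation
  return possible[Math.floor(Math.random() * possible.length)];
}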

Basic Neuron

Build support for building NNs, NN layers, and NN groups from simple neurons

Basic Layers

Build support for building NNs from simple layers

[Bug] npm run build is breaking

npm run build is breaking

0 info it worked if it ends with ok
1 verbose cli [ '/usr/bin/node', '/usr/bin/npm', 'run', 'build' ]
2 info using [email protected]
3 info using [email protected]
4 verbose run-script [ 'prebuild', 'build', 'postbuild' ]
5 info lifecycle @liquid-carrot/[email protected]~prebuild: @liquid-carrot/[email protected]
6 info lifecycle @liquid-carrot/[email protected]~build: @liquid-carrot/[email protected]
7 verbose lifecycle @liquid-carrot/[email protected]~build: unsafe-perm in lifecycle true
8 verbose lifecycle @liquid-carrot/[email protected]~build: PATH: /usr/lib/node_modules/npm/node_modules/npm-lifecycle/node-gyp-bin:/home/carrot/workspace/lc-app/carrot/node_modules/.bin:/home/carrot/anaconda3/bin:/home/carrot/anaconda3/bin:/home/carrot/anaconda3/bin:/home/carrot/bin:/home/carrot/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
9 verbose lifecycle @liquid-carrot/[email protected]~build: CWD: /home/carrot/workspace/lc-app/carrot
10 silly lifecycle @liquid-carrot/[email protected]~build: Args: [ '-c',
10 silly lifecycle   'jsdoc -r -c jsdoc.json && git add . && git stash && git checkout gh-pages && git pull && git merge --squash --strategy-option=theirs stash && git stash drop && git commit -m \'Auto-build\' && git push && git checkout master' ]
11 silly lifecycle @liquid-carrot/[email protected]~build: Returned: code: 1  signal: null
12 info lifecycle @liquid-carrot/[email protected]~build: Failed to exec build script
13 verbose stack Error: @liquid-carrot/[email protected] build: `jsdoc -r -c jsdoc.json && git add . && git stash && git checkout gh-pages && git pull && git merge --squash --strategy-option=theirs stash && git stash drop && git commit -m 'Auto-build' && git push && git checkout master`
13 verbose stack Exit status 1
13 verbose stack     at EventEmitter.<anonymous> (/usr/lib/node_modules/npm/node_modules/npm-lifecycle/index.js:301:16)
13 verbose stack     at emitTwo (events.js:126:13)
13 verbose stack     at EventEmitter.emit (events.js:214:7)
13 verbose stack     at ChildProcess.<anonymous> (/usr/lib/node_modules/npm/node_modules/npm-lifecycle/lib/spawn.js:55:14)
13 verbose stack     at emitTwo (events.js:126:13)
13 verbose stack     at ChildProcess.emit (events.js:214:7)
13 verbose stack     at maybeClose (internal/child_process.js:915:16)
13 verbose stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:209:5)
14 verbose pkgid @liquid-carrot/[email protected]
15 verbose cwd /home/carrot/workspace/lc-app/carrot
16 verbose Linux 4.4.0-143-generic
17 verbose argv "/usr/bin/node" "/usr/bin/npm" "run" "build"
18 verbose node v8.15.1
19 verbose npm  v6.4.1
20 error code ELIFECYCLE
21 error errno 1
22 error @liquid-carrot/[email protected] build: `jsdoc -r -c jsdoc.json && git add . && git stash && git checkout gh-pages && git pull && git merge --squash --strategy-option=theirs stash && git stash drop && git commit -m 'Auto-build' && git push && git checkout master`
22 error Exit status 1
23 error Failed at the @liquid-carrot/[email protected] build script.
23 error This is probably not a problem with npm. There is likely additional logging output above.
24 verbose exit [ 1, true ]

Moved: calling the included NEAT implementation NEAT is misleading

This suggestion has been moved from Neataptic.
wagenaartje/neataptic#94

Speciation via some distance measurement is a fundamental aspect of the NEAT algorithm. I suggest clarifying in the docs that the included implementation diverges from the original specification, and perhaps renaming it to something like NEATSS (NEAT Sans Speciation).

Owner's reply:

I completely understand. The way it works is quite different from the original NEAT algorithm. I actually call it the 'Instinct' algorithm (more about that can be read there). I'm leaving this open until I have time to modify the docs.

Ideal Layer Interface

Ideal Layer Interface

Properties

  • neurons: All layer neurons

Instance Functions

  • Layer.prototype.add_neurons(): Adds a specified number of neurons to the layer
  • Layer.prototype.get_neurons(): Returns neurons matching a specified name
  • Layer.prototype.get_best(): Returns the neuron in the layer with the highest axon state value
  • Layer.prototype.connect(): Connects layer neurons to another neuron, layer, or group
  • Layer.prototype.activate(): Activates layer neurons, generating an axon state value for each
  • Layer.prototype.forward(): Forward-propagates the results of Layer.prototype.activate()
  • Layer.prototype.backward(): Backward-propagates the results of Layer.prototype.activate()

new Layer(props[, options])

  • new Layer(): Creates a new layer
  • new Layer(32): Creates a layer with 32 neurons inside of it
  • new Layer([n0, n1]): Creates a layer with n0 and n1 as its neurons
  • new Layer(layer): Creates a new layer with the same number of neurons in it as layer

.add_neurons(props[, callback])

  • layer.add_neurons(n): Adds n neurons to the layer
  • layer.add_neurons([n0, n1]): Adds n0 and n1 to the layer's neurons

.get_neurons(name[, callback])

  • layer.get_neurons(name): Returns layer neurons matching name

.get_best([callback])

  • layer.get_best(): Returns the neuron with the highest axon value in the layer

.connect(object[, options][, callback])

  • layer.connect(neuron): Connects every neuron in layer to neuron
  • layer.connect(other_layer): Connects every neuron in layer to every neuron in other_layer
  • layer.connect(group): Connects every neuron in layer to every input neuron in group; Layers are flat by design, groups might not be.

.activate([inputs][, options][, callback])

  • layer.activate(): Activates every neuron in the layer; input layers require inputs
  • layer.activate([0, 1, 0, 1]): Activates every neuron in the layer with the given values; inputs.length must equal layer.size

.forward([callback])

  • .forward(): Propagates the last result of the previous layer's .activate() to all outgoing layer connections

.backward([callback])

  • .backward(): Propagates the last results of the following layer's .learn() to all incoming layer connections
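
To make the proposal concrete, here is how the interface above might be used end to end; this is the proposed API, not the current carrot one:

// Proposed-API sketch (hypothetical): a 4-6-1 stack of layers.
var input = new Layer(4);
var hidden = new Layer(6);
var output = new Layer(1);

input.connect(hidden);        // every input neuron to every hidden neuron
hidden.connect(output);

input.activate([0, 1, 0, 1]); // input layers require inputs
hidden.activate();            // other layers pull from incoming connections
var result = output.activate();

output.backward();            // propagate error back down the stack
hidden.backward();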

Synaptic.js -> Neataptic

Synaptic.js is not built as well and does not support as many features out of the box as Neataptic.

I think we should make the change from one to the other, and I'm going to go ahead and get started on it.

Moved: Neat's input and output parameters are useless if a network is provided

This suggestion has been moved from Neataptic.
wagenaartje/neataptic#144

So I checked the code; at least within the neat module, the first two arguments (inputs and outputs) are only used to create the random network if none is provided.

This means that if one is provided, those two are useless and hence the calling code looks awkward. I think it would make more sense to make inputs and outputs options as well (heck, make it accept just a single options object), so they are optional and only used (and required) if no template network is provided.

An extra comment: I notice that when a network is provided, it is used as a template as-is. That means that whether you pass a random network or any other, the first generation is all equal and does the same thing (AFAIU). Maybe it'd make sense to randomly mutate each copy of the template the first time?
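
A sketch of the single-options-object signature this suggests; all of it is hypothetical:

// Hypothetical constructor: inputs/outputs are only required when no
// template network is given.
function Neat(options) {
  options = options || {};
  if (options.network) {
    this.template = options.network;
  } else {
    if (options.inputs == null || options.outputs == null) {
      throw new Error('inputs and outputs are required when no network is provided');
    }
    this.template = new Architect.Random(options.inputs, options.hidden || 0, options.outputs);
  }
  // ...population setup; per the second suggestion, each copy of the template
  // could be mutated once here so the first generation is not uniform.
}

// Calling code no longer needs placeholder arguments:
// var neat = new Neat({ network: myTemplate, popsize: 100 });
// var neat = new Neat({ inputs: 4, outputs: 1, popsize: 100 });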

Outputs are being mutated when they shouldn't

Describe the bug
Output is not remaining normalized

To Reproduce
Current settings:

Methods.mutation.MOD_ACTIVATION.mutateOutput = false;
Methods.mutation.SWAP_NODES.mutateOutput = false;

Expected behavior

  • Outputs remain normalized when mutateOutput is set to false.
  • Default behavior is to keep output normalized

Screenshots
(Screenshot dated 2019-05-04 omitted.)

Ideal Neuron Interface

Ideal Neuron Interface

Properties

  • bias: Bias
  • rate: Learning rate
  • activation: Activation/squash function
  • connections: All neuron connections
  • connections.incoming: Incoming neuron connections
  • connections.outgoing: Outgoing neuron connections

Instance Functions

  • Neuron.prototype.is.input(): Tests whether neuron has no input connections
  • Neuron.prototype.is.output(): Tests whether neuron has no output connections
  • Neuron.prototype.project(): Connects to another neuron, layer, or group
  • Neuron.prototype.activate(): Activates neuron
  • Neuron.prototype.propagate(): Updates neuron; propagates error

Class Constants

  • Neuron.activation: An object of typical activation/squash functions
  • Neuron.activation.SIGMOID: Sigmoid squash function
  • Neuron.activation.ReLU: ReLU squash function
  • Neuron.activation.TANH: Tanh squash function
  • Neuron.activation.IDENTITY: Identity squash function
  • Neuron.activation.PERCEPTRON: Perceptron squash function
  • Neuron.update: An object of typical activation/squash function derivatives
  • Neuron.update.SIGMOID: Sigmoid squash function partial derivative
  • Neuron.update.ReLU: ReLU squash function partial derivative
  • Neuron.update.TANH: Tanh squash function partial derivative
  • Neuron.update.IDENTITY: Identity squash function partial derivative
  • Neuron.update.PERCEPTRON: Perceptron squash function partial derivative

new Neuron()

  • new Neuron(): Creates a new neuron.
  • new Neuron({ inputs: [n0, n1], outputs: [n2] }): Creates a new neuron with n0 and n1 as incoming connections, and n2 as an outgoing connection.
  • new Neuron(n0): Creates a new neuron with the same connections as n0.

.is.input([callback])

  • neuron.is.input(): Returns true if neuron has no incoming connections

.is.output([callback])

  • neuron.is.output(): Returns true if neuron has no outgoing connections

.project(object[, callback])

  • neuron.project(other_neuron): Projects neuron to other_neuron
  • neuron.project(layer): Projects neuron to every neuron in layer
  • neuron.project(group): Projects neuron to every neuron in group

.activate(inputs[,callback])

  • .activate([0, 1, 0, 1]): Activates neuron with the given inputs; inputs.length must equal connections.length

.propagate(feedback[,callback])

  • .propagate([1, 0, 1, 0]): Updates weights, and propagates errors.
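
A short usage sketch of the proposed interface; again, this is the proposal, not the current carrot API:

// Proposed-API sketch (hypothetical).
var n0 = new Neuron();
var n1 = new Neuron();
var n2 = new Neuron({ inputs: [n0, n1] }); // n0 and n1 feed n2

console.log(n0.is.input());  // true: no incoming connections
console.log(n2.is.output()); // true: no outgoing connections

n0.activate([1]);            // input neurons take raw values
n1.activate([0]);
var out = n2.activate();     // assumed: non-input neurons read their incoming connections
n2.propagate([1]);           // update weights against the target feedback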

Ideal README.md

Carrot™

A Simple Node.js Neural Network Library

Carrot's Build Status Carrot's Code Coverage Carrot's Current Version Carrot's Weekly Download Carrot's License Carrot's Love

Table of Contents


Installation

$ npm i -S @liquid-carrot/carrot

Getting Started

let carrot = require('@liquid-carrot/carrot')

let net = new carrot.Network()

Examples

Resources

Contributing

Carrot™ is an open source project. We invite your participation through issues
and pull requests!

When adding or changing a feature, please add tests.

If you're new to the project, maybe you'd like to open a pull request to address one of our open issues:

Carrot's GitHub Issues

Credits

Contributors

Carrot's Code Contributors

This project exists thanks to all the people who contribute. We can't do it without you! 🙇

Backers

Carrot's Financial Backers

Thank you to all our backers! 🙏 [Become a backer]

Become a Backer

Sponsors

Carrot's Sponsors

Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [Become a sponsor]

Become a Sponsor

Patrons

Carrot's Patrons

Thank you to all of our patrons! 🙏 [Become a Patron]

Become a Patron

Documentation

Used By

Feedback

Feel free to send us feedback on Twitter or file an issue. Feature requests are always welcome. If you wish to contribute, please take a quick look at the guidelines!

If there's anything you'd like to chat about, please feel free to join our Discord chat!

Road Map

More coming soon...

License

Carrot™ is available under the MIT License

JSDOC Build Script

It would be great to automatically generate all of our documentation on the fly when running npm publish.

@christianechevarria Let me know if this is possible.
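
One way this could be wired up, assuming npm's standard prepublishOnly lifecycle hook; the script names are illustrative:

// package.json "scripts" fragment (sketch): npm runs prepublishOnly
// automatically before `npm publish`, so the docs build can be chained there.
"scripts": {
  "build:docs": "jsdoc -r -c jsdoc.json",
  "prepublishOnly": "npm run build:docs"
}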

Moved: Layer.LSTM not performing as well as Architect.LSTM

This bug has been moved from Neataptic.
wagenaartje/neataptic#25

Currently I'm rewriting all the built-in networks code to construct networks from layers instead of groups and nodes. However, using groups/nodes to construct an LSTM network seems to always outperform using LSTM layers.

Simple one-layer LSTM network with layers

function LSTM(inputSize, hiddenSize, outputSize){
  var input = new Layer.Dense(inputSize);
  var hidden = new Layer.LSTM(hiddenSize);
  var output = new Layer.Dense(outputSize);

  input.connect(hidden, Methods.Connection.ALL_TO_ALL);
  hidden.connect(output, Methods.Connection.ALL_TO_ALL);

  // option.inputToOutput is set to true for Architect.LSTM
  if(true)
    input.connect(output, Methods.Connection.ALL_TO_ALL);

  return Architect.Construct([input, hidden, output]);
}

So:

var network = new LSTM(1, 6, 1);

// is the equivalent of

var network = new Architect.LSTM(1,6,1);

However, there is one key difference: Layer.LSTM contains an output block itself. So when you connect a Layer.LSTM to an output layer, there will be an extra group of nodes in between (this is needed when you connect two LSTM layers to each other). Architect.LSTM also does this, except for the output layer:

var outputBlock = i == blocks.length-1 ? outputLayer : new Group(block);

However, there is no way a Layer.LSTM can know that the next layer is a regular dense layer; if it did know, then no outputBlock would be needed.

Visualisation


Left: layer LSTM - Right: architect LSTM (images omitted)

You can clearly see the lack of gates/extra nodes in the second image. All these extra nodes and gates require a lot of extra iterations to converge to a solution.

Current solution idea

Make the Layer.Dense.input() function detect if the connecting layer is an LSTM. If it is, remove its outputBlock. This requires a workaround of the connect() function though.

This issue will not be fixed soon.

Infinity/NaN output

An old Neataptic bug which is still bugging me.

I'm getting Infinity/NaN outputs while training. Using the latest Node.js.

My settings:

    Methods.mutation.MOD_ACTIVATION.mutateOutput = false;
    Methods.mutation.MOD_ACTIVATION.allowed = [
        Methods.activation.LOGISTIC,
        Methods.activation.TANH,
        Methods.activation.STEP,
        Methods.activation.SOFTSIGN,
        Methods.activation.SINUSOID,
        Methods.activation.GAUSSIAN,
        Methods.activation.BIPOLAR,
        Methods.activation.BIPOLAR_SIGMOID,
        Methods.activation.HARD_TANH,
        Methods.activation.INVERSE,
        Methods.activation.SELU,
        Methods.activation.RELU,
        Methods.activation.BENT_IDENTITY,
        Methods.activation.IDENTITY,
    ];

    neat = new Neat(4, 1, null,
        {
            mutation: Methods.mutation.ALL,
            popsize: 100,
            mutationRate: 0.2,
            elitism: 10,
            equal: true,
            network: new Architect.Random(4, 5, 1),
            provenance: 2,
            maxNodes: 7, 
            maxConns: 10,
            maxGates: 5,
        }
    );

Infinity Example:

The normalised data:
[0.9354838709677419, 0.5, 0.5933786078098472, 0.5880669161907702]

The genome:

{"nodes":[{"bias":0,"type":"input","squash":"LOGISTIC","mask":1,"index":0},{"bias":0,"type":"input","squash":"LOGISTIC","mask":1,"index":1},{"bias":0,"type":"input","squash":"LOGISTIC","mask":1,"index":2},{"bias":0,"type":"input","squash":"LOGISTIC","mask":1,"index":3},{"bias":1.1508084667830487,"type":"output","squash":"SELU","mask":1,"index":4}],"connections":[{"weight":1,"from":4,"to":4,"gater":4},{"weight":-0.05001360658035439,"from":3,"to":4,"gater":null},{"weight":0.9984137443904727,"from":2,"to":4,"gater":null},{"weight":-0.7832753538521565,"from":1,"to":4,"gater":null},{"weight":-0.9040067054346645,"from":0,"to":4,"gater":4}],"input":4,"output":1,"dropout":0}

NaN Example:

The normalised data:
[0.3870967741935484, 0.75, 0.5040295048190726, 0.5575469079452833]

The genome:

{"nodes":[{"bias":0,"type":"input","squash":"LOGISTIC","mask":1,"index":0},{"bias":0,"type":"input","squash":"LOGISTIC","mask":1,"index":1},{"bias":0,"type":"input","squash":"LOGISTIC","mask":1,"index":2},{"bias":0,"type":"input","squash":"LOGISTIC","mask":1,"index":3},{"bias":-0.0717463180924848,"type":"hidden","squash":"BENT_IDENTITY","mask":1,"index":4},{"bias":-0.02994106373052867,"type":"output","squash":"LOGISTIC","mask":1,"index":5}],"connections":[{"weight":1,"from":4,"to":4,"gater":null},{"weight":1,"from":5,"to":5,"gater":null},{"weight":-0.048341462482942354,"from":4,"to":5,"gater":null},{"weight":-0.4890970041600281,"from":3,"to":5,"gater":5},{"weight":0.9984137443904727,"from":2,"to":5,"gater":5},{"weight":-0.4890970041600281,"from":3,"to":4,"gater":4},{"weight":0.024574079733371557,"from":1,"to":5,"gater":null},{"weight":0.9984137443904727,"from":2,"to":4,"gater":4},{"weight":-0.9040067054346645,"from":0,"to":5,"gater":5},{"weight":0.049181674792330154,"from":1,"to":4,"gater":null},{"weight":0.04742932010695605,"from":0,"to":4,"gater":null}],"input":4,"output":1,"dropout":0}

wagenaartje/neataptic#130

Moved: SoftMax function for output layer

This suggestion has been moved from Neataptic.
wagenaartje/neataptic#82

A SoftMax Function for the output layer would be awesome for any classification NN. I took a look inside your code and I guess it's not that easy to integrate into the activation functions, am I right?
https://en.wikipedia.org/wiki/Softmax_function

Owner's reply:

I have thought about this, your assumption is right. It's quite hard to integrate it as activations are now completely independent of each other. It would also require a more layered structure of networks, which is not the case right now.
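
For reference, the reason it resists being a per-node squash function is that softmax needs the whole output vector at once. As a post-processing step on a network's output it is straightforward; a sketch:

// Sketch: softmax applied to a network's raw output vector.
function softmax(outputs) {
  var max = Math.max.apply(null, outputs); // subtract the max for numerical stability
  var exps = outputs.map(function (v) { return Math.exp(v - max); });
  var sum = exps.reduce(function (a, b) { return a + b; }, 0);
  return exps.map(function (e) { return e / sum; });
}

// softmax([1, 2, 3]) -> [0.090..., 0.244..., 0.665...] (sums to 1)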

Carrot is broken when importing into Codepen

Describe the bug
Title captures description

To Reproduce
Steps to reproduce the behavior:

  1. Go to 'codepen.io'
  2. Click on 'Open JS Settings' & paste CDN link
  3. Copy and paste neuro-evolution example
  4. Open console
  5. See error

Screenshots
(Two screenshots dated 2019-04-25 omitted.)

Desktop (please complete the following information):

  • OS: ChromeOS
  • Browser: chrome

Ideal Group Interface

Ideal Group Interface

new Group(props)

  • new Group(): Creates a blank new group
  • new Group(25): Creates a group with 25 neurons inside of it
  • new Group([n0, n1]): Creates a group with 2 neurons inside of it - n0 and n1.
  • new Group(group): Creates a new group with the same number of neurons in it as group

.project(object[,options][,callback])

  • group.project(neuron): Projects every output neuron in group to neuron
  • group.project(layer): Projects every output neuron in group to every neuron in layer
  • group.project(other_group): Projects every output neuron in group to every input neuron in other_group

.activate([inputs][,callback])

  • group.activate(): Activates every neuron in the group; input groups require inputs
  • group.activate([0, 1, 0, 1]): Activates every input neuron in the group with the given values, then every other neuron with the forward-feed results; inputs.length must equal group.inputs.length
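
A short usage sketch of the proposed group interface; proposal only, not current API:

// Proposed-API sketch (hypothetical).
var n0 = new Neuron();
var n1 = new Neuron();
var g0 = new Group(4);        // four fresh neurons
var g1 = new Group([n0, n1]); // wraps two existing neurons

g0.project(g1);               // g0's output neurons to g1's input neurons
g0.activate([0, 1, 0, 1]);    // feed the input neurons, then forward-feed the rest
var out = g1.activate();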

How to train my AI?

Is your feature request related to a problem? Please describe.

It's incredibly confusing to try to figure out how much data to train on vs. test on - and how.

Describe the solution you'd like

It would be amazing to have a course or tutorial with different examples of how to train different AI bots, showing when and where different approaches are applicable.

Describe alternatives you've considered

Siraj (School of AI); YouTube; etc.

Additional context

(Screenshot dated 2019-05-03 omitted.)

Basic Networks

Build support for building NNs from neurons, groups, and layers.

Import dependencies in documentation

When copy/pasting from the docs, the libraries are not imported, and the examples assume the user understands the library's exported object structure.

As an example, the network.evolve() function's example in the docs assumes a) the user declared carrot.Network as Network, and b) declared carrot.methods as methods.

Current

async function execute () {
   var network = new Network(2,1);

   // XOR dataset
   var trainingSet = [
       { input: [0,0], output: [0] },
       { input: [0,1], output: [1] },
       { input: [1,0], output: [1] },
       { input: [1,1], output: [0] }
   ];

   await network.evolve(trainingSet, {
       mutation: methods.mutation.FFW,
       equal: true,
       error: 0.05,
       elitism: 5,
       mutationRate: 0.5
   });

   network.activate([0,0]); // 0.2413
   network.activate([0,1]); // 1.0000
   network.activate([1,0]); // 0.7663
   network.activate([1,1]); // -0.008
}

execute();

Potential Improvement

let carrot = require("@liquid-carrot/carrot");

async function execute () {
   var network = new carrot.Network(2,1);

   // XOR dataset
   var trainingSet = [
       { input: [0,0], output: [0] },
       { input: [0,1], output: [1] },
       { input: [1,0], output: [1] },
       { input: [1,1], output: [0] }
   ];

   await network.evolve(trainingSet);

   console.log(network.activate([0,0])); // 0.2413
   console.log(network.activate([0,1])); // 1.0000
   console.log(network.activate([1,0])); // 0.7663
   console.log(network.activate([1,1])); // -0.008
}

execute();

Other Potential Improvement

let { Network } = require("@liquid-carrot/carrot");

async function execute () {
   var network = new Network(2,1);

   // XOR dataset
   var trainingSet = [
       { input: [0,0], output: [0] },
       { input: [0,1], output: [1] },
       { input: [1,0], output: [1] },
       { input: [1,1], output: [0] }
   ];

   await network.evolve(trainingSet);

   console.log(network.activate([0,0])); // 0.2413
   console.log(network.activate([0,1])); // 1.0000
   console.log(network.activate([1,0])); // 0.7663
   console.log(network.activate([1,1])); // -0.008
}

execute();

Ideal Connection Interface

Ideal Connection Interface

Properties

  • from: Source neuron
  • to: Destination neuron
  • weight: Connection weight/importance
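
A minimal sketch of that shape; illustrative only:

// Hypothetical minimal connection object.
function Connection(from, to, weight) {
  this.from = from; // source neuron
  this.to = to;     // destination neuron
  this.weight = weight == null ? Math.random() * 2 - 1 : weight; // importance
}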

WAPE cost method

Neataptic has a pull request for Weighted Absolute Percentage Error (WAPE) that was never merged in (unmaintained). Maybe something we would want to merge in here?

wagenaartje/neataptic#132
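
For reference, WAPE is just the sum of absolute errors divided by the sum of absolute target values. A sketch in the (target, output) shape Neataptic cost functions use:

// Sketch: Weighted Absolute Percentage Error.
function WAPE(target, output) {
  var errorSum = 0;
  var targetSum = 0;
  for (var i = 0; i < output.length; i++) {
    errorSum += Math.abs(target[i] - output[i]);
    targetSum += Math.abs(target[i]);
  }
  return errorSum / targetSum; // undefined when all targets are 0
}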

Missing Factory Functions

The Neuron function does not create a new neuron. It passes a reference to the same neuron.

Need to create a better factory function.
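
The bug in miniature, with the fix, as a sketch:

// Broken shape: the factory closes over one instance and hands out the
// same reference every time, so every caller mutates the same neuron.
var shared = new Neuron();
function brokenNeuronFactory() {
  return shared;
}

// Fixed shape: construct a fresh, independent neuron per call.
function neuronFactory(props) {
  return new Neuron(props);
}

// neuronFactory() !== neuronFactory() -> true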

Basic Groups

Build support for building NNs from simple neuron groups

Community Goal: 100 Stars

🙏 🌟 🙏 Please Star

Contributors to this repo value their worth as human beings by the number of stars this repo receives.

Please give our repo one shooting star.

EfficientMutation flag for neat

Neataptic has a pull request for an EfficientMutation flag that was never merged (unmaintained).

"Before it attempts to do any mutations, it checks which mutations are possible on the genome from the selected mutation types, and passes them to be utilized from the selectMutationMethod."

Maybe something we would want to merge in here or look into? I've tested it in my own project and it seems to work fine.

wagenaartje/neataptic#79
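
The described behavior, sketched: compute which of the selected mutation types actually apply to a genome, then hand only those to selectMutationMethod. The predicates below are illustrative, not the PR's code:

// Illustrative sketch of precomputing applicable mutations.
function possibleMutations(genome, selectedMethods) {
  return selectedMethods.filter(function (method) {
    switch (method.name) {
      case 'SUB_NODE': return genome.nodes.length > genome.input + genome.output;
      case 'SUB_CONN': return genome.connections.length > 1;
      case 'ADD_SELF_CONN': return genome.selfconns.length < genome.nodes.length;
      default: return true; // assume applicable unless known otherwise
    }
  });
}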

Ideal Neat API

I was going to write this in another issue, but since you asked for feedback here...

I have found the Neat class is much more complicated than it needs to be.

My ideal API would be (see the sketch after this list):

  • You extend network.evolve.
  • You pass it a target score for the network to hit to finish.
  • You pass in the scoring function.
  • Select the population class. So you can extend and store data if needed.
  • The population has ids.
  • Multi-threading support.
  • Able to give it your data and select what percentages are used for training, validation, and testing.
  • A setting that auto adjusts the mutation rate based on the validation results.
  • Regularization support.

Originally posted by @dan-ryan in #25 (comment)
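
A sketch of that wish list distilled into a single call; every option name below is hypothetical, derived from the bullets rather than any real API:

// Inside an async function; all option names are hypothetical.
var best = await network.evolve(dataset, {
  targetScore: 0.95,             // stop once the best genome reaches this
  score: function (net, batch) { // user-supplied scoring function
    return -net.test(batch).error;
  },
  split: { training: 0.7, validation: 0.2, test: 0.1 },
  threads: 4,                    // multi-threading support
  adaptiveMutation: true,        // adjust mutation rate from validation results
  regularization: { l2: 1e-4 }
});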

Examples & Use Cases

Examples & Use Cases

It would be great if we could get a strong set of use cases and examples built out for different communities to use in their individual projects - and maybe add them to the README.md, @christianechevarria.

As a first contender, @mateogianolio created this great, really simple OCR example with Synaptic.js.

I think this could be a really cool visual - maybe implement it as a GAN and have the generative NN do a live display.

HyperNEAT

HyperNEAT is an improvement over the NEAT algorithm. We should add an option for the type of evolution algorithm when calling the evolve function.

This video and blog explain how HyperNEAT works:
https://www.youtube.com/watch?v=t15wUkCXuxQ

https://towardsdatascience.com/hyperneat-powerful-indirect-neural-network-evolution-fba5c7c43b7b

The HyperNEAT paper:
http://axon.cs.byu.edu/~dan/778/papers/NeuroEvolution/stanley3**.pdf

You could go a step further: Evolvable-Substrate HyperNEAT addresses some of the issues of HyperNEAT:
https://www.mitpressjournals.org/doi/pdfplus/10.1162/ARTL_a_00071

Test Passing Badge | Code Coverage Badge

Some repositories have a badge that measures what percentage of their tests are passing. It would be great for us to have that as well.

As an example, async uses Travis CI to run its tests and Coveralls to measure its code coverage.


Node.connect should check for self-connections first

Currently, Node.connect first checks if target is itself and then checks if it's projecting to the node already; this should be the other way around.

  connect: function (target, weight) {
    var connections = []; // declared earlier in the full source; added here so the excerpt is self-contained
    if (typeof target.bias !== 'undefined') { // must be a node!
      if (target === this) {
        // Turn on the self connection by setting the weight
        if (this.connections.self.weight !== 0) {
          if (config.warnings) console.warn('This connection already exists!');
        } else {
          this.connections.self.weight = weight || 1;
        }
        connections.push(this.connections.self);
      } else if (this.isProjectingTo(target)) {
        throw new Error('Already projecting a connection to this node!');
      } else {
        let connection = new Connection(this, target, weight);
        target.connections.in.push(connection);
        this.connections.out.push(connection);

        connections.push(connection);
      }
    } else { // should be a group
      for (var i = 0; i < target.nodes.length; i++) {
        let connection = new Connection(this, target.nodes[i], weight);
        target.nodes[i].connections.in.push(connection);
        this.connections.out.push(connection);
        target.connections.in.push(connection);

        connections.push(connection);
      }
    }
    return connections;
  }
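
One possible reading of the requested reordering, sketched with the same structure; whether this matches the intended fix depends on how isProjectingTo treats self-connections:

  connect: function (target, weight) {
    var connections = [];
    if (typeof target.bias !== 'undefined') { // must be a node!
      // Check for an existing projection before the self-connection special
      // case, so duplicates are rejected uniformly.
      if (target !== this && this.isProjectingTo(target)) {
        throw new Error('Already projecting a connection to this node!');
      }
      if (target === this) {
        if (this.connections.self.weight !== 0) {
          if (config.warnings) console.warn('This connection already exists!');
        } else {
          this.connections.self.weight = weight || 1;
        }
        connections.push(this.connections.self);
      } else {
        let connection = new Connection(this, target, weight);
        target.connections.in.push(connection);
        this.connections.out.push(connection);
        connections.push(connection);
      }
    }
    return connections;
  }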

Ideal Feature Set

Ideal Feature Set

Performance

  • GPU Acceleration
  • Clustering
  • Streaming
  • Asynchronous/Non-Blocking Neuron Execution
  • Separation of Intra-Neuron (e.g. Squash Functions) and Inter-Neuron (e.g. Forward Feeding, Back-Propagation) Processes

Algorithms

  • Backpropagation
  • DQN
  • SARSA
  • Simple Fitness
  • NEAT
  • Instinct
  • Lion

Architectures

  • RNN
  • LSTM
  • GRU
  • Capsule Network
  • GANs

Ideal Network Interface

Ideal Network Interface

new Network()

  • new Network(): Creates a new blank network
  • new Network([l0, l1, l2]): Creates a new neural network where: l0 projects to l1 and l1 projects to l2.
  • new Network([g0, g1, g2]): Creates a new neural network where: g0 projects to g1 and g1 projects to g2
  • new Network([n0, n1, n2]): Creates a new neural network where: n0 projects to n1 and n1 projects to n2

.activate(inputs[,callback])

  • network.activate([0, 1, 0, 1]): Activates the network with the given inputs, feeding them forward through every layer
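
A short usage sketch of the proposal; hypothetical API:

// Proposed-API sketch (hypothetical): layers chained into a network.
var l0 = new Layer(2);
var l1 = new Layer(3);
var l2 = new Layer(1);

var network = new Network([l0, l1, l2]); // l0 -> l1 -> l2
var output = network.activate([0, 1]);   // forward pass through every layer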

Moved: Allow bundling with Webpack

This bug has been moved from Neataptic.
wagenaartje/neataptic#57

The pull request:
wagenaartje/neataptic#148

Cannot resolve 'child_process'

./~/neataptic/src/multithreading/workers/node/testworker.js
Module not found: Error: Can't resolve 'child_process' in '[...]/node_modules/neataptic/src/multithreading/workers/node'
 @ ./~/neataptic/src/multithreading/workers/node/testworker.js 7:9-33
 @ ./~/neataptic/src/multithreading/workers/workers.js
 @ ./~/neataptic/src/multithreading/multi.js
 @ ./~/neataptic/src/neataptic.js
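
One workaround from the webpack 4 era is stubbing out the Node built-ins in the browser build; a sketch (this simply disables the multithreading code paths in the bundle):

// webpack.config.js (sketch, webpack 4): stub Node built-ins that have no
// browser equivalent so bundling succeeds.
module.exports = {
  // ...entry, output, loaders, etc.
  node: {
    child_process: 'empty',
    fs: 'empty'
  }
};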

Compare & Contrast Everything for Performance

Is your feature request related to a problem? Please describe.

Figuring out what network topology to use based on previous data and particular problems is incredibly time consuming and frustrating.

Describe the solution you'd like

It would be amazing if Carrot could automate the process of not only neuroevolution, but also what network and network structure/architecture/paradigm would be best to solve a particular problem.

Describe alternatives you've considered

Trial and error; Data Robot; reading research papers; etc.

Additional context

(Screenshot dated 2019-05-03 omitted.)

Moved: ONE_TO_ONE connections between groups stored as self connections

This bug has been moved from Neataptic.
wagenaartje/neataptic#114

Thanks for such a great library and the awesome documentation! I've been going through your library with a fine-toothed comb trying to learn it. Thus far in my studies of the implementation, I have only found one thing that I can't follow.

My question revolves around connecting groups to other groups using the ONE_TO_ONE connection method. Here's the code excerpt:

	} else if (method === methods.connection.ONE_TO_ONE) {
		if (this.nodes.length !== target.nodes.length) {
			throw new Error('From and To group must be the same size!');
		}

		for (i = 0; i < this.nodes.length; i++) {
			let connection = this.nodes[i].connect(target.nodes[i], weight);
HERE >>>     		this.connections.self.push(connection[0]);
			connections.push(connection[0]);
		}
	}

I see each node in the current group being connected to the corresponding node in the other group. However, why are the new connections added to the self connections list and not out?

Owner's reply:

I wrote this code some time ago, but it looks like this might just be wrongly coded. I see that some lines before, the code contains:

        if (this !== target) {
          if (config.warnings) console.warn('No group connection specified, using ALL_TO_ALL');
          method = methods.connection.ALL_TO_ALL;
        } else {
          if (config.warnings) console.warn('No group connection specified, using ONE_TO_ONE');
          method = methods.connection.ONE_TO_ONE;
        }

So if this === target, a ONE_TO_ONE connection is assumed when none is given. For some reason I must have thought that a ONE_TO_ONE connection is always a self-connection. The correct code of course should be:

      } else if (method === methods.connection.ONE_TO_ONE) {
        if (this.nodes.length !== target.nodes.length) {
          throw new Error('From and To group must be the same size!');
        }

        for (i = 0; i < this.nodes.length; i++) {
          let connection = this.nodes[i].connect(target.nodes[i], weight);
          this.connections.out.push(connection[0]);
          target.connections.in.push(connection[0]);
          connections.push(connection[0]);
        }
      }

The faulty code should not cause a lot of problems except for calling Group.prototype.gate with methods.gating.SELF.

I think the Group.prototype.connect function is poorly written anyways as I never see it checking if this === target when performing the actual connections (so instead of connections being added to .self like they should, they are added to .in AND .out).

Thanks, and it seems like your comb is very fine indeed!

Detailed Contribution Guidelines

From opensource.guide:

A CONTRIBUTING file tells your audience how to participate in your project. For example, you might include information on:

  • How to file a bug report (try using issue and pull request templates)
  • How to suggest a new feature
  • How to set up your environment and run tests

In addition to technical details, a CONTRIBUTING file is an opportunity to communicate your expectations for contributions, such as:

  • The types of contributions you’re looking for
  • Your roadmap or vision for the project
  • How contributors should (or should not) get in touch with you

Using a warm, friendly tone and offering specific suggestions for contributions (such as writing documentation, or making a website) can go a long way in making newcomers feel welcomed and excited to participate.

For example, Active Admin starts its contributing guide with:

"First off, thank you for considering contributing to Active Admin. It’s people like you that make Active Admin such a great tool."

In the earliest stages of your project, your CONTRIBUTING file can be simple. You should always explain how to report bugs or file issues, and any technical requirements (like tests) to make a contribution.

Link to your CONTRIBUTING file from your README, so more people see it. If you place the CONTRIBUTING file in your project’s repository, GitHub will automatically link to your file when a contributor creates an issue or opens a pull request.

More Advice:

Over time, you might add other frequently asked questions to your CONTRIBUTING file. Writing down this information means fewer people will ask you the same questions over and over again.

For more help with writing your CONTRIBUTING file, check out @nayafia’s contributing guide template or @mozilla’s “How to Build a CONTRIBUTING.md”.
