liquidcarrot / carrot
🥕 Evolutionary Neural Networks in JavaScript
Home Page: https://liquidcarrot.io/carrot/
License: MIT License
This suggestion was moved from Neataptic.
wagenaartje/neataptic#54
In order to amplify the results of neural networks on larger datasets (e.g. MNIST), we need to implement shared weights. This will also allow the creation of convolutional networks.
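Weight sharing is the core of the convolutional case mentioned above: one small kernel is reused at every input position instead of a separate weight per connection. A toy 1-D illustration (not Carrot code):

```javascript
// Toy 1-D convolution: the same kernel weights are shared across every
// input position, which is exactly the "shared weights" this issue asks for.
function conv1d(input, kernel) {
  const out = [];
  for (let i = 0; i + kernel.length <= input.length; i++) {
    let sum = 0;
    for (let k = 0; k < kernel.length; k++) sum += input[i + k] * kernel[k];
    out.push(sum);
  }
  return out;
}

console.log(conv1d([1, 2, 3, 4], [1, -1])); // [-1, -1, -1]
```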
This issue has been moved from Neataptic.
wagenaartje/neataptic#78
If I run network.evolve, the function returns a promise with the result, but if I run network.train I get the result directly.
Shouldn't they be consistent? Shouldn't both of them return a promise?
Owner's reply:
That is because network.evolve is an async function and network.train is a sync function. There is not a lot I can do about it, except for creating two separate functions for network.evolve, e.g. network.evolveSync() and network.evolve(). But network.evolveSync wouldn't be able to use multithreading capabilities then. However, it seems a good idea to add a network.trainAsync() function.
Personally, I think every long task should be async by default and there should be a "sync" method for each one.
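The async-by-default suggestion can be had with a thin wrapper pattern; a minimal, self-contained sketch (trainSync/train are hypothetical stand-ins, not Carrot's actual API):

```javascript
// Hedged sketch: make the async variant the default and expose an explicit
// *Sync twin, so evolve/train behave consistently. The "work" is a stand-in.
function trainSync(iterations) {
  let error = 1;
  for (let i = 0; i < iterations; i++) error *= 0.9; // stand-in for real work
  return { error, iterations };
}

function train(iterations) {
  return Promise.resolve(trainSync(iterations)); // same result, promise-wrapped
}

train(10).then(result => console.log(result.error < 1)); // true
```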
This is an old Neataptic bug that is still happening: "maxNodes" and "maxConns" don't seem to always work. After a while, the node and connection counts will increase past the configured maximums.
My settings:
Methods.mutation.MOD_ACTIVATION.mutateOutput = false;
Methods.mutation.MOD_ACTIVATION.allowed = [
Methods.activation.LOGISTIC,
Methods.activation.TANH,
Methods.activation.STEP,
Methods.activation.SOFTSIGN,
Methods.activation.SINUSOID,
Methods.activation.GAUSSIAN,
Methods.activation.BIPOLAR,
Methods.activation.BIPOLAR_SIGMOID,
Methods.activation.HARD_TANH,
Methods.activation.INVERSE,
Methods.activation.SELU,
Methods.activation.RELU,
Methods.activation.BENT_IDENTITY,
Methods.activation.IDENTITY,
];
neat = new Neat(4, 1, null,
{
mutation: Methods.mutation.ALL,
popsize: 100,
mutationRate: 0.2,
elitism: 10,
equal: true,
network: new Architect.Random(4, 5, 1),
provenance: 2,
maxNodes: 7,
maxConns: 10,
maxGates: 5,
}
);
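One plausible way to enforce the caps (an assumption about a fix, not the library's current behavior) is to filter the candidate mutation list against the genome's current counts before selecting one; a toy sketch with an invented genome shape:

```javascript
// Hedged sketch: drop growth mutations once a genome is at its cap, so
// ADD_NODE/ADD_CONN can never push counts past maxNodes/maxConns.
// The genome shape ({ nodes, conns } as counts) is invented for illustration.
function allowedMutations(genome, mutations, { maxNodes, maxConns }) {
  return mutations.filter(m => {
    if (m === 'ADD_NODE') return genome.nodes < maxNodes;
    if (m === 'ADD_CONN') return genome.conns < maxConns;
    return true; // non-growth mutations are always allowed
  });
}

// A genome already at the node cap can no longer receive ADD_NODE:
console.log(allowedMutations(
  { nodes: 7, conns: 5 },
  ['ADD_NODE', 'ADD_CONN', 'MOD_WEIGHT'],
  { maxNodes: 7, maxConns: 10 }
)); // [ 'ADD_CONN', 'MOD_WEIGHT' ]
```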
Add support for building NNs, NN layers, and NN groups with simple neurons
crossover, architect, etc. are all methods and should be part of the namespace - but are currently displayed as individual entities. Should fix this.
Add support for building NNs with simple layers
npm run build is breaking:
0 info it worked if it ends with ok
1 verbose cli [ '/usr/bin/node', '/usr/bin/npm', 'run', 'build' ]
2 info using [email protected]
3 info using [email protected]
4 verbose run-script [ 'prebuild', 'build', 'postbuild' ]
5 info lifecycle @liquid-carrot/[email protected]~prebuild: @liquid-carrot/[email protected]
6 info lifecycle @liquid-carrot/[email protected]~build: @liquid-carrot/[email protected]
7 verbose lifecycle @liquid-carrot/[email protected]~build: unsafe-perm in lifecycle true
8 verbose lifecycle @liquid-carrot/[email protected]~build: PATH: /usr/lib/node_modules/npm/node_modules/npm-lifecycle/node-gyp-bin:/home/carrot/workspace/lc-app/carrot/node_modules/.bin:/home/carrot/anaconda3/bin:/home/carrot/anaconda3/bin:/home/carrot/anaconda3/bin:/home/carrot/bin:/home/carrot/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
9 verbose lifecycle @liquid-carrot/[email protected]~build: CWD: /home/carrot/workspace/lc-app/carrot
10 silly lifecycle @liquid-carrot/[email protected]~build: Args: [ '-c',
10 silly lifecycle 'jsdoc -r -c jsdoc.json && git add . && git stash && git checkout gh-pages && git pull && git merge --squash --strategy-option=theirs stash && git stash drop && git commit -m \'Auto-build\' && git push && git checkout master' ]
11 silly lifecycle @liquid-carrot/[email protected]~build: Returned: code: 1 signal: null
12 info lifecycle @liquid-carrot/[email protected]~build: Failed to exec build script
13 verbose stack Error: @liquid-carrot/[email protected] build: `jsdoc -r -c jsdoc.json && git add . && git stash && git checkout gh-pages && git pull && git merge --squash --strategy-option=theirs stash && git stash drop && git commit -m 'Auto-build' && git push && git checkout master`
13 verbose stack Exit status 1
13 verbose stack at EventEmitter.<anonymous> (/usr/lib/node_modules/npm/node_modules/npm-lifecycle/index.js:301:16)
13 verbose stack at emitTwo (events.js:126:13)
13 verbose stack at EventEmitter.emit (events.js:214:7)
13 verbose stack at ChildProcess.<anonymous> (/usr/lib/node_modules/npm/node_modules/npm-lifecycle/lib/spawn.js:55:14)
13 verbose stack at emitTwo (events.js:126:13)
13 verbose stack at ChildProcess.emit (events.js:214:7)
13 verbose stack at maybeClose (internal/child_process.js:915:16)
13 verbose stack at Process.ChildProcess._handle.onexit (internal/child_process.js:209:5)
14 verbose pkgid @liquid-carrot/[email protected]
15 verbose cwd /home/carrot/workspace/lc-app/carrot
16 verbose Linux 4.4.0-143-generic
17 verbose argv "/usr/bin/node" "/usr/bin/npm" "run" "build"
18 verbose node v8.15.1
19 verbose npm v6.4.1
20 error code ELIFECYCLE
21 error errno 1
22 error @liquid-carrot/[email protected] build: `jsdoc -r -c jsdoc.json && git add . && git stash && git checkout gh-pages && git pull && git merge --squash --strategy-option=theirs stash && git stash drop && git commit -m 'Auto-build' && git push && git checkout master`
22 error Exit status 1
23 error Failed at the @liquid-carrot/[email protected] build script.
23 error This is probably not a problem with npm. There is likely additional logging output above.
24 verbose exit [ 1, true ]
This suggestion has been moved from Neataptic.
wagenaartje/neataptic#94
Speciation via some distance measurement is a fundamental aspect of the NEAT algorithm. I suggest clarifying in the docs that the included implementation diverges from the original specification, and perhaps renaming it to something like NEATSS (NEAT Sans Speciation).
Owner's reply:
I completely understand. The way it works is quite different from the original NEAT algorithm. I actually call it the 'Instinct' algorithm (more about that can be read there). I'm leaving this open until I have time to modify the docs.
Properties
Key | Description
---|---
neurons | All layer neurons
Instance Functions
Key | Description
---|---
Layer.prototype.add_neurons() | Adds a specified number of neurons to the layer
Layer.prototype.get_neuron() | Returns neurons matching a specified name
Layer.prototype.get_best() | Returns the neuron in the layer with the highest axon state value
Layer.prototype.connect() | Connects layer neurons to another neuron, layer, or group
Layer.prototype.activate() | Activates layer neurons, generating an axon state value for each
Layer.prototype.forward() | Forward-propagates the results of Layer.prototype.activate()
Layer.prototype.backward() | Backward-propagates the results of Layer.prototype.activate()
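Layer.prototype.get_best() in the table above amounts to a max-reduction over neuron state values; a toy, self-contained sketch (invented object shapes, not the library's internals):

```javascript
// Toy sketch of a get_best lookup: pick the neuron whose axon state value
// is highest. The { state } neuron shape is invented for illustration.
function getBest(layer) {
  return layer.neurons.reduce((best, neuron) =>
    neuron.state > best.state ? neuron : best);
}

const layer = { neurons: [{ state: 0.2 }, { state: 0.9 }, { state: 0.5 }] };
console.log(getBest(layer).state); // 0.9
```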
new Layer(props[, options])
new Layer(): Creates a new layer
new Layer(32): Creates a layer with 32 neurons inside of it
new Layer([n0, n1]): Creates a layer with n0 and n1 as its neurons
new Layer(layer): Creates a new layer with the same number of neurons in it as layer
add_neurons(props[, callback])
layer.add_neurons(n): Adds n neurons to the layer
layer.add_neurons([n0, n1]): Adds n0 and n1 to the layer's neurons

get_neurons(name[, callback])
layer.get_neurons(name): Returns layer neurons matching name

get_best([callback])
layer.get_best(): Returns the neuron with the highest axon value in the layer

.connect(object[, options][, callback])
layer.connect(neuron): Connects every neuron in layer to neuron
layer.connect(other_layer): Connects every neuron in layer to every neuron in other_layer
layer.connect(group): Connects every neuron in layer to every input neuron in group; layers are flat by design, groups might not be

.activate([inputs][, options][, callback])
layer.activate(): Activates every neuron in the layer; input layers require inputs
layer.activate([0, 1, 0, 1]): Activates every neuron in the layer with the given values; inputs.length must equal layer.size

.forward([callback])
layer.forward(): Propagates the last result of the previous layers' .activate() to all outgoing layer connections

.backward([callback])
layer.backward(): Propagates the last results of the following layers' .learn() to all incoming layer connections

Synaptic.js is not as well built and does not support as many features as Neataptic right out of the gate.
I think we should make the change from one to the other - and am going to go ahead and get started with it.
This suggestion has been moved from Neataptic.
wagenaartje/neataptic#76
I'd love to see the Liquid State Machine implemented in Neataptic.
I noticed there was a similar functionality in the Random network but it's not really a Liquid network.
( Great paper on LSM from Utrecht University, http://www.ai.rug.nl/~mwiering/thesis_kok.pdf, see page 18)
This suggestion has been moved from Neataptic.
wagenaartje/neataptic#144
So I checked the code: at least within the neat module, the first 2 arguments (inputs and outputs) are only used to create the random network if none is provided. This means that if one is provided, those 2 are useless, and hence the calling code looks awkward. I think it would make more sense to make inputs and outputs options as well (heck, make it accept just a single options object), so they are optional and only used (and required) if no template network is provided.
An extra comment: I notice that when a network is provided, it is used as a template as-is. That means that whether you pass a random network or any other, the first generation is all equal and does the same thing (AFAIU). Maybe it'd make sense to randomly mutate each template copy the first time?
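To illustrate the suggestion, a hypothetical options-only constructor could look like this (invented shape, not the library's actual API):

```javascript
// Hypothetical sketch of an options-only Neat constructor: inputs/outputs
// are only required when no template network is provided.
class Neat {
  constructor(options = {}) {
    if (options.network) {
      this.template = options.network; // template supplied: sizes not needed
    } else {
      if (options.inputs == null || options.outputs == null) {
        throw new Error('inputs and outputs are required when no template network is provided');
      }
      this.template = { inputs: options.inputs, outputs: options.outputs };
    }
  }
}

// Either form works; the awkward positional nulls disappear:
new Neat({ inputs: 4, outputs: 1 });
new Neat({ network: { inputs: 4, outputs: 1 } });
```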
Describe the bug
Output is not remaining normalized
To Reproduce
Current settings:
Methods.mutation.MOD_ACTIVATION.mutateOutput = false;
Methods.mutation.SWAP_NODES.mutateOutput = false;
Expected behavior
Properties
Key | Description
---|---
bias | Bias
rate | Learning rate
activation | Activation/squash function
connections | All neuron connections
connections.incoming | Incoming neuron connections
connections.outgoing | Outgoing neuron connections
Instance Functions
Key | Description
---|---
Neuron.prototype.is.input() | Tests whether the neuron has no input connections
Neuron.prototype.is.output() | Tests whether the neuron has no output connections
Neuron.prototype.project() | Connects to another neuron, layer, or group
Neuron.prototype.activate() | Activates the neuron
Neuron.prototype.propagate() | Updates the neuron; propagates error
Class Constants
Key | Description
---|---
Neuron.activation | An object of typical activation/squash functions
Neuron.activation.SIGMOID | Sigmoid squash function
Neuron.activation.ReLU | ReLU squash function
Neuron.activation.TANH | Tanh squash function
Neuron.activation.IDENTITY | Identity squash function
Neuron.activation.PERCEPTRON | Perceptron squash function
Neuron.update | An object of typical activation/squash function derivatives
Neuron.update.SIGMOID | Sigmoid squash function partial derivative
Neuron.update.ReLU | ReLU squash function partial derivative
Neuron.update.TANH | Tanh squash function partial derivative
Neuron.update.IDENTITY | Identity squash function partial derivative
Neuron.update.PERCEPTRON | Perceptron squash function partial derivative
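As a concrete instance of the activation/derivative pairing in the tables above, here are the sigmoid formulas (standard math, not the library's code):

```javascript
// Standard sigmoid squash and its partial derivative, as paired in the
// Neuron.activation / Neuron.update tables: the derivative is what
// propagate() needs during backpropagation.
const sigmoid = x => 1 / (1 + Math.exp(-x));
const sigmoidDerivative = x => sigmoid(x) * (1 - sigmoid(x));

console.log(sigmoid(0), sigmoidDerivative(0)); // 0.5 0.25
```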
new Neuron()
new Neuron(): Creates a new neuron
new Neuron({ inputs: [n0, n1], outputs: [n2] }): Creates a new neuron with n0 and n1 as incoming connections, and n2 as an outgoing connection
new Neuron(n0): Creates a new neuron with the same connections as n0

.is.input([callback])
neuron.is.input(): Returns true if neuron has no incoming connections

.is.output([callback])
neuron.is.output(): Returns true if neuron has no outgoing connections

.project(object[, callback])
neuron.project(other_neuron): Projects neuron to other_neuron
neuron.project(layer): Projects neuron to every neuron in layer
neuron.project(group): Projects neuron to every neuron in group

.activate(inputs[, callback])
neuron.activate([0, 1, 0, 1]): Activates neuron with the given inputs; inputs.length must equal connections.length

.propagate(feedback[, callback])
neuron.propagate([1, 0, 1, 0]): Updates weights and propagates errors

A Simple Node.js Neural Network Library
$ npm i -S liquid-carrot
let carrot = require('@liquid-carrot/carrot')
let net = new carrot.Network()
Carrot™ is an open source project. We invite your participation through issues
and pull requests!
When adding or changing a service please add tests.
If you're new to the project, maybe you'd like to open a pull request to address one of them:
This project exists thanks to all the people who contribute. We can't do it without you! 🙇
Thank you to all our backers! 🙏 [Become a backer]
Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [Become a sponsor]
Thank you to all of our patrons! 🙏 [Become a Patron]
Feel free to send us feedback on Twitter or file an issue. Feature requests are always welcome. If you wish to contribute, please take a quick look at the guidelines!
If there's anything you'd like to chat about, please feel free to join our Discord chat!
More coming soon...
Carrot™ is available under the MIT License
This bug has been moved from Neataptic.
wagenaartje/neataptic#145
While createPool() gracefully handles the no-template case, provenance doesn't. You could simply set the default network (new Network(this.input, this.output)) when none is provided, and simplify createPool().
It would be great to just automatically generate all of our documentation, on the fly, when we are running npm publish.
@christianechevarria Let me know if this is possible.
This bug has been moved from Neataptic.
wagenaartje/neataptic#53
If a network gets mutated from a Neat instance with mutation.ADD_NODE, the new node should only get a random activation if mutation.MOD_ACTIVATION is present in Neat.mutation.

// Random squash function
node.mutate(mutation.MOD_ACTIVATION);
Add reinforcement training, like DQN to Carrot.
Requested by Jared Codling
This bug has been moved from Neataptic.
wagenaartje/neataptic#25
Currently I'm rewriting all the built-in network code to construct networks from layers instead of groups and nodes. However, using groups/nodes to construct an LSTM network seems to always outperform using LSTM layers.

Simple one-layer LSTM network with layers:

function LSTM(inputSize, hiddenSize, outputSize){
  var input = new Layer.Dense(inputSize);
  var hidden = new Layer.LSTM(hiddenSize);
  var output = new Layer.Dense(outputSize);

  input.connect(hidden, Methods.Connection.ALL_TO_ALL);
  hidden.connect(output, Methods.Connection.ALL_TO_ALL);

  // option.inputToOutput is set to true for Architect.LSTM
  if(true) input.connect(output, Methods.Connection.ALL_TO_ALL);

  return Architect.Construct([input, hidden, output]);
}

So:

var network = new LSTM(1, 6, 1);
// is the equivalent of
var network = new Architect.LSTM(1, 6, 1);

However, there is one key difference: Layer.LSTM consists of an output block itself. So when you connect Layer.LSTM to an output layer, there will be a group of that size in between (this is needed if you connect two LSTM layers to each other). Architect.LSTM also does this, except for the output layer:

var outputBlock = i == blocks.length - 1 ? outputLayer : new Group(block);

However, there is no way a Layer.LSTM can know that the next layer is a regular dense layer - if it did know, then no outputBlock would be needed.

Visualisation
Left: layer LSTM - Right: architect LSTM
You can clearly see the lack of gates/extra nodes in the second image. All these extra nodes and gates require a lot of extra iterations to converge to a solution.

Current solution idea
Make the Layer.Dense.input() function detect if the connecting layer is an LSTM. If it is, remove its outputBlock. This requires a workaround of the connect() function though.

This issue will not be fixed soon.
Related to #4
It would be great if creating a neuron with incoming connections worked something like this:
An old Neataptic bug which is still bugging me.
I'm getting Infinity/NaN outputs while training, using the latest Node.js.
My settings:
Methods.mutation.MOD_ACTIVATION.mutateOutput = false;
Methods.mutation.MOD_ACTIVATION.allowed = [
Methods.activation.LOGISTIC,
Methods.activation.TANH,
Methods.activation.STEP,
Methods.activation.SOFTSIGN,
Methods.activation.SINUSOID,
Methods.activation.GAUSSIAN,
Methods.activation.BIPOLAR,
Methods.activation.BIPOLAR_SIGMOID,
Methods.activation.HARD_TANH,
Methods.activation.INVERSE,
Methods.activation.SELU,
Methods.activation.RELU,
Methods.activation.BENT_IDENTITY,
Methods.activation.IDENTITY,
];
neat = new Neat(4, 1, null,
{
mutation: Methods.mutation.ALL,
popsize: 100,
mutationRate: 0.2,
elitism: 10,
equal: true,
network: new Architect.Random(4, 5, 1),
provenance: 2,
maxNodes: 7,
maxConns: 10,
maxGates: 5,
}
);
Infinity Example:
The normalised data:
[0.9354838709677419, 0.5, 0.5933786078098472, 0.5880669161907702]
The genome:
{"nodes":[{"bias":0,"type":"input","squash":"LOGISTIC","mask":1,"index":0},{"bias":0,"type":"input","squash":"LOGISTIC","mask":1,"index":1},{"bias":0,"type":"input","squash":"LOGISTIC","mask":1,"index":2},{"bias":0,"type":"input","squash":"LOGISTIC","mask":1,"index":3},{"bias":1.1508084667830487,"type":"output","squash":"SELU","mask":1,"index":4}],"connections":[{"weight":1,"from":4,"to":4,"gater":4},{"weight":-0.05001360658035439,"from":3,"to":4,"gater":null},{"weight":0.9984137443904727,"from":2,"to":4,"gater":null},{"weight":-0.7832753538521565,"from":1,"to":4,"gater":null},{"weight":-0.9040067054346645,"from":0,"to":4,"gater":4}],"input":4,"output":1,"dropout":0}
NaN Example:
The normalised data:
[0.3870967741935484, 0.75, 0.5040295048190726, 0.5575469079452833]
The genome:
{"nodes":[{"bias":0,"type":"input","squash":"LOGISTIC","mask":1,"index":0},{"bias":0,"type":"input","squash":"LOGISTIC","mask":1,"index":1},{"bias":0,"type":"input","squash":"LOGISTIC","mask":1,"index":2},{"bias":0,"type":"input","squash":"LOGISTIC","mask":1,"index":3},{"bias":-0.0717463180924848,"type":"hidden","squash":"BENT_IDENTITY","mask":1,"index":4},{"bias":-0.02994106373052867,"type":"output","squash":"LOGISTIC","mask":1,"index":5}],"connections":[{"weight":1,"from":4,"to":4,"gater":null},{"weight":1,"from":5,"to":5,"gater":null},{"weight":-0.048341462482942354,"from":4,"to":5,"gater":null},{"weight":-0.4890970041600281,"from":3,"to":5,"gater":5},{"weight":0.9984137443904727,"from":2,"to":5,"gater":5},{"weight":-0.4890970041600281,"from":3,"to":4,"gater":4},{"weight":0.024574079733371557,"from":1,"to":5,"gater":null},{"weight":0.9984137443904727,"from":2,"to":4,"gater":4},{"weight":-0.9040067054346645,"from":0,"to":5,"gater":5},{"weight":0.049181674792330154,"from":1,"to":4,"gater":null},{"weight":0.04742932010695605,"from":0,"to":4,"gater":null}],"input":4,"output":1,"dropout":0}
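One plausible mechanism for the Infinity output (an assumption, not a confirmed diagnosis of this genome): the first genome has a self-connection on its output node with an unbounded squash (SELU), and an unbounded squash fed its own output can compound without limit. A toy illustration:

```javascript
// Toy illustration (not the library's code): an unbounded squash on a
// self-connection can compound each activation until it overflows to Infinity.
const selu = x => {
  const alpha = 1.6732632423543772, scale = 1.0507009873554805;
  return scale * (x > 0 ? x : alpha * (Math.exp(x) - 1));
};

let state = 1;
for (let step = 0; step < 2000; step++) {
  state = selu(state + state); // the self-connection feeds the state back in
}
console.log(state); // Infinity
```

A bounded squash (LOGISTIC, TANH) cannot blow up this way, which is consistent with the problem only appearing once mutation assigns unbounded activations.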
This suggestion has been moved from Neataptic.
wagenaartje/neataptic#82
A SoftMax Function for the output layer would be awesome for any classification NN. I took a look inside your code and I guess it's not that easy to integrate into the activation functions, am I right?
https://en.wikipedia.org/wiki/Softmax_function
Owner's reply:
I have thought about this, your assumption is right. It's quite hard to integrate it as activations are now completely independent of each other. It would also require a more layered structure of networks, which is not the case right now.
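For reference, softmax itself is simple; the integration difficulty described above is that, unlike the per-neuron squashes, it needs the whole output vector at once. A standalone sketch:

```javascript
// Standalone softmax: normalizes a whole output vector into probabilities.
// It cannot be a per-neuron squash because each output depends on all others.
function softmax(values) {
  const max = Math.max(...values);            // subtract max for stability
  const exps = values.map(v => Math.exp(v - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

console.log(softmax([1, 2, 3])); // probabilities summing to 1
```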
Describe the bug
Title captures description
I may have missed something but I couldn't see any NEAT documentation for Carrot.
Neataptic documentation:
https://wagenaartje.github.io/neataptic/docs/neat/
new Group(props)
new Group(): Creates a blank new group
new Group(25): Creates a group with 25 neurons inside of it
new Group([n0, n1]): Creates a group with 2 neurons inside of it - n0 and n1
new Group(group): Creates a new group with the same number of neurons in it as group

.project(object[, options][, callback])
group.project(neuron): Projects every output neuron in group to neuron
group.project(layer): Projects every output neuron in group to every neuron in layer
group.project(other_group): Projects every output neuron in group to every input neuron in other_group

.activate([inputs][, callback])
group.activate(): Activates every neuron in the group; input groups require inputs
group.activate([0, 1, 0, 1]): Activates every input neuron in the group with the given values, then every other neuron with the forward-feed results; inputs.length must equal group.inputs.length
Is your feature request related to a problem? Please describe.
It's incredibly confusing to try to figure out how much data to train on vs. test on - and how.
Describe the solution you'd like
It would be amazing to have a course or tutorial with different examples of how to train different AI bots, showing when and where different approaches are applicable.
Describe alternatives you've considered
Siraj (School of AI); YouTube; etc.
Add support for building NNs with neurons, groups, and layers.
When copy/pasting from the docs, the libraries are not imported, and the examples assume the user understands the library's exported object structure.
As an example, the network.evolve() function's example in the docs assumes a) the user declared carrot.Network as Network, and b) declared carrot.methods as methods.
async function execute () {
var network = new Network(2,1);
// XOR dataset
var trainingSet = [
{ input: [0,0], output: [0] },
{ input: [0,1], output: [1] },
{ input: [1,0], output: [1] },
{ input: [1,1], output: [0] }
];
await network.evolve(trainingSet, {
mutation: methods.mutation.FFW,
equal: true,
error: 0.05,
elitism: 5,
mutationRate: 0.5
});
network.activate([0,0]); // 0.2413
network.activate([0,1]); // 1.0000
network.activate([1,0]); // 0.7663
network.activate([1,1]); // -0.008
}
execute();
let carrot = require("@liquid-carrot/carrot");
async function execute () {
var network = new carrot.Network(2,1);
// XOR dataset
var trainingSet = [
{ input: [0,0], output: [0] },
{ input: [0,1], output: [1] },
{ input: [1,0], output: [1] },
{ input: [1,1], output: [0] }
];
await network.evolve(trainingSet);
console.log(network.activate([0,0])); // 0.2413
console.log(network.activate([0,1])); // 1.0000
console.log(network.activate([1,0])); // 0.7663
console.log(network.activate([1,1])); // -0.008
}
execute();
let { Network } = require("@liquid-carrot/carrot");
async function execute () {
var network = new Network(2,1);
// XOR dataset
var trainingSet = [
{ input: [0,0], output: [0] },
{ input: [0,1], output: [1] },
{ input: [1,0], output: [1] },
{ input: [1,1], output: [0] }
];
await network.evolve(trainingSet);
console.log(network.activate([0,0])); // 0.2413
console.log(network.activate([0,1])); // 1.0000
console.log(network.activate([1,0])); // 0.7663
console.log(network.activate([1,1])); // -0.008
}
execute();
Properties
Key | Description
---|---
from | Source neuron
to | Destination neuron
weight | Connection weight/importance
Neataptic has a pull request for Weighted Absolute Percentage Error (WAPE) that was never merged in (unmaintained). Maybe something we would want to merge in here?
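For reference, WAPE is the sum of absolute errors divided by the sum of absolute targets; a minimal sketch of the metric itself (not the PR's actual code):

```javascript
// Weighted Absolute Percentage Error: sum(|target - output|) / sum(|target|).
// Unlike plain MAPE, it weights errors by target magnitude and avoids
// per-sample division by zero.
function wape(targets, outputs) {
  let absError = 0, absTarget = 0;
  for (let i = 0; i < targets.length; i++) {
    absError += Math.abs(targets[i] - outputs[i]);
    absTarget += Math.abs(targets[i]);
  }
  return absError / absTarget;
}

console.log(wape([2, 2], [1, 3])); // 0.5
```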
The Neuron function does not create a new neuron; it passes a reference to the same neuron.
Need to create a better factory function.
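A factory that allocates a fresh object per call avoids the shared-reference problem; a hedged sketch with an invented neuron shape:

```javascript
// Hedged sketch of a factory that returns a new neuron object per call,
// instead of handing out a reference to one shared instance.
// The { bias, connections } shape is invented for illustration.
function createNeuron(options = {}) {
  return {
    bias: options.bias !== undefined ? options.bias : Math.random() * 2 - 1,
    connections: { incoming: [], outgoing: [] },
  };
}

const a = createNeuron();
const b = createNeuron();
console.log(a !== b); // true: distinct instances, no shared state
```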
Add support for building NNs with simple neuron groups
TensorFlow.js has tfjs-vis, a small set of visualization utilities.
It's a very handy way to set up graphs to see how well your training is going.
Demo here:
https://storage.googleapis.com/tfjs-vis/mnist/dist/index.html
It would be great to see Carrot support something similar.
Contributors to this repo value their worth as human beings by the number of stars this repo receives.
Please give our repo one shooting star.
Neataptic has a pull request for an EfficientMutation flag that was never merged in (unmaintained).
"Before it attempts to do any mutations, it checks which mutations are possible on the genome from the selected mutation types, and passes them to be utilized from the selectMutationMethod."
Maybe something we would want to merge in here or look into? I've tested it on my own project and it seems to work fine.
I was going to write this in another issue, but since you asked for feedback here...
I have found the Neat class is much more complicated than it needs to be.
My ideal api would be:
Originally posted by @dan-ryan in #25 (comment)
It would be great if we could get a strong set of use cases and examples built out for different communities to be able to use in their individual projects - and maybe add them to the README.md, @christianechevarria.
As a first contender, @mateogianolio created this great - really simple - OCR example with Synaptic.js.
I think this could be a really cool visual - maybe implement it as a GAN and have the generative NN do a live display.
HyperNEAT is an improvement over the NEAT algorithm. We should add an option for the type of evolution algorithm when calling the evolve function.
This video and blog explain how HyperNEAT works:
https://www.youtube.com/watch?v=t15wUkCXuxQ
https://towardsdatascience.com/hyperneat-powerful-indirect-neural-network-evolution-fba5c7c43b7b
The HyperNEAT paper:
http://axon.cs.byu.edu/~dan/778/papers/NeuroEvolution/stanley3**.pdf
You could go a step further, Evolved-Substrate HyperNEAT addresses some of the issues of HyperNEAT:
https://www.mitpressjournals.org/doi/pdfplus/10.1162/ARTL_a_00071
Currently, Node.connect first checks whether the target is the node itself and then checks whether it's already projecting to the node; this should be the other way around. The current code:
connect: function (target, weight) {
if (typeof target.bias !== 'undefined') { // must be a node!
if (target === this) {
// Turn on the self connection by setting the weight
if (this.connections.self.weight !== 0) {
if (config.warnings) console.warn('This connection already exists!');
} else {
this.connections.self.weight = weight || 1;
}
connections.push(this.connections.self);
} else if (this.isProjectingTo(target)) {
throw new Error('Already projecting a connection to this node!');
} else {
let connection = new Connection(this, target, weight);
target.connections.in.push(connection);
this.connections.out.push(connection);
connections.push(connection);
}
} else { // should be a group
for (var i = 0; i < target.nodes.length; i++) {
let connection = new Connection(this, target.nodes[i], weight);
target.nodes[i].connections.in.push(connection);
this.connections.out.push(connection);
target.connections.in.push(connection);
connections.push(connection);
}
}
return connections;
}
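A toy sketch of the proposed check order (invented shapes, not the library's code): reject duplicate projections first, then handle the self-connection case:

```javascript
// Toy node: tracks outgoing targets and a self-connection weight (0 = off).
const makeNode = () => ({
  selfWeight: 0,
  out: [],
  isProjectingTo(target) {
    return this.out.includes(target) ||
      (target === this && this.selfWeight !== 0);
  },
});

// Reordered checks: duplicate-projection test runs before the self case.
function connect(self, target) {
  if (self.isProjectingTo(target)) {
    throw new Error('Already projecting a connection to this node!');
  }
  if (target === self) {
    self.selfWeight = 1; // turn on the self connection
    return 'self';
  }
  self.out.push(target);
  return 'new';
}
```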
Performance
Algorithms
Architectures
new Network()
new Network(): Creates a new blank network
new Network([l0, l1, l2]): Creates a new neural network where l0 projects to l1 and l1 projects to l2
new Network([g0, g1, g2]): Creates a new neural network where g0 projects to g1 and g1 projects to g2
new Network([n0, n1, n2]): Creates a new neural network where n0 projects to n1 and n1 projects to n2

.activate(inputs[, callback])
network.activate([0, 1, 0, 1]):

This bug has been moved from Neataptic.
wagenaartje/neataptic#57
The pull request:
wagenaartje/neataptic#148
Cannot resolve 'child_process'
./~/neataptic/src/multithreading/workers/node/testworker.js
Module not found: Error: Can't resolve 'child_process' in '[...]/node_modules/neataptic/src/multithreading/workers/node'
 @ ./~/neataptic/src/multithreading/workers/node/testworker.js 7:9-33
 @ ./~/neataptic/src/multithreading/workers/workers.js
 @ ./~/neataptic/src/multithreading/multi.js
 @ ./~/neataptic/src/neataptic.js
Is your feature request related to a problem? Please describe.
Figuring out what network topology to use based on previous data and particular problems is incredibly time-consuming and frustrating.
Describe the solution you'd like
It would be amazing if Carrot could automate the process of not only neuroevolution, but also what network and network structure/architecture/paradigm would be best to solve a particular problem.
Describe alternatives you've considered
Trial and error; Data Robot; reading research papers; etc.
This bug has been moved from Neataptic.
wagenaartje/neataptic#114
Thanks for such a great library and the awesome documentation! I've been going through your library with a fine-toothed comb trying to learn it. Thus far in my studies of the implementation, I have only found one thing that I can't follow.
My question revolves around connecting groups to other groups using the ONE_TO_ONE connection method. Here's the code excerpt:

} else if (method === methods.connection.ONE_TO_ONE) {
  if (this.nodes.length !== target.nodes.length) {
    throw new Error('From and To group must be the same size!');
  }
  for (i = 0; i < this.nodes.length; i++) {
    let connection = this.nodes[i].connect(target.nodes[i], weight);
    HERE >>> this.connections.self.push(connection[0]);
    connections.push(connection[0]);
  }
}

I see each node in the current group being connected to the corresponding node in the other group. However, why are the new connections added to the self connections list and not out?

Owner's reply:
I wrote this code some time ago, but it looks like this might just be wrongly coded. I see that some lines before, the code contains:

if (this !== target) {
  if (config.warnings) console.warn('No group connection specified, using ALL_TO_ALL');
  method = methods.connection.ALL_TO_ALL;
} else {
  if (config.warnings) console.warn('No group connection specified, using ONE_TO_ONE');
  method = methods.connection.ONE_TO_ONE;
}

So if this === target, a ONE_TO_ONE connection is assumed when none is given. For some reason I must have thought that a ONE_TO_ONE connection is always a self-connection. The correct code of course should be:

} else if (method === methods.connection.ONE_TO_ONE) {
  if (this.nodes.length !== target.nodes.length) {
    throw new Error('From and To group must be the same size!');
  }
  for (i = 0; i < this.nodes.length; i++) {
    let connection = this.nodes[i].connect(target.nodes[i], weight);
    this.connections.out.push(connection[0]);
    target.connections.in.push(connection[0]);
    connections.push(connection[0]);
  }
}

The faulty code should not cause a lot of problems except for calling Group.prototype.gate with methods.gating.SELF.

I think the Group.prototype.connect function is poorly written anyways, as I never see it checking if this === target when performing the actual connections (so instead of connections being added to .self like they should, they are added to .in AND .out).

Thanks, and it seems like your comb is very fine indeed!
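The corrected bookkeeping from the reply can be sketched in isolation (toy group shapes, not the library's classes):

```javascript
// Toy group: a list of nodes plus in/out connection lists.
const makeGroup = size => ({
  nodes: Array.from({ length: size }, () => ({})),
  connections: { in: [], out: [] },
});

// Corrected ONE_TO_ONE bookkeeping: pair nodes by index and record each
// connection as outgoing on the source group and incoming on the target
// group - not on .self, as the reply explains.
function oneToOne(from, to) {
  if (from.nodes.length !== to.nodes.length) {
    throw new Error('From and To group must be the same size!');
  }
  const connections = [];
  for (let i = 0; i < from.nodes.length; i++) {
    const connection = { from: from.nodes[i], to: to.nodes[i], weight: 1 };
    from.connections.out.push(connection);
    to.connections.in.push(connection);
    connections.push(connection);
  }
  return connections;
}
```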
Add @Private tag to the following things in the documentation:
From opensource.guide:
A CONTRIBUTING file tells your audience how to participate in your project. For example, you might include information on:
How to file a bug report (try using issue and pull request templates)
How to suggest a new feature
How to set up your environment and run tests
In addition to technical details, a CONTRIBUTING file is an opportunity to communicate your expectations for contributions, such as:
The types of contributions you’re looking for
Your roadmap or vision for the project
How contributors should (or should not) get in touch with you
Using a warm, friendly tone and offering specific suggestions for contributions (such as writing documentation, or making a website) can go a long way in making newcomers feel welcomed and excited to participate. For example, Active Admin starts its contributing guide with:
"First off, thank you for considering contributing to Active Admin. It’s people like you that make Active Admin such a great tool."
In the earliest stages of your project, your CONTRIBUTING file can be simple. You should always explain how to report bugs or file issues, and any technical requirements (like tests) to make a contribution.
Link to your CONTRIBUTING file from your README, so more people see it. If you place the CONTRIBUTING file in your project’s repository, GitHub will automatically link to your file when a contributor creates an issue or opens a pull request.
More Advice:
Over time, you might add other frequently asked questions to your CONTRIBUTING file. Writing down this information means fewer people will ask you the same questions over and over again.
For more help with writing your CONTRIBUTING file, check out @nayafia’s contributing guide template or @mozilla’s “How to Build a CONTRIBUTING.md”.