### Evolutionary Neural Networks, backed by TensorFlow and pure Python
Based on the work of Maul et al. in this paper.

This package speeds up the evolution of activation function structure in neural networks. Networks can become more accurate on their problem domain when the activation functions within each layer are mixed together rather than uniformly applied to all neurons.

This package makes it easy to simulate an arbitrary number of layers, neurons, and activation functions to find an optimal arrangement.
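The idea of per-neuron activation functions can be sketched in plain NumPy. This is an illustrative toy only, not this package's implementation; `mixed_layer` and the weight shapes are assumptions for the example:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical layer of 4 neurons where each neuron gets its own
# activation function instead of one function shared by the whole layer.
activations = [sigmoid, np.tanh, lambda z: np.maximum(z, 0), np.tanh]

def mixed_layer(x, weights, activations):
    # x: (n_inputs,), weights: (n_inputs, n_neurons)
    pre = x @ weights
    # Apply the i-th activation function to the i-th neuron's pre-activation
    return np.array([f(z) for f, z in zip(activations, pre)])

x = np.array([0.5, -1.0, 2.0])
w = np.full((3, 4), 0.1)
out = mixed_layer(x, w, activations)
```

A genetic algorithm can then search over which function is assigned to which neuron.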
### Dependencies

- TensorFlow (tested with 0.11.0rc0)
- NumPy (tested with 1.11.2)
### File structure

```
.
├── LICENSE
├── README.md
└── src
    ├── Chromosome.py
    ├── GeneticNetwork.py
    ├── TFGenetic.py
    └── tests.py
```
### Usage

- Import dependencies:

```python
import tensorflow as tf
import numpy as np
import TFGenetic as gen
```
- Define the valid activation functions the genetic algorithm will evolve over:

```python
validActivationFunctions = [tf.nn.sigmoid, tf.nn.tanh, tf.nn.relu, tf.nn.softsign]
```
- Initialize a genetic algorithm population and describe the initial structure of its members. Here, for the Iris dataset, each member is a 4 -> 10 -> 5 -> 1 network:

```python
g = gen.GeneticPool(
	populationSize = 10,
	tournamentSize = 4,
	memberDimensions = [4, 10, 5, 1],
	validActivationFunctions = validActivationFunctions
)
g.generatePopulation()
```
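`tournamentSize` controls selection pressure: in tournament selection, a few members are sampled at random and the fittest of that sample becomes a parent. A minimal sketch of the idea (not this package's implementation; names here are illustrative):

```python
import random

def tournament_select(population, fitnesses, tournament_size):
    # Sample `tournament_size` distinct members, keep the fittest one.
    contenders = random.sample(range(len(population)), tournament_size)
    best = min(contenders, key=lambda i: fitnesses[i])  # lower = better (e.g. loss)
    return population[best]

population = ['a', 'b', 'c', 'd', 'e']
fitnesses = [0.9, 0.2, 0.5, 0.7, 0.3]
parent = tournament_select(population, fitnesses, tournament_size=4)
```

Larger tournaments favor the fittest members more strongly; smaller ones preserve diversity.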
- For the specified number of generations, cycle the population and generate new individuals, then plot the evolution:

```python
numGenerations = 100
for _ in range(numGenerations):
	g.cycle()
	g.generation()
g.plotEvolution()
```
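Conceptually, each generation recombines and mutates the per-neuron activation-function assignments of selected parents. A toy sketch of crossover and mutation on such a chromosome (illustrative only; not this package's internals):

```python
import random

ACTIVATIONS = ['sigmoid', 'tanh', 'relu', 'softsign']

def crossover(parent_a, parent_b):
    # Single-point crossover of two activation-function chromosomes.
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]

def mutate(chromosome, rate=0.1):
    # Each gene is independently re-sampled with probability `rate`.
    return [random.choice(ACTIVATIONS) if random.random() < rate else gene
            for gene in chromosome]

a = ['sigmoid'] * 6
b = ['relu'] * 6
child = mutate(crossover(a, b))
```

Over many generations, arrangements of activation functions that score better on the task come to dominate the population.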