
darnocian / encog-java

Automatically exported from code.google.com/p/encog-java

Java 99.74% NSIS 0.16% Batchfile 0.04% JavaScript 0.01% C 0.05%

encog-java's People

Contributors

jdfagan, manojtrek, rozenkreutz, seemasingh


encog-java's Issues

Missing NeuralData input with generate of TemporalNeuralDataSet

What steps will reproduce the problem?
1. Create a TemporalNeuralDataSet and add a TemporalDataDescription as
input and predict.
2. Create a couple of TemporalPoints with the previously created
TemporalNeuralDataSet.
3. Call generate for the previously created TemporalNeuralDataSet.
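
A minimal Java sketch of these steps (names follow later Encog versions; the exact 1.1.0 signatures are assumptions):

    TemporalNeuralDataSet temporal = new TemporalNeuralDataSet(2, 1); // input window, predict window
    temporal.addDescription(new TemporalDataDescription(
            TemporalDataDescription.Type.RAW, true, true)); // used as both input and predict
    for (int i = 0; i < 5; i++) {
        TemporalPoint point = temporal.createPoint(i); // sequence number i
        point.setData(0, (double) i);                  // oldest point carries value 0
    }
    temporal.generate();
    // Expected: the first NeuralDataPair's input begins with the oldest point's value.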

What is the expected output? What do you see instead?
I expected the first input in getData to contain the value of the first
(oldest) TemporalPoint added. It doesn't: the first input value in the
first NeuralDataPair is the second TemporalPoint.

What version of the product are you using? On what operating system?
1.1.0 on linux x86_64


Original issue reported on code.google.com by [email protected] on 4 Mar 2009 at 11:29

Converting a feedforward NN to flat does not work with a linear activation function

What steps will reproduce the problem?
1. BasicNetwork network = new BasicNetwork();
2. network.addLayer(new BasicLayer(new ActivationLinear(),true,3));
3. FlatNetwork flat = new FlatNetwork(network);

What is the expected output? What do you see instead?
It should work; the error message itself lists linear as allowed:
"To convert to flat a network must only use sigmoid, linear or tanh activation"

Please either adjust the error message or allow ActivationLinear functions in
ValidateForFlat.java, line 80.


What version of the product are you using? On what operating system?
2.4.3, Java, Linux

Thanks!


Original issue reported on code.google.com by goldstein.iupr on 22 Sep 2010 at 4:38

Implement NeuralData and NeuralDataSet

Currently all input and ideal values for the neural networks are provided
as double arrays.  Implement NeuralData and NeuralDataSet in their place.
This is for version 1.0.
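
A sketch of one possible shape for these interfaces (hypothetical method names, not the API Encog eventually shipped):

    public interface NeuralData {
        double[] getData();           // the underlying values
        double getData(int index);
        void setData(int index, double value);
        int size();
    }

    // NeuralDataPair (an input/ideal pair for supervised training) is a
    // hypothetical companion interface in this sketch.
    public interface NeuralDataSet extends Iterable<NeuralDataPair> {
        void add(NeuralData input, NeuralData ideal);
        int getInputSize();
        int getIdealSize();
    }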


Original issue reported on code.google.com by [email protected] on 28 Jul 2008 at 1:26

Parsing English sentence with Encog (Subject-Verb-Object)

How can I parse an English sentence with this brilliant framework (Encog)?
First, I want to extract Subject-Verb-Object from a sentence.
How do I set up the neural net, and what should the inputs and outputs be?
For instance: 
INPUT:  I will go to school. 
OUTPUT: SUBJECT: I, VERB: GO, OBJECT:SCHOOL
It would be wonderful to get this output.

What should be the main approach for these things? 
Please look at the attached file to understand what I mean.

Thanks a lot for your help.

Original issue reported on code.google.com by [email protected] on 20 Apr 2010 at 12:44

Attachments:

OpenCL x86_64

What steps will reproduce the problem?
1. Unzip encog-workbench-2.4.0
2. Download JOCL-windows-x86_64.dll
3. Include JOCL-windows-x86_64.dll in path
4. OpenCL won't load unless I use the regular x86 dll

What is the expected output? What do you see instead?
I'm running Windows 7 x64, so I expected to have to use the x86_64 dll, but the
program doesn't find the x64 dll, only the x86 one.

Please provide any additional information below.

Original issue reported on code.google.com by [email protected] on 26 Jun 2010 at 10:37

SQLNeuralDataSet Connection issues

What steps will reproduce the problem?
1. Try to run the sql neural data set using uid & pwd


It seems that when one enters a uid and pwd, the logic that checks whether a
password is needed is inverted:

    if ((SQLNeuralDataSet.this.uid != null)
            || (SQLNeuralDataSet.this.pwd != null)) {
        this.connection = DriverManager
                .getConnection(SQLNeuralDataSet.this.url);
    } else {
        this.connection = DriverManager.getConnection(
                SQLNeuralDataSet.this.url,
                SQLNeuralDataSet.this.uid,
                SQLNeuralDataSet.this.pwd);
    }

I guess the first if should read:

    if ((SQLNeuralDataSet.this.uid == null)
            || (SQLNeuralDataSet.this.pwd == null)) {
        this.connection = DriverManager
                .getConnection(SQLNeuralDataSet.this.url);
    } else {
        // credentials supplied: pass them through
        this.connection = DriverManager.getConnection(
                SQLNeuralDataSet.this.url,
                SQLNeuralDataSet.this.uid,
                SQLNeuralDataSet.this.pwd);
    }

This is preventing me from using the SQLNeuralDataSet.



Original issue reported on code.google.com by [email protected] on 26 Oct 2009 at 2:03

train.getError() is NaN (r398)

    // inputDataNorm file is attached
    CSVNeuralDataSet trainingSet = new CSVNeuralDataSet(inputDataNorm, 43, 1, false);
    Logging.stopConsoleLogging();

    BasicNetwork network = new BasicNetwork();
    network.addLayer(new BasicLayer(trainingSet.getInputSize()));
    network.addLayer(new BasicLayer(10));
    network.addLayer(new BasicLayer(trainingSet.getIdealSize()));
    network.getStructure().finalizeStructure();
    network.reset();

    final Train train = new Backpropagation(network, trainingSet, 0.8, 0.3);

    int epoch = 1;
    do {
        train.iteration();
        System.out.println("Epoch #" + epoch + " Error:" + train.getError());
        epoch++;
    } while ((epoch < 5000) && (train.getError() > 0.001));

Original issue reported on code.google.com by [email protected] on 3 Apr 2009 at 8:26

Attachments:

Cannot start the Encog Workbench

What steps will reproduce the problem?
1. Download the Encog Workbench
2. chmod +x workbench.sh
3. ./workbench.sh

What is the expected output? What do you see instead?
Instead of getting the application window coming up, I get this:

deadlock@netbux:~/Desktop/encog-workbench/encog-workbench-univ-2.0.0$ ls
copyright.txt  jar  workbench.bat  workbench.sh
deadlock@netbux:~/Desktop/encog-workbench/encog-workbench-univ-2.0.0$
./workbench.sh 
Exception in thread "main" java.lang.NoClassDefFoundError:
org/encog/workbench/EncogWorkBench
Caused by: java.lang.ClassNotFoundException: org.encog.workbench.EncogWorkBench
    at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
    at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
Could not find the main class: org.encog.workbench.EncogWorkBench.  Program will exit.
deadlock@netbux:


What version of the product are you using? On what operating system?
2.0

Please provide any additional information below.


Original issue reported on code.google.com by [email protected] on 16 Aug 2009 at 7:21

encog for c++

Dear administrator:
     Do you have plans to port Encog to the C++ platform?

Original issue reported on code.google.com by [email protected] on 19 Mar 2010 at 2:51

Trying to assign invalud number to matrix: NaN

I often got this stacktrace when training a network:

Exception in thread "main" org.encog.matrix.MatrixError: Trying to assign invalud number to matrix: NaN
    at org.encog.matrix.Matrix.set(Matrix.java:415)
    at org.encog.neural.networks.layers.FeedforwardLayer.createInputMatrix(FeedforwardLayer.java:135)
    at org.encog.neural.networks.layers.FeedforwardLayer.compute(FeedforwardLayer.java:110)
    at org.encog.neural.networks.BasicNetwork.compute(BasicNetwork.java:227)
    at org.encog.neural.networks.BasicNetwork.calculateError(BasicNetwork.java:155)
    at org.encog.neural.networks.training.backpropagation.Backpropagation.iteration(Backpropagation.java:204)

Original issue reported on code.google.com by [email protected] on 5 Mar 2009 at 1:22

Incorrect order in layers after finalizeStructure(), if more than 2 hidden layers are used with a feed-forward network.

What steps will reproduce the problem?
1.
    final BasicNetwork net = new BasicNetwork();
    net.addLayer(new BasicLayer(new ActivationTANH(), false, 2)); // L1
    net.addLayer(new BasicLayer(new ActivationTANH(), false, 3)); // L2
    net.addLayer(new BasicLayer(new ActivationTANH(), false, 5)); // L3
    net.addLayer(new BasicLayer(new ActivationTANH(), false, 3)); // L4
    net.addLayer(new BasicLayer(new ActivationLinear(), false, 2)); // L5
    net.getStructure().finalizeStructure();
    net.reset();

What is the expected output? What do you see instead?

    net.getStructure().getLayers() returns the list of layers in order
    L5 L4 L2 L3 L1; expected: L5 L4 L3 L2 L1

What version of the product are you using? On what operating system?
    Release is 2.3.0 on Windows XP Prof.

Please provide any additional information below.
    I think org.encog.neural.networks.structure.LayerComparator does not compare correctly. If you have any questions, please contact me at stefan_at_srichter_dot_com

Original issue reported on code.google.com by [email protected] on 8 Jun 2010 at 12:26

Fix SOM Layer training issue

There is an issue where the SOM training is not effective.  While training
for the OCR example, or similar, not all of the values are being copied
from one training iteration to the next.

Original issue reported on code.google.com by [email protected] on 4 Oct 2008 at 5:30

FlatNetwork trained values don't get stored in the xml file

What steps will reproduce the problem?
1. Train a BasicNetwork like in the MarketTrain example using the new 
FlatNetwork mechanism
2. The flat network's weights[] array updates correctly during training, but at
the end it should unflatten the learned values back to the parent BasicNetwork.
This doesn't happen in the latest code.

What is the expected output? What do you see instead?
I see the original random weights and thresholds in the trained xml file
instead of the learned ones.

Please use labels and text to provide additional information.


Original issue reported on code.google.com by [email protected] on 16 Sep 2010 at 4:23

Maven support...

Hi

It would be nice to have Encog in the Maven central repository:
http://maven.apache.org/guides/mini/guide-central-repository-upload.html


VELO

Original issue reported on code.google.com by [email protected] on 15 May 2009 at 8:47

Workbench: A new object will overwrite an old object if the old object was loaded from an .eg file.

What steps will reproduce the problem?
1. Load a previous .eg file containing a 'Training Data' object.
2. Create a new 'Training Data' object

What is the expected output? What do you see instead?
The newly created object will overwrite the old object.

What version of the product are you using? On what operating system?
Both 2.1.0 and rev 900

Please provide any additional information below.

Original issue reported on code.google.com by [email protected] on 16 Sep 2009 at 7:45

TickerSymbol has bad hashCode function

What steps will reproduce the problem?
1. try filtering out duplicate TickerSymbol objects like you do in 
MarketNeuralDataSet:

    final Set<TickerSymbol> set = new HashSet<TickerSymbol>();
    for (final TemporalDataDescription desc : getDescriptions()) {
        final MarketDataDescription mdesc = (MarketDataDescription) desc;
        set.add(mdesc.getTicker());
    }

I use the same TickerSymbol (same symbol and exchange) multiple times and it
gets added to the Set every time (Sets aren't supposed to accept duplicates).


I replaced TickerSymbol.hashCode() with this and it works:

    /**
     * {@inheritDoc}
     */
    @Override
    public int hashCode() {
        return safeHashCode(this.symbol) 
        + safeHashCode(this.exchange);
    }

    public static <T> int safeHashCode(T o) {
        return o == null ? 0 : o.hashCode();
    }
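
For the Set to actually drop duplicates, equals() must agree with hashCode(). A minimal sketch (assuming TickerSymbol's symbol and exchange fields; not the project's actual equals()):

    @Override
    public boolean equals(final Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof TickerSymbol)) {
            return false;
        }
        final TickerSymbol other = (TickerSymbol) o;
        return safeEquals(this.symbol, other.symbol)
                && safeEquals(this.exchange, other.exchange);
    }

    private static boolean safeEquals(final Object a, final Object b) {
        return a == null ? b == null : a.equals(b);
    }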

How do I go about getting commit access to the repository? I often find bugs...

Original issue reported on code.google.com by [email protected] on 26 Aug 2010 at 3:52

Little optimization in SmartLearningRate

In SmartLearningRate's determineTrainingSize() method, you scan all rows and
increment a counter to find the number of rows.

If the dataset is Indexable, you should be able to ask for the number of rows
directly:

    private long determineTrainingSize() {
        if (this.train.getTraining() instanceof Indexable) {
            return ((Indexable) this.train.getTraining()).getRecordCount();
        }
        // ... same code as previously ...
    }

I know it's a very small optimization, and only for backprop, but that's my
two cents.

Regards 

Julien Blaize

Original issue reported on code.google.com by [email protected] on 13 Oct 2010 at 7:37

openCL memory leak?

What steps will reproduce the problem?
1. Try pruning a network using a large NeuralDataSet (I use 1887 input-ideal
pairs of market data)

What is the expected output? What do you see instead?
Pruning runs for about 15 minutes and then I get this exception (I assume it
is a memory leak):

org.encog.engine.EncogEngineError: org.encog.engine.EncogEngineError: org.encog.engine.EncogEngineError: org.jocl.CLException: CL_MEM_OBJECT_ALLOCATION_FAILURE
    at org.encog.engine.concurrency.EngineConcurrency.checkError(EngineConcurrency.java:97)
    at org.encog.engine.concurrency.job.ConcurrentJob.process(ConcurrentJob.java:128)
    at org.encog.neural.prune.PruneIncremental.process(PruneIncremental.java:660)
    at org.neotrader.tradesystem.ib.MarketBuildTraining.incremental(MarketBuildTraining.java:366)
    at org.tradesystem.ib.marketscaneventprocessor.ThreadPerInstrumentEventHandlingStrategy$InstrumentAnalyzingThread.run(ThreadPerInstrumentEventHandlingStrategy.java:319)
Caused by: org.encog.engine.EncogEngineError: org.encog.engine.EncogEngineError: org.jocl.CLException: CL_MEM_OBJECT_ALLOCATION_FAILURE
    at org.encog.engine.network.train.prop.TrainFlatNetworkProp.iteration(TrainFlatNetworkProp.java:320)
    at org.encog.neural.networks.training.propagation.Propagation.iteration(Propagation.java:145)
    at org.encog.neural.prune.PruneIncremental.performJobUnit(PruneIncremental.java:563)
    at org.encog.engine.concurrency.job.JobUnitWorker.run(JobUnitWorker.java:67)
    at org.encog.engine.concurrency.PoolItem.run(PoolItem.java:76)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)
Caused by: org.encog.engine.EncogEngineError: org.jocl.CLException: CL_MEM_OBJECT_ALLOCATION_FAILURE
    at org.encog.engine.opencl.kernels.KernelNetworkTrain.calculate(KernelNetworkTrain.java:219)
    at org.encog.engine.network.train.gradient.GradientWorkerCL.run(GradientWorkerCL.java:167)
    ... 4 more
Caused by: org.jocl.CLException: CL_MEM_OBJECT_ALLOCATION_FAILURE
    at org.jocl.CL.checkResult(CL.java:523)
    at org.jocl.CL.clEnqueueWriteBuffer(CL.java:5618)
    at org.encog.engine.opencl.kernels.KernelNetworkTrain.calculate(KernelNetworkTrain.java:195)
    ... 5 more



Please use labels and text to provide additional information.


Original issue reported on code.google.com by [email protected] on 13 Sep 2010 at 2:12

maven project settings for encog-core

Could you update encog-core's pom.xml file (required to build the project using
Maven) with the up-to-date version attached? The new version requires no hacks
from users (renaming folders, manually downloading libraries, etc.).

Original issue reported on code.google.com by [email protected] on 28 Sep 2010 at 3:25

Attachments:

OpenCL support

I'm planning to build quite a big neural network with Encog. I'm afraid
that it will be slow (especially training) due to the large number of neurons
in it. Human neurons run 'in parallel', while Encog neuron values are computed
sequentially, which is slow in bigger networks.
There's a standard called OpenCL that taps the enormous power of modern
graphics cards. All computations are done in parallel, so even big neural
networks may be very fast.

Will Encog ever support OpenCL? Perhaps I can make some code if I have free
time.

Technical issues:
- OpenCL requires a recent graphics card from NVIDIA or ATI, plus recent
drivers; on computers without one, the 'standard' Java code has to be used
- OpenCL in Java is available via the OpenCL4Java project
- OpenCL code is written in a variant of C, while Encog is in Java;
probably the easiest way to add OpenCL support would be to write a
Java-to-OpenCL-C translator



Original issue reported on code.google.com by [email protected] on 25 Feb 2010 at 9:25

Second set of training data will automatically contain the first set.

What steps will reproduce the problem?
1. Create a new 'Training Data' object.
2. Create another 'Training Data' object.

What is the expected output? What do you see instead?
The second 'Training Data' object will include all data from the first
object. You can also test it with custom data, which will also be added.

What version of the product are you using? On what operating system?
I tried this both in the 2.1.0 version and on the development version from
the subversion repository (rev 900).

Please provide any additional information below.
I tried a workaround as well: create the first 'Training Data' object and
immediately the second one, then paste one set of data into the first
'Training Data' and a second set into the second 'Training Data'. At first
things seem OK, but when you reopen the second object it contains the data
from both 'Training Data' objects.

Original issue reported on code.google.com by [email protected] on 16 Sep 2009 at 7:23

StopTrainingStrategy endless loop

What steps will reproduce the problem?
The problem only appears sometimes; it's random, because the training is random too.

1. Create a complex network with 15 input neurons, 15 hidden neurons, 1 output neuron
2. Set up ResilientPropagation with StopTrainingStrategy(0.0000001, 100)
3. Let it train (see the sketch below)
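
A minimal sketch of this setup (assuming Encog's StopTrainingStrategy, whose arguments are the minimum improvement and the number of tolerated cycles):

    // network: 15 inputs, 15 hidden, 1 output, as in step 1
    final ResilientPropagation train = new ResilientPropagation(network, trainingSet);
    final StopTrainingStrategy stop = new StopTrainingStrategy(0.0000001, 100);
    train.addStrategy(stop);

    while (!stop.shouldStop()) {
        train.iteration(); // sometimes the error just cycles and this loop never exits
    }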

What is the expected output? What do you see instead?
I expect the training to stop at some point. Sometimes it never does: the
error keeps changing, but alternates between two or three values.
For example:
train.iteration(); // error = 0.4300001234
train.iteration(); // error = 0.4300003333
train.iteration(); // error = 0.4300006789
train.iteration(); // error = 0.4300001234
train.iteration(); // error = 0.4300003333
train.iteration(); // error = 0.4300006789
...

What version of the product are you using? On what operating system?
Encog 2.4, OS X 10.6.1

Please provide any additional information below.


Original issue reported on code.google.com by [email protected] on 21 Sep 2010 at 7:50

Workbench's generate-code tool generates incorrect code.

What steps will reproduce the problem?
1. Use the Encog Workbench and create a feedforward network.
2. Set the input neuron count to 5.
3. Set the output neuron count to 1.
4. Then add a hidden layer with a count of 5.
5. Tools -> Generate code. [X] Java

What is the expected output? What do you see instead?
A valid network setup 

Instead, we get:

...other code...
Layer inputLayer = new BasicLayer( new ActivationTANH(),true,5);
inputLayer.addNext(inputLayer);
...other code...

Which will result in a stack overflow upon training.
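
Presumably the generator intended something like the following (a hypothetical corrected sketch, reusing the addNext style of the generated code):

    Layer inputLayer = new BasicLayer(new ActivationTANH(), true, 5);
    Layer hiddenLayer = new BasicLayer(new ActivationTANH(), true, 5);
    Layer outputLayer = new BasicLayer(new ActivationTANH(), true, 1);
    inputLayer.addNext(hiddenLayer);   // not inputLayer.addNext(inputLayer)
    hiddenLayer.addNext(outputLayer);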


What version of the product are you using? On what operating system?
Encog 2.1.0, Windows XP

Please provide any additional information below.


Original issue reported on code.google.com by [email protected] on 4 Oct 2009 at 2:53

Problem when using persist without CL (JOCL-0.1.3a-beta.jar) in classpath

What steps will reproduce the problem?
1. Save a BasicNeuralNetwork with a PersistWriter
2. Load the eg file to recreate the BasicNeuralNetwork
3. In the method org.encog.util.ReflectionUtil.loadClassmap() an Error is
thrown when it tries to load the line org.encog.util.cl.EncogCLPlatform from
classes.txt, if you don't have JOCL-0.1.3a-beta.jar on your classpath.

The exact error message is: java.lang.NoClassDefFoundError: org/jocl/NativePointerObject

All the classes after this one in classes.txt are not loaded.

What is the expected output? What do you see instead?
If you try to load the file a second time, it doesn't throw an error and the
network is created correctly. I think the second attempt doesn't call
loadClassmap() because the map is no longer empty.
All subsequent calls work OK.

What version of the product are you using? On what operating system?
encog version 2.4.3

Please provide any additional information below.
I think that if the JOCL jar is not on the classpath (because we don't want to
use CL), the loadClassmap() method should just skip those classes.
As a workaround, my calling class calls loadClassmap() once inside a
try/catch (Error er) block, to avoid the error surfacing in the PersistReader
(see the sketch below).
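
A minimal sketch of that workaround (assuming loadClassmap() is accessible to calling code, as the report implies):

    try {
        // the first call populates the class map and hits the NoClassDefFoundError
        org.encog.util.ReflectionUtil.loadClassmap();
    } catch (final Error er) {
        // swallow it once; the map is now non-empty, so the PersistReader
        // will not trigger loadClassmap() again
    }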

Thank you for your great work on Encog

Regards 

Julien Blaize


Original issue reported on code.google.com by [email protected] on 30 Sep 2010 at 7:15

Error in javadoc of NeuralDataPair

Hi,

I am reading the source to better understand how Encog works, and I think I
spotted an error in the source of the 2.4.2 version.

In the NeuralDataPair interface, I think you inverted the terms supervised and
unsupervised. It is just a comment, but it can be misleading for people who
are not used to neural networks.

Regards

Julien Blaize

Original issue reported on code.google.com by [email protected] on 16 Sep 2010 at 9:04

my program

/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package javaapplication2;

import java.io.*;
import java.util.*;
import java.lang.*;
import org.encog.neural.activation.ActivationLinear;
import org.encog.neural.activation.ActivationTANH;
import org.encog.neural.activation.ActivationSIN;
import org.encog.neural.networks.synapse.*;
import org.encog.neural.networks.layers.Layer;
import org.encog.neural.activation.ActivationSigmoid;
import org.encog.neural.data.NeuralData;
import org.encog.neural.data.NeuralDataPair;
import org.encog.neural.data.NeuralDataSet;
import org.encog.neural.data.basic.BasicNeuralDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.*;
import org.encog.neural.networks.layers.BasicLayer;
import org.encog.neural.networks.logic.FeedforwardLogic;
import org.encog.neural.networks.training.Train;
import org.encog.neural.networks.training.propagation.back.Backpropagation;

/**
 * @author Administrator
 */
public class NewClass {

    double[][] FIN = null;
    double[][] FOU = null;
    boolean data = false;

    NewClass(String str) {
        try {
            FileReader fr = new FileReader(str);
            BufferedReader br = new BufferedReader(fr);
            String myreadline;

            Vector v = new Vector();
            while (br.ready()) {
                myreadline = br.readLine();
                v.addElement(myreadline);
            }
            br.close();

            if (v.size() > 0) {
                data = true;
                FIN = new double[v.size()][11];
                FOU = new double[v.size()][1];
                String[] sS;
                String line;
                for (int i = 0; i < v.size(); i++) {
                    line = (String) v.elementAt(i);
                    sS = line.split(",");
                    for (String ff : sS) {
                        System.out.print(ff);
                        System.out.print(",");
                    }
                    System.out.println("");
                    FIN[i][0] = Double.parseDouble(sS[0]);
                    FIN[i][1] = Double.parseDouble(sS[1]);
                    FIN[i][2] = Double.parseDouble(sS[2]);
                    FIN[i][3] = Double.parseDouble(sS[3]);
                    FIN[i][4] = Double.parseDouble(sS[4]);
                    FIN[i][5] = Double.parseDouble(sS[5]);
                    FIN[i][6] = Double.parseDouble(sS[6]);
                    FIN[i][7] = Double.parseDouble(sS[7]);
                    FIN[i][8] = Double.parseDouble(sS[8]);
                    FIN[i][9] = Double.parseDouble(sS[9]);
                    FIN[i][10] = Double.parseDouble(sS[10]);
                    FOU[i][0] = Double.parseDouble(sS[11]);
                }
            }
        } catch (IOException e) {
        }
    }

    // synchronized void run() {
    void run() {
        if (data) {
            BasicNetwork network = new BasicNetwork();
            Layer inputLayer = new BasicLayer(new ActivationLinear(), true, 11);
            Layer hiddenLayer = new BasicLayer(new ActivationSIN(), true, 18);
            Layer outputLayer = new BasicLayer(new ActivationTANH(), true, 1);

            Synapse synapseInputToHidden = new WeightedSynapse(inputLayer, hiddenLayer);
            Synapse synapseHiddenToOutput = new WeightedSynapse(hiddenLayer, outputLayer);

            inputLayer.getNext().add(synapseInputToHidden);
            hiddenLayer.getNext().add(synapseHiddenToOutput);

            network.tagLayer(BasicNetwork.TAG_INPUT, inputLayer);
            network.tagLayer(BasicNetwork.TAG_OUTPUT, outputLayer);

            network.setLogic(new FeedforwardLogic());
            network.getStructure().finalizeStructure();
            network.reset();

            NeuralDataSet trainingSet = new BasicNeuralDataSet(FIN, FOU);

            // train the neural network
            final Train train = new Backpropagation(network, trainingSet, 0.7, 0.8);
            int epoch = 1;
            do {
                train.iteration();
                System.out.println("Epoch #" + epoch + " Error:" + train.getError());
                epoch++;
            } while (train.getError() > 0.01);
        }
    }
}

Original issue reported on code.google.com by [email protected] on 20 Apr 2010 at 11:34

Yahoo finance example got broken with latest code

What steps will reproduce the problem?
1. Generating the xml marketdata file still works but I get a 
NullPointerException when I try to train the network. This worked fine 
recently, so I assume a recent change broke it.

What is the expected output? What do you see instead?
Exception in thread "main" java.lang.NullPointerException
    at org.encog.engine.network.flat.FlatNetwork.clearContext(FlatNetwork.java:334)
    at org.encog.engine.network.train.TrainFlatNetwork.iteration(TrainFlatNetwork.java:346)
    at org.encog.neural.networks.training.propagation.Propagation.iteration(Propagation.java:234)
    at org.encog.examples.neural.predict.market.MarketTrain.train(MarketTrain.java:82)
    at org.encog.examples.neural.predict.market.MarketPredict.main(MarketPredict.java:47)

What version of the product are you using? On what operating system?
I use the latest code from the repository, on Windows Vista.

Please provide any additional information below.


Original issue reported on code.google.com by [email protected] on 27 Aug 2010 at 1:24

Maven support for Encog

Maven is a great build tool (for me, much better than the Ant used by Encog),
and most open-source Java projects seem to use it. Unfortunately Encog isn't
integrated with Maven, so I couldn't use Encog in my Maven-based project.

So I made my own pom.xml (Maven configuration file); it is in the attachment.
I would be very pleased if this pom.xml were included in the official Encog
source tree and distribution.


Original issue reported on code.google.com by [email protected] on 25 Feb 2010 at 8:56

Enable the use of a seed for the Randomizer classes

Hi,

I need to be able to train a neural network with a given random seed for
initialization, so that two training runs with the same parameters produce
the same network.

I have not found any other way but to create a class that inherits
NguyenWidrowRandomizer and overrides the public double randomize(final double d)
method, because you use Math.random() instead of creating a Random(long seed)
object (see the sketch below).
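
A minimal sketch of that subclass (the package path, constructor, and the getMin()/getMax() accessors are assumptions about the 2.x API, not verified):

    import java.util.Random;
    import org.encog.util.randomize.NguyenWidrowRandomizer; // assumed package

    public class SeededNguyenWidrowRandomizer extends NguyenWidrowRandomizer {
        private final Random random;

        public SeededNguyenWidrowRandomizer(double min, double max, long seed) {
            super(min, max);
            this.random = new Random(seed); // reproducible, unlike Math.random()
        }

        @Override
        public double randomize(final double d) {
            // same uniform draw over the range, but from the seeded source
            return getMin() + this.random.nextDouble() * (getMax() - getMin());
        }
    }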

Maybe it would be a nice addition to allow people to specify a long seed,
meaning that they want a Random object with specifically that seed instead of
Math.random().

This is especially useful for reproducing bugs and comparing different
datasets without having to save a neural net after initialization.

If you need more details, or the class I wrote, feel free to ask.

Regards.

Julien Blaize

Original issue reported on code.google.com by [email protected] on 27 Sep 2010 at 9:14

Bug in TrainingContinuationPersistor

What steps will reproduce the problem?
1. Save TrainingContinuation object from ResilientPropagation.pause() with
EncogPersistedCollection
2. Load saved object
3. Try ResilientPropagation.pause() with saved object

What version of the product are you using? On what operating system?
2.3.0

Please provide any additional information below.
This bug occurs because on line 136 of TrainingContinuationPersistor you use
BasicNetworkPersistor.TAG_LAYERS instead of the local TAG_ITEMS = "items".
So when you save the object, you get the xml tag "layers" in the file; but
when restoring, the code looks for the tag "items" instead of "layers", so
you get nothing.
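
In other words (a hypothetical sketch; the surrounding persistor code and the beginTag call are assumed, only the constant changes):

    // before (line 136): save writes <layers>, but restore looks for <items>
    out.beginTag(BasicNetworkPersistor.TAG_LAYERS);
    // after: use the persistor's own constant so save and restore agree
    out.beginTag(TrainingContinuationPersistor.TAG_ITEMS);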

P.S. It would be nice to have svn access so I could contribute bug fixes
instead of writing this boring report.

Original issue reported on code.google.com by [email protected] on 29 Mar 2010 at 7:35

TrainFlatNetworkSCG doesn't respect getNumberOfThread in the init (version 2.5)

Hi,

I am testing the 2.5 version right now. In my environment I don't want Encog
to use OpenCL or multithreading, because I already have a big threading
structure for all types of data mining algorithms on top of it.

When I run a ScaledConjugateGradient in version 2.5, Encog always creates a
thread pool (and I don't want it to).

The TrainFlatNetworkSCG constructor calls calculateGradients(), which calls
super.calculateGradients() (we are now in TrainFlatNetworkProp), and there the
workers array is initialized by a call to init().

But at that point there is no way to set the number of threads (we are still
in the constructor), so the default of 0 is used.

To work around this I had to create two classes: one that extends
ScaledConjugateGradient and uses a second class that extends
TrainFlatNetworkSCG and overrides the number of threads to 1 instead of 0
(via the protected field, because you use the field rather than calling the
method, which is not nice for people who extend your classes).

I hope I was clear enough.

Thanks a lot for your great work on this library. 
If I find new bugs, do you prefer that I report them here or in the Encog Java forum?

Regards

Julien Blaize

Original issue reported on code.google.com by [email protected] on 11 Oct 2010 at 11:48
