
Comments (6)

patrikeh commented on September 4, 2024

You should also standardize your inputs as you did your training data, the rest I suspect is a matter of hyperparameter tuning.

from go-deep.

MelleKoning commented on September 4, 2024

When I run go test -v with this code:

trainer := training.NewTrainer(training.NewSGD(0.005, 0.5, 0, false), 50)
data, partial := data.Split(0.95)
trainer.Train(neural, data, partial, 10000)
for _, h := range partial {
	result := neural.Predict(h.Input)
	t.Log("expected", h.Response, "got", result)
}

I get this result:

9900            21.659589805s   0.2966          0.89            
9950            21.769982706s   0.3534          0.89            
10000           21.875378307s   0.0922          0.89
  wine_test.go:225: expected [0 1 0] got [0.513183715591654 0.48543792697186683 0.0013783574364791163]
  wine_test.go:225: expected [1 0 0] got [0.9669493561974722 0.02095528954867503 0.012095354253852707]
  wine_test.go:225: expected [0 1 0] got [0.000358900763500674 0.9854740144964071 0.01416708474009219]
  wine_test.go:225: expected [0 1 0] got [6.222200305646363e-05 0.9979114365903047 0.002026341406638768]
  wine_test.go:225: expected [0 1 0] got [0.030081412314653985 0.9673236981308484 0.002594889554497426]
  wine_test.go:225: expected [1 0 0] got [0.9978704601367283 0.0005394038020093424 0.0015901360612622722]
  wine_test.go:225: expected [0 1 0] got [0.0001495134952342691 0.9997731207926062 7.73657121595669e-05]
  wine_test.go:225: expected [0 1 0] got [0.0007548942137459466 0.9975244626591644 0.001720643127089691]
  wine_test.go:225: expected [0 0 1] got [0.0003894261045324449 0.018225108216312066 0.9813854656791554]

The first result is a bit awkward, but the rest looks good. Did you expect anything else? The figures (output by t.Log) seem to match the expected responses, correct?
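To make rows like the first one easier to judge, the per-class probabilities can be collapsed to a predicted label with an argmax (a small helper sketch, not part of go-deep). By that measure the first row above is actually a misclassification: class 0 narrowly beats the expected class 1.

```go
package main

import "fmt"

// argmax returns the index of the largest element — the predicted
// class for a softmax-style output vector.
func argmax(xs []float64) int {
	best := 0
	for i, x := range xs {
		if x > xs[best] {
			best = i
		}
	}
	return best
}

func main() {
	expected := []float64{0, 1, 0}
	got := []float64{0.513, 0.485, 0.001} // the "awkward" first row above
	fmt.Println("expected class", argmax(expected), "got class", argmax(got))
	// → expected class 1 got class 0
}
```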


MelleKoning commented on September 4, 2024

However, @wwhai, I also noticed that when running your test code, result1, result2, and result3 are sometimes exactly the same, which is very weird. It is as if the net is not using the new input values when Predict is called, or it is returning for result2 the same response it just returned for result1. I have not yet found the cause, but I do not see the same behavior when running the partial test prediction shown in the for..range loop above.
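One plausible explanation, offered here as an assumption rather than a confirmed diagnosis: with raw, un-standardized inputs, the weighted sums feeding the sigmoid units become enormous, the activations saturate, and very different inputs collapse to nearly identical outputs.

```go
package main

import (
	"fmt"
	"math"
)

func sigmoid(x float64) float64 { return 1 / (1 + math.Exp(-x)) }

func main() {
	// Raw wine features span wildly different scales (e.g. proline ~900
	// vs hue ~1). A weighted sum over such raw values is huge, and the
	// sigmoid flattens all of them to essentially 1.0, so distinct
	// inputs become indistinguishable to the next layer.
	for _, raw := range []float64{470, 678, 920} {
		fmt.Printf("sigmoid(%.0f) = %v\n", raw, sigmoid(raw))
	}
}
```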


wwhai commented on September 4, 2024

You should also standardize your inputs as you did your training data, the rest I suspect is a matter of hyperparameter tuning.

Thanks, I used the deep.Standardize function to standardize the input data:

	deep.Standardize(testData1)
	deep.Standardize(testData2)
	deep.Standardize(testData3)

It prints the correct result. Below is the test code:

import (
	"fmt"
	"math"
	"math/rand"
	"testing"
	"time"

	deep "github.com/patrikeh/go-deep"
	"github.com/patrikeh/go-deep/training"
)

func Test_wine_demo(t *testing.T) {

	rand.Seed(time.Now().UnixNano())

	data, err := load("./data/wine.data")
	if err != nil {
		panic(err)
	}

	for i := range data {
		deep.Standardize(data[i].Input)
	}
	data.Shuffle()

	fmt.Printf("have %d entries\n", len(data))

	neural := deep.NewNeural(&deep.Config{
		Inputs:     len(data[0].Input),
		Layout:     []int{5, 3},
		Activation: deep.ActivationSigmoid,
		Mode:       deep.ModeMultiClass,
		Weight:     deep.NewNormal(1, 0),
		Bias:       true,
	})
	trainer := training.NewBatchTrainer(training.NewAdam(0.1, 0, 0, 0), 50, len(data)/2, 12)
	//data, heldout := data.Split(0.5)
	trainer.Train(neural, data, data, 10000)

	for _, h := range data {
		result := [3]float64{}
		for i, v := range neural.Predict(h.Input) {
			result[i] = math.Round(v)
		}
		t.Log("expected", h.Response, "got", result)
	}
	testData1 := []float64{13.48, 1.81, 2.41, 20.5, 100, 2.7, 2.98, .26, 1.86, 5.1, 1.04, 3.47, 920}
	testData2 := []float64{12.37, 1.21, 2.56, 18.1, 98, 2.42, 2.65, .37, 2.08, 4.6, 1.19, 2.3, 678}
	testData3 := []float64{12.77, 2.39, 2.28, 19.5, 86, 1.39, .51, .48, .64, 9.899999, .57, 1.63, 470}
	deep.Standardize(testData1)
	deep.Standardize(testData2)
	deep.Standardize(testData3)
	result1 := neural.Predict(testData1)
	result2 := neural.Predict(testData2)
	result3 := neural.Predict(testData3)
	p(result1)
	p(result2)
	p(result3)

}

func p(Input []float64) [3]float64 {
	result := [3]float64{}
	for i, v := range Input {
		result[i] = math.Round(v)
	}
	fmt.Println("got", result, ", Input", Input)
	return result
}

Predicting testData1, testData2, and testData3 outputs:

got [0 0 1] , Input [0.0016503808445649346 0.04930262843009958 0.9490469907253355]
got [0 1 0] , Input [0.016838065818058634 0.7345708879823382 0.24859104619960312]
got [1 0 0] , Input [0.9999999696603823 4.15606950787268e-09 2.6183548091254158e-08]
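One caveat with rounding each probability independently, as the p helper above does: when no class exceeds 0.5 (e.g. the 0.51/0.49 split in the earlier comment), math.Round can yield [0 0 0]. A sketch of an argmax-based alternative (a suggestion, not from the thread) that always produces a valid one-hot vector:

```go
package main

import "fmt"

// oneHot picks the single most probable class instead of rounding each
// probability on its own, so it never returns an all-zero vector.
func oneHot(probs []float64) [3]float64 {
	best := 0
	for i, v := range probs {
		if v > probs[best] {
			best = i
		}
	}
	var out [3]float64
	out[best] = 1
	return out
}

func main() {
	// math.Round per element would turn this into [0 0 0];
	// oneHot still picks class 1.
	probs := []float64{0.30, 0.45, 0.25}
	fmt.Println(oneHot(probs)) // → [0 1 0]
}
```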


wwhai commented on September 4, 2024

However, @wwhai, I also noticed that when running your test code, result1, result2, and result3 are sometimes exactly the same, which is very weird. It is as if the net is not using the new input values when Predict is called, or it is returning for result2 the same response it just returned for result1. I have not yet found the cause, but I do not see the same behavior when running the partial test prediction shown in the for..range loop above.

I have resolved it; here is the example: #34 (comment)


MelleKoning commented on September 4, 2024

However, @wwhai, I also noticed that when running your test code, result1, result2, and result3 are sometimes exactly the same, which is very weird. It is as if the net is not using the new input values when Predict is called, or it is returning for result2 the same response it just returned for result1. I have not yet found the cause, but I do not see the same behavior when running the partial test prediction shown in the for..range loop above.

I have resolved it; here is the example: #34 (comment)

Yes, standardizing the tests clearly solved the issue. 👍

