Learning-rate decay has been implemented in this model.
Even so, the model can only classify data it has already seen, and accuracy on the validation set does not increase.
Applying L2 regularization does not solve the problem either.
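As a rough sketch, an exponential learning-rate decay schedule of the kind mentioned above might look like the following. The initial rate, decay factor, and step counts are illustrative assumptions, not the values used in these experiments:

```python
def exponential_decay(initial_lr, decay_rate, decay_steps, step):
    """Exponential decay: lr = initial_lr * decay_rate ** (step / decay_steps)."""
    return initial_lr * decay_rate ** (step / decay_steps)

# The learning rate shrinks as training progresses (illustrative values).
lrs = [exponential_decay(0.1, 0.5, 100, s) for s in (0, 100, 200)]
# → [0.1, 0.05, 0.025]
```

Both tf.Keras and PyTorch ship built-in schedulers that implement this kind of schedule; the function above only shows the arithmetic.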
Result from tf.Keras
However, this problem can be solved by using a simple algorithm (Graph_Alg) to create a graph.
Only a few batches of training data are enough to boost accuracy on the validation set to 100%.
With Colour
Grid graph with Colours
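For reference, a coloured grid graph can be generated along these lines. This is a pure-Python sketch: `grid_graph` and `colour_nodes` are hypothetical helpers, and the random colouring merely stands in for whatever colouring rule Graph_Alg actually uses:

```python
import random

def grid_graph(rows, cols):
    """Adjacency list of a rows x cols grid graph; nodes are (r, c) tuples."""
    adj = {(r, c): [] for r in range(rows) for c in range(cols)}
    for r in range(rows):
        for c in range(cols):
            if r + 1 < rows:  # edge to the node below
                adj[(r, c)].append((r + 1, c))
                adj[(r + 1, c)].append((r, c))
            if c + 1 < cols:  # edge to the node on the right
                adj[(r, c)].append((r, c + 1))
                adj[(r, c + 1)].append((r, c))
    return adj

def colour_nodes(adj, n_colours, seed=0):
    """Assign each node a colour label (random here; illustrative only)."""
    rng = random.Random(seed)
    return {node: rng.randrange(n_colours) for node in adj}
```

A 3x3 grid, for example, has nine nodes, with corner nodes of degree 2 and the centre node of degree 4.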
Generating Positive and Negative pairs
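Pair generation for Siamese-style training can be sketched as follows; this is a hypothetical helper (the sampling strategy actually used here may differ), where positive pairs share a label and negative pairs do not:

```python
import random

def make_pairs(examples, labels, n_pairs, seed=0):
    """Sample (x1, x2, same) triples for contrastive / Siamese training."""
    rng = random.Random(seed)
    by_label = {}
    for x, y in zip(examples, labels):
        by_label.setdefault(y, []).append(x)
    label_list = list(by_label)
    pairs = []
    for _ in range(n_pairs // 2):
        # positive pair: two distinct examples with the same label
        y = rng.choice([l for l in label_list if len(by_label[l]) >= 2])
        x1, x2 = rng.sample(by_label[y], 2)
        pairs.append((x1, x2, 1))
        # negative pair: one example from each of two different labels
        ya, yb = rng.sample(label_list, 2)
        pairs.append((rng.choice(by_label[ya]), rng.choice(by_label[yb]), 0))
    return pairs
```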
Result from Pytorch
LSTM-NALU
Training & validation (epochs)
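The NALU unit combines an additive path with a log-space multiplicative path behind a learned gate. A scalar sketch of the forward pass, with illustrative scalar weights standing in for the trained parameters:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def nalu_scalar(x, w_hat, m_hat, g_w, eps=1e-7):
    """Scalar NALU unit: gated mix of an additive and a multiplicative path."""
    w = math.tanh(w_hat) * sigmoid(m_hat)     # constrained weight, biased toward {-1, 0, 1}
    a = w * x                                 # additive path
    m = math.exp(w * math.log(abs(x) + eps))  # multiplicative (log-space) path
    g = sigmoid(g_w * x)                      # learned gate in [0, 1]
    return g * a + (1 - g) * m
```

With the weights saturated (w close to 1), both paths approximately pass a positive input through unchanged, which is what lets NALU extrapolate simple arithmetic.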
Original LSTM
Training & validation (epochs)
The model still has a serious overfitting problem.
Without Colour, and with no restriction on the cycle path
Training result
Simple Path to generate the data
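Generating data from simple paths can be sketched as a random walk that never revisits a node. This is a hypothetical helper; the actual generator used here may differ:

```python
import random

def random_simple_path(adj, start, max_len, seed=None):
    """Walk the graph from `start` without revisiting nodes (a simple path)."""
    rng = random.Random(seed)
    path = [start]
    visited = {start}
    while len(path) < max_len:
        # only neighbours not yet on the path are candidates
        candidates = [n for n in adj[path[-1]] if n not in visited]
        if not candidates:
            break
        nxt = rng.choice(candidates)
        path.append(nxt)
        visited.add(nxt)
    return path
```

On a path graph 0-1-2-3, walking from node 0 necessarily yields the path [0, 1, 2, 3], since each step has exactly one unvisited neighbour.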
With 2 FC layers
Without FC layer
Consider the last node of a path
Training result
After setting a root
For PredictingNet
3 Colours
10 Colours (SimplestColorGraph)
Idx (20 Colours)
For Siamese Network
3 Colours
10 Colours (SimplestColorGraph)
Idx (20 Colours)
Both the Siamese network and PredictingNet suffer from overfitting.