Hi all,
I'm a bit confused. I've tried almost everything, but with no luck unfortunately. Initially I thought there might be a mistake in the data, but after checking it over and over again I didn't find anything unusual.
My problem is as follows:
I have a neural network with this structure:
C#
NeuralNetwork = new BasicNetwork();
NeuralNetwork.AddLayer(new BasicLayer(null, true, 6));                    // input layer: 6 neurons, with bias
NeuralNetwork.AddLayer(new BasicLayer(new ActivationTANH(), true, 4));    // hidden layer: 4 tanh neurons, with bias
NeuralNetwork.AddLayer(new BasicLayer(new ActivationLinear(), false, 1)); // output layer: 1 linear neuron, no bias
NeuralNetwork.Structure.FinalizeStructure();
NeuralNetwork.Reset();                                                    // randomize the weights
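For reference, here is a minimal sketch (plain C#, not Encog code; all weights and biases are made-up placeholders) of the forward pass this 6-4-1 architecture computes — tanh hidden units feeding a single linear output:

```csharp
using System;

static class FeedForwardSketch
{
    // hidden[h] = tanh(sum_i w1[h,i] * x[i] + b1[h])  (ActivationTANH)
    // output    = sum_h w2[h] * hidden[h] + b2        (ActivationLinear)
    public static double Forward(double[] x, double[,] w1, double[] b1,
                                 double[] w2, double b2)
    {
        int hiddenCount = b1.Length;
        var hidden = new double[hiddenCount];
        for (int h = 0; h < hiddenCount; h++)
        {
            double sum = b1[h];                 // bias of hidden neuron h
            for (int i = 0; i < x.Length; i++)
                sum += w1[h, i] * x[i];
            hidden[h] = Math.Tanh(sum);         // squashes into (-1, 1)
        }

        double output = b2;                     // linear output neuron
        for (int h = 0; h < hiddenCount; h++)
            output += w2[h] * hidden[h];
        return output;                          // no squashing at the output
    }
}
```

Note that because the output activation is linear, the network itself is not range-limited at the output; any clustering of its outputs into a narrow band comes from the learned weights, not from a squashing function.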

The learning algorithm is ResilientPropagation.
I have a dataset of 6,000 records, and the network learns very fast: within 5-6 iterations the error rate drops to around 7-8 percent. One strange thing is that when I evaluate the network I get outputs between 0.4 and 0.62, even when I pass in the training set itself. All of the data (training, validation, and test sets) is normalized from 0 to 1. Another issue is that the outputs produced by the network are completely wrong. For instance:
Reality   Denormalized Output   Raw Network Output
  2          -0.98                 0.429047583164628
-15          -0.9272               0.43030481996897
  0          -0.8159               0.432954942682462
 -6          -0.8001               0.433332076032656
 14          -0.7878               0.433624916240407
 -6          -0.6205               0.437608263156513

  8           6.3781               0.604241517550081
  8           6.5203               0.60762682001957
  7           6.5353               0.607983605999036
-15           6.6413               0.610507627270154
  4           7.0664               0.620629058018605

The output normalization range is from -19 to 23. It doesn't matter whether I evaluate the training set or the test set; the results look like the above. It seems the network is just mapping the ideal outputs into some narrow range, in this case 0.4-0.6. I would appreciate any information about this.
I am attaching the learning-process graph and the results, plus a diagram.
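For what it's worth, the raw outputs in the table are consistent with plain min-max denormalization over [-19, 23] (a quick check assuming that formula; `Denormalize` is a hypothetical helper, not an Encog call):

```csharp
using System;

static class NormCheck
{
    // Min-max denormalization: maps a value in [0, 1] back to [min, max].
    public static double Denormalize(double norm, double min, double max)
        => norm * (max - min) + min;
}
```

For example, the raw network output 0.429047583164628 gives 0.429047583164628 * 42 - 19 ≈ -0.98, matching the first row of the table — so the denormalization step itself looks correct, and the narrow band is coming from the network.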

Besides normalization, are there any other data preprocessing stages?
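(In case it helps to answer my own sub-question: common additional preprocessing steps besides min-max scaling include shuffling the records, handling outliers, and standardization. As one example, a z-score standardization sketch — a hypothetical helper, not part of Encog:)

```csharp
using System;
using System.Linq;

static class Preprocess
{
    // Z-score standardization: (x - mean) / stddev. Unlike min-max scaling,
    // a few extreme values do not compress the rest of the data into a
    // narrow band, which can matter with targets ranging from -19 to 23.
    public static double[] Standardize(double[] values)
    {
        double mean = values.Average();
        double std = Math.Sqrt(values.Select(v => (v - mean) * (v - mean)).Average());
        return values.Select(v => (v - mean) / std).ToArray();
    }
}
```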

The images:
(I couldn't find any attachment tags, sorry, so I'm giving the links.)
The learning graph: is it okay that the cross-validation error is always below the training error at the beginning?
Evaluation diagram: the blue bar is the output of the network (denormalized) and the red one is the ideal value. As I said, the blue bar (actual output) sticks to a narrow range, whereas the ideal values are diverse.

I hope I've described my problem clearly, and that someone has encountered such a problem before.

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
