Experiments

Experiment Results

      All experiments here were run on the full training data set and evaluated on the full test data set. Each used the default number of input, hidden, and output nodes unless otherwise indicated. For the details of each experiment, please see the test.txt file for that test.

      The nominal input attributes were transformed into binary attributes. For each value a nominal attribute could take, a new attribute of the form attribute=value was created, taking the numeric value 1 or 0. The class attribute (the attribute that specifies the classification of an instance) was not transformed in exactly this way. Instead, the target for a training instance had one node for each possible value of the class attribute (which was nominal in all of these tests). Every target node was set to 0.1, and the node corresponding to the instance's class value was set to 0.9. This is similar to the face training example from the book.
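
      This encoding can be sketched in Python roughly as follows. It is an illustrative reconstruction, not the code used in these experiments; the attribute names, values, and class labels are made up.

    # Sketch of the encoding described above (illustrative, not the experiment code).

    def encode_nominal(instance, attribute_values):
        """One-hot encode a dict of nominal attributes into 1/0 input features."""
        features = []
        for attribute, values in attribute_values.items():
            for value in values:
                # New binary attribute "attribute=value": 1 if it matches, 0 otherwise.
                features.append(1 if instance[attribute] == value else 0)
        return features

    def encode_target(class_value, class_values):
        """Target vector: 0.9 for the node matching the class, 0.1 everywhere else."""
        return [0.9 if v == class_value else 0.1 for v in class_values]

    # Hypothetical attributes and class labels, purely for illustration.
    attribute_values = {"color": ["red", "green", "blue"], "size": ["small", "large"]}
    class_values = ["yes", "no"]

    instance = {"color": "green", "size": "small"}
    print(encode_nominal(instance, attribute_values))  # [0, 1, 0, 1, 0]
    print(encode_target("yes", class_values))          # [0.9, 0.1]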

      In addition to initializing the weights, the order of the training instances was randomized.
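
      As a rough sketch of these two randomization steps (again, not the experiment code): drawing the initial weights from a small range such as -0.05 to 0.05 is an assumption here, and the encoded instances shown are hypothetical.

    import random

    def init_weights(n_inputs, n_hidden, n_outputs):
        """Small random initial weights, including one bias weight per node."""
        hidden = [[random.uniform(-0.05, 0.05) for _ in range(n_inputs + 1)]
                  for _ in range(n_hidden)]
        output = [[random.uniform(-0.05, 0.05) for _ in range(n_hidden + 1)]
                  for _ in range(n_outputs)]
        return hidden, output

    # Randomize the order in which training instances are presented.
    training_data = [([0, 1, 0, 1, 0], [0.9, 0.1]),   # hypothetical encoded instances
                     ([1, 0, 0, 0, 1], [0.1, 0.9])]
    random.shuffle(training_data)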

Training with the default settings yielded a neural network with an accuracy of 76.96%. I was expecting much better.
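
      For reference, one way an accuracy figure like this might be computed from the 0.1/0.9 output encoding is sketched below, assuming the output node with the largest activation names the predicted class. The activations and labels shown are hypothetical.

    def predict(outputs, class_values):
        """Map a vector of output-node activations to a class label."""
        return class_values[outputs.index(max(outputs))]

    def accuracy(predictions, actuals):
        """Fraction of test instances classified correctly."""
        correct = sum(1 for p, a in zip(predictions, actuals) if p == a)
        return correct / len(actuals)

    # Hypothetical output activations and true labels for three test instances.
    class_values = ["yes", "no"]
    outputs = [[0.8, 0.2], [0.3, 0.7], [0.6, 0.4]]
    predictions = [predict(o, class_values) for o in outputs]
    print(accuracy(predictions, ["yes", "no", "no"]))  # 0.666...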


by: Keith A. Pray
Last Modified: July 4, 2004 8:59 AM