Artificial Intelligence
Programming Project #5
Neural Net Simulator
Due by 12:29:59 on Tuesday, May 2, 2006.
Please turn in a hardcopy of your report on the due date.
Introduction
There is no programming for this assignment. Instead, you will run
a feedforward backpropagation neural network simulator on a set of training data
several times and write a short report analyzing your results.
Simulator Notes
The simulator program was created by Rao and Rao in the textbook "C++ Neural
Networks & Fuzzy Logic, 2nd ed.", M&T Books, New York, NY, 1995.
The simulator program can be downloaded
here.
I did not create this program, so full credit goes to the authors.
I did, however, update it to compile with MSVC++ .NET and made the output
a little more readable. The compiled executable is provided to
you in this zip file. If you would like to compile it yourself, the source
code is provided.
The input and output file names are hard-coded. If you like, you can
change them (one way to do so is sketched after the list below).
- The input file for training is called training.dat.
- The weights file is called weights.dat.
- The test file is called test.dat.
- The output file is called output.dat.
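If you do decide to rename the files in the source, one simple approach is to
collect the names in a single place before handing them to the simulator code.
This is illustrative only: the constants and the main() body below are not taken
from the Rao & Rao source, which hard-codes the names at the point of use.

    // Illustrative only: these identifiers do not appear in the Rao & Rao
    // source; they just show one way to centralize the hard-coded names.
    #include <fstream>

    const char* kTrainingFile = "training.dat"; // training patterns
    const char* kWeightsFile  = "weights.dat";  // saved weights
    const char* kTestFile     = "test.dat";     // test patterns
    const char* kOutputFile   = "output.dat";   // simulator results

    int main() {
        std::ifstream training(kTrainingFile);
        std::ofstream output(kOutputFile);
        // ... hand the streams (or the names) to the simulator code ...
        return 0;
    }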
Description
The neural network is trained to recognize binary images of letters
represented as 7x5 arrays. The letters provided are "A", "X", "H", "B", and "I".
The corresponding output patterns can be found in the last column of the
training.dat file. The architecture of the network is 35-5-4.
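To make the 35-5-4 architecture concrete: each letter is a 7x5 grid of 0s and
1s, flattened row by row into the 35 network inputs, with a 4-bit output
pattern as the target. The sketch below builds one such line; the pixel
pattern and the 4-bit code are invented for illustration, so when you add
letters (for example "M" and "J" in the tasks below), follow the exact layout
and output codes used in the provided training.dat.

    // Illustrative sketch: flatten a 7x5 letter into the 35 inputs and
    // append a 4-bit target, mirroring the "inputs first, output pattern
    // last" layout of training.dat.  The "A" pattern and the target code
    // here are invented for illustration.
    #include <cstdio>

    int main() {
        int A[7][5] = {            // 1 = pixel on, 0 = pixel off
            {0,1,1,1,0},
            {1,0,0,0,1},
            {1,0,0,0,1},
            {1,1,1,1,1},
            {1,0,0,0,1},
            {1,0,0,0,1},
            {1,0,0,0,1},
        };
        int target[4] = {0,0,0,1}; // hypothetical output code for "A"

        for (int r = 0; r < 7; ++r)        // 35 inputs, row by row
            for (int c = 0; c < 5; ++c)
                std::printf("%d ", A[r][c]);
        for (int k = 0; k < 4; ++k)        // 4 target outputs
            std::printf("%d ", target[k]);
        std::printf("\n");
        return 0;
    }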
- Run your simulation with a tolerance of 0.001, a learning rate of 0.1, and
a maximum of 1000 iterations. Does your simulator recognize your test data?
- Using the above parameters, create a test pattern file with the letters
"M" and "J". Which characters do those patterns get classified as?
- Add the characters "M" and "J" to your training data and rerun your
simulation. Can your simulator now classify all letters? If not,
experiment with the parameters of your simulator to get the fewest
misclassifications possible.
- Add the letters "C", "D", "E", "F", and "G" to your training data file
(you now have a total of twelve letters). Run your simulation again.
Can your neural network recognize all the test patterns? If not,
experiment with the parameters of your simulator.
- How does the error change with the number of epochs? For different
learning rates and tolerances, create a plot of total error vs. number of
epochs and comment on your results (the error and weight-update formulas
are sketched after this list).
- Experiment with a different number of layers or hidden units.
With a given tolerance, learning rate, and test data, how fast (or slowly) does
the neural network learn? How do the number of layers and hidden units
affect the error and the recognition of test data? Analyze your results and
include other plots to support your conclusions.
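For the error-vs-epoch plot, it helps to state what quantity is being plotted.
The definitions below are the standard sum-of-squared-error accumulated over
one pass through the training patterns and the gradient-descent weight update
that the learning rate scales. Whether the Rao & Rao simulator reports exactly
this form of the error is an assumption on my part, so compare against what
appears in its output.

    % Total error for one epoch: sum over training patterns p and output
    % units k of the squared difference between target t and output o.
    E = \frac{1}{2} \sum_{p} \sum_{k} \left( t_{p,k} - o_{p,k} \right)^{2}

    % Backpropagation weight update; \eta is the learning rate
    % (0.1 in the first run above).
    \Delta w_{ij} = -\eta \, \frac{\partial E}{\partial w_{ij}}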