
7.3 Train with FindMinimum

If you prefer, you can use the built-in Mathematica command FindMinimum to train FF, RBF, and dynamic networks. This is done by giving the option Method -> FindMinimum to NeuralFit. The other choices for Method call algorithms written especially for neural network minimization, and they are therefore superior to FindMinimum in most neural network problems.

You can pass any FindMinimum options to the training by collecting them in a list and giving that list as the value of the option ToFindMinimum.

See the documentation on FindMinimum for further details.

Consider the following small example.

Read in the Neural Networks package and a standard add-on package.

In[1]:=
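The contents of this input cell were lost in extraction. A minimal sketch of what such a loading cell contains follows; the specific standard add-on package used in the original example is not recoverable from the source, so only the Neural Networks package itself is shown.

```mathematica
(* Load the Neural Networks application package. *)
<< NeuralNetworks`
```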

Generate data and look at the function.

In[3]:=
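The original data-generation cell is missing; as a hypothetical stand-in, the sketch below builds a one-dimensional sine data set. The package expects inputs and outputs as matrices with one row per sample, which is why each input is wrapped in a list.

```mathematica
(* Hypothetical example data: a sampled sine curve on [0, 2 Pi].
   x and y are matrices with one row per data sample. *)
x = Table[{N[i]}, {i, 0, 2 Pi, 0.1}];
y = Sin[x];

(* Look at the function. *)
ListPlot[Transpose[{Flatten[x], Flatten[y]}]]
```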

Initialize an RBF network randomly.

In[7]:=

Out[7]=
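The initialization cell was also lost. A sketch using the package's InitializeRBFNet command is shown below; the number of basis functions (here 4) is an illustrative choice, since the value used in the original example is not recoverable.

```mathematica
(* Randomly initialize an RBF network with 4 neurons (assumed count)
   on the data {x, y}. *)
rbf = InitializeRBFNet[x, y, 4]
```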

Train with FindMinimum and specify that the Levenberg-Marquardt algorithm of FindMinimum should be used.

In[8]:=
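The training cell is missing; a sketch of the call it describes follows. The iteration count (30) is an assumption, and the inner Method option is forwarded to FindMinimum through ToFindMinimum as explained above.

```mathematica
(* Train with FindMinimum, selecting its Levenberg-Marquardt method.
   The iteration count is illustrative. *)
{rbf2, fitRecord} =
  NeuralFit[rbf, x, y, 30, Method -> FindMinimum,
    ToFindMinimum -> {Method -> LevenbergMarquardt}];
```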

A major disadvantage of FindMinimum is that it is hard to tell whether or not the training was actually successful. You receive no intermediate results during training, and no criterion plot is given at the end of it. You almost always get a warning at the end of training that the minimum was not reached. However, the trained network might describe the data well anyway. You can inspect the model visually with a plot.

Plot the approximation obtained with the network.

In[9]:=
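The plotting cell is missing; a sketch using the package's NetPlot command, applied to the trained network from the hypothetical training call above, could look like this.

```mathematica
(* Plot the trained network's approximation together with the data. *)
NetPlot[rbf2, x, y]
```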

You can repeat the example, changing the number of iterations and the algorithm used by FindMinimum.


