
7.1 NeuralFit

NeuralFit is used to train FF and RBF networks. Prior to the training, you need to initialize the network, as described in Section 5.1.1, InitializeFeedForwardNet, and Section 6.1.1, InitializeRBFNet. In the following, net indicates either of the two possible network types.

Indirectly, NeuralFit is also used to train dynamic networks, since NeuralARXFit and NeuralARFit actually call NeuralFit. Hence, the description given here also applies to those two commands.

To train the network you need a set of training data containing N input-output pairs.

NeuralFit[net, x, y]   trains the network net using input data x and output data y
NeuralFit[net, x, y, iterations]   trains the network for the indicated number of training iterations
NeuralFit[net, x, y, xv, yv]   trains the network, evaluating the RMSE on the validation data xv and yv after each iteration

Train a feedforward, radial basis function, or dynamic network.
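For example, training might look like the following minimal sketch, where x and y are placeholder training data and the network size {4} and iteration count are arbitrary illustrative choices:

<< NeuralNetworks`

(* initialize a feedforward network with one hidden layer of four neurons *)
fdfrwrd = InitializeFeedForwardNet[x, y, {4}];

(* train for 30 iterations; the trained net and a NeuralFitRecord are returned *)
{fdfrwrd2, fitRecord} = NeuralFit[fdfrwrd, x, y, 30];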

NeuralFit returns a list of two items. The first is the trained network, and the second is an object with head NeuralFitRecord containing information about the training.

An existing network can be submitted for more training by setting net equal to the network or its training record. The advantage of submitting the training record is that the information about the earlier training is combined with that of the additional training.
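For instance, continuing the training of the sketch above might look like this; submitting fitRecord rather than fdfrwrd2 makes the new record include the earlier iterations as well:

(* continue training where the previous call stopped; the records are combined *)
{fdfrwrd3, fitRecord2} = NeuralFit[fitRecord, x, y, 20];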

During the training, intermediate results are displayed in an automatically created notebook. After each training iteration the following information is displayed:

• The training iteration number

• The value of the root-mean-square error (RMSE)

• If validation data is submitted in the call, the RMSE value computed on this second data set (see the sketch after this list)

• The step size control parameter of the minimization algorithm (λ or μ), which is described in Section 2.5.3, Training Feedforward and Radial Basis Function Networks
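As a sketch of the validation-data variant, assuming the call pattern NeuralFit[net, x, y, xv, yv, iterations] from the table above, with hypothetical validation data xv and yv:

(* the RMSE on the validation set xv, yv is also reported after each iteration *)
{fdfrwrd2, fitRecord} = NeuralFit[fdfrwrd, x, y, xv, yv, 30];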

At the end of the training, the RMSE decrease is displayed in a plot as a function of iteration number.

Using the options of NeuralFit, as described in Section 7.7, Options Controlling Training Results Presentation, you can change the way the training results are presented.

At the end of the training process, you often receive warning messages, typically indicating that no exact minimum was reached. Often, however, the RMSE curve has simply flattened out, and by looking at the plot you can usually tell whether more training iterations are necessary. If the curve has not flattened out toward the end of the training, you should consider continuing the training. You can do this by submitting the trained network, or its training record, to NeuralFit a second time, so that you do not have to restart the training from the beginning.

If you do not want the warnings, you can switch them off using the command Off.
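For example, a recurring warning can be suppressed by switching off its message; the message tag below is hypothetical, so substitute the tag printed with the warning you actually receive:

Off[NeuralFit::StoppedSearch]  (* hypothetical tag; use On[...] to turn it back on *)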

All training algorithms may have problems with local minima. By repeating the training with different initializations of the network, you decrease the risk of being caught in a local minimum that yields a poorly performing network model.
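A minimal sketch of this restart strategy follows, assuming that InitializeFeedForwardNet accepts the option RandomInitialization -> True and that a trained network can be applied directly to the input data x; both assumptions should be checked against Section 5.1.1 for your package version.

(* train five randomly initialized networks and keep the one with the smallest RMSE *)
trials = Table[
   First[NeuralFit[
     InitializeFeedForwardNet[x, y, {4}, RandomInitialization -> True],
     x, y, 30, CriterionPlot -> False]],
   {5}];
rmse = Map[Sqrt[Mean[Flatten[(y - #[x])^2]]] &, trials];
bestNet = trials[[First[Ordering[rmse, 1]]]];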

NeuralFit takes the following options.

Options of NeuralFit.

The options CriterionPlot, CriterionLog, CriterionLogExtN, ReportFrequency, and MoreTrainingPrompt are common to the other training commands in the Neural Networks package, and they are described in Section 7.7, Options Controlling Training Results Presentation. The rest of the options are explained and illustrated with examples in the sections that follow.
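As an illustration, the presentation options named above can be combined in a training call; this sketch reports intermediate results every fifth iteration and suppresses the plot at the end of the training:

NeuralFit[fdfrwrd, x, y, 30, ReportFrequency -> 5, CriterionPlot -> False];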
