

7.9 Writing Your Own Training Algorithms

NeuralFit offers several training algorithms for FF and RBF networks, and its options give you further possibilities to modify them. Nevertheless, on occasion you may want to develop your own training algorithm. As long as the network parameters are stored in the correct way, as described in Section 13.1, Change the Parameter Values of an Existing Network, you may modify their values in whatever way you want; a complete sketch of such an algorithm is given at the end of this section. This is, in fact, enough to let you use all the capabilities of Mathematica to develop new algorithms. The advantage of representing the network in the standard way is that you can apply all the other functions of the Neural Networks package to the trained network.

Many algorithms are based on the derivative of the network output with respect to its parameters, and SetNeuralD and NeuralD help you compute it in a numerically efficient way. These commands are described in the following.

SetNeuralD produces the code for NeuralD. Hence, SetNeuralD has to be called first, then NeuralD is used to compute the derivative. The advantage of this procedure is that SetNeuralD optimizes the symbolic expression for the derivative so that the numerical computation can be performed as fast as possible.

Notice that SetNeuralD only has to be called once for a given network structure. It does not have to be called again when the parameter values change; SetNeuralD only needs to be called anew if you change the network structure. Typically, SetNeuralD is called at the beginning of a training algorithm, and only NeuralD is used inside the training loop, as in the following schematic.
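
This calling pattern might be sketched as follows. It is only a sketch: trainNet and updateParameters are hypothetical names, with updateParameters standing for whatever parameter update your algorithm prescribes.

trainNet[net_, x_, y_, iterations_] :=
  Module[{net1 = net, der},
    SetNeuralD[net1];            (* once, for the given network structure *)
    Do[
      der = NeuralD[net1, x];    (* fast numerical evaluation inside the loop *)
      net1[[1]] = updateParameters[net1[[1]], der, x, y],  (* your update rule *)
      {iterations}];
    net1]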

Note also that NeuralD does not perform any tests of its input arguments, because it is intended for internal use. Instead, you have to add such tests yourself at the beginning of your training algorithm.

Their use is illustrated in the following example.

Read in the Neural Networks package.

In[1]:= << NeuralNetworks`

Load some data. (The file name here is an assumption; the file defines an input matrix x and an output matrix y.)

In[2]:= << one2twodimfunc.dat;   (* assumed file name; defines x and y *)

Check the dimensions of the data.

In[3]:= Dimensions[x]

Out[3]= {20, 1}

In[4]:= Dimensions[y]

Out[4]= {20, 2}

There are 20 data samples available, with one input and two outputs.

Initialize an RBF network. (Two neurons and no linear part are assumed here, consistent with the ten parameters counted below.)

In[5]:= net = InitializeRBFNet[x, y, 2, LinearPart -> False]

Out[5]= RBFNet[{{w1, λ, w2}}, {Neuron -> Exp, FixedParameters -> None, ...}]

Generate the code to calculate the derivative.

In[6]:= SetNeuralD[net];

Compute the derivative, here on the first three data samples.

In[7]:= der = NeuralD[net, x[[{1, 2, 3}]]]

Out[7]= {{{...}, {...}}, {{...}, {...}}, {{...}, {...}}}

The first input argument is the network. It must have the same structure as when SetNeuralD was called, but the parameter values may have changed. The second input argument should be a matrix with one numerical input vector on each row. The output is best explained by looking at the dimensions of its structure.

In[8]:= Dimensions[der]

Out[8]= {3, 2, 10}

The first index indicates the data sample (the derivative was computed on three input vectors), the second index indicates the output (the network has two outputs), and the third index indicates the parameter (the network has 10 parameters). The derivatives with respect to the individual parameters appear in the same order as the parameters in the flattened parameter structure of the network, that is, their positions in the list Flatten[net[[1]]].

If some parameters should be excluded from the fit, you may indicate that in the call to SetNeuralD using the FixedParameters option. SetNeuralD then tests for any additional simplifications this allows, so that NeuralD becomes as fast as possible.

Exclude four parameters from the fit. (The particular indices chosen here are only an illustration.)

In[9]:= SetNeuralD[net, FixedParameters -> {1, 2, 3, 4}];

Calculate the derivative with respect to the remaining six parameters.

In[10]:= der = NeuralD[net, x[[{1, 2, 3}]]];

In[11]:= Dimensions[der]

Out[11]= {3, 2, 6}

Compared to the earlier call, there are now only six components at the third level. They correspond to the six free parameters; the four fixed parameters are treated as constants.
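
As a final illustration, these pieces can be combined into a complete, if simple, training algorithm: plain steepest descent on the summed squared error, much simpler than the algorithms built into NeuralFit. This is only a sketch under stated assumptions: trainSteepestDescent and repack are hypothetical names, eta is a fixed step size, and repack rebuilds the nested parameter structure by mapping over the atoms of net[[1]], whose traversal order agrees with Flatten[net[[1]]]. The argument tests at the top are needed because NeuralD performs none itself.

In[12]:= repack[flat_, struct_] :=
           Module[{i = 0}, Map[(flat[[++i]]) &, struct, {-1}]];

         trainSteepestDescent::baddata =
           "x and y must be matrices with the same number of rows.";

         trainSteepestDescent[net_, x_, y_, iterations_, eta_] :=
           Module[{net1 = net, e, der, grad, flat},
             (* NeuralD performs no argument tests, so do them here *)
             If[! (MatrixQ[x] && MatrixQ[y] && Length[x] === Length[y]),
               Message[trainSteepestDescent::baddata]; Return[$Failed]];
             SetNeuralD[net1];                 (* called once, before the loop *)
             Do[
               e = net1[x] - y;                (* residuals on all data *)
               der = NeuralD[net1, x];         (* one derivative matrix per sample *)
               grad = Sum[e[[i]].der[[i]], {i, Length[x]}];  (* gradient of the criterion, up to a factor of 2 *)
               flat = Flatten[net1[[1]]] - eta grad;         (* steepest-descent step *)
               net1[[1]] = repack[flat, net1[[1]]],          (* rebuild the nested parameter structure *)
               {iterations}];
             net1];

Train the network from the example above for 50 iterations with an arbitrarily chosen step size.

In[13]:= net2 = trainSteepestDescent[net, x, y, 50, 0.01];

Because net2 is represented in the standard way, all other functions of the package, such as NetInformation and NetPlot, can be applied to it.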


