NetPortGradient[port] represents the gradient of the output of a net with respect to the value of the specified input port.
NetPortGradient[param] represents the gradient of the output with respect to a learned parameter named param.
NetPortGradient[{pos, param}] represents the gradient with respect to the parameter param at a specific position pos in a net.
- net[data,NetPortGradient[iport]] can be used to obtain the gradient with respect to the input port iport for a net applied to the specified data.
- net[<|…, NetPortGradient[oport] -> ograd|>, NetPortGradient[iport]] can be used to impose a gradient ograd at an output port oport that will be backpropagated to calculate the gradient at iport.
- NetPortGradient can be used to calculate gradients with respect to both learned parameters of a network and inputs to the network.
- For a net with a single scalar output port, the gradient returned when using NetPortGradient is the gradient in the ordinary mathematical sense: for a net computing f(x, θ), where x is the array whose gradient is being calculated and θ represents all other arrays, the gradient is an array of the same rank as x, whose components are given by ∂f/∂x_{i,j,…}. Intuitively, the gradient at a specific value of x is the "best direction" in which to perturb x if the goal is to increase f, where the magnitude of the gradient is proportional to the sensitivity of f to changes in x.
- For a net with vector or array outputs, the gradient returned when using NetPortGradient is the ordinary gradient of the scalar sum of all outputs. Imposing a gradient at the output using the syntax <|…, NetPortGradient[oport] -> ograd|> is equivalent to replacing this scalar sum with a dot product between the output and ograd.
- Using NetPortGradient to calculate the gradient with respect to the learned parameters of the net will return the sum of the gradient over the input batch.
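The behaviors described above can be sketched on a small net. This is an illustrative sketch, not taken from the original page: the port and parameter names "Input", "Output", and "Weights" are the defaults for a LinearLayer, and the input sizes and values are assumptions:

```wolfram
(* a randomly initialized linear layer mapping a length-2 vector to a length-3 vector *)
net = NetInitialize[LinearLayer[3, "Input" -> 2]];

(* gradient of the summed outputs with respect to the input port *)
net[{1.5, 2.}, NetPortGradient["Input"]]

(* gradient with respect to the learned parameter "Weights";
   for a batch of inputs, this is summed over the batch *)
net[{{1.5, 2.}, {0., 1.}}, NetPortGradient["Weights"]]

(* impose a gradient at the output port, replacing the implicit scalar sum
   with a dot product between the output and the imposed gradient *)
net[<|"Input" -> {1.5, 2.}, NetPortGradient["Output"] -> {1., 0., 0.}|>,
  NetPortGradient["Input"]]
```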
Examples
Basic Examples (3)
Randomly initialize a LinearLayer and return the derivative of the output with respect to the input for a specific input value:
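A sketch of what such an example might look like (the input size of 2, the output size of 3, and the specific input value are assumptions):

```wolfram
(* randomly initialize a linear layer *)
net = NetInitialize[LinearLayer[3, "Input" -> 2]];

(* derivative of the (summed) output with respect to the input,
   evaluated at a specific input value *)
net[{1., 2.}, NetPortGradient["Input"]]
```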