TrainingStoppingCriterion
is an option for NetTrain that specifies a criterion for stopping training early in order to prevent overfitting.
Details
- With the default setting of TrainingStoppingCriterion→None, no early stopping is performed.
- Setting TrainingStoppingCriterion"measurement" specifies that training should be stopped if measurement stops improving. Possible values for measurement include:
"Loss"    network training loss
key    a TrainingProgressMeasurements key
- The values available for key depend on the specification of TrainingProgressMeasurements. Often, key is the same as the measurement; for example, with TrainingProgressMeasurements→"ErrorRate", key can be "ErrorRate". However, this is not always the case.
- TrainingStoppingCriterion→Automatic is equivalent to TrainingStoppingCriterion→"Loss".
- By default, TrainingStoppingCriterion is based on the metrics and loss from the validation set. If the ValidationSet option of NetTrain is None, then a warning will be issued and the training set will be used. If the validation set is present, the stopping criterion is checked whenever the validation loss and metrics are calculated (once per round by default); otherwise, the stopping criterion is checked once per round.
- TrainingStoppingCriterion has a number of suboptions that can be specified using the <|"Criterion"→"measurement", opt1→val1, opt2→val2, …|> syntax.
- Setting TrainingStoppingCriterion→<|"Criterion"→"measurement", "Patience"→n|> specifies that training should be stopped if an improvement in measurement is not seen for n rounds in a row. The default value for n is 0.
- Setting TrainingStoppingCriterion→<|"Criterion"→"measurement", "InitialPatience"→n|> specifies that the stopping criterion is first checked only after n rounds. The default value for n is 0.
- Setting TrainingStoppingCriterion→<|"Criterion"→"measurement", improvement→v|> specifies the minimum change in measurement that is considered an improvement. Possible values for improvement are:
"AbsoluteChange"    stop training if measurement improves by less than v
"RelativeChange"    stop training if measurement improves by less than a factor v of the current best value
- If an improvement specification is not given, "AbsoluteChange"→0 is used.
- The form "RelativeChange"Quantity[q,"Percent"] can also be given.
- It is also possible to specify a function as the stopping criterion using TrainingStoppingCriterion→<|"Criterion"→func, …|>, where func should return True if training should be stopped.
- The function func is passed an association with the following keys:
"RoundLoss"    total loss for the training set
"ValidationLoss"    total loss for the validation set
"RoundMeasurements"    association of requested measurements for the training set
"ValidationMeasurements"    association of requested measurements for the validation set
- The validation properties are only available if there is a validation set.
- The "RelativeChange" and "AbsoluteChange" options cannot be used with a function criterion.
- If a measurement is specified for TrainingStoppingCriterion, the value of the measurement will be used to select the optimal trained net to return from NetTrain. Note that the choices of "Patience", "InitialPatience", "AbsoluteChange" and "RelativeChange" have no effect on this selection. If a function is specified for TrainingStoppingCriterion, the default behavior will be used when selecting the optimal trained net.
Examples
Basic Examples (1)
Prevent overfitting by stopping training when the validation loss stops improving. Set up a simple net as well as some training and validation data:
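A minimal sketch of such a setup; the network architecture and synthetic data below are illustrative assumptions, not the documented example:
net = NetChain[{LinearLayer[64], Ramp, LinearLayer[2], SoftmaxLayer[]},
  "Input" -> 2, "Output" -> NetDecoder[{"Class", {"a", "b"}}]];
data = Table[With[{x = RandomReal[{-1, 1}, 2]}, x -> If[Total[x] > 0, "a", "b"]], {1000}];
{trainData, validData} = TakeDrop[RandomSample[data], 800];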
Use TrainingStoppingCriterion to stop training when the validation loss stops improving:
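One possible form of the call, using the illustrative net and data above (the MaxTrainingRounds value is an assumption):
trained = NetTrain[net, trainData, ValidationSet -> validData,
  TrainingStoppingCriterion -> "Loss", MaxTrainingRounds -> 100]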
Scope (2)
Perform early stopping with more complex criteria. Set up a simple net as well as some training and validation data:
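A sketch of a comparable setup, reusing the illustrative network and synthetic data pattern from above (assumed, not the documented example):
net = NetChain[{LinearLayer[64], Ramp, LinearLayer[2], SoftmaxLayer[]},
  "Input" -> 2, "Output" -> NetDecoder[{"Class", {"a", "b"}}]];
{trainData, validData} = TakeDrop[
  RandomSample[Table[With[{x = RandomReal[{-1, 1}, 2]}, x -> If[Total[x] > 0, "a", "b"]], {1000}]], 800];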
Use TrainingStoppingCriterion to stop training when the validation loss improves by less than some absolute amount:
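For example, with an assumed threshold of 0.001:
NetTrain[net, trainData, ValidationSet -> validData,
  TrainingStoppingCriterion -> <|"Criterion" -> "Loss", "AbsoluteChange" -> 0.001|>]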
Use TrainingStoppingCriterion to stop training when the validation loss improves by less than some percentage of the previous best value:
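For example, stopping when the improvement falls below 1% of the best value so far (the percentage is an assumed value):
NetTrain[net, trainData, ValidationSet -> validData,
  TrainingStoppingCriterion -> <|"Criterion" -> "Loss", "RelativeChange" -> Quantity[1, "Percent"]|>]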
Use TrainingStoppingCriterion to stop training when the validation loss has not improved for more than 5 rounds in a row:
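A sketch of this call, using the illustrative net and data from above:
NetTrain[net, trainData, ValidationSet -> validData,
  TrainingStoppingCriterion -> <|"Criterion" -> "Loss", "Patience" -> 5|>]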
Use TrainingStoppingCriterion to stop training when the validation loss has not improved, but only start checking after 200 training rounds:
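A sketch of this call (the MaxTrainingRounds value is an assumption):
NetTrain[net, trainData, ValidationSet -> validData,
  TrainingStoppingCriterion -> <|"Criterion" -> "Loss", "InitialPatience" -> 200|>,
  MaxTrainingRounds -> 500]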
Use TrainingStoppingCriterion to stop training when the validation macro-averaged recall has stopped increasing for 50 iterations:
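A hedged sketch; the measurement specification and the key used in "Criterion" are assumptions, since (as noted in Details) the available key depends on the TrainingProgressMeasurements specification:
NetTrain[net, trainData, ValidationSet -> validData,
  TrainingProgressMeasurements -> <|"Measurement" -> "Recall", "ClassAveraging" -> "Macro"|>,
  TrainingStoppingCriterion -> <|"Criterion" -> "Recall", "Patience" -> 50|>]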
Use a callback function to stop training. Set up the net and data:
Stop training when the validation loss is higher than 1.75:
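A sketch of this call, reading the "ValidationLoss" key from the association passed to the criterion function:
NetTrain[net, trainData, ValidationSet -> validData,
  TrainingStoppingCriterion -> <|"Criterion" -> (#ValidationLoss > 1.75 &)|>]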
Stop training if the validation loss is higher than 1.75 for more than 20 rounds in a row:
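One way to express this is to keep a counter of consecutive checks above the threshold (the bookkeeping details are an assumption):
Module[{count = 0},
 NetTrain[net, trainData, ValidationSet -> validData,
  TrainingStoppingCriterion -> <|"Criterion" ->
     Function[assoc,
      If[assoc["ValidationLoss"] > 1.75, count++, count = 0];
      count > 20]|>]]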
Properties & Relations (2)
The direction of change that is considered an improvement depends on the measurement. For "Loss", a decrease is an improvement. For other measurements, the direction can be specified using the "Direction"→direction suboption of TrainingProgressMeasurements. For built-in measurements, the direction is automatically chosen as appropriate.
Stop training if the L1 norm of the first layer activations does not increase:
Any of the non-function stopping criteria can be specified using a callback function. Set up the net and data:
Without a callback function, use TrainingStoppingCriterion to stop training when the validation loss has not improved for 50 iterations:
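A sketch of this call, using the illustrative net and data:
NetTrain[net, trainData, ValidationSet -> validData,
  TrainingStoppingCriterion -> <|"Criterion" -> "Loss", "Patience" -> 50|>]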
This is equivalent to the following criterion with a function:
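A sketch of the equivalent logic, tracking the best validation loss seen so far and counting checks without improvement (the exact off-by-one handling is an assumption):
Module[{best = Infinity, since = 0},
 NetTrain[net, trainData, ValidationSet -> validData,
  TrainingStoppingCriterion -> <|"Criterion" ->
     Function[assoc,
      If[assoc["ValidationLoss"] < best,
       best = assoc["ValidationLoss"]; since = 0,
       since++];
      since > 50]|>]]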
Text
Wolfram Research (2019), TrainingStoppingCriterion, Wolfram Language function, https://reference.wolfram.com/language/ref/TrainingStoppingCriterion.html.
CMS
Wolfram Language. 2019. "TrainingStoppingCriterion." Wolfram Language & System Documentation Center. Wolfram Research. https://reference.wolfram.com/language/ref/TrainingStoppingCriterion.html.
APA
Wolfram Language. (2019). TrainingStoppingCriterion. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/TrainingStoppingCriterion.html