ComputeUncertainty is an option for ClassifierMeasurements, LearnedDistribution and other functions that specifies whether numeric results should be returned together with their uncertainty.


  • Uncertainties are given using Around.
  • The uncertainty interval generated typically corresponds to one standard deviation.


Basic Examples  (2)

Create and test a classifier using ClassifierMeasurements:
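A sketch of what this cell might contain; the toy training and test data below are illustrative assumptions, not taken from the original page:

```
(* train a classifier on a small labeled dataset (illustrative data) *)
trainingset = {1 -> "A", 2 -> "A", 3.5 -> "B", 4 -> "B", 5 -> "B"};
testset = {1.5 -> "A", 3 -> "B", 4.5 -> "B"};
c = Classify[trainingset];

(* measure its performance on the test set *)
cm = ClassifierMeasurements[c, testset]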

Measure the accuracy along with its uncertainty:
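Assuming the measurements object cm from the previous cell, the accuracy can be requested with uncertainty; the result is returned as an Around object:

```
cm["Accuracy", ComputeUncertainty -> True]
```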

Measure the F1 scores along with their uncertainties:
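With the same measurements object, the per-class F1 scores can be requested with their uncertainties (the result is an association of Around objects, one per class):

```
cm["F1Score", ComputeUncertainty -> True]
```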

Train a "Multinormal" distribution on a nominal dataset:
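A sketch of this cell, using an assumed nominal (categorical) dataset; the category values are illustrative:

```
(* learn a "Multinormal" distribution from nominal data (illustrative data) *)
data = {"cat", "dog", "dog", "cat", "mouse", "dog", "cat", "dog"};
ld = LearnDistribution[data, Method -> "Multinormal"]
```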

Because of the necessary preprocessing, the PDF computation is not exact:
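Since nominal values must be preprocessed into a numeric space before a multinormal model can be fitted, the PDF value is an estimate rather than an exact number. Assuming the learned distribution ld from the cell above:

```
PDF[ld, "dog"]
```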

Use ComputeUncertainty to obtain the uncertainty on the result:
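Passing the ComputeUncertainty option to PDF returns the estimate as an Around object, with an interval typically corresponding to one standard deviation:

```
PDF[ld, "dog", ComputeUncertainty -> True]
```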

Increase MaxIterations to improve the estimation precision:
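A sketch of this cell; the iteration count below is an assumed illustrative value:

```
(* more iterations tighten the uncertainty interval at the cost of speed *)
PDF[ld, "dog", ComputeUncertainty -> True, MaxIterations -> 10000]
```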

Introduced in 2019