NetMeasurements

NetMeasurements[net,data,measurement]

computes the requested measurement for the net evaluated on data.

NetMeasurements[net,data,{mspec1,mspec2,…}]

computes a list of measurements for the net evaluated on data.

Details and Options

  • NetMeasurements is an extension of ClassifierMeasurements and PredictorMeasurements that offers a few features that make it particularly well suited for neural nets. For example, the measurements are all implemented efficiently in the computational graph of the net and can be calculated on the GPU, which makes NetMeasurements applicable to the large datasets often used when training nets.
  • A net can be any NetChain, NetGraph, NetModel or similar construct that could be supplied to NetTrain.
  • The data can be in any form accepted by NetTrain, including:
  • {input1→output1,input2→output2,…}    a list of input-output pairs
    <|port1→{data11,…},port2→…|>    data corresponding to each port in the net
    "dataset"    a named dataset from the Wolfram Data Repository
  • Named datasets that are commonly used as examples for neural net applications include the following:
  • "MNIST"classified handwritten digits
    "FashionMNIST"classified images of items of clothing
    "CIFAR-10","CIFAR-100"classified images of real-world objects
    "MovieReview"movie review snippets with sentiment
  • Measuring on a named dataset is equivalent to measuring on ResourceData["dataset","TestSet"].
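  • For example, assuming net is a classifier trained on MNIST, the following two calls are equivalent:
    NetMeasurements[net, "MNIST", "Accuracy"]  (* assumes net is an MNIST classifier *)
    NetMeasurements[net, ResourceData["MNIST", "TestSet"], "Accuracy"]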
  • A measurement can have any of the following forms:
  • "measurement"a named, built-in measurement
    NetPort["output"]the value of an output port of the net
    NetPort[{lspec,"output"}]the value of an interior activation of the net
    <|"Measurement"spec,|>a measurement with additional settings
  • The additional settings in <|"Measurement"→spec,…|> are the same as those for TrainingProgressMeasurements.
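  • As an illustration, a sketch of the NetPort and association forms (the layer name "conv_1" is a placeholder, and "ClassAveraging" is one of the TrainingProgressMeasurements settings):
    NetMeasurements[net, data, NetPort[{"conv_1", "Output"}]]  (* "conv_1" is a placeholder layer name *)
    NetMeasurements[net, data, <|"Measurement" → "Precision", "ClassAveraging" → "Macro"|>]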
  • All properties from NetInformation can be used as built-in measurements.
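  • For example, a sketch requesting one such property (assuming "LayersCount", a NetInformation property):
    NetMeasurements[net, data, "LayersCount"]  (* assumed NetInformation property name *)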
  • For nets that contain CrossEntropyLossLayer, the following built-in measurements are available:
  • "Accuracy"fraction of correctly classified examples
    "Accuracy"nfraction of examples with the correct result in the top n
    "AreaUnderROCCurve"area under the ROC curve for each class
    "CohenKappa"Cohen's kappa coefficient
    "ConfusionMatrix"counts cij of class i examples classified as class j
    "ConfusionMatrixPlot"plot of the confusion matrix
    "Entropy"entropy measured in nats
    "ErrorRate"fraction of incorrectly classified examples
    "ErrorRate"nfraction of examples with the incorrect result in the top n
    "F1Score"F1 score for each class
    "FScore"βFβ score for each class
    "FalseDiscoveryRate"false discovery rate for each class
    "FalseNegativeNumber"number of false negative examples
    "FalseNegativeRate"false negative rate for each class
    "FalseOmissionRate"false omission rate for each class
    "FalsePositiveNumber"number of false positive examples
    "FalsePositiveRate"false positive rate for each class
    "Informedness"informedness for each class
    "Markedness"markedness for each class
    "MatthewsCorrelationCoefficient"Matthews correlation coefficient for each class
    "NegativePredictiveValue"negative predictive value for each class
    "Perplexity"exponential of the entropy
    "Precision"precision for each class
    "Recall"recall rate for each class
    "ROCCurve"receiver operating characteristics (ROC) curve for each class
    "ROCCurvePlot"plot of the ROC curve
    "ScottPi"Scott's pi coefficient
    "Specificity"specificity for each class
    "TrueNegativeNumber"number of true negative examples
    "TruePositiveNumber"number of true positive examples
  • For nets that contain MeanSquaredLossLayer or MeanAbsoluteLossLayer, the following built-in measurements are available:
  • "FractionVarianceUnexplained"the fraction of output variance left unexplained by the net
    "IntersectionOverUnion"intersection over union for bounding boxes
    "MeanDeviation"mean absolute value of the residuals
    "MeanSquare"mean square of the residuals
    "RSquared"coeficient of determination
    "StandardDeviation"root mean square of the residuals
  • The following options are supported:
  • BatchSize    Automatic    how many examples to process in a batch
    LossFunction    Automatic    the loss function for assessing outputs
    NetEvaluationMode    "Test"    how training-specific layers should behave
    TargetDevice    "CPU"    the target device on which to perform measurements
    WorkingPrecision    Automatic    precision of floating-point calculations
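  • For example, a sketch measuring on a GPU with a larger batch size (the option values shown are illustrative):
    NetMeasurements[net, testData, "Accuracy", TargetDevice → "GPU", BatchSize → 256]  (* illustrative option values *)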

Examples


Basic Examples  (1)

Measure the accuracy of a trained LeNet on the MNIST test data. Set up a neural net and test data:

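A possible setup, assuming the "LeNet Trained on MNIST Data" model from the Wolfram Neural Net Repository and the MNIST test set from the Wolfram Data Repository:

In[1]:= lenet = NetModel["LeNet Trained on MNIST Data"];  (* assumed model from the Neural Net Repository *)
        testData = ResourceData["MNIST", "TestSet"];  (* MNIST test set from the Data Repository *)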

Now make the measurement:

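A sketch using the lenet and testData defined above; the result is the fraction of test examples the net classifies correctly:

In[2]:= NetMeasurements[lenet, testData, "Accuracy"]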

Scope  (2)

Properties & Relations  (2)

Possible Issues  (1)

Introduced in 2019 (12.0)