Neural Network Construction & Properties

Neural networks can be constructed to tackle specific tasks, but doing so requires the flexibility to experiment with different architectures and hyperparameters. Precise measurement of network outputs and gradients is essential for comparing results and spotting problems, together with control over training behavior and hardware. The symbolic environment of the Wolfram Language is particularly suitable for representing neural network operations (layers) as abstract functions. Networks can be built from scratch by combining new layers, or adapted from pretrained models using surgery functions. Element parameters such as weights, input shape and output type can be extracted or replaced. Options provide finer control over aspects such as evaluation runtime, learning rate and working precision.

NetGraph symbolic representation of a trained or untrained net graph to be applied to data

NetChain symbolic representation of a simple chain of layers
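For example, the same two-layer topology can be expressed as a chain or as an explicit graph (a minimal sketch; the names "linear" and "softmax" are illustrative):

```wolfram
(* a simple chain: linear map followed by softmax *)
chain = NetChain[{LinearLayer[10], SoftmaxLayer[]}, "Input" -> 784]

(* the same topology as an explicit graph with named vertices and edges *)
graph = NetGraph[
  <|"linear" -> LinearLayer[10], "softmax" -> SoftmaxLayer[]|>,
  {"linear" -> "softmax"},
  "Input" -> 784]
```

NetGraph generalizes NetChain to arbitrary connectivity, including multiple inputs, outputs and branches.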

Net Layers »

LinearLayer layer representing a trainable affine transformation

ConvolutionLayer layer representing a trainable convolution operation

ThreadingLayer  ▪  AttentionLayer  ▪  AggregationLayer  ▪  SoftmaxLayer  ▪  ...
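A layer is itself a symbolic object that, once initialized, acts as a function on numeric arrays. A minimal sketch with a convolution layer (shapes chosen for illustration):

```wolfram
(* a trainable 2D convolution: 16 output channels, 3x3 kernels *)
conv = ConvolutionLayer[16, {3, 3}, "Input" -> {3, 32, 32}]

(* apply after random initialization; with 3x3 kernels and no padding,
   a 3x32x32 input yields a 16x30x30 output *)
NetInitialize[conv][RandomReal[1, {3, 32, 32}]]
```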

Prebuilt Material

NetModel complete pre-trained net models

ResourceData access to training data, networks, etc.
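A sketch of fetching prebuilt material (assumes network access; "LeNet Trained on MNIST Data" is one of the models available in the Wolfram Neural Net Repository):

```wolfram
(* a complete pretrained model *)
lenet = NetModel["LeNet Trained on MNIST Data"];

(* matching training data from the Wolfram Data Repository *)
mnist = ResourceData["MNIST"];
```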

Net Operations »

NetReplacePart replace layers or layer properties

NetExtract  ▪  NetTake  ▪  NetAppend  ▪  NetReplace  ▪  NetJoin  ▪  ...
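Surgery functions take nets apart and reassemble them symbolically, which is the usual route for transfer learning. A minimal sketch on a small untrained chain:

```wolfram
net = NetChain[
  {LinearLayer[64], Ramp, LinearLayer[10], SoftmaxLayer[]},
  "Input" -> 100];

(* keep only the first two layers, e.g. to reuse as a feature extractor *)
NetTake[net, 2]

(* swap the third layer to retarget the net to 5 classes *)
NetReplacePart[net, 3 -> LinearLayer[5]]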

Symbolic Elements

NetPort a named input or output port for a layer

NetPortGradient the gradient of a net with respect to a port

NetArray a learnable array
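NetPortGradient lets a net be differentiated with respect to any of its ports at evaluation time. A minimal sketch (SummationLayer gives a scalar output so the gradient is well defined):

```wolfram
net = NetInitialize@
  NetChain[{LinearLayer[3], SummationLayer[]}, "Input" -> 2];

(* d(output)/d(input) at the point {1., 2.} *)
net[{1., 2.}, NetPortGradient["Input"]]
```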

Net Objects

NetStateObject store and reuse recurrent state in a net

NetTrainResultsObject represent the results of a training session
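NetStateObject is useful when a recurrent net must process a stream incrementally rather than a whole sequence at once. A minimal sketch (layer sizes are illustrative):

```wolfram
rnn = NetInitialize@
  NetChain[{GatedRecurrentLayer[4]}, "Input" -> {"Varying", 2}];

(* wrap the net so recurrent state persists between evaluations *)
state = NetStateObject[rnn];
state[{{1., 0.}}]  (* evaluates and updates the stored state *)
```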

Net Properties

NetExtract extract weights and other properties from nets

Information give summary and detailed information about any net

NetMeasurements measure the performance of a net on test data

Options find the options for building or applying a layer
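A sketch of inspecting a net's contents (positions and property names follow the usual layer/"Weights" addressing):

```wolfram
net = NetInitialize@NetChain[{LinearLayer[3]}, "Input" -> 2];

(* extract the learned weight matrix of the first layer *)
NetExtract[net, {1, "Weights"}]

(* summary information about the whole net *)
Information[net]
```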

Construction Options

InputPorts, OutputPorts specify the number, names or shapes of ports

LearningRateMultipliers specify learning rate multipliers to apply during training
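A common use of LearningRateMultipliers is freezing part of a net during training, e.g. when fine-tuning. A minimal sketch with synthetic data (sizes and round count are illustrative):

```wolfram
net = NetChain[{LinearLayer[8], Ramp, LinearLayer[1]}, "Input" -> 3];
data = Table[RandomReal[1, 3] -> RandomReal[], 100];

(* multiplier 0 freezes the first layer; the rest train normally *)
NetTrain[net, data,
  LearningRateMultipliers -> {1 -> 0},
  MaxTrainingRounds -> 5]
```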

Evaluation Options

BatchSize specify the size of a batch of examples to process together

NetEvaluationMode specify whether the net should use training-specific behavior

RandomSeeding specify the seeding of pseudorandom generators

TargetDevice specify whether CPU or GPU computation should be attempted

WorkingPrecision the numerical precision used for evaluating the net
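These options are given at evaluation time. A minimal sketch (TargetDevice -> "GPU" would offload the computation to a supported GPU instead):

```wolfram
net = NetInitialize[
  NetChain[{LinearLayer[4]}, "Input" -> 2],
  RandomSeeding -> 1234];  (* reproducible initialization *)

(* evaluate 100 examples in batches of 32 on the CPU *)
net[RandomReal[1, {100, 2}], BatchSize -> 32, TargetDevice -> "CPU"]
```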

Importing & Exporting

"WLNet" Wolfram Language Net representation format

"ONNX" ONNX open exchange format

"MXNet" MXNet representation format

Import  ▪  Export
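A sketch of round-tripping a net through the "WLNet" format (the file names are illustrative):

```wolfram
net = NetInitialize@NetChain[{LinearLayer[4]}, "Input" -> 2];

(* the format is inferred from the file extension *)
Export["net.wlnet", net];
restored = Import["net.wlnet"]

(* Export["net.onnx", net] similarly writes the ONNX exchange format *)
```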