"Autoencoder" (Machine Learning Method)

Details & Suboptions

  • "Autoencoder" is a neural netbased dimensionality reduction method. The method learns a low-dimensional representation of data by learning to approximate the identity function using a deep network that has an information bottleneck.
  • "Autoencoder" works for high-dimensional data (e.g. images), a large number of examples and noisy training sets; however, it is slow to train and can fail when the training set is small.
  • Feature-space plots (see FeatureSpacePlot) can show two-dimensional embeddings learned by the "Autoencoder" method applied to benchmarking datasets such as Fisher's Irises, MNIST and FashionMNIST.
  • The autoencoder network is made of an encoder net and a decoder net. The encoder net transforms the input data into a low-dimensional numeric representation (also called latent representation). The decoder attempts to reconstruct the original input from the latent representation.
  • The encoder and decoder networks are trained together by minimizing the discrepancy between the original data and its reconstruction.
  • The suboption "NetworkDepth" can be used to set the depth of encoder and decoder networks in order to control their capacity. Deeper networks allow the encoder to learn more complex patterns but will be more prone to overfitting. "NetworkDepth"1 is equivalent to performing "PrincipalComponentsAnalysis".
  • The following suboptions can be given:
  • "NetworkDepth"Automaticdepth of the encoder and decoder
    MaxTrainingRounds Automaticmaximum number of rounds of training

Examples


Basic Examples  (2)

Generate a dimension reducer from high-dimensional random vectors using the autoencoder method:
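A possible input along these lines (the number of vectors and their dimension are illustrative):

    vectors = RandomReal[1, {500, 50}];
    reducer = DimensionReduction[vectors, 2, Method -> "Autoencoder"]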

Reduce new vectors using the trained autoencoder:
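For instance, applying the reducer to freshly generated vectors of the same dimension:

    reducer[RandomReal[1, {5, 50}]]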

Reduce the dimension of some images using the autoencoder method:
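A sketch of one way to set this up, assuming a sample of images taken from the "MNIST" entry in the Wolfram Data Repository:

    imgs = Keys[RandomSample[ResourceData["MNIST", "TrainingData"], 500]];
    reduced = DimensionReduce[imgs, 2, Method -> "Autoencoder"];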

Visualize the two-dimensional representation of images:
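For example, placing each image at its learned coordinates (the thumbnail size and rescaling are illustrative choices):

    Graphics[
      MapThread[Inset[Thumbnail[#1, 24], #2] &, {imgs, Rescale[reduced]}],
      Frame -> True]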

Scope  (1)

Create training and test data consisting of two-dimensional numerical sequences of variable length:
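One way to construct such data (the lengths and value bounds are arbitrary choices):

    makeSequence[] := Module[{b = RandomReal[{1, 5}]},
       RandomReal[{-b, b}, {RandomInteger[{4, 20}], 2}]];
    train = Table[makeSequence[], 200];
    test = Table[makeSequence[], 20];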

Train an autoencoder to find a dense three-dimensional representation of input sequences:
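A possible call, following on from the data above:

    reducer = DimensionReduction[train, 3, Method -> "Autoencoder"]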

Visualize the similarity between different sequences of different lengths and bounds using the encoder:
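For instance, plotting the three-dimensional encodings of the test sequences, where nearby points correspond to similar sequences:

    ListPointPlot3D[reducer[test], PlotRange -> All]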

Generate new sequences from their encodings:
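One way to do this, assuming the "OriginalData" property of DimensionReducerFunction maps latent vectors back to the original space:

    encodings = reducer[test];
    reducer[encodings, "OriginalData"]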

Options  (2)

MaxTrainingRounds  (1)

Obtain the MNIST training dataset:
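Assuming the "MNIST" entry in the Wolfram Data Repository:

    trainingData = ResourceData["MNIST", "TrainingData"];
    imgs = Keys[trainingData];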

Train an autoencoder network such that it visits each example exactly once:
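A single training round corresponds to one pass over the examples, so one possible call is:

    reducer = DimensionReduction[imgs, 2,
       Method -> {"Autoencoder", MaxTrainingRounds -> 1}]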

NetworkDepth  (1)

Obtain the MNIST dataset that contains training and test images:
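Again assuming the Wolfram Data Repository "MNIST" entry, with its training and test elements:

    trainImgs = Keys[ResourceData["MNIST", "TrainingData"]];
    testImgs = Keys[ResourceData["MNIST", "TestData"]];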

Train several autoencoders with different "NetworkDepth" to reduce the dimensions of the images:
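A sketch of one way to do this (the sample size and chosen depths are illustrative):

    sample = RandomSample[trainImgs, 2000];
    reducers = AssociationMap[
       DimensionReduction[sample, 2, Method -> {"Autoencoder", "NetworkDepth" -> #}] &,
       {1, 2, 3}];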

Visualize the two-dimensional representation of images for various network depths:
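For example, plotting the reduced test images for each depth:

    testSample = RandomSample[testImgs, 500];
    Map[ListPlot[#[testSample], PlotRange -> All] &, reducers]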

Applications  (2)

Data Reconstruction  (1)

Load the Fashion MNIST training and test dataset:
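Assuming the "FashionMNIST" entry in the Wolfram Data Repository:

    trainImgs = Keys[ResourceData["FashionMNIST", "TrainingData"]];
    testImgs = Keys[ResourceData["FashionMNIST", "TestData"]];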

Train an autoencoder to reduce the dimensions of the images:
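One possible call (the target dimension and training subset size are illustrative):

    reducer = DimensionReduction[RandomSample[trainImgs, 5000], 40,
       Method -> "Autoencoder"]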

Use the reducer to reconstruct images from their encodings and compare with the original images:
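One way to do this, assuming the "ReconstructedData" property of DimensionReducerFunction performs reduction followed by inversion and returns data in the same form as the input:

    sample = RandomSample[testImgs, 5];
    reconstructed = reducer[sample, "ReconstructedData"];
    Grid[{sample, reconstructed}]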

Data Visualization  (1)

Reduce the dimension of some images using the autoencoder method:
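For instance, using a labeled sample assumed to come from the "FashionMNIST" entry in the Wolfram Data Repository:

    examples = RandomSample[ResourceData["FashionMNIST", "TrainingData"], 1000];
    imgs = Keys[examples];
    reduced = DimensionReduce[imgs, 2, Method -> "Autoencoder"];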

Visualize the two-dimensional representation of images:
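For example, grouping the reduced points by their class labels to see how classes cluster:

    ListPlot[
      GroupBy[Thread[Values[examples] -> reduced], First -> Last],
      PlotLegends -> Automatic]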