DimensionReduction[{example1,example2,…}] generates a DimensionReducerFunction[…] that projects from the space defined by the examplei to a lower-dimensional approximating manifold.


DimensionReduction[{example1,example2,…},n] generates a DimensionReducerFunction[…] for an n-dimensional approximating manifold.


DimensionReduction[examples,…,props] generates the specified properties of the dimensionality reduction.

Details and Options

  • DimensionReduction can be used on many types of data, including numerical, textual, sound, and image data, as well as combinations of these.
  • DimensionReduction[examples] yields a DimensionReducerFunction[] that can be applied to data to perform dimension reduction.
  • Each examplei can be a single data element, a list of data elements, an association of data elements, or a Dataset object.
  • DimensionReduction[examples] automatically chooses an appropriate dimension for the target approximating manifold.
  • DimensionReduction[examples] is equivalent to DimensionReduction[examples,Automatic].
  • In DimensionReduction[…,props], props can be a single property or a list of properties. Possible properties include:
  • "ReducerFunction"DimensionReducerFunction[] (default)
    "ReducedVectors"vectors obtained by reducing the examplei
    "ReconstructedData"reconstruction of examples after reduction and inversion
    "ImputedData"missing values in examples replaced by imputed values
  • The following options can be given:
  • FeatureExtractor  Identity  how to extract features from which to learn
    FeatureNames  Automatic  names to assign to elements of the examplei
    FeatureTypes  Automatic  feature types to assume for elements of the examplei
    Method  Automatic  which reduction algorithm to use
    PerformanceGoal  Automatic  aspect of performance to optimize
    RandomSeeding  1234  what seeding of pseudorandom generators should be done internally
    TargetDevice  "CPU"  the target device on which to perform training
  • Possible settings for PerformanceGoal include:
  • "Memory"minimize the storage requirements of the reducer function
    "Quality"maximize reduction quality
    "Speed"maximize reduction speed
    "TrainingSpeed"minimize the time spent producing the reducer
  • PerformanceGoal->{goal1,goal2,…} will automatically combine goal1, goal2, etc.
  • Possible settings for Method include:
  • Automatic  automatically chosen method
    "LatentSemanticAnalysis"  latent semantic analysis method
    "Linear"  automatically choose the best linear method
    "LowRankMatrixFactorization"  use a low-rank matrix factorization algorithm
    "PrincipalComponentsAnalysis"  principal components analysis method
    "TSNE"  t-distributed stochastic neighbor embedding algorithm
    "AutoEncoder"  use a trainable autoencoder
    "LLE"  locally linear embedding
    "Isomap"  isometric mapping
  • For Method->"TSNE", the following suboptions are supported:
  • "Perplexity"  Automatic  perplexity value to be used
    "LinearPrereduction"  False  whether to perform a light linear pre-reduction before running the t-SNE algorithm
  • Possible settings for RandomSeeding include:
  • Automatic  automatically reseed every time the function is called
    Inherited  use externally seeded random numbers
    seed  use an explicit integer or string as a seed
  • DimensionReduction[…,FeatureExtractor->"Minimal"] indicates that the internal preprocessing should be as simple as possible.
  • DimensionReduction[DimensionReducerFunction[…],FeatureExtractor->fe] can be used to prepend the FeatureExtractorFunction[…] fe to the existing feature extractor.
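As an illustration of the t-SNE suboptions above, a reducer could be requested as follows. The data and perplexity value here are hypothetical, and Method -> {name, subopts} is the usual form for passing method suboptions:

```wolfram
(* hypothetical sample data: 2D points scattered near the line y == 2 x *)
vectors = {{1.0, 2.1}, {2.0, 3.9}, {3.0, 6.2}, {4.0, 7.8}, {5.0, 10.1}};

(* reduce to 1 dimension with t-SNE, giving an explicit perplexity *)
reducer = DimensionReduction[vectors, 1, Method -> {"TSNE", "Perplexity" -> 2}]
```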



Basic Examples  (3)

Generate a dimension reducer from a list of vectors:

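A minimal sketch, assuming a small hypothetical set of 2D vectors lying near a line:

```wolfram
vectors = {{1.0, 2.1}, {2.0, 3.9}, {3.0, 6.2}, {4.0, 7.8}};
reducer = DimensionReduction[vectors]
```

The result is a DimensionReducerFunction[…] that can be applied to new data.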

Use this reducer on a new vector:

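A sketch of this step, assuming a reducer trained on hypothetical 2D vectors:

```wolfram
vectors = {{1.0, 2.1}, {2.0, 3.9}, {3.0, 6.2}, {4.0, 7.8}};
reducer = DimensionReduction[vectors];

(* apply the reducer to a single new vector of the same dimension *)
reducer[{2.5, 5.0}]
```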

Use this reducer on a list of new vectors:

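A sketch with hypothetical data; applying the reducer to a list of vectors reduces each of them:

```wolfram
vectors = {{1.0, 2.1}, {2.0, 3.9}, {3.0, 6.2}, {4.0, 7.8}};
reducer = DimensionReduction[vectors];

(* apply the reducer to several new vectors at once *)
reducer[{{1.5, 3.1}, {3.5, 7.0}}]
```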

Create a reducer with a specified target dimension of 1:

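A sketch with an explicit target dimension, using hypothetical data:

```wolfram
vectors = {{1.0, 2.1}, {2.0, 3.9}, {3.0, 6.2}, {4.0, 7.8}};

(* second argument gives the dimension of the target manifold *)
reducer = DimensionReduction[vectors, 1]
```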

Apply the reducer to the vectors used to generate the reducer:

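For example (hypothetical data), the training vectors can be passed back through the reducer to obtain their reduced coordinates:

```wolfram
vectors = {{1.0, 2.1}, {2.0, 3.9}, {3.0, 6.2}, {4.0, 7.8}};
reducer = DimensionReduction[vectors, 1];
reducer[vectors]
```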

Obtain both the reducer and the reduced vectors in one step:

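A sketch using a property list to obtain both results at once (hypothetical data):

```wolfram
vectors = {{1.0, 2.1}, {2.0, 3.9}, {3.0, 6.2}, {4.0, 7.8}};

(* request the reducer function and the reduced vectors together *)
{reducer, reduced} =
  DimensionReduction[vectors, 1, {"ReducerFunction", "ReducedVectors"}]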

Train a dimension reducer on a mixed-type dataset:

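A sketch with hypothetical mixed numerical and nominal data given as associations:

```wolfram
data = {
   <|"age" -> 34, "color" -> "blue", "mass" -> 71.2|>,
   <|"age" -> 22, "color" -> "red", "mass" -> 56.9|>,
   <|"age" -> 45, "color" -> "blue", "mass" -> 80.5|>,
   <|"age" -> 29, "color" -> "green", "mass" -> 62.3|>};
reducer = DimensionReduction[data, 1]
```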

Reduce the dimension of a new example:

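A sketch, training the hypothetical mixed-type reducer and applying it to an unseen example:

```wolfram
data = {
   <|"age" -> 34, "color" -> "blue", "mass" -> 71.2|>,
   <|"age" -> 22, "color" -> "red", "mass" -> 56.9|>,
   <|"age" -> 45, "color" -> "blue", "mass" -> 80.5|>,
   <|"age" -> 29, "color" -> "green", "mass" -> 62.3|>};
reducer = DimensionReduction[data, 1];

(* reduce a new example with the same keys *)
reducer[<|"age" -> 38, "color" -> "red", "mass" -> 68.0|>]
```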

Scope  (7)

Options  (7)

Applications  (5)

Introduced in 2015
Updated in 2018