"DecisionTree" (Machine Learning Method)

Details & Suboptions

  • A decision tree is a flowchart-like structure in which each internal node represents a "test" on a feature, each branch represents an outcome of the test, and each leaf represents a class distribution, value distribution, or probability density.
  • For Classify and Predict, the tree is constructed using the CART algorithm.
  • For LearnDistribution, the splits are determined using an information criterion trading off the likelihood and the complexity of the model.
  • The following options can be given:
    "DistributionSmoothing"   1   regularization parameter
    "FeatureFraction"         1   fraction of features to be randomly selected for training (only in Classify and Predict)

Examples


Basic Examples  (3)

Train a predictor function on labeled examples:
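A sketch of such an input (the labeled data here is illustrative, not from the original page):

```wolfram
p = Predict[{1 -> 1.3, 2 -> 2.4, 3 -> 3.5, 4 -> 4.4},
  Method -> "DecisionTree"]
```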

Look at the information about the predictor:
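One way to query it, assuming the predictor from the previous step is named p:

```wolfram
Information[p]
```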

Extract option information that can be used for retraining:
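A sketch using the "MethodOption" property (assuming the predictor is named p):

```wolfram
Information[p, "MethodOption"]
```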

Predict a new example:
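For instance, applied to a new input (assuming the predictor is named p):

```wolfram
p[5]
```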

Generate some data and visualize it:
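A sketch of such a dataset (the generating function and noise level are illustrative):

```wolfram
data = Table[x -> Sin[2 x] + Cos[x/2] + RandomReal[0.2], {x, 0, 6, 0.1}];
ListPlot[List @@@ data]
```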

Train a predictor function on it:
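Assuming the dataset from the previous step is named data:

```wolfram
p = Predict[data, Method -> "DecisionTree"]
```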

Compare the data with the predicted values and look at the standard deviation:
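A possible visualization, using the "Distribution" property of the predictor to obtain a standard deviation band (names p and data are assumed from the previous steps):

```wolfram
Show[
 Plot[{p[x],
   p[x] + StandardDeviation[p[x, "Distribution"]],
   p[x] - StandardDeviation[p[x, "Distribution"]]},
  {x, 0, 6}, PlotStyle -> {Blue, Gray, Gray}],
 ListPlot[List @@@ data, PlotStyle -> Red]
]
```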

Learn a distribution using the method "DecisionTree":
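A sketch with synthetic data (the sample and its size are illustrative):

```wolfram
ld = LearnDistribution[RandomVariate[NormalDistribution[], 200],
  Method -> "DecisionTree"]
```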

Visualize the PDF obtained:
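Assuming the learned distribution from the previous step is named ld:

```wolfram
Plot[PDF[ld, x], {x, -4, 4}, Filling -> Axis]
```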

Obtain information about the distribution:
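Assuming the learned distribution is named ld:

```wolfram
Information[ld]
```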

Options  (4)

"DistributionSmoothing"  (2)

Use the "DistributionSmoothing" option to train a classifier:

Use the mushroom training set to train a classifier with the default value of "DistributionSmoothing":
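A sketch of this step; the default "DistributionSmoothing" value is 1, per the options table above:

```wolfram
trainingset = ExampleData[{"MachineLearning", "Mushroom"}, "TrainingData"];
c1 = Classify[trainingset,
  Method -> {"DecisionTree", "DistributionSmoothing" -> 1}]
```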

Train a second classifier using a large "DistributionSmoothing":
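For example (the value 50 is an illustrative "large" setting, and trainingset is assumed from the previous step):

```wolfram
c2 = Classify[trainingset,
  Method -> {"DecisionTree", "DistributionSmoothing" -> 50}]
```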

Compare the probabilities for examples from a test set:
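A possible comparison on a few test examples (assuming classifiers c1 and c2 from the previous steps):

```wolfram
testset = ExampleData[{"MachineLearning", "Mushroom"}, "TestData"];
c1[Keys[testset[[;; 3]]], "Probabilities"]
c2[Keys[testset[[;; 3]]], "Probabilities"]
```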

"FeatureFraction"  (2)

Use the "FeatureFraction" option to train a classifier:

Use the mushroom training set to train two classifiers with different values of "FeatureFraction":
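A sketch of this step (the two "FeatureFraction" values are illustrative):

```wolfram
trainingset = ExampleData[{"MachineLearning", "Mushroom"}, "TrainingData"];
c1 = Classify[trainingset,
  Method -> {"DecisionTree", "FeatureFraction" -> 1}];
c2 = Classify[trainingset,
  Method -> {"DecisionTree", "FeatureFraction" -> 0.1}];
```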

Look at the accuracy of these classifiers on a test set:
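One way to measure this, assuming c1 and c2 from the previous step:

```wolfram
testset = ExampleData[{"MachineLearning", "Mushroom"}, "TestData"];
ClassifierMeasurements[c1, testset, "Accuracy"]
ClassifierMeasurements[c2, testset, "Accuracy"]
```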

Introduced in 2017 (11.2)