"GradientBoostedTrees" (Machine Learning Method)
- Method for Classify and Predict.
- Predict the value or class of an example using an ensemble of decision trees.
- Trees are trained sequentially following the boosting meta-algorithm.
Details & Suboptions
- Gradient boosting is a machine learning technique for regression and classification problems that produces a prediction model in the form of an ensemble of trees. Trees are trained sequentially, each new tree trained to compensate for the weaknesses of the previous trees. The current implementation uses the LightGBM framework in the back end.
- The following options can be given:
- MaxTrainingRounds	50	number of boosting rounds
- "BoostingMethod"	"Gradient"	the boosting method to use
- "L1Regularization"	0	L1 regularization parameter
- "L2Regularization"	0	L2 regularization parameter
- "LeafSize"	Automatic	minimum number of data samples in one leaf
- "LearningRate"	Automatic	learning rate used in gradient descent
- "LeavesNumber"	Automatic	maximum number of leaves in one tree
- "MaxDepth"	6	maximum depth of each tree
- Possible settings for "BoostingMethod" include "Gradient", "GradientOneSideSampling", and "DART" (Dropouts meet Multiple Additive Regression Trees).
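For instance, these suboptions can be passed to Classify as a method specification list. A sketch; the training data and suboption values here are arbitrary:

```wolfram
(* train a classifier with explicit gradient-boosting suboptions;
   the data below is invented for illustration *)
c = Classify[
   {{1.0, 2.0} -> "A", {1.5, 1.8} -> "A", {5.0, 8.0} -> "B", {6.0, 9.0} -> "B"},
   Method -> {"GradientBoostedTrees",
     MaxTrainingRounds -> 20,
     "BoostingMethod" -> "Gradient",
     "MaxDepth" -> 4,
     "LearningRate" -> 0.1}];

(* predicted class of a new example *)
c[{1.2, 1.9}]
```

Note that MaxTrainingRounds is a symbol, while the remaining suboption names are strings.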
Examples
Basic Examples (2)
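A minimal sketch of such a basic example (the training data is invented for illustration):

```wolfram
(* classify labeled examples using the "GradientBoostedTrees" method *)
c = Classify[{1 -> "small", 2 -> "small", 9 -> "large", 10 -> "large"},
   Method -> "GradientBoostedTrees"];

(* classify a new example *)
c[8.5]
```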
Options (8)
"BoostingMethod" (1)
"LeafSize" (2)
"LeavesNumber" (1)
"MaxDepth" (2)
MaxTrainingRounds (2)
Use the MaxTrainingRounds option to train a classifier:
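This could look as follows (the data is illustrative):

```wolfram
(* increase the number of boosting rounds from the default of 50 *)
c = Classify[
   {{0, 0} -> False, {0, 1} -> False, {1, 0} -> True, {1, 1} -> True},
   Method -> {"GradientBoostedTrees", MaxTrainingRounds -> 200}];
```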
Train two classifiers on the "Mushroom" dataset with different values of MaxTrainingRounds:
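A sketch of this comparison, assuming the "Mushroom" set is fetched with ExampleData (the resulting accuracies are not shown, since they depend on the run):

```wolfram
(* fetch the "Mushroom" training and test sets *)
trainingData = ExampleData[{"MachineLearning", "Mushroom"}, "TrainingData"];
testData = ExampleData[{"MachineLearning", "Mushroom"}, "TestData"];

(* train two classifiers with different numbers of boosting rounds *)
c1 = Classify[trainingData,
   Method -> {"GradientBoostedTrees", MaxTrainingRounds -> 5}];
c2 = Classify[trainingData,
   Method -> {"GradientBoostedTrees", MaxTrainingRounds -> 100}];

(* compare their accuracies on the test set *)
ClassifierMeasurements[c1, testData, "Accuracy"]
ClassifierMeasurements[c2, testData, "Accuracy"]
```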