"GradientBoostedTrees" (Machine Learning Method)

Details & Suboptions

  • Gradient boosting is a machine learning technique for regression and classification problems that produces a prediction model in the form of an ensemble of trees. Trees are trained sequentially, each one aiming to compensate for the weaknesses of the previous trees. The current implementation uses the LightGBM framework as its back end.
  • The following options can be given:
    MaxTrainingRounds     50            number of boosting rounds
    "BoostingMethod"      "Gradient"    the method to use
    "L1Regularization"    0             L1 regularization parameter
    "L2Regularization"    0             L2 regularization parameter
    "LeafSize"            Automatic     minimum number of data samples in one leaf
    "LearningRate"        Automatic     learning rate used in gradient descent
    "LeavesNumber"        Automatic     maximum number of leaves in one tree
    "MaxDepth"            6             maximum depth of each tree

Examples


Basic Examples  (2)

Train a predictor function on labeled examples:

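For example, with some illustrative training pairs:

    p = Predict[{1 -> 1.3, 2 -> 2.4, 3 -> 3.6, 4 -> 4.1, 5 -> 5.3},
      Method -> "GradientBoostedTrees"]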

Look at its PredictorInformation:

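For instance, assuming p is the predictor trained above:

    PredictorInformation[p]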

Predict a new example:

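For instance, with an illustrative input value:

    p[6]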

Generate some data and visualize it:

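One possible sketch, using a noisy sinusoid as illustrative data:

    data = Table[x -> Sin[x] + RandomVariate[NormalDistribution[0, 0.2]],
      {x, 0, 6, 0.05}];
    ListPlot[List @@@ data]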

Train a predictor function on it:

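For instance, with the illustrative data generated above:

    p = Predict[data, Method -> "GradientBoostedTrees"]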

Compare the data with the predicted values and look at the standard deviation:

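A sketch of such a comparison, assuming the illustrative data and predictor p from above:

    Show[ListPlot[List @@@ data], Plot[p[x], {x, 0, 6}, PlotStyle -> Red]]
    PredictorMeasurements[p, data, "StandardDeviation"]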

Options  (8)

See Also

Classify  Predict  ClassifierFunction  PredictorFunction  ClassifierMeasurements  PredictorMeasurements  ClassifierInformation  PredictorInformation  SequencePredict  ClusterClassify

Related Demonstrations

Related Methods