"LogisticRegression" (Machine Learning Method)

Details & Suboptions

  • "LogisticRegression" models the log probabilities of each class with a linear combination of numerical features , , where corresponds to the parameters for class k. The estimation of the parameter matrix is done by minimizing the loss function sum_(i=1)^m-log(P_(theta)(class=y_i|x_i))+lambda_1 sum_(i=1)^nTemplateBox[{{theta, _, i}}, Abs]+(lambda_2)/2 sum_(i=1)^ntheta_i^2.
  • The following options can be given:
  • "L1Regularization"0value of in the loss function
    "L2Regularization"Automaticvalue of in the loss function
    "OptimizationMethod"Automaticwhat method to use
  • Possible settings for "OptimizationMethod" include:
  • "LBFGS"limited memory BroydenFletcherGoldfarbShanno algorithm
    "StochasticGradientDescent"stochastic gradient method
    "Newton"Newton method

Examples


Basic Examples  (2)

Train a classifier function on labeled examples:
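For example (the labeled data here is illustrative):

```wolfram
c = Classify[{1 -> "A", 2 -> "A", 3.5 -> "B", 4 -> "B"},
  Method -> "LogisticRegression"]
```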

Obtain information about the classifier:
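In Version 10.0 this is done with ClassifierInformation (later versions use Information):

```wolfram
ClassifierInformation[c]
```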

Classify a new example:
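Applying the classifier function to an unseen input returns the predicted class; the second argument "Probabilities" would return the class probabilities instead (the input value is illustrative):

```wolfram
c[2.5]
c[2.5, "Probabilities"]
```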

Generate some normally distributed data:
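For instance, two bivariate normal samples (means and sample sizes are assumptions):

```wolfram
data1 = RandomVariate[MultinormalDistribution[{0, 0}, IdentityMatrix[2]], 100];
data2 = RandomVariate[MultinormalDistribution[{2, 2}, IdentityMatrix[2]], 100];
```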

Visualize it:
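Assuming the two samples are stored in data1 and data2:

```wolfram
ListPlot[{data1, data2}]
```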

Train a classifier on this dataset:
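Labeling each sample and training (variable names assumed from the previous steps):

```wolfram
trainingset = Join[Thread[data1 -> "A"], Thread[data2 -> "B"]];
c = Classify[trainingset, Method -> "LogisticRegression"]
```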

Plot the training set and the probability distribution of each class as a function of the features:
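One way to do this is to overlay the data on a contour plot of a class probability (plot range and styling are assumptions):

```wolfram
Show[
 ContourPlot[c[{x, y}, "Probability" -> "A"], {x, -3, 5}, {y, -3, 5},
  ColorFunction -> "TemperatureMap"],
 ListPlot[{data1, data2}]
]
```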

Options  (6)

"L1Regularization"  (2)

Train a classifier using the "L1Regularization" option:

Generate some data and visualize it:

Train several classifiers using different values for "L1Regularization" and compare the results:
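A sketch of the whole comparison, training one classifier per regularization value (data and the values tried are assumptions):

```wolfram
data1 = RandomVariate[MultinormalDistribution[{0, 0}, IdentityMatrix[2]], 50];
data2 = RandomVariate[MultinormalDistribution[{2, 2}, IdentityMatrix[2]], 50];
trainingset = Join[Thread[data1 -> "A"], Thread[data2 -> "B"]];
classifiers = Table[
   Classify[trainingset,
    Method -> {"LogisticRegression", "L1Regularization" -> lambda}],
   {lambda, {0, 1, 100}}];
```

Larger values of λ_1 drive more parameters to exactly zero, which simplifies the resulting decision boundaries.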

"L2Regularization"  (2)

Train a classifier using the "L2Regularization" option:

Generate some data and visualize it:

Train several classifiers using different values for "L2Regularization" and compare the results:
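A sketch of the comparison (data and the values tried are assumptions):

```wolfram
data1 = RandomVariate[MultinormalDistribution[{0, 0}, IdentityMatrix[2]], 50];
data2 = RandomVariate[MultinormalDistribution[{2, 2}, IdentityMatrix[2]], 50];
trainingset = Join[Thread[data1 -> "A"], Thread[data2 -> "B"]];
classifiers = Table[
   Classify[trainingset,
    Method -> {"LogisticRegression", "L2Regularization" -> lambda}],
   {lambda, {0, 1, 100}}];
```

Unlike the L1 penalty, larger values of λ_2 shrink the parameters toward zero without making them exactly zero, smoothing the class probabilities.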

"OptimizationMethod"  (2)

Train a classifier using a specific "OptimizationMethod":
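For example, with the "LBFGS" method (the data is illustrative):

```wolfram
c = Classify[{1 -> "A", 2 -> "A", 3.5 -> "B", 4 -> "B"},
  Method -> {"LogisticRegression", "OptimizationMethod" -> "LBFGS"}]
```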

Train a classifier using the "Newton" method:

Train a classifier using the "StochasticGradientDescent" method:

Compare the corresponding training times:
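The two trainings and their timing comparison might be written as (dataset is illustrative; timings depend on the machine and data):

```wolfram
trainingset = {1 -> "A", 2 -> "A", 3.5 -> "B", 4 -> "B"};
timings = AssociationMap[
   First@AbsoluteTiming[
      Classify[trainingset,
       Method -> {"LogisticRegression", "OptimizationMethod" -> #}]] &,
   {"Newton", "StochasticGradientDescent"}]
```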

Introduced in 2014 (10.0)