ClusterClassify[data] generates a ClassifierFunction[…] by partitioning data into clusters of similar elements.
Details and Options
- ClusterClassify works for a variety of data types, including numerical, textual, and image data, as well as dates, times, and combinations of these.
- The following options can be given:
CriterionFunction	Automatic	criterion for selecting a method
DistanceFunction	Automatic	the distance function to use
FeatureExtractor	Identity	how to extract features from which to learn
FeatureNames	Automatic	feature names to assign for input data
FeatureTypes	Automatic	feature types to assume for input data
Method	Automatic	what method to use
PerformanceGoal	Automatic	aspect of performance to optimize
RandomSeeding	1234	what seeding of pseudorandom generators should be done internally
Weights	Automatic	what weight to give to each example
- By default, ClusterClassify will preprocess the data automatically unless a DistanceFunction is specified.
- The setting for DistanceFunction can be any distance or dissimilarity function, or a function f defining a distance between two values.
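As a minimal sketch of supplying a pure function as the DistanceFunction (the data values and the distance function here are illustrative, not taken from the original page):

```wolfram
(* cluster 1D values using absolute difference as the distance *)
c = ClusterClassify[{1., 1.2, 5., 5.3, 9.8, 10.1},
      DistanceFunction -> (Abs[#1 - #2] &)];
c[5.1]  (* yields the index of the cluster whose elements lie near 5. *)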
- Possible settings for PerformanceGoal include:
Automatic	automatic tradeoff among speed, accuracy, and memory
"Memory"	minimize the storage requirements of the classifier
"Quality"	maximize the accuracy of the classifier
"Speed"	maximize the speed of the classifier
"TrainingSpeed"	minimize the time spent producing the classifier
- Possible settings for Method include:
Automatic	automatically select a method
"Agglomerate"	single linkage clustering algorithm
"DBSCAN"	density-based spatial clustering of applications with noise
"NeighborhoodContraction"	shift data points toward high-density regions
"JarvisPatrick"	Jarvis–Patrick clustering algorithm
"KMeans"	k-means clustering algorithm
"MeanShift"	mean-shift clustering algorithm
"KMedoids"	partitioning around medoids
"SpanningTree"	minimum spanning tree-based clustering algorithm
"Spectral"	spectral clustering algorithm
"GaussianMixture"	variational Gaussian mixture algorithm
- The methods "KMeans" and "KMedoids" can only be used when the number of clusters is specified.
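For example (the data here is illustrative), a cluster count must accompany these methods:

```wolfram
(* "KMeans" requires the number of clusters (here 3) to be specified *)
c = ClusterClassify[RandomReal[10, 100], 3, Method -> "KMeans"];
```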
- The following plots show results of common methods on toy datasets:
- Possible settings for CriterionFunction include:
"StandardDeviation"	root-mean-square standard deviation
"RSquared"	R-squared
"Dunn"	Dunn index
"CalinskiHarabasz"	Calinski–Harabasz index
"DaviesBouldin"	Davies–Bouldin index
"Silhouette"	Silhouette score
Automatic	internal index
- Possible settings for RandomSeeding include:
Automatic	automatically reseed every time the function is called
Inherited	use externally seeded random numbers
seed	use an explicit integer or string as a seed
- ClusterClassify[…,FeatureExtractor→"Minimal"] indicates that the internal preprocessing should be as simple as possible.
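A minimal sketch of this setting (the string data is illustrative):

```wolfram
(* keep internal preprocessing as simple as possible *)
c = ClusterClassify[{"car red", "car blue", "boat big", "boat small"},
      FeatureExtractor -> "Minimal"];
```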
Examples
Basic Examples (3)
Train the ClassifierFunction on some numerical data:
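A sketch of this step (the numerical values here are illustrative, not the original example data):

```wolfram
(* train on well-separated 1D values, then classify new points *)
c = ClusterClassify[{1, 1.2, 10, 10.5, 20.1, 20.3}];
c[{1.1, 19.9}]  (* returns the cluster index of each new point *)
```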
Train the ClassifierFunction on some colors by requiring the number of classes to be 5:
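A sketch of this step (the colors are randomly generated for illustration):

```wolfram
colors = RandomColor[50];          (* 50 random colors *)
c = ClusterClassify[colors, 5];    (* require exactly 5 classes *)
```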
Train the ClassifierFunction on some unlabeled data:
Train the ClassifierFunction on some strings:
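A sketch of this step (the strings are illustrative):

```wolfram
c = ClusterClassify[{"apple", "apples", "pear", "pears", "plum"}];
```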
Classify the same test data using IndeterminateThreshold:
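A sketch of applying a threshold at classification time (classifier and test data are illustrative):

```wolfram
c = ClusterClassify[{1., 1.1, 5., 5.1}];
c[{1.05, 3., 5.05}, IndeterminateThreshold -> 0.4]
(* points with no sufficiently probable cluster come back as Indeterminate *)
```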
Visualize the resulting clustering including the Indeterminate cluster:
Use ClassifierInformation to obtain a method description:
Assign clusters to some randomly generated data and look at the AbsoluteTiming:
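A sketch of timing the cluster assignment (data sizes are illustrative):

```wolfram
data = RandomReal[1, {10000, 2}];
c = ClusterClassify[data];
AbsoluteTiming[c[data];]  (* first element is the elapsed time in seconds *)
```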
Assign clusters to some randomly generated data and look at the AbsoluteTiming compared to the one above:
Train several classifiers on the same colors by using different values of the RandomSeeding option:
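A sketch of this step (the colors are randomly generated for illustration):

```wolfram
colors = RandomColor[30];
(* three classifiers on identical data, differing only in the random seed *)
Table[ClusterClassify[colors, RandomSeeding -> s], {s, 3}]
```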
Wolfram Research (2016), ClusterClassify, Wolfram Language function, https://reference.wolfram.com/language/ref/ClusterClassify.html (updated 2020).