ClusteringTree

ClusteringTree[{e1, e2, …}]

constructs a weighted tree from the hierarchical clustering of the elements e1, e2, ….

ClusteringTree[{e1 -> v1, e2 -> v2, …}]

represents ei with vi in the constructed graph.

ClusteringTree[{e1, e2, …} -> {v1, v2, …}]

represents ei with vi in the constructed graph.

ClusteringTree[<|label1 -> e1, label2 -> e2, …|>]

represents ei using labels labeli in the constructed graph.

ClusteringTree[data,h]

constructs a weighted tree from the hierarchical clustering of data by joining subclusters at distance less than h.

Details and Options

  • The data elements ei can be numbers; numeric lists, matrices, or tensors; lists of Boolean elements; strings or images; geo positions or geographical entities; colors; as well as combinations of these. If the ei are lists, matrices, or tensors, each must have the same dimensions.
  • The result from ClusteringTree is a binary weighted tree, where the weight of each vertex indicates the distance between the two subtrees that have that vertex as root.
  • ClusteringTree has the same options as Graph, with the following additions and changes:
    ClusterDissimilarityFunction    Automatic          the clustering linkage algorithm to use
    DistanceFunction                Automatic          the distance or dissimilarity to use
    EdgeStyle                       GrayLevel[0.65]    styles for edges
    FeatureExtractor                Automatic          how to extract features from data
    VertexSize                      0                  size of vertices
  • By default, ClusteringTree will preprocess the data automatically unless either a DistanceFunction or a FeatureExtractor is specified.
  • ClusterDissimilarityFunction defines the intercluster dissimilarity, given the dissimilarities between member elements.
  • Possible settings for ClusterDissimilarityFunction include:
    "Average"            average intercluster dissimilarity
    "Centroid"           distance from cluster centroids
    "Complete"           largest intercluster dissimilarity
    "Median"             distance from cluster medians
    "Single"             smallest intercluster dissimilarity
    "Ward"               Ward's minimum variance dissimilarity
    "WeightedAverage"    weighted average intercluster dissimilarity
    f                    a pure function
  • The function f defines a distance between any two clusters.
  • The function f needs to be a real-valued function of the DistanceMatrix.
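  • For instance, a pure function mimicking complete linkage could be supplied as follows (the data values here are purely illustrative):

    ClusteringTree[{1, 2, 3, 10, 12, 36, 40},
     ClusterDissimilarityFunction -> (Max[Flatten[#]] &)]  (* largest pairwise distance, comparable to "Complete" *)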

Examples


Basic Examples  (5)

Obtain a cluster hierarchy from a list of numbers:
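For example, with illustrative values:

    ClusteringTree[{1, 2, 3, 10, 12, 36, 40}]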

Unify clusters at distance less than 2:
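Using the same illustrative list, with h = 2:

    ClusteringTree[{1, 2, 3, 10, 12, 36, 40}, 2]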

Obtain a cluster hierarchy from a list of strings:
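A sketch with a few sample strings:

    ClusteringTree[{"hello", "hallo", "hola", "goodbye", "bye"}]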

Obtain a cluster hierarchy from a list of images:
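A sketch using randomly generated Image objects in place of real photographs:

    imgs = Table[Image[RandomReal[1, {20, 20}]], {5}];
    ClusteringTree[imgs]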

Obtain a cluster hierarchy from a list of cities:
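A sketch using geo positions labeled with city names (coordinates are approximate; city Entity objects could be used instead):

    ClusteringTree[{GeoPosition[{40.71, -74.01}] -> "New York",
      GeoPosition[{34.05, -118.24}] -> "Los Angeles",
      GeoPosition[{51.51, -0.13}] -> "London",
      GeoPosition[{48.86, 2.35}] -> "Paris"}]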

Obtain a cluster hierarchy from a list of Boolean entries:
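For example, with a few illustrative Boolean vectors:

    ClusteringTree[{{True, False, True}, {True, True, True}, {False, False, True}, {False, False, False}}]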

Scope  (8)

Obtain a cluster hierarchy from a list of numbers:
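For example, storing the result in a symbol (here g, chosen for illustration) for later use:

    g = ClusteringTree[{1.2, 3.4, 5.6, 10., 11.5, 20.}]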

Obtain the leaves' labels:

Look at the distance between subclusters by looking at the VertexWeight:
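One way to read off the stored weights, assuming the tree above is stored in g:

    AnnotationValue[g, VertexWeight]  (* list of weights, one per vertex *)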

Find the shortest path from the root vertex to the leaf 3.4:

Obtain a cluster hierarchy from a heterogeneous dataset:
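A sketch with rows mixing numeric and color features (values chosen for illustration):

    data = {{1, Red}, {2, Pink}, {10, Blue}, {12, Purple}};
    ClusteringTree[data]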

Compare it with the cluster hierarchy of the colors:
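Clustering just the color column of the illustrative dataset above:

    ClusteringTree[data[[All, 2]]]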

Generate a list of random colors:
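For example (the symbol colors is chosen for illustration):

    colors = RandomColor[20]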

Obtain a cluster hierarchy from the list using the "Centroid" linkage:
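Using the list generated above:

    ClusteringTree[colors, ClusterDissimilarityFunction -> "Centroid"]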

Compute the hierarchical clustering from an Association:
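A sketch with a small illustrative Association, assuming its keys serve as leaf labels as in the association form above:

    assoc = <|"a" -> 1, "b" -> 2, "c" -> 10, "d" -> 12|>;
    ClusteringTree[assoc]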

Compare it with the hierarchical clustering of its Values:
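Clustering only the values:

    ClusteringTree[Values[assoc]]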

Compare it with the hierarchical clustering of its Keys:
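Clustering only the keys (here strings):

    ClusteringTree[Keys[assoc]]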

Obtain a cluster hierarchy by merging clusters at distance less than 0.4:
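For example, with illustrative values and h = 0.4:

    ClusteringTree[{0.1, 0.15, 0.5, 0.9, 1.}, 0.4]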

Change the style and the layout of the ClusteringTree:
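One possible combination of styling options (values chosen for illustration):

    ClusteringTree[{1, 2, 3, 10, 12},
     EdgeStyle -> Directive[Thick, Orange],
     VertexSize -> 0.3,
     GraphLayout -> "RadialEmbedding"]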

Obtain a cluster hierarchy from a list of three-dimensional vectors and label the leaves with the total of the corresponding element:
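A sketch with random vectors, labeling each leaf with its total (the symbol vecs is illustrative):

    vecs = RandomReal[1, {6, 3}];
    ClusteringTree[vecs -> Map[Total, vecs]]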

Compare it with the cluster hierarchy of the total of each vector:
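Clustering the totals directly:

    ClusteringTree[Map[Total, vecs]]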

Obtain a cluster hierarchy from a list of integers:
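For example (illustrative values, stored in ints for reuse below):

    ints = {3, 4, 5, 6, 7, 8};
    ClusteringTree[ints]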

Change the vertex labels by using regular polygons:
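One possible approach, wrapping each RegularPolygon in Graphics so it renders as a label (an illustrative sketch):

    ClusteringTree[ints -> Map[Graphics[RegularPolygon[#], ImageSize -> 20] &, ints]]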

Options  (3)

ClusterDissimilarityFunction  (1)

Generate a list of random colors:
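For example (the symbol colors is chosen for illustration):

    colors = RandomColor[30]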

Obtain a cluster hierarchy from the list using the "Centroid" linkage:
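    ClusteringTree[colors, ClusterDissimilarityFunction -> "Centroid"]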

Obtain a cluster hierarchy from the list using the "Single" linkage:
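    ClusteringTree[colors, ClusterDissimilarityFunction -> "Single"]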

Obtain a cluster hierarchy from the list using a custom ClusterDissimilarityFunction:
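A sketch using a pure function of the intercluster distance matrix (here the mean pairwise distance, comparable to "Average"):

    ClusteringTree[colors, ClusterDissimilarityFunction -> (Mean[Flatten[#]] &)]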

DistanceFunction  (1)

Generate a list of random vectors:
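For example (the symbol vecs is chosen for illustration):

    vecs = RandomReal[1, {10, 2}]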

Obtain cluster hierarchies using different DistanceFunction settings:
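For instance, Manhattan and cosine distances:

    ClusteringTree[vecs, DistanceFunction -> ManhattanDistance]
    ClusteringTree[vecs, DistanceFunction -> CosineDistance]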

FeatureExtractor  (1)

Obtain a cluster hierarchy from a list of pictures:
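A sketch using randomly generated images in place of real pictures (the symbol imgs is illustrative):

    imgs = Table[Image[RandomReal[1, {20, 20}]], {6}];
    ClusteringTree[imgs]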

Use a different FeatureExtractor to extract features:
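One possible custom extractor, using raw pixel values as the feature vector (an illustrative choice, not a documented default):

    ClusteringTree[imgs, FeatureExtractor -> (Flatten[ImageData[#]] &)]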

Use the Identity FeatureExtractor to leave the data unchanged:
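Using the same illustrative images:

    ClusteringTree[imgs, FeatureExtractor -> Identity]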

Text

Wolfram Research (2016), ClusteringTree, Wolfram Language function, https://reference.wolfram.com/language/ref/ClusteringTree.html (updated 2017).

CMS

Wolfram Language. 2016. "ClusteringTree." Wolfram Language & System Documentation Center. Wolfram Research. Last Modified 2017. https://reference.wolfram.com/language/ref/ClusteringTree.html.

APA

Wolfram Language. (2016). ClusteringTree. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/ClusteringTree.html

BibTeX

@misc{reference.wolfram_2022_clusteringtree, author="Wolfram Research", title="{ClusteringTree}", year="2017", howpublished="\url{https://reference.wolfram.com/language/ref/ClusteringTree.html}", note={Accessed: 03-June-2023}}

BibLaTeX

@online{reference.wolfram_2022_clusteringtree, organization={Wolfram Research}, title={ClusteringTree}, year={2017}, url={https://reference.wolfram.com/language/ref/ClusteringTree.html}, note={Accessed: 03-June-2023}}