---
title: "JarvisPatrick"
language: "en"
type: "Method"
summary: "JarvisPatrick (Machine Learning Method) Method for FindClusters, ClusterClassify and ClusteringComponents. Partitions data into clusters of similar elements using Jarvis–Patrick clustering. JarvisPatrick is a neighbor-based clustering method. It works for arbitrary cluster shapes and sizes; however, it is parameter-sensitive and can fail when clusters have different densities or are loosely connected. The algorithm finds clusters based on the similarity of the nearest neighbors of data points, using shared nearest neighbors as a measure of similarity between points. Neighbors are defined as points within a ball of radius ϵ; each pair of neighbors that shares at least p neighbors belongs to the same cluster."
keywords: 
- MachineLearning
- cluster analysis
- clustering
- clustering of data
- grouping of data
- partitioning of data
- unsupervised learning
canonical_url: "https://reference.wolfram.com/language/ref/method/JarvisPatrick.html"
source: "Wolfram Language Documentation"
related_tutorials: 
  - 
    title: "Partitioning Data into Clusters"
    link: "https://reference.wolfram.com/language/tutorial/NumericalOperationsOnData.en.md#2948"
---
# "JarvisPatrick" (Machine Learning Method)

* Method for ``FindClusters``, ``ClusterClassify`` and ``ClusteringComponents``.

* Partitions data into clusters of similar elements using Jarvis–Patrick clustering.

---

## Details & Suboptions

* ``"JarvisPatrick"`` is a neighbor-based clustering method. It works for arbitrary cluster shapes and sizes; however, it is parameter-sensitive and can fail when clusters have different densities or are loosely connected.

* The following plots show the results of the ``"JarvisPatrick"`` method applied to toy datasets:

* [image]

* The algorithm finds clusters based on the similarity of the nearest neighbors of data points, using "shared nearest neighbors" as a measure of similarity between points.

* In ``"JarvisPatrick"``, the neighbors of a point are the points within a ball of radius ``ϵ``. Each pair of neighbors that shares at least ``p`` neighbors belongs to the same cluster.

* The following suboptions can be given:

| suboption                | default value | description                          |
| ------------------------ | ------------- | ------------------------------------ |
| "NeighborhoodRadius"     | Automatic     | neighborhood radius ϵ                |
| "SharedNeighborsNumber"  | Automatic     | minimum number of shared neighbors p |
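The procedure described above can be sketched as plain code. The following is a minimal, illustrative Python implementation (not the Wolfram Language implementation): `eps` and `p` play the roles of ``"NeighborhoodRadius"`` and ``"SharedNeighborsNumber"``, neighborhoods here include the point itself, and clusters are formed by merging every pair of ϵ-neighbors that shares at least `p` neighbors:

```python
from itertools import combinations

def jarvis_patrick(points, eps, p):
    """Toy Jarvis-Patrick clustering on tuples of numbers.

    eps -- neighborhood radius; p -- minimum number of shared neighbors.
    """
    n = len(points)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    # Neighborhood of each point: all points within distance eps (incl. itself).
    nbrs = [{j for j in range(n) if dist(points[i], points[j]) <= eps}
            for i in range(n)]
    # Union-find structure: merge two points into one cluster when they are
    # neighbors and share at least p neighbors.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, j in combinations(range(n), 2):
        if j in nbrs[i] and len(nbrs[i] & nbrs[j]) >= p:
            parent[find(i)] = find(j)
    # Group points by their cluster representative.
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(points[i])
    return list(clusters.values())
```

Note how the two parameters interact: a small `eps` shrinks every neighborhood, and a large `p` makes the sharing test harder to pass, so either extreme fragments the data into singleton clusters, consistent with the behavior shown in the suboption examples below.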

---

## Examples (8)

### Basic Examples (2)

Find clusters of nearby values using the ``"JarvisPatrick"`` method:

```wl
In[1]:= FindClusters[{1, 2, 10, 12, 3, 1, 13, 25}, Method -> "JarvisPatrick"]

Out[1]= {{1, 2, 3, 1}, {10, 12, 13}, {25}}
```

---

Create random 2D vectors:

```wl
In[1]:=
SeedRandom[123];
data = Join[RandomReal[1, {100, 2}], RandomReal[{-3, -.1}, {100, 2}]];
ListPlot[data]

Out[1]= [image]
```

Plot clusters in ``data`` found using the ``"JarvisPatrick"`` method:

```wl
In[2]:= ListPlot[FindClusters[data, Method -> "JarvisPatrick"]]

Out[2]= [image]
```

### Options (5)

#### DistanceFunction (1)

Find the clustering of the strings using the edit distance, specifying the ``"NeighborhoodRadius"``:

```wl
In[1]:= FindClusters[{"abc", "xyz", "bca", "wxyz", "xw", "xx", "xxx"}, Method -> {"JarvisPatrick", "NeighborhoodRadius" -> .5}, DistanceFunction -> EditDistance]

Out[1]= {{"abc", "bca"}, {"xyz", "wxyz"}, {"xw", "xx"}, {"xxx"}}
```

#### "NeighborhoodRadius" (2)

Find clusters using ``"JarvisPatrick"``:

```wl
In[1]:= FindClusters[{1, 2, 10, 12, 3, 1, 13, 25, 26}, Method -> "JarvisPatrick"]

Out[1]= {{1, 1}, {2}, {10}, {12, 13}, {3}, {25, 26}}
```

Find clusters by specifying the ``"NeighborhoodRadius"`` suboption:

```wl
In[2]:= FindClusters[{1, 2, 10, 12, 3, 1, 13, 25, 26}, Method -> {"JarvisPatrick", "NeighborhoodRadius" -> 0.5}]

Out[2]= {{1, 2, 10, 12, 3, 1}, {13, 25, 26}}
```

---

Define a set of two-dimensional data points forming two loosely separated clusters:

```wl
In[1]:=
data = Join[RandomReal[1, {100, 2}], RandomReal[{-3, -.1}, {100, 2}]];
ListPlot[data]

Out[1]= [image]
```

Plot different clusterings of the data using the ``"JarvisPatrick"`` method by varying the ``"NeighborhoodRadius"``:

```wl
In[2]:=
table = Table[ListPlot[FindClusters[data, Method -> {"JarvisPatrick", "NeighborhoodRadius" -> p}]], {p, {0.1, 0.2, 0.3}}];
Grid[{table}, Frame -> All]

Out[2]= [image]
```

#### "SharedNeighborsNumber" (2)

Find clusters by varying the ``"SharedNeighborsNumber"`` suboption:

```wl
In[1]:= FindClusters[{{0, 0}, {1, 1}, {1, 0}, {2, 2}, {10, 11}, {12, 13}, {3, 3}, {25, 26}}, Method -> {"JarvisPatrick", "SharedNeighborsNumber" -> #}]& /@ {2, 3}

Out[1]= {{{{0, 0}, {1, 1}, {1, 0}}, {{2, 2}, {3, 3}}, {{10, 11}, {12, 13}}, {{25, 26}}}, {{{0, 0}}, {{1, 1}}, {{1, 0}}, {{2, 2}}, {{10, 11}}, {{12, 13}}, {{3, 3}}, {{25, 26}}}}
```

---

Create random 2D vectors:

```wl
In[1]:=
SeedRandom[123];
data = Join[RandomReal[{0.5, 2}, {100, 2}], RandomReal[{-3, -0.1}, {500, 2}]];
ListPlot[data]

Out[1]= [image]
```

Plot the clustering of the data points using the ``"JarvisPatrick"`` method, specifying the ``"SharedNeighborsNumber"``:

```wl
In[2]:= ListPlot[FindClusters[data, Method -> {"JarvisPatrick", "SharedNeighborsNumber" -> 10}]]

Out[2]= [image]
```

### Applications (1)

Create and visualize noisy 2D moon-shaped training and test datasets:

```wl
In[1]:=
circle[r_, theta_] := {r Sin[theta], r Cos[theta]};
SeedRandom[77];
{train, test} = With[{
	rot = RotationTransform[π, {0, 0}], 
	tra = TranslationTransform[{1, 2.5}], pts = circle@@@RandomVariate[UniformDistribution[{{2, 3}, {0, Pi}}], 2000]}, 
	TakeDrop[RandomSample@Join[tra[rot[pts]], pts], 2000]
	];
ListPlot[{train, test}, PlotRange -> All, AspectRatio -> 1, PlotStyle -> Directive[ PointSize[0.013], Opacity[0.7]], Frame -> True, Axes -> False]

Out[1]= [image]
```

Train several ``ClassifierFunction`` objects with the ``"JarvisPatrick"`` method by varying the ``"NeighborhoodRadius"``:

```wl
In[2]:=
nrad = {0.1, 0.2, 0.5};
cls = ClusterClassify[train, Method -> {"JarvisPatrick", "NeighborhoodRadius" -> #}] & /@ nrad;
```

Find and visualize different clusterings of the test set, given small changes in ``"NeighborhoodRadius"``:

```wl
In[3]:=
decision = #[test] & /@ cls;
clsR = Range[Length[Information[#, "Classes"]]] & /@ cls;

Grid[{Table[
   ListPlot[Pick[test, decision[[i]], #] & /@ clsR[[i]],
    PlotStyle -> Directive[PointSize[0.025], Opacity[0.7]], AspectRatio -> 1,
    Frame -> True, Axes -> False, PlotLabel -> "NRadius=" <> ToString@nrad[[i]]],
   {i, 1, Length@cls}]}, Frame -> All]

Out[3]= [image]	[image]	[image]
```

## See Also

* [`FindClusters`](https://reference.wolfram.com/language/ref/FindClusters.en.md)
* [`ClusterClassify`](https://reference.wolfram.com/language/ref/ClusterClassify.en.md)
* [`ClusteringComponents`](https://reference.wolfram.com/language/ref/ClusteringComponents.en.md)
* [`ClusteringTree`](https://reference.wolfram.com/language/ref/ClusteringTree.en.md)
* [`Dendrogram`](https://reference.wolfram.com/language/ref/Dendrogram.en.md)
* [`DimensionReduction`](https://reference.wolfram.com/language/ref/DimensionReduction.en.md)
* [`Agglomerate`](https://reference.wolfram.com/language/ref/method/Agglomerate.en.md)
* [`GaussianMixture`](https://reference.wolfram.com/language/ref/method/GaussianMixture.en.md)
* [`DBSCAN`](https://reference.wolfram.com/language/ref/method/DBSCAN.en.md)
* [`KMeans`](https://reference.wolfram.com/language/ref/method/KMeans.en.md)
* [`KMedoids`](https://reference.wolfram.com/language/ref/method/KMedoids.en.md)
* [`MeanShift`](https://reference.wolfram.com/language/ref/method/MeanShift.en.md)
* [`NeighborhoodContraction`](https://reference.wolfram.com/language/ref/method/NeighborhoodContraction.en.md)
* [`SpanningTree`](https://reference.wolfram.com/language/ref/method/SpanningTree.en.md)
* [`Spectral`](https://reference.wolfram.com/language/ref/method/Spectral.en.md)

## Tech Notes

* [Partitioning Data into Clusters](https://reference.wolfram.com/language/tutorial/NumericalOperationsOnData.en.md#2948)

## Related Links

* [An Elementary Introduction to the Wolfram Language: Machine Learning](https://www.wolfram.com/language/elementary-introduction/22-machine-learning.html)

## History

* [Introduced in 2020 (12.1)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn121.en.md)