---
title: "TTest"
language: "en"
type: "Symbol"
summary: "TTest[data] tests whether the mean of data is zero. TTest[{data1, data2}] tests whether the means of data1 and data2 are equal. TTest[dspec, \\[Mu]0] tests the mean against \\[Mu]0. TTest[dspec, \\[Mu]0, property] returns the value of property."
keywords: 
- hypothesis testing
- test of location
- mean test
- mean difference test
- equal mean test
- test of shift
- Student T-test
- Welch's T-test
- T-test
- t test
- Student's test
- Welch's test
- Hotelling's T Squared test
- Satterthwaite degrees of freedom
- Welch approximation
- Welch-Satterthwaite
- mean
- equal means
canonical_url: "https://reference.wolfram.com/language/ref/TTest.html"
source: "Wolfram Language Documentation"
related_guides: 
  - 
    title: "Hypothesis Tests"
    link: "https://reference.wolfram.com/language/guide/HypothesisTests.en.md"
related_functions: 
  - 
    title: "HypothesisTestData"
    link: "https://reference.wolfram.com/language/ref/HypothesisTestData.en.md"
  - 
    title: "LocationTest"
    link: "https://reference.wolfram.com/language/ref/LocationTest.en.md"
  - 
    title: "LocationEquivalenceTest"
    link: "https://reference.wolfram.com/language/ref/LocationEquivalenceTest.en.md"
  - 
    title: "VarianceTest"
    link: "https://reference.wolfram.com/language/ref/VarianceTest.en.md"
  - 
    title: "VarianceEquivalenceTest"
    link: "https://reference.wolfram.com/language/ref/VarianceEquivalenceTest.en.md"
  - 
    title: "DistributionFitTest"
    link: "https://reference.wolfram.com/language/ref/DistributionFitTest.en.md"
  - 
    title: "MannWhitneyTest"
    link: "https://reference.wolfram.com/language/ref/MannWhitneyTest.en.md"
  - 
    title: "PairedTTest"
    link: "https://reference.wolfram.com/language/ref/PairedTTest.en.md"
  - 
    title: "PairedZTest"
    link: "https://reference.wolfram.com/language/ref/PairedZTest.en.md"
  - 
    title: "SignTest"
    link: "https://reference.wolfram.com/language/ref/SignTest.en.md"
  - 
    title: "SignedRankTest"
    link: "https://reference.wolfram.com/language/ref/SignedRankTest.en.md"
  - 
    title: "ZTest"
    link: "https://reference.wolfram.com/language/ref/ZTest.en.md"
---
# TTest

TTest[data] tests whether the mean of data is zero. 

TTest[{data1, data2}] tests whether the means of data1 and data2 are equal.

TTest[dspec, μ0] tests the mean against μ0.

TTest[dspec, μ0, "property"] returns the value of "property".

## Details and Options

* ``TTest`` tests the null hypothesis $H_0$ against the alternative hypothesis $H_a$:

|                | $H_0$                  | $H_a$                      |
| -------------- | ---------------------- | -------------------------- |
| data           | $\mu _1=\mu _0$        | $\mu _1\neq \mu _0$        |
| {data1, data2} | $\mu _1-\mu _2=\mu _0$ | $\mu _1-\mu _2\neq \mu _0$ |

* where $\mu_i$ is the population mean for ``datai``.

* By default, a probability value or $p$-value is returned.

* A small $p$-value suggests that it is unlikely that $H_0$ is true.

* The data in ``dspec`` can be univariate ``{x1, x2, …}`` or multivariate ``{{x1, y1, …}, {x2, y2, …}, …}``.

* The argument ``μ0`` can be a real number or a real vector with length equal to the dimension of the data.

* ``TTest`` assumes that the data is normally distributed, but it is fairly robust to deviations from this assumption. In the two-sample case, ``TTest`` also assumes that the samples are independent.

* ``TTest[dspec, μ0, "HypothesisTestData"]`` returns a ``HypothesisTestData`` object ``htd`` that can be used to extract additional test results and properties using the form ``htd["property"]``.

* ``TTest[dspec, μ0, "property"]`` can be used to directly give the value of ``"property"``.

* Properties related to the reporting of test results include:

|                       |                                                                                 |
| --------------------- | ------------------------------------------------------------------------------- |
| "DegreesOfFreedom"    | the degrees of freedom used in a test                                           |
| "PValue"              | list of $p$-values                                |
| "PValueTable"         | formatted table of $p$-values                     |
| "ShortTestConclusion" | a short description of the conclusion of a test                                 |
| "TestConclusion"      | a description of the conclusion of a test                                       |
| "TestData"            | list of pairs of test statistics and $p$-values   |
| "TestDataTable"       | formatted table of $p$-values and test statistics |
| "TestStatistic"       | list of test statistics                                                         |
| "TestStatisticTable"  | formatted table of test statistics                                              |

* For univariate samples, ``TTest`` performs a Student $t$ test. The test statistic is assumed to follow a ``StudentTDistribution[df]``.
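
* For reference, with univariate sample mean $\bar{x}$, sample standard deviation $s$, and sample size $n$, this is the standard Student statistic:

$$t=\frac{\bar{x}-\mu _0}{s/\sqrt{n}}$$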

* For multivariate samples, ``TTest`` performs Hotelling's $T^2$ test. The test statistic is assumed to follow a ``HotellingTSquareDistribution[p, df]`` where ``p`` is the dimension of ``data``.
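
* For reference, with sample mean vector $\bar{x}$, sample covariance matrix $S$, and sample size $n$, the textbook one-sample form of Hotelling's statistic is a scaled squared Mahalanobis distance from $\mu_0$:

$$T^2=n\,(\bar{x}-\mu _0)^{\top }S^{-1}(\bar{x}-\mu _0)$$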

* The degrees of freedom ``df``, used to specify the distribution of the test statistic, depend on the sample size, number of samples, and in the case of two univariate samples, the results of a test for equal variances.
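
* For reference, the standard choices are $df=n-1$ for one sample and the pooled $df=n_1+n_2-2$ for two samples with equal variances; when the variances are judged unequal, the Welch–Satterthwaite approximation is used:

$$df=\frac{\left(s_1^2/n_1+s_2^2/n_2\right)^2}{\frac{\left(s_1^2/n_1\right)^2}{n_1-1}+\frac{\left(s_2^2/n_2\right)^2}{n_2-1}}$$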

* The following options can be used:

|                        |           |                                               |
| ---------------------- | --------- | --------------------------------------------- |
| AlternativeHypothesis  | "Unequal" | the inequality for the alternative hypothesis |
| SignificanceLevel      | 0.05      | cutoff for diagnostics and reporting          |
| VerifyTestAssumptions  | Automatic | what assumptions to verify                    |

* For the ``TTest``, a cutoff $\alpha$ is chosen such that $H_0$ is rejected only if $p<\alpha$. The value of $\alpha$ used for the ``"TestConclusion"`` and ``"ShortTestConclusion"`` properties is controlled by the ``SignificanceLevel`` option. This value $\alpha$ is also used in diagnostic tests of assumptions, including tests for normality, equal variance, and symmetry. By default, $\alpha$ is set to ``0.05``.

* Named settings for ``VerifyTestAssumptions`` in ``TTest`` include:

|                 |                                                 |
| --------------- | ----------------------------------------------- |
| "Normality"     | verify that all data is normally distributed    |
| "EqualVariance" | verify that data1 and data2 have equal variance |

---

## Examples (45)

### Basic Examples (3)

Test whether the mean of a population is zero:

```wl
In[1]:= data = RandomVariate[NormalDistribution[0.05, 1], 10^4];

In[2]:= TTest[data]

Out[2]= 0.00139658
```

The full test table:

```wl
In[3]:= TTest[data, Automatic, "TestDataTable"]

Out[3]=
| ""  | "Statistic" | "P‐Value"  |
| :-- | :---------- | :--------- |
| "T" | 3.19625     | 0.00139658 |
```

---

Test whether the means of two populations differ by 2:

```wl
In[1]:=
BlockRandom[SeedRandom[1];data1 = RandomVariate[NormalDistribution[1.85, 1], 1000];
	data2 = RandomVariate[NormalDistribution[0, 1], 1000]];
```

The mean difference $\mu_1-\mu_2$:

```wl
In[2]:= Mean[data1] - Mean[data2]

Out[2]= 1.78404

In[3]:= SmoothHistogram[{data1, data2 + 2}]

Out[3]= [image]
```

At the 0.05 level, $\mu _1-\mu _2$ is significantly different from 2:

```wl
In[4]:= TTest[{data1, data2}, 2]

Out[4]= 9.723994369898542`*^-7
```

---

Compare the locations of multivariate populations:

```wl
In[1]:=
BlockRandom[SeedRandom[1];data1 = RandomVariate[MultinormalDistribution[{1, 2}, IdentityMatrix[2]], 1000];
	data2 = RandomVariate[MultinormalDistribution[{0, 0}, IdentityMatrix[2]], 1000]];
```

The mean difference vector $\mu_1-\mu_2$:

```wl
In[2]:= Mean[data1] - Mean[data2]

Out[2]= {0.92798, 2.00348}

In[3]:= Histogram3D[{data1, Transpose[Transpose[data2] + {1, 2}]}]

Out[3]= [image]
```

At the 0.05 level, $\mu_1-\mu_2$ is not significantly different from ``{1, 2}``:

```wl
In[4]:= TTest[{data1, data2}, {1, 2}]

Out[4]= 0.281492
```

### Scope (13)

#### Testing (10)

Test $H_0: \mu = 0$ versus $H_a: \mu \neq 0$:

```wl
In[1]:=
SeedRandom[1];
data1 = RandomVariate[NormalDistribution[0, 1], 500];
data2 = RandomVariate[NormalDistribution[3, 1], 500];
```

The $p$-values are typically large when the mean is close to $\mu_0$:

```wl
In[2]:= TTest[data1]

Out[2]= 0.463042
```

The $p$-values are typically small when the mean is far from $\mu_0$:

```wl
In[3]:= TTest[data2]

Out[3]= 2.976081511139755`*^-251
```

---

Using ``Automatic`` is equivalent to testing for a mean of zero:

```wl
In[1]:= data = RandomVariate[NormalDistribution[0, 1], 500];

In[2]:= TTest[data, 0]

Out[2]= 0.190219

In[3]:= TTest[data, Automatic]

Out[3]= 0.190219
```

---

Test $H_0: \mu = 3$ versus $H_a: \mu \neq 3$:

```wl
In[1]:=
SeedRandom[1];
data1 = RandomVariate[NormalDistribution[3, 1], 500];
data2 = RandomVariate[NormalDistribution[0, 1], 200];
```

The $p$-values are typically large when the mean is close to $\mu_0$:

```wl
In[2]:= TTest[data1, 3]

Out[2]= 0.463042
```

The $p$-values are typically small when the mean is far from $\mu_0$:

```wl
In[3]:= TTest[data2, 3]

Out[3]= 2.6925407662143437`*^-108
```

---

Test whether the mean vector of a multivariate dataset is the zero vector:

```wl
In[1]:= SeedRandom[1];data = RandomVariate[MultinormalDistribution[{.1, 0, -.05, 0}, IdentityMatrix[4]], 10^3];

In[2]:= TTest[data]

Out[2]= 0.0247623
```

Alternatively, test against ``{0.1, 0, -0.05, 0}``:

```wl
In[3]:= TTest[data, {0.1, 0, -.05, 0}]

Out[3]= 0.205163
```

---

Test $H_0: \mu_1 - \mu_2 = 0$ versus $H_a: \mu_1 - \mu_2 \neq 0$:

```wl
In[1]:=
SeedRandom[1];data1 = RandomVariate[NormalDistribution[0, 1], 100];
data2 = RandomVariate[NormalDistribution[1, 1], 150];
data3 = RandomVariate[NormalDistribution[0, 2], 130];
```

The $p$-values are generally small when the locations are not equal:

```wl
In[2]:= TTest[{data1, data2}, 0]

Out[2]= 2.591557933632166`*^-16
```

The $p$-values are generally large when the locations are equal:

```wl
In[3]:= TTest[{data1, data3}, 0]

Out[3]= 0.818062
```

---

Test $H_0: \mu_1 - \mu_2 = 3$ versus $H_a: \mu_1 - \mu_2 \neq 3$:

```wl
In[1]:=
SeedRandom[1];data1 = RandomVariate[NormalDistribution[3, 1], 135];
data2 = RandomVariate[NormalDistribution[0, 1], 100];
```

The order of the datasets affects the test results:

```wl
In[2]:= TTest[{data1, data2}, 3]

Out[2]= 0.815705

In[3]:= TTest[{data2, data1}, 3]

Out[3]= 1.7505309359564832`*^-120
```

---

Test whether the mean difference vector of two multivariate datasets is the zero vector:

```wl
In[1]:= SeedRandom[1];data1 = RandomVariate[MultinormalDistribution[{.5, 0, -.5, 0}, IdentityMatrix[4]], 122];

In[2]:= data2 = RandomVariate[MultinormalDistribution[{-.5, 0, .5, 0}, IdentityMatrix[4]], 135];

In[3]:= TTest[{data1, data2}]

Out[3]= 1.1102230246251565`*^-16
```

Alternatively, test against ``{1, 0, -1, 0}``:

```wl
In[4]:= TTest[{data1, data2}, {1, 0, -1, 0}]

Out[4]= 0.26231
```

---

Create a ``HypothesisTestData`` object for repeated property extraction:

```wl
In[1]:= SeedRandom[1];data = RandomVariate[NormalDistribution[], {2, 10^4}];

In[2]:= ℋ = TTest[data, 0, "HypothesisTestData"];
```

The properties available for extraction:

```wl
In[3]:= ℋ["Properties"]

Out[3]= {"DegreesOfFreedom", "HypothesisTestData", "Properties", "PValue", "PValueTable", "ShortTestConclusion", "T", "TestConclusion", "TestData", "TestDataTable", "TestEntries", "TestStatistic", "TestStatisticTable"}
```

---

Extract some properties from a ``HypothesisTestData`` object:

```wl
In[1]:= SeedRandom[2];data = RandomVariate[NormalDistribution[], {2, 10^4}];

In[2]:= ℋ = TTest[data, 0, "HypothesisTestData"];
```

The $p$-value, test statistic, and degrees of freedom:

```wl
In[3]:= ℋ["PValue"]

Out[3]= 0.577357

In[4]:= ℋ["TestStatistic"]

Out[4]= -0.557258

In[5]:= ℋ["DegreesOfFreedom"]

Out[5]= 19998
```

---

Extract any number of properties simultaneously:

```wl
In[1]:= SeedRandom[2];data = RandomVariate[NormalDistribution[], {2, 10^4}];

In[2]:= ℋ = TTest[data, 0, "HypothesisTestData"];
```

The $p$-value, test statistic, and degrees of freedom:

```wl
In[3]:= ℋ["PValue", "TestStatistic", "DegreesOfFreedom"]

Out[3]= {0.577357, -0.557258, 19998}
```

#### Reporting (3)

Tabulate the test results:

```wl
In[1]:= SeedRandom[1];data = RandomVariate[NormalDistribution[], {2, 20}];

In[2]:= ℋ = TTest[data, 0, "HypothesisTestData"];

In[3]:= ℋ["TestDataTable"]

Out[3]=
| ""  | "Statistic" | "P‐Value" |
| :-- | :---------- | :-------- |
| "T" | 1.10178     | 0.277489  |
```

---

Retrieve the entries from a test table for customized reporting:

```wl
In[1]:= SeedRandom[1];data = RandomVariate[NormalDistribution[], {1000, 25}];

In[2]:= res = Table[TTest[v, 0, "TestData", VerifyTestAssumptions -> None], {v, data}];

In[3]:= ListPlot[res, FrameLabel -> {"T", "p-value"}, Frame -> True, PlotRange -> All]

Out[3]= [image]
```

---

Tabulate $p$-values or test statistics:

```wl
In[1]:= SeedRandom[1];data = RandomVariate[NormalDistribution[], {2, 10^2}];

In[2]:= ℋ = TTest[data, 0, "HypothesisTestData"];

In[3]:= ℋ["PValueTable"]

Out[3]=
| ""  | "P‐Value" |
| :-- | :-------- |
| "T" | 0.124874  |
```

The $p$-value from the table:

```wl
In[4]:= ℋ["PValue"]

Out[4]= 0.124874

In[5]:= ℋ["TestStatisticTable"]

Out[5]=
| ""  | "Statistic" |
| :-- | :---------- |
| "T" | -1.54116    |
```

The test statistic from the table:

```wl
In[6]:= ℋ["TestStatistic"]

Out[6]= -1.54116
```

### Options (11)

#### AlternativeHypothesis (3)

A two-sided test is performed by default:

```wl
In[1]:=
SeedRandom[1];
data = RandomVariate[NormalDistribution[], 100];
```

Test $H_0: \mu = 0$ versus $H_a: \mu \neq 0$:

```wl
In[2]:= TTest[data, 0, AlternativeHypothesis -> "Unequal"]

Out[2]= 0.924235

In[3]:= TTest[data, 0, AlternativeHypothesis -> Automatic]

Out[3]= 0.924235
```

---

Perform a test with a two-sided or a one-sided alternative hypothesis:

```wl
In[1]:=
SeedRandom[1];
data = RandomVariate[NormalDistribution[], 100];
```

Test $H_0: \mu = 0$ versus $H_a: \mu \neq 0$:

```wl
In[2]:= TTest[data, 0, AlternativeHypothesis -> "Unequal"]

Out[2]= 0.924235
```

Test $H_0: \mu \geq 0$ versus $H_a: \mu < 0$:

```wl
In[3]:= TTest[data, 0, AlternativeHypothesis -> "Less"]

Out[3]= 0.462117
```

Test $H_0: \mu \leq 0$ versus $H_a: \mu > 0$:

```wl
In[4]:= TTest[data, 0, AlternativeHypothesis -> "Greater"]

Out[4]= 0.537883
```

---

Perform tests with one-sided alternatives when $\mu _0$ is given:

```wl
In[1]:=
SeedRandom[1];
data1 = RandomVariate[NormalDistribution[2.9, 1], 1000];
data2 = RandomVariate[NormalDistribution[0, 1], 1000];

In[2]:= Mean[data1] - Mean[data2]

Out[2]= 2.83404
```

Test $H_0: \mu_1 - \mu_2 \geq 3$ versus $H_a: \mu_1 - \mu_2 < 3$:

```wl
In[3]:= TTest[{data1, data2}, 3, AlternativeHypothesis -> "Less"]

Out[3]= 0.0000823041
```

Test $H_0: \mu_1 - \mu_2 \geq 2.9$ versus $H_a: \mu_1 - \mu_2 < 2.9$:

```wl
In[4]:= TTest[{data1, data2}, 2.9, AlternativeHypothesis -> "Less"]

Out[4]= 0.0668435
```

#### SignificanceLevel (2)

Set the significance level for diagnostic tests:

```wl
In[1]:= data = BlockRandom[SeedRandom[2];RandomVariate[StudentTDistribution[3], 50]];

In[2]:= TTest[data, 0, SignificanceLevel -> .0001]

Out[2]= 0.169902
```

By default, ``0.05`` is used:

```wl
In[3]:= TTest[data, 0]
```

TTest::nortst: At least one of the p-values in {0.00416643}, resulting from a test for normality, is below 0.05. The tests in {T} require that the data is normally distributed.

```wl
Out[3]= 0.169902
```

---

The significance level is also used for ``"TestConclusion"`` and ``"ShortTestConclusion"``:

```wl
In[1]:= BlockRandom[SeedRandom[1];data = RandomVariate[NormalDistribution[0, 1], 100]];

In[2]:= Mean[data]

Out[2]= -0.00966945

In[3]:= ℋ1 = TTest[data, .2, "HypothesisTestData", SignificanceLevel -> .1];

In[4]:= ℋ2 = TTest[data, .2, "HypothesisTestData", SignificanceLevel -> .005];

In[5]:= ℋ1["TestConclusion"]//TraditionalForm

Out[5]//TraditionalForm=
The null hypothesis that the mean of the population is equal to 0.2 is rejected at the 10. percent level based on the T test.

In[6]:= ℋ2["TestConclusion"]//TraditionalForm

Out[6]//TraditionalForm=
The null hypothesis that the mean of the population is equal to 0.2 is not rejected at the 0.5 percent level based on the T test.

In[7]:= ℋ1["ShortTestConclusion"]

Out[7]= "Reject"

In[8]:= ℋ2["ShortTestConclusion"]

Out[8]= "Do not reject"
```

#### VerifyTestAssumptions (6)

By default, normality and equal variance are tested:

```wl
In[1]:=
data1 = RandomVariate[NormalDistribution[0, 1], 100];
data2 = RandomVariate[NormalDistribution[0, 2], 100];

In[2]:= TTest[{data1, data2}, 0, VerifyTestAssumptions -> Automatic]

Out[2]= 0.785189
```

If assumptions are not checked, some test results may differ:

```wl
In[3]:= TTest[{data1, data2}, 0, VerifyTestAssumptions -> None]

Out[3]= 0.785091
```

---

Diagnostics can be controlled as a group using ``All`` or ``None``:

```wl
In[1]:=
data1 = RandomVariate[NormalDistribution[0, 1], 100];
data2 = RandomVariate[NormalDistribution[0, 2], 100];
```

Verify all assumptions:

```wl
In[2]:= TTest[{data1, data2}, 0, VerifyTestAssumptions -> All]

Out[2]= 0.672354
```

Check no assumptions:

```wl
In[3]:= TTest[{data1, data2}, 0, VerifyTestAssumptions -> None]

Out[3]= 0.672245
```

---

Diagnostics can be controlled independently:

```wl
In[1]:=
data1 = RandomVariate[NormalDistribution[0, 1], 100];
data2 = RandomVariate[NormalDistribution[0, 2], 100];
```

Assume normality but check for equal variances:

```wl
In[2]:= TTest[{data1, data2}, 0, VerifyTestAssumptions -> "EqualVariance"]

Out[2]= 0.521721
```

Only check for normality:

```wl
In[3]:= TTest[{data1, data2}, 0, VerifyTestAssumptions -> "Normality"]

Out[3]= 0.521449
```

Set the equal variance assumption to ``False``:

```wl
In[4]:= TTest[{data1, data2}, 0, VerifyTestAssumptions -> "EqualVariance" -> False]

Out[4]= 0.521721
```

---

Unlisted assumptions are not tested:

```wl
In[1]:= data = RandomVariate[CauchyDistribution[0, 1], 100];
```

Here, normality is assumed:

```wl
In[2]:= TTest[data, 0, VerifyTestAssumptions -> "EqualVariance"]

Out[2]= 0.833765
```

The test is still performed, but a warning is issued:

```wl
In[3]:= TTest[data, 0, VerifyTestAssumptions -> All]
```

TTest::nortst: At least one of the p-values in {0}, resulting from a test for normality, is below 0.05. The tests in {T} require that the data is normally distributed.

```wl
Out[3]= 0.158483
```

---

Bypassing diagnostic tests can save compute time:

```wl
In[1]:= data = RandomVariate[MultinormalDistribution[{0, 0, 0}, IdentityMatrix[3]], 10 ^ 4];

In[2]:= TTest[data, Automatic, "TestDataTable", VerifyTestAssumptions -> All]//AbsoluteTiming

Out[2]=
{3.042039, | ""  | "Statistic" | "P‐Value" |
| :-- | :---------- | :-------- |
| "T" | 6.26495     | 0.0995343 |}

In[3]:= TTest[data, Automatic, "TestDataTable", VerifyTestAssumptions -> None]//AbsoluteTiming

Out[3]=
{0.015600, | ""  | "Statistic" | "P‐Value" |
| :-- | :---------- | :-------- |
| "T" | 6.26495     | 0.0995343 |}
```

---

It is often useful to bypass diagnostic tests for simulation purposes:

```wl
In[1]:= data = RandomVariate[NormalDistribution[], {1000, 100}];

In[2]:= AbsoluteTiming[T = Quiet@TTest[#, Automatic, "TestStatistic"]& /@ data;]

Out[2]= {4.414347, Null}
```

The assumptions of the test hold by design, so a great deal of time can be saved:

```wl
In[3]:= AbsoluteTiming[T2 = Quiet@TTest[#, Automatic, "TestStatistic", VerifyTestAssumptions -> None]& /@ data;]

Out[3]= {0.655541, Null}
```

The results are identical:

```wl
In[4]:= SmoothHistogram[{T, T2}]

Out[4]= [image]
```

### Applications (4)

Test whether the means of some populations are equal:

```wl
In[1]:=
SeedRandom[1];data1 = RandomVariate[NormalDistribution[0, 1], 100];
data2 = RandomVariate[NormalDistribution[0, 1], 100];
data3 = RandomVariate[NormalDistribution[2, 1], 100];

In[2]:= BoxWhiskerChart[{data1, data2, data3}]

Out[2]= [image]
```

The means of the first two populations are similar:

```wl
In[3]:= TTest[{data1, data2}, Automatic, "TestDataTable"]

Out[3]=
| ""  | "Statistic" | "P‐Value" |
| :-- | :---------- | :-------- |
| "T" | -1.54116    | 0.124874  |
```

The mean of the third population is different from the first:

```wl
In[4]:= TTest[{data1, data3}, Automatic, "TestDataTable"]

Out[4]=
| ""  | "Statistic" | "P‐Value"                |
| :-- | :---------- | :----------------------- |
| "T" | -13.9158    | 3.7727447892434964`*^-31 |
```

---

The "third series" of measurements of the passage time of light was recorded by Newcomb in 1882. The given values, divided by 1000 and added to 24.8, give the time in millionths of a second for light to traverse a known distance. The true value is now considered to be 33.02:

```wl
In[1]:= lspeed = ExampleData[{"Statistics", "NewcombLightSpeed"}];

In[2]:= SmoothHistogram[lspeed, PlotRange -> All]

Out[2]= [image]
```

Use Chauvenet's criterion to identify outlying observations:

```wl
In[3]:= ChauvenetOutlier[d_, data_] := Probability[x <= d, x \[Distributed] NormalDistribution[Mean[data], StandardDeviation[data]]] < 1/(2 Length[data])

In[4]:= outliers = Select[lspeed, ChauvenetOutlier[#, lspeed]&]

Out[4]= {-44, -2}

In[5]:= newcomb = DeleteCases[lspeed, Alternatives@@outliers];
```

A $t$-test on the bulk of the data suggests that Newcomb's measure of the speed of light was significantly lower than reality:

```wl
In[6]:= Mean[newcomb]//N

Out[6]= 27.75

In[7]:= TTest[newcomb, 33.02, "TestDataTable"]

Out[7]=
| ""  | "Statistic" | "P‐Value"                |
| :-- | :---------- | :----------------------- |
| "T" | -8.29361    | 1.0948733536487493`*^-11 |
```

---

The vitamin C content and head weight were recorded for 30 samples from each of two experimental cabbage cultivars:

```wl
In[1]:= ExampleData[{"Statistics", "Cabbages"}, "ColumnDescriptions"]

Out[1]= {"One of two levels identifying cabbage cultivar (\"c39\" and \"c52\")", "One of three planting dates (\"d16\", \"d20\", or \"d21\")", "Weight of the cabbage head in kg", "Ascorbic acid content"}

In[2]:= cabbageData = ExampleData[{"Statistics", "Cabbages"}];

In[3]:= cultivar = cabbageData[[All, 1]];
```

Plots of the head weight and vitamin C content by cultivar:

```wl
In[4]:=
w39 = Pick[cabbageData[[All, 3]], cultivar, "c39"];
w52 = Pick[cabbageData[[All, 3]], cultivar, "c52"];
c39 = Pick[cabbageData[[All, 4]], cultivar, "c39"];
c52 = Pick[cabbageData[[All, 4]], cultivar, "c52"];

In[5]:= {SmoothHistogram[{w39, w52}, PlotLabel -> "Head Weight"], SmoothHistogram[{c39, c52}, PlotLabel -> "Vitamin C", PlotLegends -> {"c39", "c52"}]}

Out[5]= {[image], [image]}
```

The vitamin C content is significantly higher for the c52 cultivar:

```wl
In[6]:= TTest[{c52, c39}, 0, "TestDataTable"]

Out[6]=
| ""  | "Statistic" | "P‐Value"              |
| :-- | :---------- | :--------------------- |
| "T" | 6.39086     | 3.065349178189226`*^-8 |
```

The weight data for c52 is not normally distributed, so ``MannWhitneyTest`` is used, showing that the cultivar with the significantly lighter heads produced significantly more vitamin C:

```wl
In[7]:= DistributionFitTest[w52]

Out[7]= 0.0200393

In[8]:= MannWhitneyTest[{w52, w39}, 0, "TestDataTable"]

Out[8]=
| ""             | "Statistic" | "P‐Value" |
| :------------- | :---------- | :-------- |
| "Mann‐Whitney" | 253.        | 0.0034526 |
```

---

Fifty samples from each of three species of iris flowers were collected. The samples consist of measures of the length and width of the irises' sepals and petals. It is difficult to distinguish the species *virginica* and *versicolor* from one another:

```wl
In[1]:= iris = ExampleData[{"Statistics", "FisherIris"}];

In[2]:= species = iris[[All, -1]];

In[3]:=
versicolorData = Pick[iris[[All, 1 ;; -2]], species, "versicolor"];
virginicaData = Pick[iris[[All, 1 ;; -2]], species, "virginica"];
```

A Hotelling $T^2$ test suggests a difference in the measures for the two similar species:

```wl
In[4]:= TTest[{versicolorData, virginicaData}, 0, "TestDataTable"]

Out[4]=
| ""  | "Statistic" | "P‐Value"               |
| :-- | :---------- | :---------------------- |
| "T" | 77.6051     | 2.041666835594924`*^-11 |
```

A visualization of the data suggests this difference is most prominent in the petal dimensions:

```wl
In[5]:=
clmnDescr = ExampleData[{"Statistics", "FisherIris"}, "ColumnDescriptions"][[1 ;; -2]];
lgnd = Placed[LineLegend[{RGBColor[0.368417, 0.506779, 0.709798], RGBColor[0.880722, 0.611041, 0.142051]}, {"versicolor", "virginica"}, LegendLayout -> "Row"], Bottom];

In[6]:= Legended[Table[SmoothHistogram[{versicolorData[[All, i]], virginicaData[[All, i]]}, PlotLabel -> clmnDescr[[i]]], {i, Length[clmnDescr]}], lgnd]

Out[6]= [image]
```

### Properties & Relations (11)

For univariate data, the test statistic follows ``StudentTDistribution`` under $H_0$:

```wl
In[1]:= data = RandomVariate[NormalDistribution[], {1000, 15}];

In[2]:= T = Table[TTest[i, Automatic, "TestStatistic", VerifyTestAssumptions -> None], {i, data}];

In[3]:= DistributionFitTest[T, StudentTDistribution[14]]

Out[3]= 0.13091
```

---

For multivariate data, the test statistic follows ``HotellingTSquareDistribution`` under $H_0$:

```wl
In[1]:= data = RandomVariate[BinormalDistribution[.5], {1000, 25}];

In[2]:= T = Table[TTest[i, Automatic, "TestStatistic", VerifyTestAssumptions -> None], {i, data}];

In[3]:= DistributionFitTest[T, HotellingTSquareDistribution[2, 23]]

Out[3]= 0.419667
```

---

The degrees of freedom are data-dependent for univariate data:

```wl
In[1]:=
\[ScriptD]1 = RandomVariate[NormalDistribution[0, 1], 100];
\[ScriptD]2 = RandomVariate[NormalDistribution[0, 1], 95];
\[ScriptD]3 = RandomVariate[NormalDistribution[0, 5], 100];

In[2]:= L[x_] := Length[x];v[x_] := Variance[x]
```

One sample:

```wl
In[3]:= TTest[\[ScriptD]1, 0, "DegreesOfFreedom"]

Out[3]= 99

In[4]:= L[\[ScriptD]1] - 1

Out[4]= 99
```

Two samples with equal variances:

```wl
In[5]:= TTest[{\[ScriptD]1, \[ScriptD]2}, 0, "DegreesOfFreedom"]

Out[5]= 193

In[6]:= L[\[ScriptD]1] + L[\[ScriptD]2] - 2

Out[6]= 193
```

Two samples with unequal variances (Satterthwaite approximation):

```wl
In[7]:= TTest[{\[ScriptD]1, \[ScriptD]3}, 0, "DegreesOfFreedom"]

Out[7]= 108.011

In[8]:= ((v[\[ScriptD]1]/L[\[ScriptD]1]) + (v[\[ScriptD]3]/L[\[ScriptD]3]))^2/((v[\[ScriptD]1]/L[\[ScriptD]1])^2/(L[\[ScriptD]1] - 1) + (v[\[ScriptD]3]/L[\[ScriptD]3])^2/(L[\[ScriptD]3] - 1))

Out[8]= 108.011
```

---

The type of degrees of freedom used can be controlled using ``VerifyTestAssumptions``:

```wl
In[1]:=
\[ScriptD]1 = RandomVariate[NormalDistribution[0, 1], 95];
\[ScriptD]2 = RandomVariate[NormalDistribution[0, 5], 100];
```

Explicitly assume equal variances and test for normality:

```wl
In[2]:= TTest[{\[ScriptD]1, \[ScriptD]2}, 0, "DegreesOfFreedom", VerifyTestAssumptions -> {"EqualVariance" -> True, "Normality"}]

Out[2]= 193
```

Explicitly assume unequal variances to use the Satterthwaite approximation:

```wl
In[3]:= TTest[{\[ScriptD]1, \[ScriptD]2}, 0, "DegreesOfFreedom", VerifyTestAssumptions -> {"EqualVariance" -> False, "Normality"}]

Out[3]= 107.118
```

---

For multivariate data, the squared Mahalanobis distance is used to compute Hotelling's $T^2$ statistic:

```wl
In[1]:= data = RandomVariate[MultinormalDistribution[{1, 2, 3}, IdentityMatrix[3]], 100];

In[2]:= MahalanobisDistanceSquared[data_, mu_] := With[{inv = Inverse@Covariance[data], m = Mean[data]}, (m - mu).inv.(m - mu)]

In[3]:= Subscript[μ, 0] = {1, 2, 3};

In[4]:= T2 = Length[data] * MahalanobisDistanceSquared[data, Subscript[μ, 0]]

Out[4]= 2.17238
```

Under $H_0$, the test statistic follows ``HotellingTSquareDistribution[p, n - 1]``:

```wl
In[5]:= pvalue = SurvivalFunction[HotellingTSquareDistribution[3, 99], T2]

Out[5]= 0.548629

In[6]:= TTest[data, {1, 2, 3}, "TestDataTable"]

Out[6]=
| ""  | "Statistic" | "P‐Value" |
| :-- | :---------- | :-------- |
| "T" | 2.17238     | 0.548629  |
```

---

If the population variance is known, the more powerful ``ZTest`` can be used:

```wl
In[1]:= σ = .25;

In[2]:= data = RandomVariate[NormalDistribution[2.5, Sqrt[σ]], {1000, 15}];

In[3]:= T = TTest[#, 2, VerifyTestAssumptions -> None]& /@ data;

In[4]:= Z = ZTest[#, σ, 2, VerifyTestAssumptions -> None]& /@ data;

In[5]:= α = 0.05;
```

``ZTest`` correctly rejects $H_0$ more frequently than ``TTest``:

```wl
In[6]:= Probability[x < α, x\[Distributed]T]//N

Out[6]= 0.946

In[7]:= Probability[x < α, x\[Distributed]Z]//N

Out[7]= 0.968
```

---

``TTest`` is robust to mild deviations from normality:

```wl
In[1]:= data = BlockRandom[SeedRandom[1];RandomVariate[\[ScriptD] = StudentTDistribution[5], {1000, 35}]];

In[2]:= Plot[{PDF[\[ScriptD], x], PDF[NormalDistribution[], x]}, {x, -4, 4}, PlotLegends -> {"\[ScriptD]", NormalDistribution}]

Out[2]= [image]

In[3]:= pvals = TTest[#, VerifyTestAssumptions -> None]& /@ data;
```

The $p$-value can still be interpreted in the usual way:

```wl
In[4]:= DistributionFitTest[pvals, UniformDistribution[]]

Out[4]= 0.157273
```

---

Large deviations from normality require the use of median-based tests:

```wl
In[1]:= data = BlockRandom[SeedRandom[1];RandomVariate[\[ScriptD] = CauchyDistribution[0, 1], {1000, 35}]];

In[2]:= Plot[{PDF[\[ScriptD], x], PDF[NormalDistribution[], x]}, {x, -4, 4}, PlotLegends -> {"\[ScriptD]", NormalDistribution}]

Out[2]= [image]

In[3]:= tpvals = TTest[#, VerifyTestAssumptions -> None]& /@ data;

In[4]:= srpvals = SignedRankTest /@ data//Quiet;
```

The $p$-value can be interpreted in the usual way for ``SignedRankTest`` but not for ``TTest``:

```wl
In[5]:= DistributionFitTest[tpvals, UniformDistribution[]]

Out[5]= 3.9968028886505635`*^-15

In[6]:= DistributionFitTest[srpvals, UniformDistribution[]]

Out[6]= 0.283945
```

---

For two-sample testing of non-normal data, use ``MannWhitneyTest``:

```wl
In[1]:=
data1 = RandomVariate[LaplaceDistribution[0, 1], {1000, 35}];
data2 = RandomVariate[LaplaceDistribution[.5, 1], {1000, 25}];

In[2]:= t = MapThread[TTest[{#1, #2}, VerifyTestAssumptions -> None]&, {data1, data2}];

In[3]:= mw = MapThread[MannWhitneyTest[{#1, #2}]&, {data1, data2}];
```

For non-normal data, ``MannWhitneyTest`` can be more powerful than ``TTest``:

```wl
In[4]:= Probability[x ≤ .05, x\[Distributed]t]//N

Out[4]= 0.28

In[5]:= Probability[x ≤ 0.05, x\[Distributed]mw]//N

Out[5]= 0.361
```

---

``TTest`` uses only the values when the input is a ``TimeSeries``:

```wl
In[1]:=
ts = TemporalData[TimeSeries, {CompressedData["«1188»"], {{0, 100, 1}}, 1, {"Continuous", 1}, {"Discrete", 1}, 1, 
  {ValueDimensions -> 1, ResamplingMethod -> None}}, False, 10.1];

In[2]:= TTest[ts]

Out[2]= 0.765928

In[3]:= TTest[ts["Values"]]

Out[3]= 0.765928
```

---

``TTest`` uses all the values together when the input is a ``TemporalData``:

```wl
In[1]:=
td = TemporalData[Automatic, {CompressedData["«2294»"], {{0, 100, 1}}, 2, {"Continuous", 2}, {"Discrete", 1}, 1, 
  {ValueDimensions -> 1, ResamplingMethod -> None}}, False, 10.1];

In[2]:= TTest[td]

Out[2]= 0.466027
```

Test all the values only:

```wl
In[3]:=
data = td["ValueList"]//Flatten;
TTest[data]

Out[3]= 0.466027
```

Test whether the means of the two paths are equal:

```wl
In[4]:= {data1, data2} = td["ValueList"];

In[5]:= TTest[{data1, data2}]

Out[5]= 0.532502
```

### Possible Issues (2)

``TTest`` assumes that the data is normally distributed:

```wl
In[1]:= data = RandomVariate[ParetoDistribution[1, 2], 100];

In[2]:= TTest[data]
```

TTest::nortst: At least one of the p-values in {0}, resulting from a test for normality, is below 0.05. The tests in {T} require that the data is normally distributed.

```wl
Out[2]= 4.342795637239585`*^-26
```

Use a median-based test that does not assume normality:

```wl
In[3]:= SignTest[data]

Out[3]= 1.5777218104420236`*^-30
```

---

The covariance matrix of multivariate data may not be invertible:

```wl
In[1]:= data = BlockRandom[SeedRandom[3];RandomVariate[SuzukiDistribution[0.3, 38], {15, 2}]];

In[2]:= Quiet[TTest[data], {TTest::nortst}]
```

Inverse::luc: Result for Inverse of badly conditioned matrix {{1.12293*10^63, -1.33943*10^52}, {-1.33943*10^52, 3.13147*10^43}} may contain significant numerical errors.

```wl
Out[2]= 0.394492
```

### Neat Examples (1)

Compute the statistic when the null hypothesis $H_0$ is true:

```wl
In[1]:= data = RandomVariate[NormalDistribution[], {2500, 100}];

In[2]:= T1 = TTest[#, 0, "TestStatistic", VerifyTestAssumptions -> None]& /@ data;
```

The test statistic given a particular alternative:

```wl
In[3]:= T2 = TTest[#, 1, "TestStatistic", VerifyTestAssumptions -> None]& /@ data;
```

Compare the distributions of the test statistics:

```wl
In[4]:= SmoothHistogram[{T1, T2}, Filling -> Axis, PlotLegends -> {"Subscript[H, 0] is True", "Subscript[H, 0] is False"}, PlotStyle -> Thick]

Out[4]= [image]
```

## See Also

* [`HypothesisTestData`](https://reference.wolfram.com/language/ref/HypothesisTestData.en.md)
* [`LocationTest`](https://reference.wolfram.com/language/ref/LocationTest.en.md)
* [`LocationEquivalenceTest`](https://reference.wolfram.com/language/ref/LocationEquivalenceTest.en.md)
* [`VarianceTest`](https://reference.wolfram.com/language/ref/VarianceTest.en.md)
* [`VarianceEquivalenceTest`](https://reference.wolfram.com/language/ref/VarianceEquivalenceTest.en.md)
* [`DistributionFitTest`](https://reference.wolfram.com/language/ref/DistributionFitTest.en.md)
* [`MannWhitneyTest`](https://reference.wolfram.com/language/ref/MannWhitneyTest.en.md)
* [`PairedTTest`](https://reference.wolfram.com/language/ref/PairedTTest.en.md)
* [`PairedZTest`](https://reference.wolfram.com/language/ref/PairedZTest.en.md)
* [`SignTest`](https://reference.wolfram.com/language/ref/SignTest.en.md)
* [`SignedRankTest`](https://reference.wolfram.com/language/ref/SignedRankTest.en.md)
* [`ZTest`](https://reference.wolfram.com/language/ref/ZTest.en.md)

## Related Guides

* [Hypothesis Tests](https://reference.wolfram.com/language/guide/HypothesisTests.en.md)

## History

* [Introduced in 2010 (8.0)](https://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn80.en.md)