This is documentation for Mathematica 3, which was
based on an earlier version of the Wolfram Language.
Statistics`HypothesisTests`

A test of a statistical hypothesis is a test of an assumption about the distribution of a variable. Given sample data, you test whether the population from which the sample came has a certain characteristic. You can use the functions in this package to test hypotheses concerning the mean, the variance, the difference between two population means, or the ratio of their variances.

The data given as an argument to a test function is assumed to be normally distributed. As a consequence of the Central Limit Theorem, you can disregard this normality assumption for tests of the mean when the sample size, n, is large and the data is unimodal. The test functions accept as arguments the list of univariate data, a hypothesized parameter, and relevant options.

Hypothesis tests for the mean.

Hypothesis tests for the mean are based on the normal distribution when the population variance is known, and on the Student t distribution with n-1 degrees of freedom when the variance has to be estimated. If you know the standard deviation instead of the variance, you can also specify KnownStandardDeviation -> std.

The output of a hypothesis test is a p-value, which is the probability of the sample estimate being as extreme as it is, given that the hypothesized population parameter is true. A two-sided test can be requested using TwoSided -> True. For more detailed information about a test, use FullReport -> True; this causes the parameter estimate and the test statistic to be included in the output. You can also specify a significance level using SignificanceLevel -> siglev, which yields a conclusion of the test, stating acceptance or rejection of the hypothesis.

Options for all hypothesis test functions.

This loads the package.

In[1]:= <<Statistics`HypothesisTests`

Here is a list of sample data.

In[2]:= data1 = …

This tests a hypothesis about the mean of the population, assuming its variance is known to be 8.

In[3]:= MeanTest[data1, …, KnownVariance -> 8]

Out[3]=

Hypothesis tests for mean and difference in means.

To test the similarity of two populations, you can test whether their means are equal or, equivalently, you can test whether the difference between their means is zero.
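For the known-variance case described under "Hypothesis tests for the mean", the p-value is just a normal tail area. Here is a minimal Python sketch of that computation; the function name and the sample values are illustrative, not part of the package:

```python
import math

def normal_cdf(x):
    # Standard normal CDF, expressed through the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mean_test_p_value(sample, mu0, known_variance, two_sided=False):
    # Known-variance case: under the null hypothesis the statistic
    # (xbar - mu0) / sqrt(variance / n) follows a standard normal distribution.
    n = len(sample)
    xbar = sum(sample) / n
    z = (xbar - mu0) / math.sqrt(known_variance / n)
    tail = normal_cdf(z) if z < 0 else 1.0 - normal_cdf(z)
    return 2.0 * tail if two_sided else tail

# Hypothetical sample: is the population mean 40, given that the variance is 8?
sample = [39, 40, 34, 45, 44, 38, 42, 39, 47, 41]
print(round(mean_test_p_value(sample, 40, 8.0), 3))
```

In the package itself, the same hypothesis would be tested with something like MeanTest[sample, 40, KnownVariance -> 8].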
If the variances of the populations are known and specified as a value of KnownVariance, the test is based on the normal distribution. Usually, however, the variances are unknown, and the test uses quantiles from the Student t distribution to evaluate the hypothesis.

Additional options for tests concerning the difference in means.

This is a second list of sample data whose population variance is also 8.

In[4]:= data2 = {39, 40, 34, 45, 44, 38, 42, 39, 47, 41};

This tests whether the difference between the means of the two populations is 0.

In[5]:= MeanDifferenceTest[data1, data2, 0, KnownVariance -> {8, 8}]

Out[5]=

This is the result of the same test, but with a specified significance level and a request for a full report. The output now includes the estimator, the test statistic, and the conclusion of the test. At this level of significance, it is not unlikely that data1 and data2 came from the same normal population having variance 8.

In[6]:= MeanDifferenceTest[data1, data2, 0, KnownVariance -> {8, 8}, SignificanceLevel -> .05, FullReport -> True]

Out[6]=

You can also test the variance and the ratio of two variances using VarianceTest and VarianceRatioTest. These use the chi-square and F-ratio distributions, respectively. The same output options, SignificanceLevel, TwoSided, and FullReport, are available for these tests.

Hypothesis tests for variance and ratio of two variances.

Here is another set of data.

In[7]:= data = {41.0, 42.4, 42.5, 40.6, 45.6, 34.4};

This is a test of whether the variance of the population from which these data were sampled is 8.

In[8]:= VarianceTest[data, 8, TwoSided -> True, FullReport -> True]

Out[8]=

If you have already calculated a test statistic in terms of the normal, chi-square, Student t, or F-ratio distribution, you can get its p-value using the appropriate p-value function. For example, NormalPValue computes a p-value for a test statistic using a normal distribution with mean zero and unit variance.
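The tail-area computation behind a NormalPValue-style function takes only a few lines. This Python sketch is an illustrative stand-in, not Mathematica's implementation:

```python
import math

def normal_p_value(stat, two_sided=False):
    # One-sided p-value: area of the standard normal tail beyond |stat|;
    # the two-sided p-value counts both tails of the distribution.
    tail = 0.5 * (1.0 + math.erf(-abs(stat) / math.sqrt(2.0)))
    return 2.0 * tail if two_sided else tail

print(round(normal_p_value(-2.5), 4))                   # 0.0062
print(round(normal_p_value(-2.5, two_sided=True), 4))   # 0.0124
```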
A two-sided p-value is obtained by giving TwoSided -> True.

Functions providing p-values of test statistics.

This is the cumulative distribution function of the normal distribution with mean 0 and unit variance, evaluated at the point -1.96.

In[9]:= NormalPValue[-1.96]

Out[9]=

A TwoSidedPValue gives the probability of the test statistic being at least as extreme as -1.96 in either tail of the distribution.

In[10]:= NormalPValue[-1.96, TwoSided -> True]

Out[10]=
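The two-sample MeanDifferenceTest with known variances shown earlier reduces to the same kind of tail area. Here is a Python sketch under that assumption; data2 is the sample from In[4], while data1 is a hypothetical list standing in for the earlier sample:

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mean_difference_p_value(s1, s2, diff0, var1, var2, two_sided=False):
    # Known-variance case: under the null hypothesis the statistic
    # (xbar1 - xbar2 - diff0) / sqrt(var1/n1 + var2/n2) is standard normal.
    m1, m2 = sum(s1) / len(s1), sum(s2) / len(s2)
    se = math.sqrt(var1 / len(s1) + var2 / len(s2))
    z = (m1 - m2 - diff0) / se
    tail = normal_cdf(z) if z < 0 else 1.0 - normal_cdf(z)
    return 2.0 * tail if two_sided else tail

data1 = [34, 37, 44, 31, 41, 42, 38, 45, 42, 38]  # hypothetical first sample
data2 = [39, 40, 34, 45, 44, 38, 42, 39, 47, 41]  # from In[4] above
print(round(mean_difference_p_value(data1, data2, 0, 8, 8), 3))
```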