# Descriptive Statistics

Descriptive statistics refers to properties of distributions, such as location, dispersion, and shape. The functions described here compute descriptive statistics of lists of data. You can calculate some of the standard descriptive statistics for various known distributions by using the functions described in "Continuous Distributions" and "Discrete Distributions".

The statistics are calculated assuming that each value of data x_{i} has probability equal to 1/n, where n is the number of elements in the data.

Mean[data] | average value |

Median[data] | median (central value) |

Commonest[data] | list of the elements with highest frequency |

GeometricMean[data] | geometric mean |

HarmonicMean[data] | harmonic mean |

RootMeanSquare[data] | root mean square |

TrimmedMean[data,f] | mean of remaining entries, when a fraction f is removed from each end of the sorted list of data |

TrimmedMean[data,{f_{1},f_{2}}] | mean of remaining entries, when fractions f_{1} and f_{2} are dropped from each end of the sorted data |

Quantile[data,q] | qth quantile |

Quartiles[data] | list of the 1/4, 1/2, 3/4 quantiles of the elements in data |

Location statistics.

Location statistics describe where the data is located. The most common functions include measures of central tendency like the mean, median, and mode.
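As a rough sketch of what the location statistics in the table compute, here is a Python analogue using the standard-library statistics module (the data list is made up for illustration):

```python
import statistics
from math import prod

data = [6.5, 3.8, 6.6, 5.7, 6.0, 6.4, 5.3, 6.0]  # illustrative sample

mean = statistics.mean(data)          # Mean[data]
median = statistics.median(data)      # Median[data]
mode = statistics.multimode(data)     # Commonest[data]: most frequent values
geo = prod(data) ** (1 / len(data))   # GeometricMean[data]
harm = len(data) / sum(1 / x for x in data)          # HarmonicMean[data]
rms = (sum(x * x for x in data) / len(data)) ** 0.5  # RootMeanSquare[data]
```

For positive, non-constant data these obey the classical ordering harmonic mean ≤ geometric mean ≤ arithmetic mean ≤ root mean square.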

Quantile[data,q] gives the location before which 100q percent of the data lie. In other words, Quantile gives a value z such that the probability that x_{i} is less than z is less than or equal to q, and the probability that x_{i} is less than or equal to z is greater than or equal to q.
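The defining property above can be checked directly. The sketch below uses one common convention for the sample quantile, the ceil(q·n)-th smallest element (this matches Mathematica's default parametrization, though Quantile supports several others); the data list is made up:

```python
import math

def quantile(data, q):
    """q-th quantile as the ceil(q*n)-th smallest element (one common
    convention; an assumption here, not the only possible definition)."""
    s = sorted(data)
    k = max(math.ceil(q * len(s)), 1)  # clamp to the first element
    return s[k - 1]

data = [6.5, 3.8, 6.6, 5.7, 6.0, 6.4, 5.3]
z = quantile(data, 0.5)
below = sum(x < z for x in data) / len(data)         # P(x < z)  <= q
at_or_below = sum(x <= z for x in data) / len(data)  # P(x <= z) >= q
```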


This finds the mean and median of the data.


This is the mean when the smallest entry in the list is excluded.

TrimmedMean allows you to describe the data with outliers removed.

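A minimal sketch of the trimming idea, mirroring TrimmedMean[data,{f_{1},f_{2}}]: sort the data, drop a fraction from each end, and average the rest. This simple version drops whole entries (floor(f·n) from each end); Mathematica may weight boundary entries fractionally, so treat this as an approximation:

```python
import math
import statistics

def trimmed_mean(data, f1, f2=None):
    """Mean after dropping a fraction f1 of the smallest and f2 of the
    largest entries (f2 defaults to f1).  Hypothetical helper."""
    if f2 is None:
        f2 = f1
    s = sorted(data)
    n = len(s)
    lo = math.floor(f1 * n)   # entries dropped from the low end
    hi = math.floor(f2 * n)   # entries dropped from the high end
    return statistics.mean(s[lo:n - hi])
```

For example, trimming 20% from each end of a five-element list with one large outlier removes both extremes and averages the middle three values.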

Dispersion statistics.

Dispersion statistics summarize the scatter or spread of the data. Most of these functions describe deviation from a particular location. For instance, variance is a measure of deviation from the mean, and standard deviation is just the square root of the variance.
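A Python sketch of the common dispersion measures (the data list is made up; the variance uses the unbiased n-1 divisor, matching Mathematica's Variance):

```python
import statistics

data = [6.5, 3.8, 6.6, 5.7, 6.0, 6.4, 5.3]  # illustrative sample
n = len(data)
m = statistics.mean(data)

var = sum((x - m) ** 2 for x in data) / (n - 1)  # Variance[data], n-1 divisor
sd = var ** 0.5                                  # StandardDeviation[data]
mean_dev = sum(abs(x - m) for x in data) / n     # MeanDeviation[data]
med = statistics.median(data)
median_dev = statistics.median(abs(x - med) for x in data)  # MedianDeviation[data]
```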

This gives an unbiased estimate for the variance of the data with n-1 as the divisor.


This compares three types of deviation.


Covariance[v_{1},v_{2}] | covariance coefficient between lists v_{1} and v_{2} |

Covariance[m] | covariance matrix for the matrix m |

Covariance[m_{1},m_{2}] | covariance matrix for the matrices m_{1} and m_{2} |

Correlation[v_{1},v_{2}] | correlation coefficient between lists v_{1} and v_{2} |

Correlation[m] | correlation matrix for the matrix m |

Correlation[m_{1},m_{2}] | correlation matrix for the matrices m_{1} and m_{2} |

Covariance and correlation statistics.

Covariance is the multivariate extension of variance. For two vectors of equal length, the covariance is a number. For a single matrix m, the (i,j)th element of the covariance matrix is the covariance between the ith and jth columns of m. For two matrices m_{1} and m_{2}, the (i,j)th element of the covariance matrix is the covariance between the ith column of m_{1} and the jth column of m_{2}.

While covariance measures dispersion, correlation measures association. The correlation between two vectors is equivalent to the covariance between the vectors divided by the standard deviations of the vectors. Likewise, the elements of a correlation matrix are equivalent to the elements of the corresponding covariance matrix scaled by the appropriate column standard deviations.
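The relationship between the two statistics can be sketched directly: compute the covariance with the n-1 divisor, then divide by the product of the standard deviations to get the correlation.

```python
import statistics

def covariance(u, v):
    """Sample covariance of two equal-length lists, n-1 divisor."""
    mu, mv = statistics.mean(u), statistics.mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (len(u) - 1)

def correlation(u, v):
    """Correlation = covariance scaled by both standard deviations."""
    return covariance(u, v) / (statistics.stdev(u) * statistics.stdev(v))
```

Two perfectly linearly related vectors, such as [1, 2, 3] and [2, 4, 6], have correlation 1 even though their covariance depends on the scale of the data.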

This gives the covariance between the data and a random vector.



This is the correlation matrix for the matrix m.


This is the covariance matrix.


Scaling the covariance matrix terms by the appropriate standard deviations gives the correlation matrix.


Shape statistics.

You can get some information about the shape of a distribution using shape statistics. Skewness describes the amount of asymmetry. Kurtosis measures the concentration of data around the peak and in the tails versus the concentration in the flanks.

Skewness is calculated by dividing the third central moment by the cube of the population standard deviation.

Kurtosis is calculated by dividing the fourth central moment by the square of the population variance of the data, equivalent to CentralMoment[data,4]/CentralMoment[data,2]^2. (The population variance is the second central moment, and the population standard deviation is its square root.)
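The two definitions above translate directly into code. This sketch builds both shape statistics from a central-moment helper (population moments, i.e. divisor n, as stated above):

```python
import statistics

def central_moment(data, r):
    """r-th central moment with divisor n (population convention)."""
    m = statistics.mean(data)
    return sum((x - m) ** r for x in data) / len(data)

def skewness(data):
    # third central moment over the cube of the population std. deviation
    return central_moment(data, 3) / central_moment(data, 2) ** 1.5

def kurtosis(data):
    # fourth central moment over the square of the population variance
    return central_moment(data, 4) / central_moment(data, 2) ** 2
```

A symmetric list such as [1, 2, 3] has skewness 0; its kurtosis works out to 1.5.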

QuartileSkewness is calculated from the quartiles of data. It is equivalent to (q_{1}-2q_{2}+q_{3})/(q_{3}-q_{1}), where q_{1}, q_{2}, and q_{3} are the first, second, and third quartiles respectively.
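A sketch of that quartile-based formula, using statistics.quantiles for the quartiles (its default "exclusive" method can differ slightly from Mathematica's default quartile parameters, so results need not match exactly):

```python
import statistics

def quartile_skewness(data):
    """(q1 - 2*q2 + q3) / (q3 - q1) from the three sample quartiles."""
    q1, q2, q3 = statistics.quantiles(data, n=4)
    return (q1 - 2 * q2 + q3) / (q3 - q1)
```

For a symmetric list like [1, 2, 3, 4, 5] this is 0; a long right tail pushes it positive.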

Here is the second central moment of the data.


A negative value for skewness indicates that the distribution underlying the data has a long left-sided tail.


Expectation[f[x],x\[Distributed]list] | expected value of the function f of x with respect to the values of list |

Expected values.

The expectation or expected value of a function f is (f(x_{1})+f(x_{2})+...+f(x_{n}))/n for the list of values x_{1}, x_{2}, ..., x_{n}. Many descriptive statistics are expectations. For instance, the mean is the expected value of x, and the rth central moment is the expected value of (x-μ)^{r}, where μ is the mean of the x_{i}.
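The definition above amounts to averaging f over the data, which makes a one-line sketch possible (the data list is made up for illustration):

```python
import math
import statistics

def expectation(f, data):
    # E[f] = (f(x_1) + ... + f(x_n)) / n: the mean of f over the data
    return statistics.mean(f(x) for x in data)

data = [6.5, 3.8, 6.6, 5.7, 6.0, 6.4, 5.3]
mean_value = expectation(lambda x: x, data)   # the mean as an expectation
second_moment = expectation(lambda x: (x - mean_value) ** 2, data)
log_expectation = expectation(math.log, data)  # expected value of Log
```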

Here is the expected value of the Log of the data.
