Random Number Generation
Introduction
The ability to generate pseudorandom numbers is important for simulating events, estimating probabilities and other quantities, making randomized assignments or selections, and numerically testing symbolic results. Such applications may require uniformly distributed numbers, nonuniformly distributed numbers, elements sampled with replacement, or elements sampled without replacement.
The functions
RandomReal,
RandomInteger, and
RandomComplex generate uniformly distributed random numbers.
RandomReal and
RandomInteger also generate numbers for built-in distributions.
RandomPrime generates primes within a range. The functions
RandomChoice and
RandomSample sample from a list of values with or without replacement. The elements may have equal or unequal weights. A framework is also included for defining additional methods and distributions for random number generation.
A sequence of nonrecurring events can be simulated via
RandomSample. For instance, the probability of randomly sampling the integers 1 through
n in order might be simulated.
This estimates the probability of getting n elements in order for n from 2 to 8.
Out[1]=  

The results can be compared with the theoretical probabilities.
Out[2]=  

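The in-order sampling experiment just described can be sketched in any language; the following Python fragment (an illustration of the idea, not the Wolfram implementation) estimates the probability that sampling 1 through n without replacement yields the integers in order, and records the theoretical values 1/n! for comparison.

```python
import math
import random

def prob_in_order(n, trials=20000, rng=None):
    """Estimate the probability that sampling 1..n without replacement
    yields the integers in increasing order (theoretical value: 1/n!)."""
    rng = rng or random.Random(0)
    items = list(range(1, n + 1))
    hits = sum(rng.sample(items, n) == items for _ in range(trials))
    return hits / trials

# Theoretical probabilities for n from 2 to 8, for comparison.
theoretical = {n: 1 / math.factorial(n) for n in range(2, 9)}
```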
Random number generation is at the heart of Monte Carlo estimates. An estimate of an expected value of a function
f can be obtained by generating values from the desired distribution and finding the mean of
f applied to those values.
This estimates the 6^{th} raw moment for a normal distribution.
Out[3]=  

In this case, the estimate can be compared with an exact result.
Out[4]=  

Random processes can be simulated by generating a series of numbers with the desired properties. A random walk can be created by recursively summing pseudorandom numbers.
Here a random walk starting at 0 is created.
Out[5]=  

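The recursive-summing construction described above can be sketched as a cumulative sum of random ±1 steps; this Python fragment is an illustration of the idea, not the document's own code.

```python
import random
from itertools import accumulate

def random_walk(steps, rng=None):
    """Random walk starting at 0: the cumulative sums of random +/-1 steps."""
    rng = rng or random.Random(1)
    increments = [rng.choice([-1, 1]) for _ in range(steps)]
    return [0] + list(accumulate(increments))

walk = random_walk(1000)   # 1001 positions, starting at 0
```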
Substitution of random numbers can be used to test the equivalence of symbolic expressions. For instance, the absolute difference between two expressions could be evaluated at randomly generated points to test for inequality of the expressions.
This provides no evidence that the two expressions differ for real values.
Out[6]=  

This provides evidence that the two expressions differ for at least some complex values.
Out[7]=  

RandomPrime chooses prime numbers with equal probability, which can be useful—for instance, to generate large primes for RSA encryption. The prime numbers are uniformly distributed on the primes in the range but are not uniformly distributed on the entire range because primes are in general not uniformly distributed over ranges of positive integers.
Primes in a given range are generated with equal probability.
Out[8]=  

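The "uniform on the primes, not on the range" behavior can be made concrete with a sketch: enumerate the primes in the range and choose one index uniformly. This Python version uses trial division and is illustrative only; it is not how RandomPrime is implemented.

```python
import random

def is_prime(k):
    """Trial-division primality test (fine for small illustrative ranges)."""
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def random_prime(lo, hi, rng=None):
    """Choose uniformly among the primes in [lo, hi]: every prime in the
    range is equally likely, even though primes thin out over the range."""
    rng = rng or random.Random(2)
    primes = [k for k in range(lo, hi + 1) if is_prime(k)]
    if not primes:
        raise ValueError("no primes in range")
    return rng.choice(primes)
```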
Random Generation Functions
The main functions are
RandomReal,
RandomInteger,
RandomComplex,
RandomChoice, and
RandomSample.
RandomReal,
RandomInteger, and
RandomComplex generate numbers given some range of numeric values.
RandomChoice and
RandomSample generate elements from finite sets that may include nonnumeric values.
Random Numbers
RandomReal generates pseudorandom real numbers over a specified range of real values.
RandomInteger generates pseudorandom integer numbers over a specified range of integer values.
RandomComplex generates pseudorandom complex numbers over a specified rectangular region in the complex plane.
RandomPrime generates prime numbers with equal probability within a range.
RandomReal[]  give a pseudorandom real number in the range 0 to 1 
RandomReal[{x_{min},x_{max}}]  give a pseudorandom real number in the range x_{min} to x_{max} 
RandomReal[x_{max}]  give a pseudorandom real number in the range 0 to x_{max} 
RandomReal[dist]  give a random number from the continuous distribution dist 
RandomReal[domain,n]  give a list of n pseudorandom reals 
RandomReal[domain,{n_{1},n_{2},...}]  give an n_{1}×n_{2}×... array of pseudorandom reals 
Generation of random reals.
RandomInteger[{i_{min},i_{max}}]  give a pseudorandom integer in the range {i_{min}, ..., i_{max}} 
RandomInteger[i_{max}]  give a pseudorandom integer in the range {0, ..., i_{max}} 
RandomInteger[]  pseudorandomly give 0 or 1 with probability 1/2 
RandomInteger[dist]  give a pseudorandom integer from the discrete distribution dist 
RandomInteger[domain,n]  give a list of n pseudorandom integers 
RandomInteger[domain,{n_{1},n_{2},...}]  give an n_{1}×n_{2}×... array of pseudorandom integers 
Generation of random integers.
RandomComplex[]  give a pseudorandom complex number in the unit square 
RandomComplex[{z_{min},z_{max}}]  give a pseudorandom complex number in the rectangle bounded by z_{min} and z_{max} 
RandomComplex[z_{max}]  give a pseudorandom complex number in the rectangle bounded by 0 and z_{max} 
RandomComplex[domain,n]  give a list of n pseudorandom complex numbers 
RandomComplex[domain,{n_{1},n_{2},...}]  give an n_{1}×n_{2}×... array of pseudorandom complex numbers 
Generation of random complex numbers.
RandomPrime[{i_{min},i_{max}}]  give a pseudorandom prime in the range {i_{min}, ..., i_{max}} 
RandomPrime[i_{max}]  give a pseudorandom prime in the range 2 to i_{max} 
RandomPrime[domain,n]  give a list of n pseudorandom primes 
RandomPrime[domain,{n_{1},n_{2},...}]  give an n_{1}×n_{2}×... array of pseudorandom primes 
Generation of random primes.
When the domain is specified in terms of
x_{min} and
x_{max},
RandomReal and
RandomInteger generate uniformly distributed numbers over the specified range. When the domain is specified as a distribution, rules defined for the distribution are used. Additionally, mechanisms are included for defining
new methods and distributions.
The two-argument interface provides a convenient way to obtain multiple random numbers at once. Even more importantly, there is a significant efficiency advantage to generating a large number of pseudorandom numbers at once.
Generating 10^{7} numbers between 0 and 1 takes a fraction of a second.
Out[9]=  

Generating 10^{7} numbers one at a time takes roughly five times as long.
Out[10]=  

For multidimensional arrays with dimensions
n_{1} through
n_{k}, the total number of required pseudorandom numbers
n_{total}=n_{1}×n_{2}×...×n_{k} is generated and then partitioned. This makes multidimensional array generation as efficient as possible because the total number of random values is generated as efficiently as possible and the time required for partitioning is negligible.
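The generate-then-partition strategy can be sketched as follows (illustrative Python, not the internal code): draw all of the uniforms in one flat pass, then nest them into the requested dimensions.

```python
import random
from math import prod

def random_array(dims, rng=None):
    """Generate prod(dims) uniforms in one flat pass, then partition the
    flat list into a nested array with the requested dimensions."""
    rng = rng or random.Random(3)
    flat = [rng.random() for _ in range(prod(dims))]

    def build(values, shape):
        if len(shape) == 1:
            return values
        size = len(values) // shape[0]
        return [build(values[i * size:(i + 1) * size], shape[1:])
                for i in range(shape[0])]

    return build(flat, list(dims))

arr = random_array((4, 5, 6))   # a 4x5x6 nested array
```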
The time required for a 100×100×100×10 array is about the same as for a vector of 10^{7} numbers.
Out[11]=  
Out[12]=  

An array of the same dimensions generated 10 numbers at a time takes several times as long.
Out[13]=  

For statistical distributions, the speed advantage of generating many numbers at once can be even greater. In addition to the efficiency benefit inherited from the uniform number generators used, many statistical distributions also benefit from vectorized evaluation of elementary and special functions. For instance,
WeibullDistribution benefits from vector evaluations of the elementary functions
Power,
Times, and
Log.
Generating 10^{5} Weibull numbers takes virtually no time.
Out[14]=  

Several seconds are required when 10^{5} Weibulls are generated one at a time.
Out[15]=  

Random number generation can be useful in exploratory investigations. For instance, you might look for occurrences of a random sequence of digits in a longer sequence of digits.
This converts a list of 5 random decimal digits to a string.
Out[16]=  

The following converts the first million digits of π to a string of integers. 
This gives the positions where the string of five digits appears in the first million digits of π.
Out[18]=  

Random number generation is also highly useful in estimating distributions for which closed-form results are not known or are known to be computationally difficult. Properties of random matrices provide one example.
This estimates the probability a 5×5 matrix of uniform reals will have real eigenvalues.
Out[19]=  

The following does the same for a matrix of standard normal numbers.
Out[20]=  

An example of simulating a multivariate distribution is the Gibbs sampler used in Bayesian statistics [
1]. The Gibbs sampler provides a means by which to simulate values from multivariate distributions provided the distributions of each coordinate conditional on the other coordinates are known. Under some restrictions, the distribution of random vectors constructed by iteratively sampling from the conditional distributions will converge to the true multivariate distribution.
The following example will construct a Gibbs sampler for an example given by Casella and George [
2]. The distribution of interest is bivariate. The conditional distribution of
x given
y is a binomial, and the conditional distribution of
y given
x is a beta. As Casella and George mention, various strategies for detecting convergence and sampling using the Gibbs sampler have been suggested. For simplicity, assume that convergence will occur within 1000 iterations. A sample of size n from the distribution will be taken as the n values following the 1000^{th} iteration. It should be noted that these n values will, however, be dependent.
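A sketch of such a sampler in Python follows. The conditional structure mirrors the Casella-George example described above; the specific parameter values n=16, alpha=2, and beta=4 are illustrative assumptions, not values from the text.

```python
import random

def gibbs_sample(size, n=16, alpha=2.0, beta=4.0, burn_in=1000, rng=None):
    """Gibbs sampler for a bivariate distribution where
    x | y ~ Binomial(n, y) and y | x ~ Beta(x + alpha, n - x + beta).
    The first burn_in iterations are discarded; note that the retained
    draws are still dependent."""
    rng = rng or random.Random(4)
    x, y = 0, 0.5
    out = []
    for step in range(burn_in + size):
        x = sum(1 for _ in range(n) if rng.random() < y)    # Binomial(n, y)
        y = rng.betavariate(x + alpha, n - x + beta)        # Beta conditional
        if step >= burn_in:
            out.append((x, y))
    return out

data = gibbs_sample(100)
```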
This defines the sampler with a binomial and a beta conditional distribution. 
A Gibbs sampler could also be defined as a distribution object within the distribution framework for random number generation. An example of this particular
Gibbs sampler as a distribution object is provided in the section "
Defining Distributions".
data is a sample of length 10^{4}. 
The following bar chart shows the marginal distribution of the first dimension.
Out[25]=  

The marginal distribution of the second coordinate can be visualized with a histogram.
Out[27]=  

Conditional distributions should closely match the assumed binomial and beta distributions provided there is enough data for the conditional distribution. The greatest amount of data occurs where the densities of the marginal distributions are highest, so those values can be used for comparisons. The following graphics compare the empirical and assumed conditional distributions, using bins of width 0.05 for estimating probabilities of continuous values.
This compares the empirical and theoretical distributions of x for 0.3≤y<0.35.
Out[28]=  

This compares the empirical and theoretical distributions of y for x=1.
Out[29]=  

Arbitrary-Precision Reals and Complexes
By default,
RandomReal and
RandomComplex generate machine-precision numbers. Arbitrary-precision numbers can be obtained by setting the
WorkingPrecision option.
Option for RandomReal and RandomComplex.
The option is valid for uniformly distributed reals, complexes, and reals from built-in distributions.
WorkingPrecision can also be incorporated into userdefined distributions.
Here is a precision-25 real number between 5 and 50.
Out[30]=  

This gives a precision-50 t-distributed number.
Out[31]=  

Increased
WorkingPrecision can be useful in simulations where loss of precision can be expected and highly accurate results are necessary. Increased precision can also be used to estimate the precision loss in computations.
This estimates the worst precision loss in computing J_{1} on the interval [0, 1000].
Out[32]=  

If the precision of the input is less than the specified
WorkingPrecision, the function will warn of the problem. The precision of the input will then be artificially increased to generate a pseudorandom number of the desired precision.
A warning is generated because the machine number 7.5 has precision less than 50.
Out[33]=  

WorkingPrecision is not an option for
RandomInteger. Integers have infinite precision, so the precision is completely specified by the function name.
Random Elements
RandomChoice and
RandomSample generate pseudorandom selections from a list of possible elements. The elements can be numeric or nonnumeric.
RandomChoice[{e_{1},e_{2},...}]  give a pseudorandom choice of one of the e_{i} 
RandomChoice[list,n]  give a list of n pseudorandom choices from list 
RandomChoice[list,{n_{1},n_{2},...}]  give n_{1}×n_{2}×... pseudorandom choices from list 
RandomChoice[{w_{1},w_{2},...}->{e_{1},e_{2},...}] 
 give a pseudorandom choice weighted by the w_{i} 
RandomChoice[wlist->elist,n]  give a list of n weighted choices 
RandomChoice[wlist->elist,{n_{1},n_{2},...}] 
 give an n_{1}×n_{2}×... array of weighted choices 
Random choice from a list.
RandomSample[{e_{1},e_{2},...},n]  give a pseudorandom sample of n of the e_{i} 
RandomSample[{w_{1},w_{2},...}->{e_{1},e_{2},...},n] 
 give a pseudorandom sample of n of the e_{i} chosen using weights w_{i} 
RandomSample[{e_{1},e_{2},...}]  give a pseudorandom permutation of the e_{i} 
RandomSample[wlist->elist]  give a pseudorandom permutation of elist using initial weights wlist 
Random sample from a list.
The main difference between
RandomChoice and
RandomSample is that
RandomChoice selects from the
e_{i} with replacement, while
RandomSample samples without replacement. The number of elements chosen by
RandomChoice is not limited by the number of elements in
elist, and an element
e_{i} may be chosen more than once. The size of a sample returned by
RandomSample is limited by the number of elements in
elist, and the number of occurrences of a distinct element in that sample is limited by the number of occurrences of that element in
elist.
If the first argument to
RandomChoice or
RandomSample is a list, elements are selected with equal probability. The weight specification defines a distribution on the set of the
e_{i}. The weights must be positive, but need not sum to 1. For weights
{w_{1}, ..., w_{n}}, the probability of e_{i} in the initial distribution is w_{i}/(w_{1}+...+w_{n}). Since
RandomSample samples without replacement, weights are updated internally based on the total remaining weight after each selection.
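The weight-updating behavior of sampling without replacement can be sketched as follows (illustrative Python, not the internal algorithm): each pick is made with probability proportional to the remaining weights, and the chosen element and its weight are then removed, which renormalizes the rest implicitly.

```python
import random

def weighted_sample(weights, elements, n, rng=None):
    """Sample n elements without replacement, each pick made with
    probability proportional to the remaining weights."""
    rng = rng or random.Random(5)
    weights, elements = list(weights), list(elements)
    out = []
    for _ in range(n):
        r = rng.random() * sum(weights)
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r < acc:
                out.append(elements.pop(i))   # remove chosen element...
                weights.pop(i)                # ...and its weight
                break
    return out
```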
RandomChoice can be used for simulation of independent identically distributed events with a finite list of possible outcomes.
This gives 15 simulated fair coin tosses.
Out[35]=  

This gives 20 rolls of a die loaded toward 5s.
Out[36]=  

RandomChoice can be used to generate observations from any discrete distribution with finite support.
Here is the empirical PDF for 1000 simulated points.
Out[40]=  

RandomSample can be used to simulate observations from a finite set of outcomes in which each element in the list of outcomes can only be observed once. There may be more than one occurrence of distinct values in the list.
This simulates 7 draws from a container of 80 blue and 45 red objects.
Out[41]=  

Randomly sampling all elements in the list results in a random permutation.
The following is a random permutation of the integers from 1 to 10.
Out[42]=  

Assigning weights to the elements results in a random permutation in which values with greater weight tend to appear earlier in the permutation than values with lesser weight.
Here is a random permutation weighted by the squares of the data values.
Out[43]=  

For the same list of weighted or unweighted elements,
RandomSample[#, 1]& is distributionally equivalent to
RandomChoice.
This gives an empirical PDF for 10^{5} random samples of size 1.
Out[45]=  
Out[46]=  

The probabilities for the two examples are very close to each other and to the theoretical values.
These are the theoretical probabilities.
Out[48]=  

RandomSample can also be used for random assignments to groups, such as in clinical trials. The following uses integers, but other identifying values such as name or identification number could be used instead.
The following randomly places 20 elements into four groups of equal size.
Out[49]=  

RandomChoice and
RandomSample can be affected by changes to the
Method option to
SeedRandom. Built-in methods are described in
"Methods". Additionally, mechanisms for defining new methods are described in
"Defining Your Own Generator".
Seeding and Localization
Pseudorandom number generators algorithmically create numbers that have some apparent level of randomness. Methods for pseudorandom number generation typically use a recurrence relation to generate a number from the current state and to establish a new state from which the next number will be generated. The state can be set by seeding the generator with an integer that will be used to initialize the recurrence relation in the algorithm.
Given an initial starting point, called a seed, pseudorandom number generators are completely deterministic. In many cases it is desirable to locally or globally set the seed for a random number generator to obtain a constant sequence of "random" values. If set globally, the seed will affect future pseudorandom numbers unless a new seed is explicitly set. If set locally, the seed will only affect random number and element generation within the localized code.
BlockRandom[expr]  evaluate expr with all pseudorandom generators localized 
SeedRandom[n]  reset the pseudorandom generator using n as a seed 
SeedRandom[]  reset the generator using as a seed the time of day and certain attributes of the current Mathematica session 
Localization and seeding functions.
The
SeedRandom function provides a means by which to seed the random generator. Used on its own,
SeedRandom will globally set the seed for random generators. The
BlockRandom function provides a means by which to locally set or change the seed for random generators without affecting the global state.
The following seeds the random generator globally.
Out[50]=  

The following gives two different numbers because the first
RandomReal is generated within
BlockRandom, while the second is generated outside of
BlockRandom.
SeedRandom also provides the mechanism for switching the random generator.
Option for SeedRandom.
An individual generator can be seeded directly by specifying that generator via the
Method option. All generators can be seeded by setting
Method->All.
Here the default generator is seeded with 1, but the "Rule30CA" generator is not.
Out[52]=  

Seeding the "Rule30CA" generator with 1 gives a different random number.
Out[53]=  

Methods
Five pseudorandom generator methods are available on all systems. A sixth platform-dependent method is available on Intel-based systems. A framework for defining new methods, described in the section "
Defining Your Own Generator", is also included.
"Congruential"  linear congruential generator (lowquality randomness) 
"ExtendedCA"  extended cellular automaton generator (default) 
"Legacy"  default generators prior to Mathematica 6.0 
"MersenneTwister"  Mersenne Twister shift register generator 
"MKL"  Intel MKL generator (Intelbased systems) 
"Rule30CA"  Wolfram rule 30 generator 
Built-in methods.
This gives pseudorandom integers from each method with seed 2020.
Out[54]=  

This gives pseudorandom reals from the same seed.
Out[55]=  

Congruential
"Congruential" uses a linear congruential generator. This is one of the simplest types of pseudorandom number generators, with pseudorandom numbers between 0 and 1 obtained from
x_{i}/m, where
x_{i} is given by the modular recurrence relation x_{i}=(b x_{i-1}+c) mod m
for some fixed integers
b,
c, and
m called the multiplier, increment, and modulus respectively. If the increment is 0, the generator is a multiplicative congruential generator. The values of
b,
c, and
m can be set via options to the
"Congruential" method.
  
"Bits"  Automatic  specify range of bits to use for numbers constructed from bits 
"Multiplier"  1283839219676404755  multiplier value 
"Increment"  0  increment value 
"Modulus"  2305843009213693951  modulus value 
"ConvertToRealsDirectly"  True  whether reals should be constructed directly from the congruence relation 
Options for Method "Congruential".
Linear congruential generators are periodic and tend to give a lower quality of randomness, especially when a large number of random values is needed. If reals are generated directly from the congruence relation, the period is less than or equal to
m.
The default option values are chosen to have a large period and for 64-bit efficiency. With the default options, the
"Congruential" generator passes many standard tests of randomness despite the inherent issues with congruential number generators.
This generates 40 numbers from a multiplicative congruential generator. 
The period of a multiplicative congruential generator is bounded above by the number of positive integers less than or equal to the modulus that are relatively prime to the modulus. This upper bound is Euler's totient function of the modulus.
With a modulus of 63, the period of the cycle is at most 36.
Out[57]=  

The actual period can be determined by finding the smallest integer i such that b^{i}≡1 mod m.
The period with multiplier 11 and modulus 63 is 6.
Out[58]=  

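Both the totient bound and the actual period are short computations; this Python sketch reproduces the numbers quoted above for multiplier 11 and modulus 63.

```python
from math import gcd

def totient(m):
    """Euler's totient: how many of 1..m are coprime to m.  This is an
    upper bound on the period of a multiplicative congruential generator."""
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def mcg_period(b, m):
    """Actual period: the smallest i with b^i = 1 (mod m).
    Assumes gcd(b, m) = 1, otherwise the loop would not terminate."""
    x, i = b % m, 1
    while x != 1:
        x = (x * b) % m
        i += 1
    return i
```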
Partitioning the data into sets of 6 elements shows the recursion.
Out[59]=  

The distinct numbers can also be seen graphically by plotting a sequence of generated numbers.
Here is a plot of 1000 values from the congruential generator.
Out[60]=  

If
"ConvertToRealsDirectly" is set to
False, reals are generated by taking eight bits at a time from elements of the sequence to construct a 52-bit machine-precision number. Congruential numbers generated in this fashion will still cycle, but cycling will depend on repetition in the bit pattern rather than in the initial congruence relation.
The
"Bits" option can be
Automatic, a nonzero integer, or a list of two nonzero integers specifying the range of bits in the modulus
m used for constructing numbers from bits.
Automatic uses
{2, -1} unless
m is a power of 2, in which case
{1, -1} is used.
ExtendedCA
The default
"ExtendedCA" method makes use of cellular automata to generate highquality pseudorandom numbers. This generator uses a particular fiveneighbor rule, so each new cell depends on five nonadjacent cells from the previous step.
Cellularautomatabased random number generators evolve a state vector of 0s and 1s according to a deterministic rule. For a given cellular automaton, an element (or cell) at a given position in the new state vector is determined by certain neighboring cells of that cell in the old state vector. A subset of cells in the state vectors are then output as random bits from which the pseudorandom numbers are generated.
The cellular automaton used by
"ExtendedCA" produces an extremely high level of randomness. It is so high that even using every single cell in output will give a stream of bits that passes many randomness tests, in spite of the obvious correlation between one cell and five previous ones.
Two options are included for modifying the size of the state vector and the cells skipped. The defaults are chosen for quality and speed and there is typically no need to modify these options.
  
"Size"  80  state vector size as a multiplier of 64 
"Skip"  4  number of cells to skip 
Options for Method "ExtendedCA".
The length of the state vectors used is by default set to 80×64=5120 cells. The multiple of 64 can be controlled by the
"Size" option.
In practice using every fourth cell in each state vector proves to be sufficient to pass very stringent randomness tests. This is the default used for the
"Skip" option. For even faster random number generation, a
"Skip" setting of 2 or even 1 could be used, but the quality of the random numbers will then decline.
Legacy
The
"Legacy" method uses the generator called by
Random in versions of
Mathematica prior to Version 6.0. A Marsaglia-Zaman subtract-with-borrow generator is used for reals. The integer generator is based on a Wolfram rule 30 cellular automaton generator. The rule 30 generator is used directly for small integers and used to generate certain bits for large integers.
To guarantee consistency with sequences generated prior to Version 6.0, seeds set for the
Automatic method are also applied to the
"Legacy" method.
The
"Legacy" method has no options.
MersenneTwister
"MersenneTwister" uses the Mersenne Twister generator due to Matsumoto and Nishimura [
3][
4]. The Mersenne Twister is a generalized feedback shift register generator with period
2^{19937}-1.
This gives 5 random numbers from a Mersenne Twister generator.
Out[65]=  

The
"MersenneTwister" method has no options.
MKL
The
"MKL" method uses the random number generators provided in Intel's MKL libraries. The MKL libraries are platform dependent. The
"MKL" method is only available on Intelbased systems.
Option for Method "MKL".
"MCG31"  31bit multiplicative congruential generator 
"MCG59"  59bit multiplicative congruential generator 
"MRG32K3A"  combined multiple recursive generators with two components of order 3 
"MersenneTwister"  Mersenne Twister shift register generator 
"R250"  generalized feedback shift register generator 
"WH"  WichmannHill combined multiplicative congruential generators 
"Niederreiter"  Niederreiter lowdiscrepancy sequence 
"Sobol"  Sobol lowdiscrepancy sequence 
"MKL" methods.
The first six methods are uniform generators.
"Niederreiter" and
"Sobol" generate Niederreiter and Sobol sequences. These sequences are nonuniform and have underlying structure which is sometimes useful in numerical methods. For instance, these sequences typically provide faster convergence in multidimensional Monte Carlo integration.
The following shows the structure of a Niederreiter sequence in dimension 2.
Out[66]=  

This shows the structure of a Sobol sequence in dimension 2.
Out[67]=  

Rule30CA
The
"Rule30CA" method uses a Wolfram rule 30 cellular automaton generator. Bits are obtained by evolving a state vector of 0s and 1s using the relation
where
f (i, t) is the value of cell
i at time
t.
  
"Size"  9  state vector size as a multiplier of 29 
Option for Method "Rule30CA".
The length of the state vectors used is by default set to
9×29=261 cells. The multiplier for 29 can be controlled by the
"Size" option.
This gives a 2×3×4 tensor of random integers using "Rule30CA".
Out[68]=  

The
"Rule30CA" method uses only the first bit from each state vector, making it slower than the
"ExtendedCA" method, which uses multiple bits from each state vector.
Defining Your Own Generator
Methods can be plugged into the random framework as long as they follow the correct template. A generator object is of the form
gsym[data] where
gsym is the symbol that identifies the generator and to which rules are attached.
data is effectively private to the top-level evaluations associated with the generator definitions.
Generator initialization is handled by a call to
Random`InitializeGenerator.
Random`InitializeGenerator[gsym,opts] 
 initializes the generator gsym with options opts 
Generator initialization function.
Random`InitializeGenerator is expected to return a generator object
gobj of the form
gsym[data].
Generators can support generation of random bit streams, random integers, and random reals. If the generator supports bit streams, reals and integers can be generated by conversion of the bit stream. At method setup time, properties are queried to determine what is supported and how.
GeneratesBitsQ  set to True if the method generates bits 
GeneratesIntegersQ  set to True if the method generates integers for a given range 
GeneratesRealsQ  set to True if the method generates reals for a given range and precision 
Generator properties.
If bit streams are supported, then
gobj["GenerateBits"[nbits]] is expected to return an integer comprised of
n random bits or a list of length
nbits with entries that are 0 or 1.
If random integers are supported, then
gobj["GenerateIntegers"[n, {a, b}]] is expected to return a list of
n random integers in the range
[a, b]. A warning message will be issued when results are out of range.
If random reals are supported, then
gobj["GenerateReals"[a, {a, b}, prec]] is expected to return a list of
n random reals with precision
prec in the range
[a, b]. A warning message will be issued when results are out of range or of the wrong precision.
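The bit-stream conversion path mentioned above can be sketched generically: uniform integers come from rejection sampling on the smallest covering power of two, and reals from a fixed number of bits. This Python fragment is a sketch of the idea, not the framework's internal code; the stand-in bit source at the end is a hypothetical generator for demonstration.

```python
import random

def bits_to_integer(next_bit, a, b):
    """Draw a uniform integer in [a, b] from a bit source: take enough
    bits to cover the range and reject values that fall outside it."""
    span = b - a + 1
    nbits = max(span - 1, 1).bit_length()
    while True:
        k = 0
        for _ in range(nbits):
            k = (k << 1) | next_bit()
        if k < span:
            return a + k

def bits_to_real(next_bit, prec_bits=53):
    """Build a real in [0, 1) from prec_bits random bits."""
    k = 0
    for _ in range(prec_bits):
        k = (k << 1) | next_bit()
    return k / (1 << prec_bits)

_rng = random.Random(6)

def next_bit():
    """Stand-in bit generator for illustration."""
    return _rng.getrandbits(1)
```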
For any of the generation functions, the return can be
{res, gobj}, where
res is the result of the correct type and
gobj is a new generator object (reflecting any state change).
Seeding is done by
gobj["SeedGenerator"[seed]] for an integer
seed.
gobj["SeedGenerator"[seed]] is expected to return a new generator object.
Example: Multiplicative Congruential Generator
In the following example a multiplicative congruential generator will be defined. A multiplicative congruential generator follows the recurrence relation x_{i}=b x_{i-1} mod m.
The generator, as defined below, will allow only for generation of real numbers.
This sets default options for the generator MultiplicativeCongruential. 
Initialization of the generator will extract the values of the multiplier and modulus. Initialization will fail if either of these values is not a positive integer.
The following initializes the generator. 
This establishes that MultiplicativeCongruential generates reals. 
The following seeds the generator using the recurrence relation. 
The real number generator will return the desired number of reals
n and a new
MultiplicativeCongruential generator. The seed for the new generator is updated based on the recurrence relation.
This defines the real number generator. 
This generates 10 reals using the MultiplicativeCongruential generator.
Out[74]=  

The generator is not defined for integers.
Out[75]=  

Example: BlumBlumShub Generator
The BlumBlumShub generator is a quadratic congruential method for generating pseudorandom bits for cryptographic purposes [
5]. The congruence is x_{i}=x_{i-1}^{2} mod p×q for specified primes p and q.
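A toy sketch of the Blum-Blum-Shub recurrence in Python follows. The primes 11 and 23 are illustrative stand-ins chosen to be congruent to 3 mod 4; real cryptographic use requires large primes and a careful treatment.

```python
def blum_blum_shub(seed, p=11, q=23, count=16):
    """Blum-Blum-Shub: iterate x_i = x_{i-1}^2 mod (p*q) and emit the
    parity bit of each state.  seed must be coprime to p*q."""
    m = p * q
    x = seed % m
    bits = []
    for _ in range(count):
        x = (x * x) % m
        bits.append(x & 1)   # output the low-order bit of the state
    return bits
```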
This sets default options for the generator BlumBlumShub. 
The following define an auxiliary function and error messages for the generator. 
The generator initialization will extract option values and issue error messages if necessary before calling the actual generator.
The following initializes the generator. 
This establishes that BlumBlumShub is a bit generator and determines the bit width. 
The following seeds the generator. 
This defines the bit generator. 
This generates 5 integers and 5 reals using the BlumBlumShub generator.
Out[86]=  

Statistical Distributions
The general idea behind generating random variates from a nonuniform statistical distribution is to generate a random uniform variate between 0 and 1 and then compute the inverse CDF of that random value in the desired distribution. In practice, however, following this recipe directly can be very computationally intensive if a large number of random variates is desired, particularly when the inverse CDF is complicated or cannot be expressed in a closed form.
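For a distribution with an elementary inverse CDF the recipe is direct. For example, the exponential distribution has CDF F(x)=1-e^{-λx}, so F^{-1}(u)=-log(1-u)/λ; the Python sketch below applies this to uniform draws (an illustration of the general idea, not the internal method).

```python
import math
import random

def exponential_inverse_cdf(lam, n, rng=None):
    """Inverse-CDF sampling: if U ~ Uniform(0,1) then -log(1-U)/lam is
    Exponential(lam), since the CDF is F(x) = 1 - exp(-lam*x)."""
    rng = rng or random.Random(7)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

sample = exponential_inverse_cdf(2.0, 10000)   # mean should be near 1/2
```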
In such cases, table lookups, direct construction based on distributional relationships, or acceptance-rejection methods are often more efficient alternatives to direct inversion of the CDF. On some level, these methodologies will all still rely on uniformly distributed
RandomReal values, uniformly distributed
RandomInteger values, observations from a weighted
RandomChoice, or a combination of these values. As a result, methods set via
SeedRandom will have an effect on random observations from statistical distributions.
The methods used by
RandomReal and
RandomInteger for many of the distributions in
Mathematica follow methods suggested or described in Gentle [
6], and are not necessarily the same methods used by
Random and
RandomArray in the standard add-ons included with versions of
Mathematica prior to Version 6.0.
Random observations from all built-in statistical distributions can be generated using either
RandomReal or
RandomInteger.
RandomReal[dist]  give a random number from the continuous distribution dist 
RandomReal[dist,n]  give a list of n pseudorandom reals from dist 
RandomReal[dist,{n_{1},n_{2},...}]  give an n_{1}×n_{2}×... array of pseudorandom reals from dist 
Generation of random values from continuous distributions.
RandomInteger[dist]  give a random number from the discrete distribution dist 
RandomInteger[dist,n]  give a list of n pseudorandom integers from dist 
RandomInteger[dist,{n_{1},n_{2},...}]  give an n_{1}×n_{2}×... array of pseudorandom integers from dist 
Generation of random values from discrete distributions.
Observations from continuous univariate and multivariate distributions are obtained via
RandomReal, while discrete univariate and multivariate distributions are obtained via
RandomInteger. If
RandomInteger is used on a continuous distribution, or
RandomReal on a discrete distribution, an error is generated and the input is returned unevaluated.
This fails because the χ^{2} distribution is continuous.
Out[87]=  

WorkingPrecision is an option to
RandomReal for continuous distributions just as it is for uniform numbers over ranges.
Here is a precision-30 beta-distributed variate.
Out[88]=  

Multivariate distributions and distributions derived from the multivariate normal distribution are included in the Multivariate Statistics Package. Generators for those distributions can be accessed by loading the package.
Here is a random vector from a bivariate normal distribution.
Out[90]=  

This is a random vector from a multinomial distribution.
Out[91]=  

Continuous Distributions
For distributions whose inverse CDFs contain only elementary functions, direct computation of the inverse CDF for a random uniform is generally used. This can be seen as a direct construction from a uniformly distributed random variable. Continuous distributions falling in this category include
CauchyDistribution,
ExponentialDistribution,
ExtremeValueDistribution,
GumbelDistribution,
LaplaceDistribution,
LogisticDistribution,
ParetoDistribution,
RayleighDistribution,
TriangularDistribution, and
WeibullDistribution.
Direct construction of a single random variate from multiple uniform variates, or from variates other than uniform ones, is also employed. Normal variates are generated in pairs from pairs of random uniforms using the Box-Müller method.
HalfNormalDistribution and
LogNormalDistribution variates are obtained by direct transformation of normal variates.
MultinormalDistribution and
QuadraticFormDistribution from the Multivariate Statistics Package also use direct construction from normal variates.
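The Box-Müller construction itself is short enough to sketch. The following Python version (an illustration, not the built-in implementation) turns pairs of uniforms into pairs of independent standard normals:

```python
import math
import random

def box_muller_pair(rng):
    """Return two independent standard normal variates from two uniforms."""
    u1 = 1.0 - rng.random()          # in (0, 1], avoids log(0)
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

rng = random.Random(7)
normals = [z for _ in range(50000) for z in box_muller_pair(rng)]
bm_mean = sum(normals) / len(normals)
bm_var = sum(z * z for z in normals) / len(normals)
```

Half-normal and lognormal variates then follow by taking absolute values and exponentials of such normals, mirroring the direct transformations described above.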
InverseGaussianDistribution uses an acceptance-complement method involving normal and uniform variates. The method is due to Michael, Schucany, and Haas and is described in Gentle [
6].
MaxwellDistribution variates are constructed from
ChiDistribution variates. The chi variates themselves are obtained from
ChiSquareDistribution variates, which are special cases of
GammaDistribution variates.
In most cases
FRatioDistribution constructs each random value from a single random beta variate. For small degrees of freedom,
FRatioDistribution variates are instead generated from pairs of gamma variates to avoid possible divisions by 0 that may arise in the beta construction.
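The gamma-pair construction for F-ratio variates can be sketched as follows (a Python illustration; a χ^{2} variate with d degrees of freedom is a gamma variate with shape d/2 and scale 2):

```python
import random

def f_ratio_from_gammas(d1, d2, n, seed=3):
    """F(d1, d2) variates built from pairs of gamma variates.

    A chi-square variate with d degrees of freedom is Gamma(shape=d/2,
    scale=2), and F = (X1/d1)/(X2/d2) with X1 ~ chi2(d1), X2 ~ chi2(d2).
    """
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        x1 = rng.gammavariate(d1 / 2.0, 2.0)  # chi-square with d1 df
        x2 = rng.gammavariate(d2 / 2.0, 2.0)  # chi-square with d2 df
        out.append((x1 / d1) / (x2 / d2))
    return out

f_sample = f_ratio_from_gammas(4, 10, 50000)
f_mean = sum(f_sample) / len(f_sample)  # E[F] = d2/(d2 - 2) = 1.25 here
```

Because x2 can be arbitrarily close to 0 for small degrees of freedom, a guard against division by very small values would be needed in production code; the beta-based construction avoids the issue differently.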
NoncentralChiSquareDistribution[ν, λ] variate generation uses additive properties of χ^{2} distributions to avoid expensive inverse CDF computations for nonintegral ν. The additive properties are given in, for instance, Johnson, Kotz, and Balakrishnan [
7]. For ν=1 a noncentral χ^{2} variate can be generated as the square of a normal variate with mean √λ and variance 1. For ν≠1 noncentral χ^{2} variates are obtained as the sum of a central and a noncentral χ^{2} random variable. For ν>1,
X+Y is distributed NoncentralChiSquareDistribution[ν, λ] if
X follows ChiSquareDistribution[ν-1] and
Y follows NoncentralChiSquareDistribution[1, λ]. This relationship cannot be used for ν<1. In that case the construction is
X+Y with
X following ChiSquareDistribution[ν] and
Y following NoncentralChiSquareDistribution[0, λ], where NoncentralChiSquareDistribution[0, λ] is the limiting noncentral
χ^{2} distribution as ν goes to 0. The limiting distribution is a mixture of Poisson and central
χ^{2} variables, which has a nonzero probability mass at 0 and a continuous density for positive values.
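These additive decompositions can be sketched as follows (a Python illustration of the standard relationships, not the built-in code; the Poisson generator used here is a simple product-of-uniforms method):

```python
import math
import random

def noncentral_chi_square(nu, lam, rng):
    """One noncentral chi-square variate via the additive decompositions.

    nu >= 1: central chi2(nu - 1) plus (Z + sqrt(lam))^2, Z standard normal
             (for nu = 1 the central term vanishes).
    nu < 1:  central chi2(nu) plus the limiting chi2(0, lam) term, which is
             central chi2 with 2*N degrees of freedom, N ~ Poisson(lam/2).
    """
    def central_chi2(d):
        return rng.gammavariate(d / 2.0, 2.0) if d > 0 else 0.0

    def poisson(mu):
        # Simple product-of-uniforms Poisson generator.
        k, t, limit = 0, rng.random(), math.exp(-mu)
        while t > limit:
            t *= rng.random()
            k += 1
        return k

    if nu >= 1:
        z = rng.gauss(0.0, 1.0)
        return central_chi2(nu - 1) + (z + math.sqrt(lam)) ** 2
    return central_chi2(nu) + central_chi2(2 * poisson(lam / 2.0))

rng = random.Random(11)
nc_sample = [noncentral_chi_square(0.5, 3.0, rng) for _ in range(50000)]
nc_mean = sum(nc_sample) / len(nc_sample)  # mean is nu + lam = 3.5
```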
NoncentralFRatioDistribution variates are obtained from one central and one noncentral
χ^{2} variate.
For the
WishartDistribution from the Multivariate Statistics Package, matrices are generated via Smith and Hocking's method [
8]. This method constructs Wishart matrices from matrices with chi-distributed diagonal entries and normally distributed off-diagonal entries.
NoncentralStudentTDistribution, and the
HotellingTSquareDistribution and
MultivariateTDistribution from the Multivariate Statistics Package each use direct construction from univariate random variates.
GammaDistribution,
BetaDistribution, and
StudentTDistribution use acceptance-rejection methods to some extent.
For
GammaDistribution[α, β] exponential variates are generated when
α=1. Otherwise, methods due to Cheng and Feast [
9] and Ahrens and Dieter [
10] are used.
Beta variates are constructed by switching between multiple methods depending on the values of the beta parameters
α and
β. If both parameters are 1, uniform random variates will be generated. If one of the beta parameters is 1, then a closed-form inverse CDF evaluation is used. Otherwise,
RandomReal switches between acceptance-rejection methods due to Jöhnk [
11], Cheng [
12], and Atkinson [
13]. An example of the advantage of using an acceptance-rejection method over construction from two gammas can be seen in the following. The direct acceptance-rejection method is nearly twice as fast as the gamma-pair construction.
Comparison of direct construction and acceptance-rejection methods for beta variates.
Out[92]=  
Out[93]=  

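Jöhnk's acceptance-rejection method, one of the beta methods mentioned above, is simple enough to sketch (a Python illustration):

```python
import random

def johnk_beta(a, b, rng):
    """One Beta(a, b) variate via Johnk's acceptance-rejection method.

    Draw X = U^(1/a), Y = V^(1/b) from uniforms U, V and accept
    X/(X + Y) once 0 < X + Y <= 1. Efficient only for small a, b.
    """
    while True:
        x = rng.random() ** (1.0 / a)
        y = rng.random() ** (1.0 / b)
        if 0.0 < x + y <= 1.0:
            return x / (x + y)

rng = random.Random(5)
beta_sample = [johnk_beta(0.7, 0.9, rng) for _ in range(20000)]
beta_mean = sum(beta_sample) / len(beta_sample)  # a/(a + b) = 0.4375
```

The acceptance probability deteriorates rapidly as the parameters grow, which is why switching between this and other methods (Cheng's, Atkinson's) based on the parameter values makes sense.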
For
StudentTDistribution the method used by
RandomReal is a polar rejection method due to Bailey [
14]. This method is more efficient than direct construction from normal and
χ^{2} variates, as can be seen in the following. The direct construction takes roughly 1.5 times as long as the polar method for a million Student
t variates.
Comparison of direct construction and Bailey's polar rejection method for Student t.
Out[94]=  
Out[95]=  

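Bailey's polar method can be sketched as follows (a Python illustration of the published algorithm; as ν→∞ the formula reduces to the Marsaglia polar method for normals):

```python
import math
import random

def bailey_student_t(nu, rng):
    """One Student t variate with nu degrees of freedom, Bailey's polar method."""
    while True:
        u = 2.0 * rng.random() - 1.0
        v = 2.0 * rng.random() - 1.0
        w = u * u + v * v
        if 0.0 < w <= 1.0:  # accept points inside the unit circle
            return u * math.sqrt(nu * (w ** (-2.0 / nu) - 1.0) / w)

rng = random.Random(9)
t_sample = [bailey_student_t(5.0, rng) for _ in range(50000)]
t_mean = sum(t_sample) / len(t_sample)               # symmetric about 0
t_var = sum(t * t for t in t_sample) / len(t_sample)  # nu/(nu - 2) = 5/3
```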
Discrete Distributions
When used, table lookups for random integer generation are implemented via
RandomChoice using the distribution's probability mass function for the weights. Most discrete distributions switch to other methods whenever construction of the list of weights is expected to be expensive given the desired sample size. For example, as
p approaches 1,
LogSeriesDistribution[p] switches to a direct construction from two variates
U and
V uniformly distributed on the interval
[0, 1] [
15]. Depending on parameters and sample size,
NegativeBinomialDistribution[n, p] may switch to construction as a Poisson-gamma mixture, that is, a Poisson variable whose mean follows a gamma distribution [
6].
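A table-lookup generator of the kind described above can be sketched (a Python illustration) by accumulating the normalized weights and binary-searching them with a uniform variate:

```python
import bisect
import itertools
import random

def make_table_sampler(weights, values, seed=2):
    """Build a table-lookup sampler: cumulative weights plus binary search."""
    total = float(sum(weights))
    cumulative = list(itertools.accumulate(w / total for w in weights))
    rng = random.Random(seed)

    def sample():
        i = bisect.bisect_left(cumulative, rng.random())
        return values[min(i, len(values) - 1)]  # clamp against rounding

    return sample

# Illustrative weights, not a particular built-in distribution.
sampler = make_table_sampler([1.0, 3.0, 3.0, 2.0, 1.0], [0, 1, 2, 3, 4])
draws = [sampler() for _ in range(50000)]
freq1 = draws.count(1) / len(draws)  # weight 3 out of 10, so about 0.3
```

The setup cost is proportional to the table size, which is exactly why distributions with large or unbounded supports switch away from table lookup when few observations are requested.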
BinomialDistribution,
HypergeometricDistribution, and
PoissonDistribution rely on direct sampling from the density function if the computational overhead of computing the PDF values is small relative to the number of desired random values. Otherwise they switch to acceptance-rejection methods. The acceptance-rejection methods also allow for generation of variates when overflows or underflows would occur in directly computing the PDF values, thus extending the range of parameter values for which random numbers can be generated.
The binomial and hypergeometric distributions switch to acceptance-rejection methods due to Kachitvichyanukul and Schmeiser with small modifications. The binomial method, based on the acceptance-rejection portion of their BTPE (Binomial, Triangle, Parallelogram, Exponential) algorithm [
16], effectively uses a piecewise majorizing function with three regions and a triangular minorizing function for a quick acceptance test. The majorizing and minorizing functions create a two-parallelogram envelope around the center of the rescaled binomial density, and the tails of the majorizing function form exponential envelopes on the tails of the scaled binomial distribution. One case where it is clearly better to use BTPE rather than to construct a lookup table is when few observations are desired and the lookup table would be large.
The hypergeometric method, based on the acceptance-rejection portion of Kachitvichyanukul and Schmeiser's H2PE algorithm [
17], uses a majorizing function with three regions around a scaled hypergeometric density. The middle portion of the density is enveloped by a rectangular region and the tails of the distribution are bounded by exponentials.
The acceptance-rejection method used by
PoissonDistribution is due to Ahrens and Dieter [
18]. The acceptance and rejection is carried out using discrete normal variates, taking advantage of the tendency of
PoissonDistribution[μ] toward a normal distribution as
μ increases.
Random values from the
ZipfDistribution are generated via an acceptance-rejection method described by Devroye [
15]. The method uses pairs of uniform variates and a test involving only a
Floor and noninteger powers, aside from basic arithmetic, to efficiently obtain Zipf-distributed values.
Defining Distributions
Definitions for distributions are supported through rules for
Random`DistributionVector.
Random`DistributionVector is expected to return a vector of the given length with numbers of the given precision.
Random`DistributionVector[dist,n,prec] 
 defines rules for generating n observations from dist with precision prec 
Function for defining random generation from distributions.
Rules for generating random values from distributions are generally defined via a
TagSet on the head of the distribution. The distribution itself may contain parameters. As a simple example, the following defines rules for
NegativeOfUniform[a, b], which represents a uniform distribution on the interval
[-b, -a].
Random numbers from
NegativeOfUniform can now be generated via
RandomReal just like any built-in continuous distribution.
The following gives a machine-precision and a precision-20 number from NegativeOfUniform.
Out[97]=  

Matrices and higherdimensional tensors can also be generated directly via
RandomReal.
RandomReal uses the definition given to
Random`DistributionVector to generate the total number of random values desired, and partitions that total number into the specified dimensions.
Here is a 3×4 array of NegativeOfUniform numbers.
Out[98]=  

Discrete distributions can be defined in a similar way. The main difference is that the precision argument to
Random`DistributionVector will now be
Infinity. The discrete version of
NegativeOfUniform provides a simple example.
Random values from
NegativeOfDiscreteUniform can now be obtained from
RandomInteger.
Here are 10 NegativeOfDiscreteUniform numbers.
Out[100]=  

While the previous examples show the basic framework for defining distributions, the distributions themselves are not particularly interesting. In fact, it would have been easier in these two cases to just generate values from
RandomReal and
RandomInteger and multiply the end result by -1 instead of attaching definitions to a new distribution symbol. The following examples will demonstrate slightly more complicated distributions, in which case attaching definitions to a distribution will be more useful.
Example: Normal Distribution by Inversion
The textbook definition for generating random values from a generic univariate statistical distribution involves two steps:
 generate a uniform random number q on the interval [0, 1]
 compute the inverse cumulative distribution function of q for the distribution of interest
To demonstrate the process, use the normal distribution. To generate random normal variates using this method, start with the
Quantile for the normal distribution at a point
q and replace
q with a random uniform between 0 and 1.
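As an aside, the same two-step recipe can be sketched outside Mathematica; Python's standard library exposes a normal quantile function as NormalDist.inv_cdf (an illustration of the recipe, not of the Mathematica definitions below):

```python
import random
from statistics import NormalDist

def normal_by_inversion(mu, sigma, n, seed=4):
    """Normal variates via step 1 (uniform draw) and step 2 (inverse CDF)."""
    rng = random.Random(seed)
    dist = NormalDist(mu, sigma)
    out = []
    for _ in range(n):
        u = rng.random()
        while u == 0.0:  # inv_cdf requires 0 < u < 1
            u = rng.random()
        out.append(dist.inv_cdf(u))
    return out

inv_sample = normal_by_inversion(3.0, 2.0, 20000)
inv_mean = sum(inv_sample) / len(inv_sample)  # should be near mu = 3.0
```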
Here is the Quantile function for a normal with mean μ and standard deviation σ.
Out[101]=  

A new distribution object can now be used to define a normal random number generator that uses inversion of the cumulative distribution function.
This defines generation of normals by inversion. 
Here are 10 random normals generated by inversion.
Out[103]=  

Here is a sample of 10^{4} random normals along with the sample mean and standard deviation.
Out[104]=  
Out[105]=  

The normal distribution is one example where methods other than direct inversion are generally preferred. While inversion of the CDF is a perfectly valid method for generating pseudorandom normal numbers, it is not particularly efficient. Numeric evaluation of
InverseErf is computationally much more expensive than the sinusoid and logarithm evaluations required by the Box-Müller method used by
NormalDistribution.
The built-in method takes almost no time for the same number of values.
Out[106]=  
Out[107]=  

Example: Uniform Distribution on a Disk
Random`DistributionVector can also be used to define generators for multidimensional distributions. For instance, suppose a random point from a uniform distribution on the unit disk, the set of real points
{x, y} with x^{2}+y^{2}≤1, is desired. Such a random point can be constructed as follows:
 generate a random angle θ uniformly distributed on [0, 2π)
 generate a random value u uniformly distributed on [0, 1]
 return {√u cos(θ), √u sin(θ)}
The returned ordered pair can be multiplied by
r to generate points uniformly distributed on a disk of radius
r.
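The three steps can be sketched in Python (an illustration; the square root of u is what makes the area density uniform):

```python
import math
import random

def random_disk_point(radius, rng):
    """Uniform random point on a disk of the given radius."""
    theta = 2.0 * math.pi * rng.random()  # random angle on [0, 2*pi)
    u = rng.random()                      # uniform on [0, 1]
    r = radius * math.sqrt(u)             # sqrt makes the area density uniform
    return (r * math.cos(theta), r * math.sin(theta))

rng = random.Random(6)
points = [random_disk_point(2.0, rng) for _ in range(20000)]
inside = all(x * x + y * y <= 4.0 + 1e-9 for x, y in points)
mean_r2 = sum(x * x + y * y for x, y in points) / len(points)  # R^2/2 = 2
```

Using r*u instead of r*√u would cluster points near the center, since the area of a ring grows linearly with its radius.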
The following defines a generator for a uniform disk of radius r. 
Here is one random ordered pair from a disk of radius 2.
Out[110]=  

The following visualizes the distribution of 10^{4} generated points on this disk.
Out[111]=  

Example: Dirichlet Distribution
The
n-dimensional Dirichlet distribution is parameterized by a vector of positive values
{α_{1}, ..., α_{n}} and has support on the set of vectors
{x_{1}, ..., x_{n}} such that
∑x_{i}=1 and
x_{i}∈[0, 1] for
i from 1 to
n. Thus, the Dirichlet distribution is defined on an
(n-1)-dimensional subspace of the
n-dimensional unit hypercube
[0, 1]^{n}. The Dirichlet distribution is the multivariate extension of the beta distribution. If
y follows
BetaDistribution[α, β], then
{y, 1-y} follows a Dirichlet distribution with parameter vector
{α, β}.
An
n-dimensional Dirichlet variate can be generated from
n gamma variates. With parameter vector
{α_{1}, ..., α_{n}}, the process is as follows:
 generate independent variates γ_{i} following GammaDistribution[α_{i}, 1] for i from 1 to n
 return {γ_{1}, ..., γ_{n}}/∑γ_{i}
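The gamma construction can be sketched in Python (an illustration; random.gammavariate(a, 1.0) supplies the unit-scale gamma variates):

```python
import random

def dirichlet_variate(alphas, rng):
    """One Dirichlet vector from one gamma variate per parameter."""
    gammas = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(gammas)
    return [g / total for g in gammas]

rng = random.Random(8)
dir_sample = [dirichlet_variate([2.0, 3.0, 5.0], rng) for _ in range(20000)]
mean_first = sum(v[0] for v in dir_sample) / len(dir_sample)  # 2/10 = 0.2
sums_to_one = all(abs(sum(v) - 1.0) < 1e-9 for v in dir_sample)
```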
This defines a Dirichlet generator attached to the symbol DirichletDistribution. 
Here is a threedimensional Dirichlet vector with precision 25.
Out[113]=  

Example: Gibbs Sampler
Gibbs samplers can also be defined as distributions. As an example consider a Gibbs sampler that mixes beta and binomial distributions. A specific case of this sampler was explored in a
previous example. Here, the distribution will be defined with two parameters
m and α.
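Such a sampler can be sketched in Python (an assumption-laden illustration: the conditionals used here are x|p ~ Binomial(m, p) and p|x ~ Beta(x+α, m-x+α), modeled on the classic beta-binomial Gibbs sampler; the built-in example may differ in detail):

```python
import random

def binomial_beta_gibbs(m, alpha, n_draws, seed=10):
    """Gibbs sampler alternating the two assumed conditional draws:
    x | p ~ Binomial(m, p) and p | x ~ Beta(x + alpha, m - x + alpha)."""
    rng = random.Random(seed)

    def binomial(n, p):
        # Sum of n Bernoulli trials; simple but adequate for a sketch.
        return sum(rng.random() < p for _ in range(n))

    p = rng.random()  # arbitrary starting value for the chain
    out = []
    for _ in range(n_draws):
        x = binomial(m, p)
        p = rng.betavariate(x + alpha, m - x + alpha)
        out.append((x, p))
    return out

gibbs_draws = binomial_beta_gibbs(16, 2.0, 5000)
mean_x = sum(x for x, _ in gibbs_draws) / len(gibbs_draws)  # near m/2 = 8
```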
This defines a Gibbs sampler BinomialBetaSampler. 
For the specific Gibbs sampler constructed earlier,
m was 16 and α was 2.
Here are 5 vectors from the sampler with m=16 and α=2.
Out[115]=  

References
[1] Geman S. and D. Geman. "Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images." IEEE Transactions on Pattern Analysis and Machine Intelligence 6, no. 6 (1984): 721–741.
[2] Casella G. and E. I. George. "Explaining the Gibbs Sampler." The American Statistician 46, no. 3 (1992): 167–174.
[3] Matsumoto M. and T. Nishimura. "Mersenne Twister: A 623-Dimensionally Equidistributed Uniform Pseudorandom Number Generator." ACM Transactions on Modeling and Computer Simulation 8, no. 1 (1998): 3–30.
[4] Nishimura T. "Tables of 64-Bit Mersenne Twisters." ACM Transactions on Modeling and Computer Simulation 10, no. 4 (2000): 348–357.
[5] Junod P. "Cryptographic Secure Pseudo-Random Bits Generation: The Blum-Blum-Shub Generator." August 1999. http://crypto.junod.info/bbs.pdf
[6] Gentle J. E. Random Number Generation and Monte Carlo Methods, 2nd ed. Springer-Verlag (2003).
[7] Johnson N. L., S. Kotz, and N. Balakrishnan. Continuous Univariate Distributions, Volume 2, 2nd ed. John Wiley & Sons (1995).
[8] Smith W. B. and R. R. Hocking. "Algorithm AS 53: Wishart Variate Generator." Applied Statistics 21, no. 3 (1972): 341–345.
[9] Cheng R. C. H. and G. M. Feast. "Some Simple Gamma Variate Generators." Applied Statistics 28, no. 3 (1979): 290–295.
[10] Johnson M. E. Multivariate Statistical Simulation. John Wiley & Sons (1987).
[11] Jöhnk M. D. "Erzeugung von betaverteilten und gammaverteilten Zufallszahlen." Metrika 8 (1964): 5–15.
[12] Cheng R. C. H. "Generating Beta Variates with Nonintegral Shape Parameters." Communications of the ACM 21, no. 4 (1978): 317–322.
[13] Atkinson A. C. "A Family of Switching Algorithms for the Computer Generation of Beta Random Variables." Biometrika 66, no. 1 (1979): 141–145.
[14] Bailey R. W. "Polar Generation of Random Variates with the t-Distribution." Mathematics of Computation 62, no. 206 (1994): 779–781.
[15] Devroye L. Non-Uniform Random Variate Generation. Springer-Verlag (1986).
[16] Kachitvichyanukul V. and B. W. Schmeiser. "Binomial Random Variate Generation." Communications of the ACM 31, no. 2 (1988): 216–223.
[17] Kachitvichyanukul V. and B. W. Schmeiser. "Computer Generation of Hypergeometric Random Variates." Journal of Statistical Computation and Simulation 22, no. 2 (1985): 127–145.
[18] Ahrens J. H. and U. Dieter. "Computer Generation of Poisson Deviates from Modified Normal Distributions." ACM Transactions on Mathematical Software 8, no. 2 (1982): 163–179.