# How to | Perform a Monte Carlo Simulation

Monte Carlo methods use randomly generated numbers or events to simulate random processes and estimate complicated results. For example, they are used to model financial systems, to simulate telecommunication networks, and to compute results for high-dimensional integrals in physics. Monte Carlo simulations can be constructed directly by using *Mathematica*'s built-in random number generation functions.

A sequence of random numbers can be a very simple Monte Carlo simulation. For instance, a list of random numbers generated independently from a normal distribution with mean 0 can simulate a white noise process.

Use RandomVariate with NormalDistribution to generate a sequence of 20 numbers following a normal distribution with mean 0 and standard deviation 1:

In[1]:= data = RandomVariate[NormalDistribution[0, 1], 20]

Use ListPlot to visualize the data:

In[2]:= ListPlot[data]

You can now construct a random walk from the data. To start the walk at zero, prepend a 0 to the list:

In[3]:= data = Prepend[data, 0]

Use Accumulate to sequentially sum the data, which is then visualized with ListLinePlot:

In[4]:= walk = Accumulate[data]

In[5]:= ListLinePlot[walk]
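The same construction — normal steps, a prepended zero, and a running cumulative sum — can be sketched in Python using only the standard library (an illustrative translation, not part of the Mathematica workflow; the variable names are mine):

```python
import random

random.seed(0)  # reproducible example

# 20 independent draws from a normal distribution with mean 0, sd 1
data = [random.gauss(0, 1) for _ in range(20)]

# Start the walk at zero, then sequentially sum the steps
# (the analogue of Prepend followed by Accumulate)
walk = [0.0]
for step in data:
    walk.append(walk[-1] + step)

print(walk)
```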

The following definition combines the preceding commands into a single function that generates a random walk, which you can then use to simulate many random walks and analyze their properties.

Define a function that generates a random walk of length n:

In[36]:= randomWalk[n_] := Accumulate[Prepend[RandomVariate[NormalDistribution[0, 1], n], 0]]

Here, Table is used to create five random walks, each with length 100. They are then visualized with ListLinePlot:

In[15]:= ListLinePlot[Table[randomWalk[100], {5}]]

Now generate 1000 walks, each with length 100. The output is suppressed with a semicolon (;) since seeing it is not necessary:

In[16]:= walks = Table[randomWalk[100], {1000}];

You can now calculate descriptive statistics on any aspect of the random walks. Here, the final position of each walk is analyzed.

Use [[...]] (the short form of the Part function) to get the final data point of each random walk:

In[17]:= finals = walks[[All, -1]];

Calculate various statistics on the final data points from the 1000 random walks:

In[18]:= {Min[finals], Max[finals], Mean[finals], Median[finals], StandardDeviation[finals]}
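As a cross-check, the distribution of final positions can be examined with a short Python sketch (illustrative only; `random_walk` is my own helper, not the document's function). A walk of length 100 ends at a sum of 100 independent N(0, 1) steps, so its final position is N(0, 10) — mean near 0 and standard deviation near 10:

```python
import random
import statistics

random.seed(1)

def random_walk(n):
    """Random walk with n steps drawn from N(0, 1), starting at 0."""
    walk = [0.0]
    for _ in range(n):
        walk.append(walk[-1] + random.gauss(0, 1))
    return walk

# Final position of each of 1000 walks of length 100
finals = [random_walk(100)[-1] for _ in range(1000)]

# Mean should be near 0, standard deviation near sqrt(100) = 10
print(statistics.mean(finals), statistics.stdev(finals))
```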

Monte Carlo methods can also be used to approximate values such as constants or numeric integrals. For instance, the following approximates the value of π by generating random points in a square enclosing a circle of radius 1, and then using the relationship between the areas of the square and the circle.

Generate 10,000 points in a square bounded by {-1,-1} and {1,1}, and visualize them together with the circle of radius 1:

In[19]:= points = RandomReal[{-1, 1}, {10000, 2}];

In[20]:= Graphics[{PointSize[Tiny], Point[points], Red, Circle[]}]

To approximate π, multiply the area of the square by the fraction of points falling within a circle of radius 1 that is centered at the origin.

Multiply the area of the square (4) by the fraction of points in the circle:

In[21]:= N[4 Count[points, p_ /; Norm[p] <= 1]/Length[points]]
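The same area-ratio computation can be sketched in Python (an illustrative translation; the variable names are mine):

```python
import random

random.seed(2)

# 10,000 random points in the square bounded by (-1,-1) and (1,1)
points = [(random.uniform(-1, 1), random.uniform(-1, 1))
          for _ in range(10000)]

# Fraction of points inside the unit circle, times the square's area (4)
inside = sum(1 for x, y in points if x * x + y * y <= 1)
pi_estimate = 4 * inside / len(points)
print(pi_estimate)
```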

Using more points or averaging several approximations will typically give a better approximation.

Define a function that approximates π from a sample of size n:

In[22]:= approxPi[n_] := N[4 Count[RandomReal[{-1, 1}, {n, 2}], p_ /; Norm[p] <= 1]/n]

Approximate π using a million points:

In[23]:= approxPi[10^6]

Approximate π by averaging 50 approximations, each from a sample of size 10,000:

In[24]:= Mean[Table[approxPi[10000], {50}]]
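Both variants — one large sample and an average of many smaller ones — can be compared with a Python sketch (illustrative; `approx_pi` is my own helper). Because the standard error of the estimate shrinks like 1/√n, the two approaches here have comparable accuracy:

```python
import random

random.seed(4)

def approx_pi(n):
    """Estimate pi from a sample of n random points in the square."""
    inside = sum(
        1 for _ in range(n)
        if random.uniform(-1, 1) ** 2 + random.uniform(-1, 1) ** 2 <= 1
    )
    return 4 * inside / n

# A single sample of a million points, and an average of 50
# estimates from samples of 10,000 points each
single = approx_pi(10**6)
averaged = sum(approx_pi(10000) for _ in range(50)) / 50
print(single, averaged)
```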

Monte Carlo simulations are most useful when the system of interest is too complicated to analyze directly. In Bayesian analysis, you often want to sample from a mixture of distributions in which the parameter of each distribution depends on draws from the other, together defining a bivariate distribution. Because the individual distributions are interrelated, points must be generated iteratively, with each draw fed into the other distribution, in order to sample from the bivariate distribution.

This iterative sampling scheme is called a Gibbs sampler. After an initial period of iteration, referred to as the burn-in period, the points generated closely follow the target mixture.

As an example, you might have a normal distribution whose mean is known but whose standard deviation is not. You do know, however, that the standard deviation follows a beta distribution with one known shape parameter, and that its other shape parameter is related to the normal distribution with the known mean.

Define a function that generates random numbers according to this normal distribution:

In[25]:= |

Define a function that generates random numbers for the beta distribution. The second shape parameter of the beta distribution will be the absolute value of a normal variate:

In[26]:= |

You can simulate a point from the bivariate distribution by choosing a starting value for the normal standard deviation, and then sequentially generating random numbers from the normal and beta distributions. A normal variate is generated using the starting value for the normal standard deviation. A beta variate is then generated by using that normal variate as the unknown shape parameter for the beta distribution. This beta variate is then used as the unknown standard deviation for a new normal distribution, and so on. This process is carried out for some number of iterations, and the final normal and beta variates are the coordinates of the simulated point.

Generate 500 points starting with a standard deviation value of .5 and using 1000 iterations as the burn-in period:

In[27]:= |

Visualize the resulting points using ListPlot:

In[28]:= |

Out[28]= |

Visualize the density of the points using Histogram3D:

In[29]:= |

Out[29]= |
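The Gibbs iteration described above can be sketched in Python (an illustrative sketch only: the normal's known mean is taken as 0 and the beta's known shape parameter as 2 — neither constant comes from the original document, whose exact definitions are not shown):

```python
import random

random.seed(3)

def gibbs_point(start_sd, burn_in):
    """One point from the coupled normal/beta pair via Gibbs iteration.

    Assumed for illustration: normal mean 0, known beta shape 2.
    """
    sd = start_sd
    x = random.gauss(0, sd)
    for _ in range(burn_in):
        x = random.gauss(0, sd)                      # normal draw with current sd
        sd = random.betavariate(2, abs(x) + 1e-12)   # |x| as the unknown shape
    return (x, sd)

# 500 points, starting sd of 0.5, burn-in of 1000 iterations per point
points = [gibbs_point(0.5, 1000) for _ in range(500)]
print(len(points))
```

Each returned pair is the final normal and beta variate of one chain, matching the coordinates described in the text.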

Other examples of Monte Carlo methods for estimation include optimization and high-dimensional integration. NIntegrate supports the "MonteCarlo" and "QuasiMonteCarlo" integration methods, and NMinimize offers randomized optimization methods such as "RandomSearch" and "SimulatedAnnealing".