# FirstPassageTimeDistribution

FirstPassageTimeDistribution[mproc,f]

represents the distribution of times for the Markov process mproc to pass from the initial state to final states f for the first time.

# Details

- The first passage time is also known as the first hitting time.
- The Markov process mproc can be a DiscreteMarkovProcess or ContinuousMarkovProcess.
- The probability for time t in a FirstPassageTimeDistribution is equivalent to Probability[x[t]∈f ∧ ∀_{τ, 0<τ<t} x[τ]∉f, x ≈ mproc], where x[0]=i is the initial state.
- If mproc is already in a target state, FirstPassageTimeDistribution gives the recurrence time distribution.
- If the chain is absorbing and the target states are non-absorbing, FirstPassageTimeDistribution gives a distribution conditional on reaching the target states.
- FirstPassageTimeDistribution represents a discrete phase-type distribution for discrete-time Markov processes and a continuous phase-type distribution for continuous-time Markov processes.
- FirstPassageTimeDistribution can be used with such functions as Mean, Quantile, PDF, and RandomVariate.
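As a minimal sketch of the basic usage, with a hypothetical two-state chain (the transition probabilities here are illustrative, not from the original examples):

```wolfram
(* hypothetical two-state chain starting in state 1 *)
proc = DiscreteMarkovProcess[1, {{1/2, 1/2}, {1/3, 2/3}}];
dist = FirstPassageTimeDistribution[proc, 2];
Mean[dist]  (* expected time to first reach state 2: 2 *)
```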

# Examples

## Basic Examples (1)

## Scope (3)

First passage time distribution for a continuous Markov process:
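A sketch under assumed rates (the rate matrix below is illustrative): a three-state birth chain with rates 2 and 1, whose first passage time to state 3 is the sum of two independent exponential holding times.

```wolfram
(* hypothetical birth chain; state 3 is absorbing *)
proc = ContinuousMarkovProcess[1, {{-2, 2, 0}, {0, -1, 1}, {0, 0, 0}}];
dist = FirstPassageTimeDistribution[proc, 3];
Mean[dist]  (* 1/2 + 1 = 3/2 *)
```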

Compare with the result from simulation:

Compute the probability of an event:

Find the mean and the variance of the first passage time through target states, conditional on reaching them:

Compare against a process simulation:

Generate a set of pseudorandom numbers from the distribution:
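A sketch using the same hypothetical two-state chain as above:

```wolfram
SeedRandom[42]; (* for reproducibility *)
dist = FirstPassageTimeDistribution[
   DiscreteMarkovProcess[1, {{1/2, 1/2}, {1/3, 2/3}}], 2];
RandomVariate[dist, 10]
```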

## Applications (7)

A taxi is located either at the airport or in the city. From the city, the next trip is to the airport with probability 1/4, or to somewhere else in the city with probability 3/4. From the airport, the next trip is always to the city. Model the taxi as a discrete Markov process, with state 1 representing the city and state 2 representing the airport, starting at the airport:

Find the expected number of trips until the taxi's next visit to the airport:
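The setup above can be sketched as follows; since the process starts in the target state, the result is the mean recurrence time:

```wolfram
(* state 1: city, state 2: airport; start at the airport *)
taxi = DiscreteMarkovProcess[2, {{3/4, 1/4}, {1, 0}}];
Mean[FirstPassageTimeDistribution[taxi, 2]]
(* mean recurrence time of the airport state: 5 *)
```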

A gambler, starting with 3 units, places a 1-unit bet at each step, with a winning probability of 0.4 and a goal of reaching 7 units before stopping. Find the expected playing time until the gambler reaches the goal or goes broke. The gambling process can be modeled as a discrete Markov process, where state i represents the gambler holding i-1 units:

Simulate some typical gambling scenarios:

The full distribution for playing time:

The probability that playing time is 10 or less:
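The gambler's ruin setup can be sketched as follows (state numbering is an assumption: state i holds i-1 units, so the start is state 4 and the absorbing states are 1 and 8):

```wolfram
p = 2/5; n = 8; (* win probability 0.4; fortunes 0..7 as states 1..8 *)
mat = Normal@SparseArray[{{1, 1} -> 1, {n, n} -> 1,
     {i_, j_} /; 1 < i < n && j == i + 1 -> p,
     {i_, j_} /; 1 < i < n && j == i - 1 -> 1 - p}, {n, n}];
game = DiscreteMarkovProcess[4, mat]; (* start with 3 units *)
dist = FirstPassageTimeDistribution[game, {1, n}];
{N[Mean[dist]], N[CDF[dist, 10]]}  (* expected playing time ≈ 9.83 *)
```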

Find how many times, on average, you have to roll a die until you have seen all six faces:
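This is the coupon collector's problem; one way to sketch it is a chain whose state counts the distinct faces seen so far:

```wolfram
(* state k corresponds to k - 1 distinct faces seen;
   a new face appears with probability (7 - k)/6 *)
mat = Table[Which[j == i, (i - 1)/6, j == i + 1, (7 - i)/6, True, 0],
   {i, 7}, {j, 7}];
die = DiscreteMarkovProcess[1, mat];
Mean[FirstPassageTimeDistribution[die, 7]]  (* 147/10, i.e. 14.7 rolls *)
```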

When flipping an unbiased coin, on average it takes longer for HTH to occur than for HTT:
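One way to sketch the comparison is to track the longest matched prefix of each pattern as the state of a chain (for length-3 patterns of a fair coin, HTH takes 10 flips on average, while HTT and HHT take 8):

```wolfram
(* chain state = longest prefix matched so far; last state = pattern complete *)
hth = DiscreteMarkovProcess[1,
   {{1/2, 1/2, 0, 0}, {0, 1/2, 1/2, 0}, {1/2, 0, 0, 1/2}, {0, 0, 0, 1}}];
htt = DiscreteMarkovProcess[1,
   {{1/2, 1/2, 0, 0}, {0, 1/2, 1/2, 0}, {0, 1/2, 0, 1/2}, {0, 0, 0, 1}}];
{Mean[FirstPassageTimeDistribution[hth, 4]],
 Mean[FirstPassageTimeDistribution[htt, 4]]}  (* {10, 8} *)
```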

A particle moves between the eight vertices of a cube by a symmetric random walk. Let v be the initial vertex and w the diagonally opposite vertex. Compute:

• the expected number of steps until the particle returns to v

• the expected number of steps until the first visit to w

Expected number of steps before returning to v, if starting at v:

Expected number of steps until first visit to w, if starting at v:
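The cube walk can be sketched by labeling the vertices 1 through 8 as binary 0 through 7, so that adjacent vertices differ in exactly one bit and vertex 8 is opposite vertex 1:

```wolfram
cube = DiscreteMarkovProcess[1,
   Table[If[DigitCount[BitXor[i - 1, j - 1], 2, 1] == 1, 1/3, 0],
    {i, 8}, {j, 8}]];
Mean[FirstPassageTimeDistribution[cube, 1]]  (* return to start: 8 *)
Mean[FirstPassageTimeDistribution[cube, 8]]  (* first visit to the opposite vertex: 10 *)
```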

Use the mean hitting time to bound the expected cover time for an irreducible process:

The Hubble Space Telescope carries six gyroscopes, with a minimum of three required for full accuracy. The operating times of the gyroscopes are independent and exponentially distributed with failure rate λ. If a fourth gyroscope fails, the telescope goes into sleep mode, in which further observations are suspended. It takes an exponential time with mean 1/μ to put the telescope into sleep mode, after which the base station on Earth receives a sleep signal and a shuttle mission is prepared. It takes an exponential time with mean 1/ν before the repair crew arrives at the telescope and has repaired the stabilizing unit with the gyroscopes. In the meantime, the remaining two gyroscopes may fail. If the last gyroscope fails, the telescope will crash. Suppose numerical values are given for λ, μ, and ν, all with units of inverse years:

Find the probability that the telescope will crash in the next 10 years:

Find the probability that sleep mode is not reached (no shuttle mission is required) in 10 years:

## Properties & Relations (4)

The average number of steps to go from state 1 to state 3 is 2:

Since the process is deterministic, the variance is zero:
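A sketch of such a deterministic process (the cyclic chain here is an assumed example consistent with the stated mean and variance):

```wolfram
(* deterministic cycle 1 -> 2 -> 3 -> 1 *)
proc = DiscreteMarkovProcess[1, {{0, 1, 0}, {0, 0, 1}, {1, 0, 0}}];
dist = FirstPassageTimeDistribution[proc, 3];
{Mean[dist], Variance[dist]}  (* {2, 0} *)
```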

The mean and variance for a continuous process with acyclic bidiagonal transition rate matrix:

When the target is one of the absorbing states, a conditional distribution is returned:

The assumption is that the process reaches the target state and does not get stuck in state 1:

Autosimplification to ExponentialDistribution or other phase-type distributions:
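As a sketch of the simplest case: in a two-state continuous chain with a single transition at rate λ, the first passage time is just one exponential holding time, so the distribution is equivalent to ExponentialDistribution[λ]:

```wolfram
(* two states, single exponential transition with rate λ *)
FirstPassageTimeDistribution[ContinuousMarkovProcess[1, {{-λ, λ}, {0, 0}}], 2]
(* equivalent to ExponentialDistribution[λ] *)
```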