Unit 1: Introduction to Stochastic Processes

A stochastic process can be broadly understood as a sequence of random variables, each of them corresponding to a specific moment in time, point in space, instance, etc.

The state set of a stochastic process is the set of all possible values that the random variables in the process can take. Similarly, the parameter set of a stochastic process is the set of all possible values that its parameter can take.

Types of Stochastic Processes

We can classify stochastic processes according to the nature of their state set and parameter set:

| State set \ Parameter set | Discrete | Continuous |
| --- | --- | --- |
| Discrete | Chain | Point process |
| Continuous | Sequence of random variables | Continuous process |

Moreover, we can also classify them according to the probabilistic features of their random variables:

Bernoulli Process

A Bernoulli process is a stochastic process consisting of a sequence of independent Bernoulli trials.

Each random variable in it follows a Bernoulli distribution, while the stochastic process that counts the accumulated number of successes consists of random variables that follow a binomial distribution.
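As a minimal sketch of this relationship (the helper names and the choice p = 0.5 are illustrative assumptions, not from the notes), we can simulate a Bernoulli process and its running count of successes:

```python
import random

def bernoulli_process(p, n, seed=None):
    """Simulate n independent Bernoulli trials with success probability p."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

def success_counts(trials):
    """Running sum of successes: the k-th entry follows a Binomial(k, p) law."""
    counts, total = [], 0
    for x in trials:
        total += x
        counts.append(total)
    return counts

trials = bernoulli_process(0.5, 10, seed=42)
counts = success_counts(trials)
```

Each individual trial is Bernoulli(p), while `counts` traces the binomial counting process built on top of it.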

Random Walk

A random walk is a stochastic process whose state set is the integers ℤ and whose parameter set is {0, 1, 2, …}, where at each step there is a probability p of moving to the next integer and a probability 1 − p of moving to the previous one.

When a random walk starts at 0, we say it is a simple random walk.

There are several formulae that can help us calculate the transition probabilities in a random walk.
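One such formula counts the up- and down-moves needed to travel from state i to state j in n steps. The sketch below (the function name is ours) implements it under the usual binomial argument: with u up-moves and d down-moves, u + d = n and u − d = j − i:

```python
from math import comb

def walk_transition_prob(i, j, n, p):
    """P(X_n = j | X_0 = i) for a random walk that moves +1 with
    probability p and -1 with probability 1 - p.
    With u up-moves and d down-moves: u + d = n and u - d = j - i."""
    u2 = n + (j - i)                       # this equals 2u
    if u2 % 2 != 0 or not (0 <= u2 // 2 <= n):
        return 0.0                          # j unreachable from i in n steps
    u = u2 // 2
    return comb(n, u) * p**u * (1 - p)**(n - u)
```

For example, returning to the starting point in two steps requires one up-move and one down-move, in either order.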

Gambler’s Ruin Problem

The gambler’s ruin problem is a random walk that models a game between two players in which the loser of each bet pays one monetary unit to the winner.

In the context of this problem, we can obtain the probability that a given player is eventually ruined, as well as the expected number of bets before a given player is ruined.
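A sketch of both quantities, using the standard closed-form solutions (the function names are ours; i is the player's starting capital, N the total capital in play, and p the probability of winning each bet):

```python
def ruin_probability(i, N, p):
    """Probability that a player starting with i units is ruined
    before reaching N, winning each bet with probability p."""
    if p == 0.5:
        return 1 - i / N
    r = (1 - p) / p
    win = (1 - r**i) / (1 - r**N)   # probability of reaching N first
    return 1 - win

def expected_bets(i, N, p):
    """Expected number of bets until one of the players is ruined."""
    if p == 0.5:
        return i * (N - i)
    q = 1 - p
    r = q / p
    win = (1 - r**i) / (1 - r**N)
    return i / (q - p) - (N / (q - p)) * win
```

In the fair case (p = 0.5) the formulas reduce to the familiar 1 − i/N and i(N − i).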

Unit 2: Markov Chains

A Markov chain is a chain (i.e. a stochastic process with both a discrete state set and a discrete parameter set) that satisfies the Markov property.

Transition Probabilities

In Markov chains, the study of transition probabilities is of special interest. The one-step transition probability p_ij is the probability of moving from state i to state j in a single step; when these probabilities do not depend on the step at which the transition occurs, they are said to be stationary.

The existence of stationary transition probabilities also allows us to define n-step transition probabilities, which prove to be quite useful. Additionally, it can be handy to arrange these probabilities in a transition matrix. The n-step transition matrix can be easily obtained by calculating the nth power of the corresponding one-step matrix.
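As a minimal sketch (the two-state matrix here is a hypothetical example), the n-step matrix is just a matrix power of the one-step matrix:

```python
import numpy as np

# One-step transition matrix of a hypothetical two-state chain (rows sum to 1).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# The n-step transition matrix is the nth power of the one-step matrix.
P3 = np.linalg.matrix_power(P, 3)
```

Every power of a transition matrix is again a transition matrix, so each row of `P3` still sums to 1.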

More on Transition Probabilities

For any type of stochastic process, we can define an initial probability vector, which contains the probability of each possible state being the initial state of the process.

A finite and stationary Markov chain is a Markov chain with a finite state set and stationary transition probabilities.

The Chapman-Kolmogorov equation is an equation that helps us calculate the n-step transition probabilities through a sum of products of lower-order transition probabilities.

The transition probabilities mentioned previously are actually conditional, in the sense that they assume a specific present state. We can additionally obtain the unconditional transition probabilities, which are the probabilities of being in a given state after a number of steps with no condition placed on the starting state.
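A sketch of the unconditional distribution (the matrix and initial vector are hypothetical): multiplying the initial probability vector by the n-step matrix, itself built via the Chapman-Kolmogorov decomposition into matrix powers, gives the unconditional probabilities after n steps:

```python
import numpy as np

# Hypothetical one-step transition matrix and initial probability vector.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
a0 = np.array([0.6, 0.4])

# Unconditional distribution after n steps: a0 times the n-step matrix.
n = 4
an = a0 @ np.linalg.matrix_power(P, n)
```

Since `a0` is a probability vector and each power of `P` is a transition matrix, `an` is again a probability vector.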

Communication and Classes

When it is possible to go from a state i to another state j, we say that j is accessible from i. When, in addition, i is also accessible from j, we say that these two states communicate.

When grouping all states that communicate, we obtain classes.

Ergodic, Transitory and Absorbing States

An ergodic set is a set of states that, once entered, can no longer be left. A Markov chain with only one class is said to be ergodic.

An absorbing set is an ergodic set that only has a single element.

A transitory set is essentially the opposite of an ergodic set: a set that can still be left after being reached. Since every Markov chain must have at least one ergodic set, it follows that we may leave a transitory state and never return to it.

The states in ergodic, absorbing and transitory sets are called ergodic, absorbing and transitory states, respectively.

Ergodic, Regular and Cyclical Markov Chains

An ergodic Markov chain is a Markov chain that only has a single class, which can only be an ergodic set.

In an ergodic Markov chain, if every state can only be entered at fixed periodic intervals, the chain is cyclical; otherwise, it is regular.

Steady State Probabilities and Stationary Distribution

In a regular Markov chain, the probability of being in a given state becomes constant once the number of steps is high enough, regardless of the initial state. As such, regular Markov chains are characterised by a stationary distribution (vector), which contains these constant steady state probabilities.
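The stationary distribution π solves πP = π together with the normalisation Σπ = 1. A sketch on a hypothetical two-state chain, solving this as an (overdetermined) linear system:

```python
import numpy as np

# Hypothetical regular two-state chain.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Stack pi (P - I) = 0 with the normalisation sum(pi) = 1 and solve.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
```

For this matrix the balance equation gives π = (5/6, 1/6): the chain spends five sixths of its time, in the long run, in the first state.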

First-Passage and Recurrence Times

In a Markov chain, the first-passage time from a state i to another state j is the number of steps it takes to reach j from i for the first time.

Similarly, the recurrence time of a state i is the number of steps it takes to return to i for the first time.

We can calculate the average first-passage time μ_ij from a given state i to another state j.
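A sketch of that calculation (the matrix is hypothetical): the averages satisfy μ_ij = 1 + Σ_{k≠j} p_ik μ_kj, which for a fixed target j is a linear system over the non-target states:

```python
import numpy as np

# Hypothetical one-step transition matrix.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def mean_first_passage(P, j):
    """Average first-passage times mu_ij into target state j, solving
    mu_ij = 1 + sum over k != j of p_ik * mu_kj."""
    n = P.shape[0]
    others = [k for k in range(n) if k != j]
    Q = P[np.ix_(others, others)]      # transitions among non-target states
    mu = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    out = np.zeros(n)
    out[others] = mu                   # out[j] stays 0 by convention
    return out
```

With this matrix, state 1 is only entered from state 0 with probability 0.1 per step, so the average first-passage time 0 → 1 is 10 steps.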

Absorption Probability

When a Markov chain has absorbing states, the probability of reaching one of them from any other state is called the probability of absorption.
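A sketch of how these probabilities can be computed (the three-state chain is a hypothetical example): partition the matrix into the transient-to-transient block Q and the transient-to-absorbing block R; the absorption probabilities are then B = (I − Q)⁻¹ R:

```python
import numpy as np

# Hypothetical chain: states 0 and 1 are transitory, state 2 is absorbing.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.4, 0.4],
              [0.0, 0.0, 1.0]])

transient, absorbing = [0, 1], [2]
Q = P[np.ix_(transient, transient)]   # transitory-to-transitory block
R = P[np.ix_(transient, absorbing)]   # transitory-to-absorbing block

# Absorption probabilities: B = (I - Q)^(-1) R
B = np.linalg.solve(np.eye(len(transient)) - Q, R)
```

Since this example has a single absorbing state, absorption into it is certain and every entry of `B` equals 1; with several absorbing states, each row of `B` would instead split that total of 1 among them.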