# Unit 1: Introduction to Stochastic Processes

A stochastic process can be broadly described as a collection $\{X_{t} \mid t \in T\}$ of random variables, each of them corresponding to a specific moment in time, place, instance, etc.

The state set of a stochastic process is the set of all possible values that the random variables $X_{t}$ in the process can take. Similarly, the parameter set of a stochastic process is the set of all possible values that its parameter $t$ can take.

## Types of Stochastic Processes

We can classify stochastic processes according to the nature of their state set $S$ and parameter set $T$:

| $S$ \ $T$  | Discrete                     | Continuous         |
|------------|------------------------------|--------------------|
| Discrete   | Chain                        | Point process      |
| Continuous | Sequence of random variables | Continuous process |

Moreover, we can also classify them according to the probabilistic features of their random variables:

- An independent trial process is a stochastic process with a discrete parameter set and independent random variables.
- A Markov process is a stochastic process that satisfies the Markov property.
- A process with independent increments is a stochastic process in which the increments between random variables over disjoint intervals of the parameter are independent.
- A stationary process is a stochastic process in which the joint probability distribution of any vector of its random variables is invariant under shifts of the parameter.

## Bernoulli Process

A Bernoulli process is a stochastic process consisting of a sequence of independent Bernoulli trials.

Each random variable in it follows a Bernoulli distribution, while the associated process that counts the accumulated number of successes consists of random variables that follow a binomial distribution.
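As an illustration, here is a minimal simulation sketch (the function name and the parameter values are my own, chosen only for the example):

```python
import random

def bernoulli_process(p, n, seed=0):
    """Simulate n independent Bernoulli trials with success probability p."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

trials = bernoulli_process(p=0.3, n=10_000)

# The running sum of successes at time t follows a Binomial(t, p) distribution,
# so the final fraction of successes should be close to p.
successes = sum(trials)
print(successes / len(trials))
```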

## Random Walk

A random walk is a stochastic process whose state set is $\mathbb{Z}$ and parameter set is $\mathbb{N}$, where there is a probability $p$ of moving to the next integer and $q = 1 - p$ of moving to the previous one.

When a random walk starts at $0$, we say it is a simple random walk.

There are several formulae that can help us calculate the transition probabilities in a random walk.
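One such standard formula, stated here for reference: for a walk starting at $i$, counting the number of upward steps gives the $n$-step transition probability

$$
P(X_n = j \mid X_0 = i) = \binom{n}{\tfrac{n + j - i}{2}} \, p^{\tfrac{n + j - i}{2}} \, q^{\tfrac{n - j + i}{2}},
$$

valid when $n + j - i$ is even and $|j - i| \le n$; the probability is $0$ otherwise. (If $u$ is the number of upward steps and $d$ the number of downward steps, then $u + d = n$ and $u - d = j - i$, so $u = \tfrac{n + j - i}{2}$.)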

## Gambler’s Ruin Problem

The gambler’s ruin problem is a random walk that models a game between two players in which the loser of each bet gives one monetary unit to the winner.

In the context of this problem, we can obtain the probability of a given player getting ruined, as well as the expected amount of bets before a given player gets ruined.
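A sketch of the standard closed-form solutions (the function names are my own; $i$ is the player's starting capital, $N$ the total combined capital, and $p$ the player's probability of winning each bet):

```python
def ruin_probability(i, N, p):
    """Probability that a player starting with i units is ruined
    (reaches 0 before reaching N), winning each bet with probability p."""
    q = 1 - p
    if p == q:
        return (N - i) / N          # symmetric case, p = 1/2
    r = q / p
    return (r**i - r**N) / (1 - r**N)

def expected_duration(i, N, p):
    """Expected number of bets until one of the two players is ruined."""
    q = 1 - p
    if p == q:
        return i * (N - i)          # symmetric case, p = 1/2
    r = q / p
    return i / (q - p) - (N / (q - p)) * (1 - r**i) / (1 - r**N)
```

For example, in a fair game with equal capitals ($i = 5$, $N = 10$, $p = 0.5$), each player is ruined with probability $1/2$ and the game lasts $25$ bets on average.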

# Unit 2: Markov Chains

A Markov chain is a chain (i.e. a stochastic process with both a discrete state set and a discrete parameter set) that satisfies the Markov property.

## Transition Probabilities

In Markov chains, the study of transition probabilities is of special interest. We can define:

- One-step transition probabilities, the probabilities of transitioning from one state to another in a single step.
- Stationary transition probabilities, which arise when the transition probabilities in a Markov chain don’t change across time.

The existence of stationary transition probabilities also allows us to define $n$-step transition probabilities, which prove to be quite useful. Additionally, it can be handy to arrange these probabilities in a transition matrix. The $n$-step transition matrix can be easily obtained by calculating the $n$th power of the corresponding one-step matrix.
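A minimal sketch of this computation, using a hypothetical two-state chain (the transition values are illustrative only):

```python
import numpy as np

# Hypothetical two-state chain: 0 = sunny, 1 = rainy (assumed values).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# The n-step transition matrix is the n-th power of the one-step matrix.
P3 = np.linalg.matrix_power(P, 3)
print(P3)  # each row is still a probability distribution
```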

## More on Transition Probabilities

For any type of stochastic process, we can define an initial probability vector, which contains the probability of each possible state being the initial state of the process.

A finite and stationary Markov chain is a Markov chain with a finite state set and stationary transition probabilities.

The Chapman-Kolmogorov equation helps us calculate the $n$-step transition probabilities through a sum of products of transition probabilities over fewer steps.
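Concretely, to go from $i$ to $j$ in $n + m$ steps, the chain must pass through some intermediate state $k$ after the first $n$ steps:

$$
p_{ij}^{(n+m)} = \sum_{k \in S} p_{ik}^{(n)} \, p_{kj}^{(m)}
$$

In matrix form this reads $P^{(n+m)} = P^{(n)} P^{(m)}$, which is precisely why the $n$-step transition matrix equals the $n$th power of the one-step matrix.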

The transition probabilities that have been mentioned previously are actually *conditional* in the sense that they consider a specific present state. We can additionally obtain the unconditional transition probabilities, which are the probabilities of reaching a given state with no other condition specified.

## Communication and Classes

When it is possible to go from a state $i$ to another $j$, we say that $j$ is accessible from $i$. When that happens and $i$ is also accessible from $j$, we say that these two states communicate.

When grouping *all* states that communicate, we obtain classes.

## Ergodic, Transitory and Absorbing States

An ergodic set is a set of states that, once entered, can no longer be left. A Markov chain with only one class is said to be ergodic.

An absorbing set is an ergodic set that only has *a single* element.

A transitory (transient) set is the opposite of an ergodic set: a set that can still be left after being entered. Since every Markov chain must have at least one ergodic set, it follows that we may exit a transitory state and never return to it.

The states in an ergodic set, absorbing set and transitory set are called ergodic, absorbing and transitory, respectively.

## Ergodic, Regular and Cyclical Markov Chains

An ergodic Markov chain is a Markov chain that only has a single class, which can only be an ergodic set.

In an ergodic Markov chain, if every state can only be entered at fixed periodic intervals, then the chain is cyclical; otherwise, it is regular.

## Steady State Probabilities and Stationary Distribution

In a regular Markov chain, the probability of being in a given state becomes constant when the number of steps is high enough, regardless of the initial state. As such, regular Markov chains are characterised by a stationary distribution (vector), which contains these constant steady-state probabilities.
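A minimal sketch of computing the stationary distribution numerically: solve $\pi P = \pi$ together with the normalisation $\sum_i \pi_i = 1$ (the two-state chain below is hypothetical, with illustrative values):

```python
import numpy as np

# Hypothetical regular chain (assumed values).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
n_states = P.shape[0]

# Stack the equations (P^T - I) pi = 0 with the normalisation sum(pi) = 1,
# then solve the overdetermined system in the least-squares sense.
A = np.vstack([P.T - np.eye(n_states), np.ones(n_states)])
b = np.zeros(n_states + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # steady-state probabilities; pi @ P == pi
```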

## First-Passage and Recurrence Times

In a Markov chain, the first-passage time from a state $i$ to another state $j$ is the number of steps it takes to reach $j$ from $i$ *for the first time*.

Similarly, the recurrence time of a state $i$ is the number of steps it takes to *return* to $i$ *for the first time*.

We can calculate the average first-passage time from a given state $i$ to another state $j$.
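The average first-passage times into a fixed state $j$ satisfy the linear system $\mu_{ij} = 1 + \sum_{k \neq j} p_{ik}\,\mu_{kj}$. A sketch of solving it (the function name and the two-state chain are my own, chosen for illustration):

```python
import numpy as np

# Hypothetical two-state chain (assumed values).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def mean_first_passage(P, j):
    """Average first-passage times mu_ij into state j, for all i != j,
    from the system mu_ij = 1 + sum_{k != j} p_ik * mu_kj."""
    n = P.shape[0]
    idx = [k for k in range(n) if k != j]
    Q = P[np.ix_(idx, idx)]          # transitions among states other than j
    mu = np.linalg.solve(np.eye(len(idx)) - Q, np.ones(len(idx)))
    return dict(zip(idx, mu))

print(mean_first_passage(P, 1))  # e.g. average steps from state 0 into state 1
```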

## Absorption Probability

When a Markov chain has absorbing states, the probability of eventually reaching one of them from any other state is called the absorption probability.
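One standard way to compute these probabilities is to partition the transition matrix into its transient-to-transient block $Q$ and transient-to-absorbing block $R$, and solve $(I - Q)B = R$. A sketch (the chain below is a gambler's ruin with $N = 3$ and $p = 0.5$, chosen only as an illustration):

```python
import numpy as np

# Hypothetical chain: states 0 and 3 absorbing, 1 and 2 transient
# (a fair gambler's ruin with total capital N = 3).
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])

transient, absorbing = [1, 2], [0, 3]
Q = P[np.ix_(transient, transient)]   # transient -> transient block
R = P[np.ix_(transient, absorbing)]   # transient -> absorbing block

# Absorption probabilities: B = (I - Q)^{-1} R
B = np.linalg.solve(np.eye(len(transient)) - Q, R)
print(B)  # row i: probabilities of ending in each absorbing state
```

Each row of `B` gives, for one transient state, the probability of eventually being absorbed into each absorbing state; the rows sum to $1$.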