Unit 1: Introduction to Stochastic Processes
A stochastic process can, broadly speaking, be analysed as a sequence of random variables, each of them corresponding to a specific moment in time, place, instance, etc.
The state set of a stochastic process is the set of all possible values that the random variables in the process can take. Similarly, the parameter set of a stochastic process is the set of all possible values that its parameter can take.
Types of Stochastic Processes
We can classify stochastic processes according to the nature of their state set and parameter set:

| State set \ Parameter set | Discrete parameter | Continuous parameter |
|---|---|---|
| Discrete state | Chain | Point process |
| Continuous state | Sequence of random variables | Continuous process |
Moreover, we can also classify them according to the probabilistic features of their random variables:
- An independent trial process is a stochastic process with a discrete parameter set and independent random variables.
- A Markov process is a stochastic process that satisfies the Markov property: conditional on the present state, the future of the process is independent of its past.
- A process with independent increments is a stochastic process whose increments over non-overlapping parameter intervals are mutually independent.
- A stationary process is a stochastic process whose vector of random variables has the same joint probability distribution regardless of any shift in the parameter.
Bernoulli Process
A Bernoulli process is a stochastic process that consists of a sequence of independent Bernoulli trials.
Each random variable in it follows a Bernoulli distribution, while the stochastic process that tracks the accumulated number of successes consists of random variables that follow a binomial distribution.
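As a minimal illustration (Python with NumPy; the success probability and number of trials below are arbitrary choices, not values from the notes), we can simulate a Bernoulli process together with its success-counting process:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

p = 0.3          # assumed success probability (illustrative)
n_trials = 20    # assumed number of trials (illustrative)

# Each X_k ~ Bernoulli(p); the running sum S_n of successes is Binomial(n, p).
x = rng.random(n_trials) < p          # Bernoulli process (True = success)
s = np.cumsum(x)                      # counting process of accumulated successes

print("trials          :", x.astype(int))
print("successes so far:", s)
```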
Random Walk
A random walk is a stochastic process whose state set is the set of integers and whose parameter set is {0, 1, 2, ...}, where there is a probability p of moving to the next integer and a probability 1 - p of moving to the previous one.
When a random walk starts at 0, we say it is a simple random walk.
There are several formulae that can help us calculate the transition probabilities in a random walk.
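One such formula counts the paths with the required numbers of up and down moves. A minimal sketch in Python (the function name and example values are illustrative assumptions):

```python
from math import comb

def n_step_probability(i, j, n, p):
    """P(X_n = j | X_0 = i) for a random walk with up-probability p.

    Reaching j from i in n steps needs k up-moves and n - k down-moves
    with k - (n - k) = j - i, i.e. k = (n + j - i) / 2.
    """
    k2 = n + j - i
    if k2 % 2 != 0 or not (0 <= k2 // 2 <= n):
        return 0.0
    k = k2 // 2
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Example: probability of ending 2 steps to the right after 4 steps with p = 0.5
print(n_step_probability(0, 2, 4, 0.5))   # C(4,3) * 0.5**4 = 0.25
```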
Gambler’s Ruin Problem
The gambler’s ruin problem studies, as a random walk, a game between two players in which the loser of each bet gives one monetary unit to the winner.
In the context of this problem, we can obtain the probability that a given player is eventually ruined, as well as the expected number of bets before that happens.
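A small Python sketch of both quantities, using the standard closed-form results for a walk with win probability p and a total number of units in play (the function names and example values are illustrative assumptions):

```python
def ruin_probability(i, total, p):
    """Probability that a player starting with i units (out of `total` in play)
    is eventually ruined, winning each bet with probability p."""
    q = 1.0 - p
    if abs(p - 0.5) < 1e-12:
        return 1.0 - i / total
    r = q / p
    return (r**i - r**total) / (1.0 - r**total)

def expected_duration(i, total, p):
    """Expected number of bets until one of the two players is ruined."""
    q = 1.0 - p
    if abs(p - 0.5) < 1e-12:
        return i * (total - i)
    r = q / p
    return i / (q - p) - (total / (q - p)) * (1.0 - r**i) / (1.0 - r**total)

print(ruin_probability(3, 10, 0.5))    # 0.7
print(expected_duration(3, 10, 0.5))   # 21
```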
Unit 2: Markov Chains
A Markov chain is a chain (i.e. a stochastic process with both a discrete state set and a discrete parameter set) that satisfies the Markov property.
Transition Probabilities
In Markov chains, the study of transition probabilities is of special interest. We can define:
- One-step transition probabilities, the probabilities of transitioning from one state to another in a single step.
- Stationary transition probabilities, which show up when the transition probabilities in a Markov chain don’t change across time.
The existence of stationary transition probabilities also allows us to define n-step transition probabilities, which prove to be quite useful. Additionally, it can be handy to arrange these probabilities in a transition matrix. The n-step transition matrix can be easily obtained by calculating the n-th power of the corresponding one-step matrix.
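A minimal sketch of this (Python with NumPy; the 3-state one-step matrix below is an illustrative assumption):

```python
import numpy as np

# Assumed one-step transition matrix of a 3-state chain (rows sum to 1).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.0, 0.4, 0.6]])

n = 4
P_n = np.linalg.matrix_power(P, n)   # n-step transition matrix
print(P_n)
print(P_n[0, 2])                     # probability of going from state 0 to state 2 in n steps
```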
More on Transition Probabilities
For any type of stochastic process, we can define an initial probability vector, which contains, for each possible state, the probability that it is the initial state of the process.
A finite and stationary Markov chain is a Markov chain with a finite state set and stationary transition probabilities.
The Chapman-Kolmogorov equation helps us calculate the n-step transition probabilities through a sum of products of lower-order transition probabilities: p_ij^(n) = Σ_k p_ik^(m) p_kj^(n-m), for any intermediate number of steps 0 < m < n.
The transition probabilities mentioned previously are conditional in the sense that they consider a specific present state. We can additionally obtain the unconditional transition probabilities, which are the probabilities of being in a given state after a number of steps when only the initial distribution is specified.
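Under the same illustrative assumptions as above, the unconditional distribution after n steps can be sketched as the initial probability vector multiplied by the n-th power of the one-step matrix:

```python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],        # assumed one-step transition matrix
              [0.3, 0.5, 0.2],
              [0.0, 0.4, 0.6]])
q0 = np.array([0.5, 0.5, 0.0])        # assumed initial probability vector

n = 4
qn = q0 @ np.linalg.matrix_power(P, n)   # unconditional distribution after n steps
print(qn, qn.sum())                      # the entries sum to 1
```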
Communication and Classes
When it is possible to go from a state i to another state j, we say that j is accessible from i. When that happens and i is also accessible from j, we say that these two states communicate.
When we group together all states that communicate with one another, we obtain classes.
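A small sketch of how classes could be extracted from a one-step transition matrix by checking mutual accessibility (Python with NumPy; the helper function and example matrix are illustrative assumptions):

```python
import numpy as np

def communicating_classes(P):
    """Group the states of a finite chain into classes of mutually communicating states."""
    P = np.asarray(P, dtype=float)
    n = len(P)
    # reach[i, j] = 1 when j is accessible from i in zero or more steps
    reach = ((np.eye(n) + P) > 0).astype(int)
    for _ in range(n):                          # transitive closure of accessibility
        reach = ((reach @ reach) > 0).astype(int)
    classes, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        cls = {j for j in range(n) if reach[i, j] and reach[j, i]}
        classes.append(sorted(cls))
        assigned |= cls
    return classes

P = [[0.5, 0.5, 0.0],
     [0.4, 0.6, 0.0],
     [0.2, 0.3, 0.5]]
print(communicating_classes(P))   # [[0, 1], [2]]
```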
Ergodic, Transient and Absorbing States
An ergodic set is a set of states from which it is no longer possible to get out once reached. A Markov chain with only one class is said to be ergodic.
An absorbing set is an ergodic set that only has a single element.
A transient set is basically the opposite of an ergodic set: a set from which it is still possible to get out once reached. Since every Markov chain must have at least one ergodic set, it follows that we may leave a transient state and never come back to it.
The states in an ergodic set, an absorbing set and a transient set are called ergodic, absorbing and transient, respectively.
Ergodic, Regular and Cyclical Markov Chains
An ergodic Markov chain is a Markov chain that only has a single class, which can only be an ergodic set.
In an ergodic Markov chain, if every state can only be entered at fixed periodic intervals, the chain is cyclical; otherwise, it is regular.
Steady State Probabilities and Stationary Distribution
In a regular Markov chain, the probability of being in a given state converges to a constant value as the number of steps grows, regardless of the initial state. As such, regular Markov chains are characterised by a stationary distribution (vector), which contains these constant steady-state probabilities.
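A minimal sketch of how the stationary distribution could be computed, solving the steady-state equations together with the normalisation condition (Python with NumPy; the matrix is the same illustrative assumption used above):

```python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],       # assumed regular one-step transition matrix
              [0.3, 0.5, 0.2],
              [0.0, 0.4, 0.6]])

n = P.shape[0]
# Solve pi P = pi (written as (P^T - I) pi = 0) together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)          # approximately [0.364, 0.364, 0.273] for this matrix
print(pi @ P)      # equals pi again, up to rounding
```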
First-Passage and Recurrence Times
In a Markov chain, the first-passage time from a state i to another state j is the number of steps it takes to reach j from i for the first time.
Similarly, the recurrence time of a state i is the number of steps it takes to return to i for the first time.
We can calculate the average first-passage time from a given state i to another state j.
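A sketch of this calculation, solving the usual linear system mu_ij = 1 + Σ_{k != j} p_ik mu_kj for a fixed target state (Python with NumPy; the helper name and example matrix are illustrative assumptions):

```python
import numpy as np

def mean_first_passage_times(P, j):
    """Average first-passage times mu_ij to the target state j,
    from mu_ij = 1 + sum over k != j of p_ik * mu_kj."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    others = [k for k in range(n) if k != j]
    Q = P[np.ix_(others, others)]                       # transitions among non-target states
    mu_others = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    mu = np.zeros(n)
    mu[others] = mu_others
    # Recurrence time of j itself: mu_jj = 1 + sum over k != j of p_jk * mu_kj
    mu[j] = 1.0 + P[j, others] @ mu_others
    return mu

P = [[0.7, 0.2, 0.1],
     [0.3, 0.5, 0.2],
     [0.0, 0.4, 0.6]]
print(mean_first_passage_times(P, 2))   # about [7.78, 6.67, 3.67]; last entry is the recurrence time of state 2
```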
Absorption Probability
When a Markov chain has absorbing states, the probability of eventually reaching one of them from any other state is called the probability of absorption.
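A sketch using the standard fundamental-matrix approach, B = (I - Q)^(-1) R, where Q holds the transitions among transient states and R the transitions from transient to absorbing states (the 4-state chain below is an illustrative assumption):

```python
import numpy as np

# Assumed chain with transient states {0, 1} and absorbing states {2, 3}.
P = np.array([[0.2, 0.5, 0.2, 0.1],
              [0.4, 0.3, 0.1, 0.2],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

transient = [0, 1]
absorbing = [2, 3]
Q = P[np.ix_(transient, transient)]
R = P[np.ix_(transient, absorbing)]

N = np.linalg.inv(np.eye(len(transient)) - Q)   # fundamental matrix
B = N @ R                                        # absorption probabilities
print(B)               # B[i, k]: probability of absorption in absorbing[k] starting from transient[i]
print(B.sum(axis=1))   # each row sums to 1
```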
Unit 3: Markov Decision Processes
A Markov decision process (MDP) is a Markov chain in which a decision is made in each state, incurring a cost for each decision. MDPs are used to optimise the diverse phenomena that can be modelled with them by finding the optimal policy.
We can find this optimal policy through several methods:
- The exhaustive enumeration method is the simplest of them all. It consists of enumerating all of the viable policies, calculating the expected cost for each one, then finally selecting the optimal one.
- Solving the MDP as a linear programming problem through any of the applicable methods.
- The policy improvement method consists of solving a linear equation system, then finding an alternative policy that improves the costs, repeating until the optimal policy is reached.
- The discounted policy improvement method is basically the same method applied to phenomena where it might prove useful to discount the costs the further they lie in the future.
- The successive approximations method finds an approximation to the optimal policy by using the expected total costs starting from every state given increasing numbers of remaining periods (a sketch of this follows the list).
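A minimal sketch of the successive approximations idea, iterating the expected total cost over an increasing number of remaining periods (Python with NumPy; the two-state costs and transition matrices are illustrative assumptions):

```python
import numpy as np

# Illustrative MDP: 2 states, 2 decisions; all numbers are assumptions.
# P[a] is the transition matrix under decision a, C[a][s] the cost of taking a in state s.
P = {0: np.array([[0.8, 0.2], [0.4, 0.6]]),
     1: np.array([[0.5, 0.5], [0.1, 0.9]])}
C = {0: np.array([2.0, 5.0]),
     1: np.array([4.0, 1.0])}

horizon = 10
V = np.zeros(2)                        # expected total cost with 0 remaining periods

for _ in range(horizon):               # one pass per additional remaining period
    Q_vals = np.array([C[a] + P[a] @ V for a in P])   # cost of each decision in each state
    policy = Q_vals.argmin(axis=0)     # best decision per state at this stage
    V = Q_vals.min(axis=0)             # updated expected total costs

print("approximate optimal policy:", policy)
print("expected total cost over the horizon:", V)
```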
Unit 4: Poisson, Markov and Non-Markov Processes
An S queue is, basically, a birth-death Markov process analysed as a queuing model.
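A small sketch of the steady-state probabilities of a finite birth-death process, using the standard product of birth/death rate ratios (the function name and the rates in the example are illustrative assumptions):

```python
import numpy as np

def birth_death_steady_state(birth_rates, death_rates):
    """Steady-state probabilities of a finite birth-death process.

    birth_rates[n] is the rate of going from state n to n + 1,
    death_rates[n] the rate from n + 1 back to n (assumed positive).
    """
    ratios = np.array(birth_rates, dtype=float) / np.array(death_rates, dtype=float)
    unnormalised = np.concatenate(([1.0], np.cumprod(ratios)))
    return unnormalised / unnormalised.sum()

# Single-server queue truncated at 5 customers, arrival rate 2, service rate 3 (illustrative values).
print(birth_death_steady_state([2, 2, 2, 2, 2], [3, 3, 3, 3, 3]))
```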