Probability questions posed during interviews often involve a Markov chain. A Markov chain is a sequence of random variables with the property that, given the present state, the future states and the past states are independent. In other words, once the current state is known, the past states do not affect the future. Think, for example, of a cube. An ant starts at a specific corner of the cube and tries to reach the exact opposite corner by walking only along the edges to neighboring corners. Walking along one edge takes the ant one second. A problem statement containing a Markov chain could be to compute the expected amount of time the ant walks before it reaches the opposite corner. At every corner the ant moves to one of its three neighboring corners, chosen uniformly at random, and its past choices do not affect its next choice, which makes this a Markov chain problem. This section elaborates on the Markov chain.
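To make the memoryless behavior concrete, here is a minimal simulation sketch (Python is assumed; this is an illustration, not part of the original problem). The ant's next corner is drawn using only the corner it currently occupies, and averaging many walks gives a Monte Carlo estimate of the expected walking time that is derived analytically later in this section.

```python
# Minimal sketch: simulate the ant's random walk on a cube.
# Corners are encoded as 3-bit integers 0..7; moving along an edge flips exactly one
# coordinate bit, so corner 0 and corner 7 are exact opposites. The next corner depends
# only on the current corner, which is precisely the Markov property.
import random

def walk_time(start: int = 0, target: int = 7) -> int:
    """Number of seconds until the ant reaches the opposite corner."""
    corner, seconds = start, 0
    while corner != target:
        corner ^= 1 << random.randrange(3)  # flip one of the three coordinates
        seconds += 1
    return seconds

if __name__ == "__main__":
    trials = 100_000
    estimate = sum(walk_time() for _ in range(trials)) / trials
    print(f"Monte Carlo estimate of the expected walking time: {estimate:.2f} s")
```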
If the problem contains a finite-state Markov chain with $n$ states, the chain can be described by an $n \times n$ transition matrix $P$, given by Equation 1, where $p_{ij}$ denotes the probability of moving from state $i$ to state $j$ in one step:

$$P = \begin{pmatrix} p_{11} & p_{12} & \cdots & p_{1n} \\ p_{21} & p_{22} & \cdots & p_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1} & p_{n2} & \cdots & p_{nn} \end{pmatrix}, \qquad p_{ij} = \mathbb{P}\!\left(X_{t+1} = j \mid X_t = i\right). \tag{1}$$
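As a rough illustration of Equation (1), the sketch below (NumPy is assumed; the 3-state matrix is a made-up example, not one from the text) stores $P$ as an array whose rows each sum to one and propagates a distribution over the states forward in time.

```python
# Minimal sketch: represent a finite-state Markov chain by its transition matrix P,
# where P[i, j] is the probability of moving from state i to state j (rows sum to 1).
import numpy as np

def step_distribution(P: np.ndarray, dist: np.ndarray, steps: int = 1) -> np.ndarray:
    """Propagate a probability distribution over the states `steps` transitions forward."""
    assert np.allclose(P.sum(axis=1), 1.0), "each row of P must sum to 1"
    for _ in range(steps):
        dist = dist @ P
    return dist

# Hypothetical 3-state chain (probabilities made up for illustration only).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.0, 0.0, 1.0]])   # the last state only maps to itself
start = np.array([1.0, 0.0, 0.0])  # start in state 1 with certainty
print(step_distribution(P, start, steps=5))
```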
Consider, for example, the process of applying for jobs, modeled as a Markov chain with four states in which state 4 represents receiving a job offer in trading. The corresponding transition matrix, in which $p_{ij}$ again denotes the probability of moving from state $i$ to state $j$, would be

$$P = \begin{pmatrix} p_{11} & p_{12} & p_{13} & p_{14} \\ p_{21} & p_{22} & p_{23} & p_{24} \\ p_{31} & p_{32} & p_{33} & p_{34} \\ 0 & 0 & 0 & 1 \end{pmatrix}, \tag{2}$$

where the last row expresses that state 4 cannot be left.
In every problem statement we are looking for a certain answer. This answer corresponds to the end of the Markov chain, also called the absorbing state: once this state is reached, it is impossible to leave it. In the example with the ant, the absorbing state is reached when the ant arrives at the corner exactly opposite the corner from which it started. In the job-application example above, the absorbing state is state 4; here it is assumed that the applicant stops looking for another job once a job offer in trading has been received.
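As a small illustration (the four-state matrix below uses made-up probabilities, not the values from the example above), an absorbing state reveals itself in the transition matrix as a row that puts probability 1 on staying in the same state.

```python
# Minimal sketch: detect absorbing states in a transition matrix.
import numpy as np

# Hypothetical four-state chain with state 4 absorbing (probabilities invented).
P = np.array([[0.2, 0.5, 0.2, 0.1],
              [0.3, 0.3, 0.2, 0.2],
              [0.1, 0.2, 0.3, 0.4],
              [0.0, 0.0, 0.0, 1.0]])
# A state is absorbing when its diagonal entry equals 1: it can never be left.
absorbing = [i + 1 for i in range(P.shape[0]) if np.isclose(P[i, i], 1.0)]
print(absorbing)  # [4]
```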
A problem statement can ask either for the probability of reaching a specific absorbing state or for the expected number of steps (or the expected amount of time) before reaching an absorbing state. The equations that solve these problems can be derived by conditioning on the first transition, using the law of total expectation.
Let’s first elaborate on the probability of reaching a specific absorbing state. Denote the absorbing state of interest by $s$ and let $a_i$ be the probability that the chain, started in state $i$, is eventually absorbed in $s$. Conditioning on the first transition gives

$$a_i = \sum_{j=1}^{n} p_{ij}\, a_j, \qquad a_s = 1, \qquad a_k = 0 \text{ for every other absorbing state } k. \tag{3}$$

For the expected number of steps before absorption, let $\mu_i$ be the expected number of steps until the chain, started in state $i$, reaches an absorbing state. Every first transition takes one step, after which the remaining expected number of steps is that of the new state, so

$$\mu_i = 1 + \sum_{j=1}^{n} p_{ij}\, \mu_j, \qquad \mu_k = 0 \text{ for every absorbing state } k. \tag{4}$$

If every step takes a fixed amount of time, as in the ant example, the expected time follows by multiplying the expected number of steps by the time per step.
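In matrix form, equations (3) and (4) are linear systems, so they can also be solved numerically. The sketch below is a non-authoritative illustration (NumPy and 0-based state indices are assumed): it partitions $P$ into the transient-to-transient block $Q$ and the transient-to-absorbing block $R$, then solves $(I - Q)B = R$ for the absorption probabilities and $(I - Q)\,t = \mathbf{1}$ for the expected number of steps.

```python
# Minimal sketch: solve equations (3) and (4) for an absorbing Markov chain.
import numpy as np

def absorption_quantities(P: np.ndarray):
    """Return (transient states, absorption probabilities B, expected steps t)."""
    n = P.shape[0]
    absorbing = [i for i in range(n) if np.isclose(P[i, i], 1.0)]
    transient = [i for i in range(n) if i not in absorbing]
    Q = P[np.ix_(transient, transient)]   # transitions among transient states
    R = P[np.ix_(transient, absorbing)]   # transitions from transient to absorbing states
    I = np.eye(len(transient))
    B = np.linalg.solve(I - Q, R)                        # B[i, k]: prob. of absorption in absorbing state k
    t = np.linalg.solve(I - Q, np.ones(len(transient)))  # expected number of steps until absorption
    return transient, B, t
```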
Example: The ant starts at a specific corner of a cube and tries to reach the exact opposite corner of the cube. It only walks to neighboring corners, choosing each of the three neighbors with equal probability 1/3. Every move to the next corner takes one second. What is the expected number of seconds it takes the ant to reach the exact opposite corner?
Solution: First number all corners to get a clear overview of the states. Let the ant start at corner 1 and let corner 8 be the exact opposite corner it wants to reach. Label the three neighbors of corner 1 as corners 2, 3 and 4, and the three neighbors of corner 8 as corners 5, 6 and 7. Let $E_i$ denote the expected number of seconds needed to reach corner 8 from corner $i$.

Now we can start writing out all state equations. Corner 8 is the absorbing state, corner 1 is connected only to corners 2, 3 and 4, and every neighbor of the start is connected to corner 1 and to two of the corners 5, 6 and 7 (and similarly for the neighbors of corner 8):

$$E_8 = 0, \tag{5}$$

$$E_1 = 1 + \tfrac{1}{3}\left(E_2 + E_3 + E_4\right), \tag{6}$$

$$E_2 = 1 + \tfrac{1}{3}\left(E_1 + E_5 + E_6\right), \tag{7}$$

$$E_5 = 1 + \tfrac{1}{3}\left(E_2 + E_3 + E_8\right), \tag{8}$$

with analogous equations for $E_3$, $E_4$, $E_6$ and $E_7$. By symmetry, corners at the same distance from the start behave identically, so $E_2 = E_3 = E_4 =: a$ and $E_5 = E_6 = E_7 =: b$. The state equations then reduce to

$$E_1 = 1 + a, \tag{9}$$

$$a = 1 + \tfrac{1}{3}E_1 + \tfrac{2}{3}b, \tag{10}$$

$$b = 1 + \tfrac{2}{3}a. \tag{11}$$

Substituting (9) and (11) into (10) gives

$$a = 1 + \tfrac{1}{3}(1 + a) + \tfrac{2}{3}\left(1 + \tfrac{2}{3}a\right) = 2 + \tfrac{7}{9}a \quad\Longrightarrow\quad a = 9, \tag{12}$$

and therefore

$$E_1 = 1 + a = 10. \tag{13}$$

The expected time for the ant to reach the exact opposite corner is 10 seconds.
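As a quick sanity check (not part of the original solution), the symmetry-reduced equations (9)–(11) can be solved as a small linear system in $(E_1, a, b)$; a NumPy sketch:

```python
# Minimal sketch: solve E1 = 1 + a, a = 1 + E1/3 + 2b/3, b = 1 + 2a/3 as A x = c.
import numpy as np

A = np.array([[1.0, -1.0,  0.0],
              [-1/3, 1.0, -2/3],
              [0.0, -2/3,  1.0]])
c = np.array([1.0, 1.0, 1.0])
E1, a, b = np.linalg.solve(A, c)
print(E1, a, b)  # 10.0 9.0 7.0: the ant needs 10 seconds on average
```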