BUSINESS DECISION MODELS (İŞLETME KARAR MODELLERİ) - Chapter 8: Markov Chains Summary


Chapter 8: Markov Chains

Stochastic Processes

In this section, we introduce the concept of a stochastic process. A stochastic process concerns sequences of events ruled by probability laws. A stochastic process is a family of random variables { X(t), t ∈ T } defined on a given probability space, indexed by the parameter t, where t varies over an index set T.

A random variable maps the set of all possible outcomes of an experiment into the real numbers, R. Because of this, a random variable is in fact a function.

In a stochastic process { X(t), t ∈ T }, the index set T is called the parameter set of the stochastic process. In other words, a stochastic process is an indexed collection of random variables { X_0, X_1, X_2, ... } describing the behavior of a system operating over some period of time. The values assumed by X(t) are called states, and the set of all possible values forms the state space S of the stochastic process. If the index set T of a stochastic process is discrete, then the process is called a discrete-parameter (or discrete-time) process. If T is continuous, then the process is called a continuous-parameter (or continuous-time) process. If the state space S of a stochastic process is discrete, the process is called a discrete-state process, often referred to as a chain. In this case, the state space S is assumed to be {0, 1, 2, ...}. If the state space S is continuous, then we have a continuous-state process.

Familiar examples of stochastic processes are as follows:

  • daily prices of a stock or exchange rate fluctuations,
  • failure times of a machine,
  • medical data such as a patient’s blood pressure or EKG,
  • radar measurements for the position of an airplane,
  • successive tosses of a coin.

Markov Property and Markov Chains

In this chapter, we will only focus on discrete-time Markov chains. Some systems in real life can be in any of a finite number of states. Consider a system that moves from state to state according to some probability law. We will start with a very common example.

Transition Matrix

The transition probabilities p_ij = P(X_{t+1} = j | X_t = i) are conditional probabilities, and they are also called one-step transition probabilities.

The conditional probability of an event B is the probability that the event will occur given the knowledge that an event A has already occurred. This probability is written as P(B | A).
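As a reminder (this is the standard definition of conditional probability, not something specific to this chapter), it can be computed as

```latex
P(B \mid A) = \frac{P(A \cap B)}{P(A)}, \qquad P(A) > 0.
```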

One-step transition probabilities are said to be stationary if they do not change over time. To establish the transition probabilities between states, we will need to collect data.
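As a small illustration (the two states and the numbers below are hypothetical, not taken from the chapter), a transition matrix can be stored as a NumPy array whose rows each sum to 1:

```python
import numpy as np

# Hypothetical two-state weather chain: state 0 = sunny, state 1 = rainy.
# Entry P[i, j] is the one-step probability of moving from state i to state j.
P = np.array([
    [0.8, 0.2],   # from sunny: stay sunny with 0.8, turn rainy with 0.2
    [0.4, 0.6],   # from rainy: turn sunny with 0.4, stay rainy with 0.6
])

# Every row of a valid transition matrix must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)
```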

Transition Diagram

It is also helpful to lay out a Markov chain in a so-called transition diagram, whose nodes are the states and whose arcs are the transition probabilities. By writing the numerical value of p_ij near the corresponding arc, we can visualize the entire Markov chain model in a way that makes some of its major properties easily apparent.

N-Step Transition Matrix

In the previous section, we defined the one-step transition probability matrix P of a Markov chain. In this section, we are going to examine the n-step transition probability p_ij^(n) of a Markov chain. The n-step transition probability p_ij^(n) is the probability that a process in state i will be in state j after n additional transitions. In particular, p_ij^(1) = p_ij. Also, P^n is called the n-step transition matrix. Let us begin with an example to see how we can operate with the n-step transition matrix.
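A minimal sketch of computing the n-step matrix, reusing the hypothetical two-state matrix from the earlier sketch: the n-step transition matrix is simply the n-th matrix power of P.

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.4, 0.6]])

n = 3
# The n-step transition matrix is the n-th matrix power of P.
P_n = np.linalg.matrix_power(P, n)

# P_n[i, j] is the probability that a chain starting in state i
# is in state j after exactly n transitions.
print(P_n)
```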

In the Chapman-Kolmogorov equations, think of going from state i to state j in (n + m) steps with an intermediate stop in state k after n steps; then sum over all possible values of k.
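Written out in the p_ij^(n) notation used above, this standard form of the equations reads

```latex
p_{ij}^{(n+m)} = \sum_{k \in S} p_{ik}^{(n)} \, p_{kj}^{(m)},
\qquad \text{or in matrix form} \qquad P^{(n+m)} = P^{(n)} P^{(m)} = P^{\,n+m}.
```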

Regular Transition Matrix

A transition matrix is called regular if some power of the matrix contains only positive entries. A Markov chain is then a regular Markov chain if its transition matrix is regular. Many applications of Markov chains involve finding long-run predictions, which are always possible with regular transition matrices.

If, for a transition matrix, zeros occur in identical places in both P^n and P^(n+1) for some n, they will always appear in those places for all higher powers of P, so P is not regular.
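A rough numerical check of regularity, under the assumption that testing a fixed number of powers is acceptable for illustration (the cap max_power below is an arbitrary choice):

```python
import numpy as np

def is_regular(P, max_power=100):
    """Return True if some power of P up to max_power has only positive entries."""
    Q = P.copy()
    for _ in range(max_power):
        if np.all(Q > 0):
            return True
        Q = Q @ P
    return False

# A zero entry in P itself does not rule out regularity:
# here P has a zero, but P @ P already has all positive entries.
P = np.array([[0.0, 1.0],
              [0.5, 0.5]])
print(is_regular(P))   # True
```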

Classification of States

To analyze the structure of transitions in a Markov chain, it is necessary to classify its states. As seen in the preceding section, some states, after being visited once, are certain to be visited again, while for other states this may not be the case. In this section, we will discuss the different types of states.

State j ∈ S is accessible from state i ∈ S, denoted by i → j, if p_ij^(n) > 0 for some n. It is assumed that every state is accessible from itself, since p_ii^(0) = 1.

States i, j ∈ S are said to communicate, written as i ↔ j, if they are accessible from each other. In other words,

i ↔ j means i → j and j → i.

States i and j are in the same communicating class if each state is accessible from the other, i.e., i ↔ j.

A Markov chain is irreducible if the state space S is a single communicating class. If a Markov chain has more than one class, it is called reducible.
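One way to test irreducibility numerically is to check that every state is accessible from every other state. The sketch below does this with reachability via matrix powers, using a hypothetical three-state chain in which the last state is never left (so the chain is reducible):

```python
import numpy as np

def is_irreducible(P):
    """Return True if every state is accessible from every other state."""
    n = P.shape[0]
    # Adjacency matrix of possible one-step moves, with self-loops added.
    A = (P > 0).astype(float) + np.eye(n)
    # (A^(n-1))[i, j] > 0 exactly when j is reachable from i in at most n-1 steps.
    R = np.linalg.matrix_power(A, n - 1)
    return bool(np.all(R > 0))

# Hypothetical three-state chain: state 2 can be reached but never left,
# so the state space splits into more than one communicating class.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.0, 1.0]])
print(is_irreducible(P))   # False, the chain is reducible
```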

Transient and Recurrent States

State i is recurrent if, upon entering state i, the system will definitely return to state i. If a state is not recurrent, it is transient.

Recurrence and transience are class properties.

Periodic and Aperiodic States

A state is called periodic if it can only return to itself at numbers of transitions that are multiples of some fixed integer greater than 1. Otherwise, a state is said to be aperiodic.

A state with a self-loop transition is always aperiodic.
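As a rough illustration, the period of a state can be estimated as the greatest common divisor of the step counts at which a return to that state has positive probability (the bound max_power is an arbitrary cut-off for this sketch, not part of the definition):

```python
import numpy as np
from math import gcd
from functools import reduce

def period_of_state(P, i, max_power=50):
    """Estimate the period of state i from return times up to max_power steps."""
    return_times = []
    Q = np.eye(P.shape[0])
    for n in range(1, max_power + 1):
        Q = Q @ P          # Q is now P^n
        if Q[i, i] > 0:
            return_times.append(n)
    return reduce(gcd, return_times) if return_times else 0

# Hypothetical chain that alternates deterministically between two states,
# so each state can only be revisited after an even number of transitions.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(period_of_state(P, 0))   # 2, so state 0 is periodic
```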

Absorbing States

In fact, some of the most important applications of Markov chains involve an important class of Markov chains called absorbing chains. A state i of a Markov chain is called an absorbing state if, once the Markov chain enters the state, it stays there forever. This means that the probability of leaving the state is zero. Mathematically, it is expressed as p_ii = 1 and p_ij = 0 for i ≠ j. A Markov chain is called absorbing if it has at least one absorbing state.
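Because an absorbing state i has p_ii = 1, such states can be read directly off the diagonal of the transition matrix; a small sketch with a hypothetical three-state chain:

```python
import numpy as np

def absorbing_states(P):
    """Return the indices i with P[i, i] = 1, i.e. states that are never left."""
    return [i for i in range(P.shape[0]) if np.isclose(P[i, i], 1.0)]

# Hypothetical chain: state 2 is absorbing, states 0 and 1 are not.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
print(absorbing_states(P))   # [2]
```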

Steady-State Behavior of Markov Chains

The main goal of Markov chain analysis is to calculate the probability that the system occupies a given state i when the number of transitions n is very large. This probability is called the limiting probability, and it may converge to steady-state values that are independent of the initial state.

The steady-state (or limiting) behavior of a Markov chain does not mean that the chain stops changing, but that enough time has elapsed so that the probabilities no longer change with respect to time. Finding long-run predictions for a Markov chain is possible only with regular transition matrices. Remember that a regular transition matrix contains only positive entries in some power, and hence in all higher powers.

Suppose that P is a regular transition matrix. It was shown in the n-step transition matrix section that

u_n = u_{n-1}·P = u_0·P^n

and, if n → ∞ (tends to infinity), we obtain u_∞ = u_∞·P. It is customary to replace u_∞ by π, which is called the steady-state solution of the system. The steady-state solution π will satisfy the following equations:

π = π·P,

Σ_{i ∈ S} π(i) = 1.
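A minimal sketch of solving these equations numerically: stack the system π·P = π with the normalization Σ π(i) = 1 and solve it as one linear (least-squares) problem. The matrix below is the hypothetical two-state example used in the earlier sketches, not a case from the chapter.

```python
import numpy as np

def steady_state(P):
    """Solve pi = pi . P together with sum(pi) = 1 as one linear system."""
    n = P.shape[0]
    # (P^T - I) pi = 0 for every state, plus one extra normalization row of ones.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.8, 0.2],
              [0.4, 0.6]])
print(steady_state(P))   # approximately [0.667, 0.333]
```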