# Sensitivity of conditions for lumping finite Markov chains

• 39 Pages
• 0.51 MB
• English
by Naval Postgraduate School, Monterey, Calif.; available from the National Technical Information Service, Springfield, Va.

The Physical Object
Pagination: 39 p.

ID Numbers
Open Library: OL25502889M

Markov chains with large transition probability matrices occur in many applications such as manpower models. Under certain conditions the state space of a stationary discrete parameter finite Markov chain may be partitioned into subsets, each of which may be treated as a single state of a smaller chain that retains the Markov property.
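The lumping condition mentioned above can be checked mechanically: a partition admits an (ordinarily) lumpable quotient when, for every pair of blocks, the total probability of jumping from any state of one block into the other is the same for all states of the first block. A minimal sketch, using a hypothetical 4-state chain and a candidate two-block partition invented for illustration:

```python
import numpy as np

# Hypothetical 4-state transition matrix and a candidate partition into two blocks.
P = np.array([
    [0.5, 0.2, 0.2, 0.1],
    [0.2, 0.5, 0.1, 0.2],
    [0.3, 0.1, 0.4, 0.2],
    [0.1, 0.3, 0.2, 0.4],
])
partition = [[0, 1], [2, 3]]

def is_lumpable(P, partition, tol=1e-12):
    # Ordinary lumpability: within each block, every state must have the same
    # total transition probability into each target block.
    for block in partition:
        for target in partition:
            sums = P[np.ix_(block, target)].sum(axis=1)
            if not np.allclose(sums, sums[0], atol=tol):
                return False
    return True

def lump(P, partition):
    # Build the smaller chain using one representative row per block.
    k = len(partition)
    Q = np.empty((k, k))
    for i, block in enumerate(partition):
        for j, target in enumerate(partition):
            Q[i, j] = P[block[0], target].sum()
    return Q

print(is_lumpable(P, partition))
print(lump(P, partition))
```

For this particular P the condition holds, and the lumped 2-state chain retains the Markov property.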

Finite Markov Chains. Here we introduce the concept of a discrete-time stochastic process, investigating its behaviour for processes which possess the Markov property (to make predictions of the behaviour of a system it suffices to consider only the present state of the system and not its history).

When the initial and transition probabilities of a finite Markov chain in discrete time are not well known, we should perform a sensitivity analysis.

This is done by considering as basic uncertainty models the so-called credal sets that these probabilities are known or believed to belong to, and by allowing the probabilities to vary over such sets. This leads to the definition of an imprecise Markov chain (Gert de Cooman, Filip Hermans, Erik Quaeghebeur).

For finite, homogeneous, continuous-time Markov chains having a unique stationary distribution, we derive perturbation bounds which demonstrate the connection between the sensitivity to perturbations and the rate of convergence to the stationary distribution.

A Markov chain might not be a reasonable mathematical model to describe the health state of a child. We shall now give an example of a Markov chain on a countably infinite state space.

The outcome of the stochastic process is generated in a way such that the Markov property clearly holds. Abstract: A lumping of a Markov chain is a coordinate-wise projection of the chain.

### Description Sensitivity of conditions for lumping finite Markov chains EPUB

We characterise the entropy rate preservation of a lumping of an aperiodic and irreducible Markov chain on a finite state space by the random growth rate of the cardinality of the realisable preimage of a finite-length trajectory of the lumped chain and by the information needed to reconstruct the original trajectory.
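The entropy rate in question is the standard quantity H = -Σ_i π_i Σ_j p_ij log p_ij. A minimal sketch, using a hypothetical doubly stochastic 3-state chain (so its stationary distribution is uniform) and a lumping of its last two states, illustrating that lumping cannot increase the entropy rate:

```python
import numpy as np

def entropy_rate(P, pi):
    # H = -sum_i pi_i sum_j p_ij log p_ij, with the 0 log 0 = 0 convention.
    terms = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
    return -float(pi @ terms.sum(axis=1))

# Doubly stochastic chain: stationary distribution is uniform.
P = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
pi = np.full(3, 1 / 3)

# Lump states 1 and 2 (the lumpability condition holds for this partition).
Q = np.array([[0.50, 0.50],
              [0.25, 0.75]])
pi_q = np.array([1 / 3, 2 / 3])   # stationary distribution of the lumped chain

print(entropy_rate(P, pi), entropy_rate(Q, pi_q))
```

The lumped chain's rate is strictly smaller here: projecting states loses information about the trajectory.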

• know under what conditions a Markov chain will converge to equilibrium in the long run;
• be able to calculate the long-run proportion of time spent in a given state.
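Both points can be illustrated by simulation. A minimal sketch, using a hypothetical 2-state chain whose stationary distribution works out to (5/6, 1/6): the empirical proportion of time spent in each state approaches that distribution.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
rng = np.random.default_rng(0)

n_steps = 50_000
state = 0
visits = np.zeros(2)
for _ in range(n_steps):
    visits[state] += 1
    state = rng.choice(2, p=P[state])   # sample the next state from row `state`

print(visits / n_steps)  # close to the stationary distribution (5/6, 1/6)
```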

1 Definitions, basic properties, the transition matrix. Markov chains were introduced in 1906 by Andrei Andreyevich Markov (1856–1922). Chapter 3, FINITE-STATE MARKOV CHAINS, Introduction: The counting processes {N(t); t > 0} described in an earlier section have the property that N(t) changes at discrete instants of time, but is defined for all real t > 0.

The Markov chains to be discussed in this chapter are stochastic processes defined only at integer values of time, n = 0, 1, 2, …. At each integer time n ≥ 0, there is an integer-valued random variable Xn, the state at time n. For an n-state finite, homogeneous, ergodic Markov chain, with transition matrix ${\bf P}$ and stationary distribution ${\boldsymbol \pi}$, we assume that the entries of ${\bf P}$ are differentiable.

Distribution of First Passage Times for Lumped States in Markov Chains. To illustrate these definitions, reconsider the inventory example where Xt is the number of cameras on hand at the end of week t, where we start with X0.
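Expected first passage times can be computed by solving a small linear system: with target state j, the expectations satisfy m_i = 1 + Σ_{k≠j} p_ik m_k. A minimal sketch, using a hypothetical 3-state chain (not the inventory example) with target state 2:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
j = 2
others = [0, 1]

# Expected first passage times into j solve (I - Q) m = 1, where Q restricts
# P to the non-target states.
Q = P[np.ix_(others, others)]
m = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
print(m)  # expected number of steps to first reach state 2 from states 0 and 1
```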

Sensitivity of finite Markov chains under perturbation (E. Seneta) proposes r as a measure of relative sensitivity ('condition number') under perturbation of P, on the basis of (4) and rank-one updates for finite Markov chains; in: W.J. Stewart, ed.

Discounted approximations in risk-sensitive average Markov cost chains with finite state space, Mathematical Methods of Operations Research, Vol. 91, No. 2.

Variance-Based Risk Estimations in Markov Processes via Transformation with State Lumping. Importantly, the stationary distributions of both Markov chains are related through a simple linear transformation.

To illustrate our ideas, we use as an example the computation of the stationary distribution of Google's Markov chain, the so-called PageRank (Brin et al.). The lumping of states is particularly effective in this context.
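A minimal PageRank-style sketch: a hypothetical 4-page link graph (invented for illustration), with the commonly quoted damping factor 0.85, computing the stationary distribution of the resulting Google matrix by power iteration:

```python
import numpy as np

# Hypothetical link graph: page -> list of pages it links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, d = 4, 0.85

# Random-surfer matrix: follow a random outlink with probability d,
# teleport to a uniformly random page otherwise.
A = np.zeros((n, n))
for page, outs in links.items():
    for q in outs:
        A[page, q] = 1.0 / len(outs)
G = d * A + (1 - d) / n

pi = np.full(n, 1.0 / n)
for _ in range(100):      # power iteration converges geometrically
    pi = pi @ G
pi /= pi.sum()
print(pi)
```

Page 3, which nothing links to, receives only the teleportation mass and ends up with the smallest score.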


Lumping a Markov chain appears as a useful tool in this kind of investigation, since by lumping a Markov chain the spectral gap cannot decrease. Informally, when a Markov chain is lumpable, it is possible to reduce the number of states by a sort of aggregation process, obtaining a "smaller" Markov chain.

There is a close connection between stochastic matrices and Markov chains. To begin, let $S$ be a finite set with $n$ elements $\{x_1, \ldots, x_n\}$. The set $S$ is called the state space and $x_1, \ldots, x_n$ are the state values.
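The definition above translates directly into a validity check: a stochastic matrix has nonnegative entries and rows summing to 1, each row being the conditional distribution of the next state. A minimal sketch with a hypothetical 3-state matrix:

```python
import numpy as np

# Hypothetical transition matrix over S = {x_1, x_2, x_3}.
P = np.array([[0.1, 0.9, 0.0],
              [0.4, 0.4, 0.2],
              [0.0, 0.3, 0.7]])

def is_stochastic(P):
    # Nonnegative entries and every row summing to 1.
    return bool(np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0))

print(is_stochastic(P))
```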

### Details Sensitivity of conditions for lumping finite Markov chains EPUB

A Markov chain $\{X_t\}$ on $S$ is a sequence of random variables on $S$ that have the Markov property. Sensitivity of conditions for lumping finite Markov chains, by Moon Taek Suh.

In general, if a Markov chain has $r$ states, then $p^{(2)}_{ij} = \sum_{k=1}^{r} p_{ik}\, p_{kj}$. The following general theorem is easy to prove by using the above observation and induction.

Theorem. Let $P$ be the transition matrix of a Markov chain. The $ij$th entry $p^{(n)}_{ij}$ of the matrix $P^n$ gives the probability that the Markov chain, starting in state $s_i$, will be in state $s_j$ after $n$ steps.
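The theorem can be checked numerically. A minimal sketch with a hypothetical 2-state matrix, comparing the matrix power against the summation formula for the two-step case:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# n-step transition probabilities are entries of P^n.
P2 = np.linalg.matrix_power(P, 2)

# Check the two-step formula p^(2)_ij = sum_k p_ik p_kj entry by entry.
r = P.shape[0]
manual = np.array([[sum(P[i, k] * P[k, j] for k in range(r)) for j in range(r)]
                   for i in range(r)])

print(P2)
print(np.allclose(P2, manual))
```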

We prove that the optimal lumping quotient of a finite Markov chain can be constructed in O(m lg n) time, where n is the number of states and m is the number of transitions. The proof relies on the use of splay trees (designed by Sleator and Tarjan). In book: Sensitivity Analysis: Matrix Methods in Demography and Ecology, Publisher: Springer Nature; an application is given which concerns the analysis of a finite Markov chain.

A Markov chain is a stochastic process, but it differs from a general stochastic process in that a Markov chain must be "memory-less." That is, (the probability of) future actions are not dependent upon the steps that led up to the present state.

This is called the Markov property; the theory of Markov chains is important precisely because so many "everyday" processes satisfy the Markov property. Lumping the States of a Finite Markov Chain. Abstract: In this work we show how the lumping of states of a finite Markov chain can be regarded as a special decomposition of its transition matrix called stochastic factorization.

(See, for example, the book by Horn and Johnson.) We present an efficient finite difference method for the computation of parameter sensitivities that is applicable to a wide class of continuous time Markov chain models.
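The coupled estimator of that paper is beyond a short sketch, but the underlying finite-difference idea can be shown deterministically on the stationary distribution of a hypothetical two-state continuous-time chain whose rates (theta, mu) are invented for illustration:

```python
import numpy as np

def generator(theta, mu=2.0):
    # Generator of a hypothetical two-state continuous-time chain:
    # 0 -> 1 at rate theta, 1 -> 0 at rate mu.
    return np.array([[-theta, theta],
                     [mu, -mu]])

def stationary(Q):
    # Solve pi Q = 0 together with the normalisation sum(pi) = 1.
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Central finite difference: d pi / d theta ~ (pi(theta+h) - pi(theta-h)) / 2h.
theta, h = 1.0, 1e-5
sens = (stationary(generator(theta + h)) - stationary(generator(theta - h))) / (2 * h)
print(sens)
```

For this chain pi(theta) = (mu, theta) / (theta + mu), so the derivative can be checked against the closed form.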

The estimator for the method is constructed by coupling the perturbed and nominal processes in a natural manner, and the analysis proceeds by utilizing a martingale. Finite Markov Chains, With a New Appendix "Generalization of a Fundamental Matrix", by Kemeny, John G., and Snell, J. Laurie.

We prove that the optimal lumping quotient of a finite Markov chain can be constructed in O(m lg n) time, where n is the number of states and m is the number of transitions. The proof relies on the use of splay trees [18] to sort transition weights.

Key words: bisimulation, computational complexity, lumpability, Markov chains, splay trees. The approach here makes it easy to compute the sensitivity of a variety of dependent variables calculated from the Markov chain. As an example of this flexibility, consider a recently developed demographic index, the number of years of life lost due to mortality (Vaupel and Canudas Romo).

The transient states of the chains are age classes, and absorption corresponds to death. Abstract. Perturbation theory for finite discrete-time Markov chains was systematically studied by several authors. In particular, Schweitzer [8] recognized the importance of the fundamental matrix for perturbation theory and obtained perturbation formulas for so-called regular perturbations of a discrete-time Markov chain.
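The fundamental matrix that such perturbation formulas build on is easy to compute for an absorbing chain: N = (I - Q)^{-1}, where Q is the transient-to-transient block. A minimal sketch with a hypothetical 2x2 transient block (the remaining probability mass is absorbed):

```python
import numpy as np

# Hypothetical transient-to-transient block of an absorbing chain.
Q = np.array([[0.5, 0.3],
              [0.2, 0.4]])

# Fundamental matrix: N[i, j] is the expected number of visits to transient
# state j starting from transient state i, before absorption.
N = np.linalg.inv(np.eye(2) - Q)

# Row sums of N give expected times to absorption from each transient state.
t = N @ np.ones(2)
print(N)
print(t)
```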

[7] G. Chen and L. Saloff-Coste, Comparison of cutoffs between lazy walks and Markovian semigroups, Journal of Applied Probability.
[8] P. Diaconis and L. Saloff-Coste, Logarithmic Sobolev inequalities for finite Markov chains, The Annals of Applied Probability, 6(3).
[9] J. Ding, E. Lubetzky and Y. Peres, Total variation cutoff in birth-and-death chains.

And @Sasha already explained that every finite Markov chain, even the periodic ones, has at least one stationary distribution. (3) and (4) make no sense to me.

You could try reading this, or the quite accessible book Markov Chains by James Norris. I bought this book to re-learn finite Markov chains, because previously I used another book that is not very good. The good points of this book: it does not assume too much mathematical background; it classifies the states of finite Markov chains and also the types of finite Markov chains early on, so that I have a clear picture of what to expect in later chapters; and most theorems are proved.

Risk-sensitive continuous-time Markov decision processes with unbounded rates and Borel spaces.

A Sufficient Condition for Ergodicity.

Classification of the State Space. Lumping of Markov Chains.