Friday, August 22, 2014
Today we continue our discussion of random walks. In the previous post we stated the Skorokhod embedding theorem; today we finish the proof. Recall the setup: (S_n) is a simple symmetric random walk, and (A, B) is a pair of random variables with A ≤ 0 ≤ B, independent of the walk, distributed so that P(A = 0, B = 0) = p_0 and P(A = i, B = j) = c (j − i) p_i p_j for i < 0 < j, where c is the normalization factor. We let T = T_A ∧ T_B be the first time the walk hits level A or level B and set Z = S_T. The claim is that P(Z = k) = p_k for every k, which is exactly the statement of the theorem: the stopping time T embeds the given distribution. For k = 0, we have Z = 0 if and only if A = 0 and B = 0, and this has probability p_0 by definition. For k > 0, we expand P(Z = k) by conditioning on (A, B): since the walk is independent of (A, B), it is the sum over all i < 0 and j > 0 of P(T_j < T_i) · P(A = i, B = j). The walk can stop at the positive level k only when j = k, so we can eliminate j and the sum collapses to Σ_{i<0} P(T_k < T_i) · P(A = i, B = k). Now we apply the theorem from the gambler's ruin: a walk started from x reaches level b > x before level a < x with probability (x − a)/(b − a). Starting from 0 with a = i and b = k, this gives P(T_k < T_i) = −i/(k − i). Each term of the sum therefore becomes c (k − i) p_i p_k · (−i)/(k − i) = c (−i) p_i p_k, so P(Z = k) = c p_k Σ_{i<0} (−i) p_i. By the zero-mean condition Σ_i i p_i = 0, we have Σ_{i<0} (−i) p_i = Σ_{j>0} j p_j = 1/c, and so P(Z = k) = p_k. The case k < 0 is symmetric, which proves the theorem.
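As a sanity check on the embedding just described, here is a minimal simulation sketch: it samples (A, B) from the stated distribution for a small zero-mean distribution (the particular p values are my assumed example, not from the post), runs the walk to T = T_A ∧ T_B, and compares the empirical law of S_T with the target.

```python
import random

# Assumed example zero-mean distribution on Z (my choice, not from the post)
p = {-1: 0.5, 0: 0.1, 1: 0.3, 2: 0.1}
assert abs(sum(i * pi for i, pi in p.items())) < 1e-12  # zero mean

m = sum(j * pj for j, pj in p.items() if j > 0)  # common value 1/c
c = 1.0 / m

# Distribution of (A, B): P(0,0) = p0, P(i,j) = c*(j-i)*p_i*p_j for i < 0 < j
pairs, weights = [(0, 0)], [p.get(0, 0.0)]
for i, pi in p.items():
    if i >= 0:
        continue
    for j, pj in p.items():
        if j <= 0:
            continue
        pairs.append((i, j))
        weights.append(c * (j - i) * pi * pj)

def sample_Z(rng):
    """Draw (A, B), run the SSRW until it hits A or B, return S_T."""
    a, b = rng.choices(pairs, weights)[0]
    s = 0
    while not (s == a or s == b):
        s += rng.choice((-1, 1))
    return s

rng = random.Random(42)
n = 200_000
counts = {}
for _ in range(n):
    k = sample_Z(rng)
    counts[k] = counts.get(k, 0) + 1

for k in sorted(p):
    print(k, p[k], round(counts.get(k, 0) / n, 3))
```

With enough samples the empirical frequencies settle near the target p_k, which is the content of the theorem.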
Thursday, August 21, 2014
In the previous post, we stated the Skorokhod embedding theorem: for a zero-mean distribution (p_i) on Z there is a stopping time T with P(S_T = i) = p_i. In this post we begin the proof. We define a pair of random variables (A, B) with A ≤ 0 ≤ B and the following distribution:
the state A = 0 and B = 0 has probability p_0, and
any other state A = i and B = j, with i < 0 < j, has probability c (j − i) p_i p_j, where c is a normalization factor and p_i, p_j are the corresponding probabilities from the distribution.
Summing over all the remaining states must give the remaining probability: c · Σ_{i<0<j} (j − i) p_i p_j = 1 − p_0. We can split the factor (j − i) into j and −i, turning the double sum into Σ_{j>0} j p_j · Σ_{i<0} p_i + Σ_{i<0} (−i) p_i · Σ_{j>0} p_j.
Using the zero-mean equation Σ_i i p_i = 0, the two weighted sums share a common value m = Σ_{j>0} j p_j = Σ_{i<0} (−i) p_i, so the double sum equals m (1 − p_0), and the normalization factor is the inverse of this common value: c = 1/m = 1 / Σ_{j>0} j p_j.
Taking (A, B) to be independent of the walk (S_n), we define the stopping time T = T_A ∧ T_B, the first time the walk reaches level A or level B, and introduce the new random variable Z = S_T.
The claim is that P(Z = k) = p_k, the corresponding probability from the distribution. For k = 0, Z = 0 exactly when A = B = 0, which has probability p_0. For k ≠ 0, applying the gambler's ruin probability for hitting one level before another and summing over the distribution of (A, B) shows that this probability is indeed p_k.
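The algebra above can be checked with exact rational arithmetic. The sketch below uses an assumed example distribution (uniform on {−2, −1, 1, 2}, my choice, not from the post), computes the normalization factor, and verifies both the total mass of (A, B) and the identity P(Z = k) = p_k term by term via the gambler's-ruin probabilities.

```python
from fractions import Fraction as F

# Assumed example zero-mean distribution (my choice, not from the post)
p = {-2: F(1, 4), -1: F(1, 4), 1: F(1, 4), 2: F(1, 4)}
assert sum(i * pi for i, pi in p.items()) == 0  # zero mean

m = sum(j * pj for j, pj in p.items() if j > 0)  # common value 1/c
c = 1 / m

# P(A=i, B=j) = c*(j-i)*p_i*p_j for i < 0 < j; total mass must be 1 - p0
total = sum(c * (j - i) * p[i] * p[j]
            for i in p if i < 0 for j in p if j > 0)
assert total == 1 - p.get(0, F(0))

def prob_Z(k):
    """Exact P(Z = k), using gambler's-ruin hitting probabilities."""
    if k > 0:
        # P(T_k < T_i) = -i/(k-i) for a walk started at 0
        return sum(c * (k - i) * p[i] * p[k] * F(-i, k - i)
                   for i in p if i < 0)
    if k < 0:
        # P(T_k < T_j) = j/(j-k) for a walk started at 0
        return sum(c * (j - k) * p[k] * p[j] * F(j, j - k)
                   for j in p if j > 0)
    return p.get(0, F(0))

for k in p:
    assert prob_Z(k) == p[k]
print("P(Z = k) = p_k for every k")
```

Because every quantity is a Fraction, the check confirms the identity exactly rather than up to floating-point error.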
Wednesday, August 20, 2014
Today we will look at the Skorokhod embedding theorem. Consider a simple symmetric random walk: each step is +1 with probability 1/2 and −1 with probability 1/2. Started from x, the probability that the gambler's fortune reaches level b > x before level a < x is (x − a)/(b − a). In particular, starting the walk from S_0 = 0 with a < 0 < b, the probability of reaching b before a equals −a/(b − a), and the probability of reaching a before b equals b/(b − a).
These two probabilities can be read as a probability distribution on the two points a and b: the walk stopped when it first exits (a, b) lands on b with probability −a/(b − a) and on a with probability b/(b − a).
We can turn this around. Given a two-point probability distribution (p, 1 − p) such that p is rational, we can always find integers a < 0 < b such that p·a + (1 − p)·b = 0.
This means we can simulate the distribution by generating a simple symmetric random walk started from 0 and watching it until the first time it hits a or b. The probability it exits at the left endpoint a is p, and the probability it exits at the right endpoint b is 1 − p. Having seen the two-point case, let us look at more complicated probability distributions.
If the distribution has many atoms rather than two, can we still find a stopping time T such that S_T has the given distribution? This is where the Skorokhod embedding theorem helps.
The theorem states: let (S_n) be a simple symmetric random walk and let (p_i, i ∈ Z) be a probability distribution on Z with zero mean, i.e. satisfying Σ_i i·p_i = 0. Then there exists a stopping time T such that P(S_T = i) = p_i for every i.
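The two-point construction can be sketched as a short simulation; the values p = 1/3, a = −2, b = 1 are an assumed example satisfying p·a + (1 − p)·b = 0, not taken from the post.

```python
import random

# Two-point distribution (p, 1-p) with p = 1/3 (assumed example);
# p*a + (1-p)*b = 0 is solved by a = -2, b = 1
p, a, b = 1 / 3, -2, 1
assert abs(p * a + (1 - p) * b) < 1e-12

rng = random.Random(0)
n = 100_000
left = 0
for _ in range(n):
    s = 0
    while a < s < b:          # run the SSRW until it hits a or b
        s += rng.choice((-1, 1))
    left += (s == a)          # count exits at the left endpoint

print(round(left / n, 3))     # should be close to p = 1/3
```

The empirical exit frequency at a approaches p, matching the gambler's-ruin value b/(b − a) = 1/3 here.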
Monday, August 18, 2014
Today we will continue to discuss random walks. So far we have covered random walks in one and two dimensions, and we saw that the two-dimensional random walk can be expressed in terms of two independent one-dimensional walks; we proved this by enumerating the transition probabilities relative to the origin. We now look at another lemma: the probability that the two-dimensional random walk returns to zero after 2n steps equals the square of the probability that a simple symmetric random walk (SSRW) returns to zero after 2n steps. This follows from the decomposition above: the two-dimensional walk is back at the origin exactly when each of the two independent SSRWs is back at zero, and these two events have the same probability, so by independence the probabilities multiply. For a single SSRW the return probability is the number of paths from (0,0) to (2n,0) divided by the total number of paths, which works out to (2n choose n) · 2^(−2n). The return probability for the two-dimensional walk is therefore the square of this.
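The lemma can be verified exactly for small n. The sketch below (my illustration, not code from the post) computes the distribution of the two-dimensional walk by dynamic programming and compares the return probability with the square of the one-dimensional formula.

```python
from fractions import Fraction as F
from math import comb

def p2d_return(n):
    """Exact P(2D SSRW is back at the origin after 2n steps), by DP."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    dist = {(0, 0): F(1)}
    for _ in range(2 * n):
        nxt = {}
        for (x, y), q in dist.items():
            for dx, dy in steps:
                key = (x + dx, y + dy)
                nxt[key] = nxt.get(key, F(0)) + q * F(1, 4)
        dist = nxt
    return dist[(0, 0)]

for n in range(1, 6):
    one_d = F(comb(2 * n, n), 4 ** n)   # (2n choose n) * 2^(-2n)
    assert p2d_return(n) == one_d ** 2
print("2D return probability equals the square of the 1D one")
```

Exact Fractions avoid rounding, so the equality is checked literally rather than approximately.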
We next look at recurrence.
Recall that a Markov chain with finitely many states must visit at least one state infinitely many times. A state that is visited again and again is called recurrent: starting from i, the chain returns to i infinitely often, i.e. P_i(X_n = i for infinitely many n) = 1.
For the two-dimensional symmetric random walk, the previous lemma gives the return probability at time 2n as the square of the one-dimensional one. Substituting Stirling's approximation, this probability behaves like c/n for some constant c. Since the series Σ_n c/n diverges, the expected number of returns to the origin is infinite; a state with an infinite expected number of returns is recurrent, so recurrence holds for the two-dimensional random walk.
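The Stirling estimate and the divergence of the resulting series can be illustrated numerically; the sketch below is my assumed illustration, showing that the return probability behaves like 1/(πn) and that its partial sums keep growing.

```python
from math import comb, pi

# Return probability of the 2D walk at time 2n
def p2n(n):
    return (comb(2 * n, n) / 4 ** n) ** 2

# By Stirling, p2n(n) ~ 1/(pi*n): the ratio below tends to 1
for n in (10, 100, 1000):
    print(n, p2n(n) * pi * n)

# Partial sums grow without bound (roughly like log(N)/pi), so the
# expected number of returns is infinite and the walk is recurrent
partial = [sum(p2n(n) for n in range(1, N + 1)) for N in (10, 100, 1000)]
print(partial)
```

The unbounded partial sums are the numerical face of the divergence argument above.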
Sunday, August 17, 2014
In the previous post, we considered a simple symmetric random walk in two dimensions. We said that when we change coordinates by rotating the axes by 45 degrees and scaling, the random walks along the new axes are independent. To prove this, we start with the two-dimensional walk and write it as a sequence of i.i.d. incremental displacement vectors; the rotation and scaling amount to mapping each increment (x, y) to the pair (x + y, x − y), the sum and the difference of the displacements along the original axes. The resulting sequence of rotated increments is again i.i.d., because each one is the same function of the corresponding original increment. Moreover, each coordinate of a rotated increment takes only the values +1 and −1. We can write the transition probabilities as follows:
The original step (1, 0) maps to the rotated step (1, 1), with probability 1/4
The original step (−1, 0) maps to the rotated step (−1, −1), with probability 1/4
The original step (0, 1) maps to the rotated step (1, −1), with probability 1/4
The original step (0, −1) maps to the rotated step (−1, 1), with probability 1/4
From this table, each coordinate of the rotated increment is +1 with probability 1/2 and −1 with probability 1/2. Moreover, for every choice of e_1, e_2 in {+1, −1}, the joint probability that the first coordinate equals e_1 and the second equals e_2 is 1/4, which is the product (1/2)·(1/2) of the marginals. Hence the two coordinates are independent in every step, and therefore the two rotated walks are independent.
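The table above can be checked by enumerating the four steps; this short sketch (my illustration) verifies that the two coordinates of the rotated increment are independent, each ±1 with probability 1/2.

```python
from fractions import Fraction as F

# The four equally likely steps of the 2D walk, mapped by (x, y) -> (x+y, x-y)
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
rotated = [(x + y, x - y) for x, y in steps]

# Joint and marginal distributions of the rotated increment
joint = {r: F(1, 4) for r in rotated}
m1 = {e: sum(q for (u, v), q in joint.items() if u == e) for e in (-1, 1)}
m2 = {e: sum(q for (u, v), q in joint.items() if v == e) for e in (-1, 1)}

assert all(m1[e] == F(1, 2) and m2[e] == F(1, 2) for e in (-1, 1))
assert all(joint[(u, v)] == m1[u] * m2[v] for (u, v) in joint)
print("rotated coordinates are independent, each +/-1 with probability 1/2")
```

Since an increment has only four outcomes, this exhaustive check is a complete proof of independence for a single step.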
We will next look at a few more lemmas in this case.
Thursday, August 14, 2014
Today I will discuss the simple symmetric random walk in two dimensions. We can consider the same drunkard problem in two dimensions. Consider a drunkard in a city whose roads are arranged in a rectangular grid; the corners have integer coordinates (x, y), with the origin at (0, 0). He moves east, west, north or south with equal probability 1/4, and when he reaches a corner he moves again in the same manner.
This random walk takes values in Z^2 = Z × Z. Let ξ_1, ξ_2, … be i.i.d. random vectors, each taking the four values (1, 0), (−1, 0), (0, 1), (0, −1) with probability 1/4.
Because the step distribution is the same at every corner, we may translate the start of the walk to the origin.
We describe the horizontal and the vertical displacements with separate notation, say x_1 and x_2 for the horizontal and vertical coordinates, so that the position after n steps is the coordinate-wise sum of the first n increments.
The first lemma is that the horizontal and the vertical displacement processes are both random walks, but the two are not independent.
Since the steps ξ_1, ξ_2, … are i.i.d., their x-projections form an i.i.d. sequence, and so do their y-projections; each projection takes the value 0 with probability 1/2 and the values +1 and −1 with probability 1/4 each.
So the horizontal positions are a sum of i.i.d. random variables and hence a random walk. But it is not a simple random walk, because the step 0 occurs with the different probability 1/2.
The same argument applies to vertical states.
However, the two walks are not independent: in a single step, the events {x-increment = 0} and {y-increment = 0} each have probability 1/2, yet they can never happen together, so the joint probability 0 differs from the product 1/4. If instead we change coordinates by rotating the axes by 45 degrees and scaling, so that the new coordinates are the sum and the difference of the original ones, then the walks along the new axes turn out to be independent.
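The non-independence of the projections can be seen from a short computation over the four steps; the sketch below (my illustration) checks the marginals and the failing product rule.

```python
from fractions import Fraction as F

# The four equally likely steps of the 2D walk and their x, y projections
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
joint = {s: F(1, 4) for s in steps}

px = {v: sum(q for (x, y), q in joint.items() if x == v) for v in (-1, 0, 1)}
py = {v: sum(q for (x, y), q in joint.items() if y == v) for v in (-1, 0, 1)}

# Each projection is 0 with probability 1/2 and +/-1 with probability 1/4
assert px[0] == F(1, 2) and px[1] == px[-1] == F(1, 4)

# But the projections of one step are NOT independent:
# the step (0, 0) never occurs, yet the product of marginals is 1/4
assert joint.get((0, 0), F(0)) == F(0)
assert px[0] * py[0] == F(1, 4)
print("projections are not independent")
```

The single counterexample at (0, 0) is enough to break independence, exactly as the lemma states.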