Continuing from the previous post ...
The expected number of steps taken to go from one node to another is called the hitting time. The hitting time between an unclassified term and a domain term can be averaged over all the walks that connect these two nodes.
Since the walks are selected based on transition probabilities, the most probable paths are selected first. The same pair of nodes can be connected with many paths and the same unclassified term can be connected to many other domain terms.
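As a sketch of the averaging step, suppose we have already sampled a set of walks; the walk lists and term names below are hypothetical. The hitting time of a pair within one walk is the number of steps from the first node to the first later occurrence of the second:

```python
# Hypothetical sampled walks over a term graph; "x" stands for an
# unclassified term and "m" for a domain term.
walks = [["x", "a", "m"], ["x", "m"], ["a", "x", "b", "m"]]

def average_hitting_time(walks, n, m):
    # Average, over the walks that connect n to m, of the number of
    # steps from n to the first occurrence of m after it.
    times = []
    for w in walks:
        if n in w:
            start = w.index(n)
            if m in w[start:]:
                times.append(w.index(m, start) - start)
    return sum(times) / len(times) if times else None

print(average_hitting_time(walks, "x", "m"))  # (2 + 1 + 2) / 3
```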
The contextual similarity of a classification pair (n, m) can then be described as the relative frequency with which walks from n hit m, normalized over all domain terms reachable from the start node n.
This is calculated as Contextual Similarity CS(n, m) = H(n, m) / Sigma-i H(n, i), where H(n, i) is the hitting count between n and domain term i.
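A minimal sketch of this normalization, assuming we have already tallied hitting counts H(n, m) from the sampled walks (the counts and term names are made up):

```python
from collections import Counter

# Hypothetical hitting counts: number of walks from unclassified term n
# that reached domain term m.
H = Counter({("termA", "sports"): 6,
             ("termA", "finance"): 2,
             ("termA", "politics"): 2})

def contextual_similarity(H, n, m):
    # CS(n, m) = H(n, m) / sum over all domain terms i of H(n, i)
    total = sum(c for (start, _), c in H.items() if start == n)
    return H[(n, m)] / total if total else 0.0

print(contextual_similarity(H, "termA", "sports"))  # 6 / 10 = 0.6
```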
We can also look at the stability of a random walk, or of a Markov chain in general.
Stability refers to the convergence of the state probabilities as the number of steps grows to infinity. When the chain is irreducible and positive recurrent, the average (called the Cesaro average) (1/n) Sigma-k P(Xk = x) converges to pi(x) as n -> infinity.
Stability is interesting because a Markov chain is a simple model of a stochastic dynamical system that remains within a small region. For example, when a pendulum swings, it finally comes to a stable position with dissipation of energy. Even if there were no friction, it would be deemed stable because it cannot swing too far away.
What stability tells us is that when a Markov chain has certain properties (irreducibility, positive recurrence, and a unique stationary distribution pi), the n-step transition matrix converges to a matrix with all rows equal to pi. This is called the fundamental stability theorem.
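We can see the theorem numerically on a small chain; the two-state transition matrix below is purely illustrative. Raising it to a high power gives a matrix whose rows all approach the stationary distribution pi:

```python
def mat_mul(A, B):
    # Multiply two small matrices represented as lists of rows.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(P, n):
    # Compute the n-step transition matrix P^n.
    R = [[1.0, 0.0], [0.0, 1.0]]  # 2x2 identity
    for _ in range(n):
        R = mat_mul(R, P)
    return R

P = [[0.5, 0.5],
     [0.2, 0.8]]        # illustrative irreducible, positive-recurrent chain
pi = [2 / 7, 5 / 7]     # stationary distribution: pi P = pi, entries sum to 1

Pn = mat_pow(P, 50)
# Both rows of P^50 are (numerically) equal to pi.
print(all(abs(Pn[i][j] - pi[j]) < 1e-9 for i in range(2) for j in range(2)))
```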
Stability works based on coupling.
Coupling refers to the various methods for constructing a joint version of two random variables. If the random variables are independent, they can be combined in a straightforward manner by taking their product distribution. Coupling lets us define a third Markov chain Z from a chain X started at an arbitrary distribution and a chain Y started at the stationary distribution, where Z follows X before the meeting time of X and Y, and follows Y afterwards. This construction shows that the n-step transition matrix converges to a matrix with all rows equal to pi.
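A rough simulation of this construction, using an illustrative two-state chain (the transition probabilities are made up): X starts from a fixed state, Y starts from the other state as a stand-in for the stationary start, and Z copies X until the two chains first meet, then copies Y.

```python
import random

# Illustrative two-state chain: P[s] lists (next_state, probability) pairs.
P = {0: [(0, 0.5), (1, 0.5)],
     1: [(0, 0.2), (1, 0.8)]}

def step(rng, s):
    # Sample the next state from the transition distribution of s.
    r, acc = rng.random(), 0.0
    for nxt, p in P[s]:
        acc += p
        if r < acc:
            return nxt
    return nxt  # guard against floating-point rounding

def coupled_chain(x0, y0, n, seed=0):
    rng = random.Random(seed)
    x, y, met = x0, y0, (x0 == y0)
    z = [x0]
    for _ in range(n):
        x, y = step(rng, x), step(rng, y)
        met = met or (x == y)
        z.append(y if met else x)  # Z is X before the meeting, Y after
    return z

print(coupled_chain(0, 1, 20))
```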
By choosing the most probable paths, the random walk follows the preferred state transitions. Thus, while not all paths may terminate within the predetermined number of steps, we know that when a path does terminate, it has followed the higher transition probabilities.
A simple random walk moves from one state to any neighboring state with equal probability.
To implement a simple random walk in two dimensions (x, y), we can have a naive one like this (sketched here in Python):

import random

def simple_random_walk(n):
    x, y = [0], [0]
    for i in range(1, n):
        step = random.choice([-1, 1])   # unit step in either direction
        if random.random() < 0.5:       # move in the x direction
            x.append(x[i - 1] + step)
            y.append(y[i - 1])
        else:                           # move in the y direction
            x.append(x[i - 1])
            y.append(y[i - 1] + step)
    return x, y

print(simple_random_walk(10))
We can have the following metrics in a simple random walk:
first return time to zero
total number of visits to a state.
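Both metrics can be read off a simulated 1-D walk; a minimal sketch:

```python
import random

def walk_metrics(n, seed=0):
    # Simulate a 1-D simple random walk and record the first return
    # time to zero and the number of visits to each state.
    rng = random.Random(seed)
    pos, first_return = 0, None
    visits = {0: 1}
    for t in range(1, n + 1):
        pos += rng.choice([-1, 1])
        visits[pos] = visits.get(pos, 0) + 1
        if pos == 0 and first_return is None:
            first_return = t
    return first_return, visits

first_return, visits = walk_metrics(1000)
print(first_return, visits.get(0, 0))
```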
For all the random walks, we can have metrics like
total number of paths between two nodes
total number of paths in which a node appears
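Over a sample of walks, both counts reduce to simple tallies; the walks and node names below are hypothetical:

```python
from collections import Counter

# Hypothetical sampled walks over a term graph
walks = [["a", "b", "c"], ["a", "c"], ["b", "a", "c"], ["a", "b", "d"]]

def paths_between(walks, u, v):
    # Total number of sampled paths that contain both nodes.
    return sum(1 for w in walks if u in w and v in w)

# Total number of sampled paths in which each node appears.
appearances = Counter(node for w in walks for node in set(w))

print(paths_between(walks, "a", "c"))  # 3
print(appearances["b"])                # 3
```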