Today we cover eigenvector centrality for graphs. It is a closeness-style measure, but it is not based on the sum of distances from a node to all others, because that sum can be misleading. For example, a node A that is close to a small, fairly closed group within a large graph may be distant from the rest of the graph, while another node B sits at a moderate distance from the whole graph. The farness measures for A and B may then be quite similar in magnitude, yet B is really more central than A.
This method finds the more central nodes in terms of the overall structure of the graph and pays less attention to the influence of local sub-graphs. The approach identifies "dimensions" of the distances between nodes: the first dimension captures the global pattern of distances among nodes, and the subsequent dimensions capture more specific, local sub-structures.
In a way, eigenvector centrality measures the centrality of a vertex based on how important its neighbors are. To make sense of this, we first need a firm understanding of eigenvectors.
An eigenvector v of a matrix B is a non-zero vector that does not rotate when B is applied to it, except perhaps to point in precisely the opposite direction. v may change length or reverse direction, but it will not turn sideways. In other words, there is some scalar lambda such that Bv = lambda v; this lambda is the eigenvalue. If you scale an eigenvector, it is still an eigenvector with the same eigenvalue.
Iterative methods apply B to a vector over and over again. If the eigenvalue has magnitude less than one, repeated application of B shrinks the component of v along that eigenvector until it vanishes; if the eigenvalue is greater than one, that component grows without bound.
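A tiny NumPy sketch of this behavior (illustrative only), using a diagonal matrix whose eigenvalues are simply its diagonal entries:

```python
import numpy as np

# Repeatedly applying B scales each eigenvector component by its eigenvalue,
# so components with |lambda| < 1 die out and those with |lambda| > 1 blow up.
B = np.diag([2.0, 0.5])          # toy matrix with eigenvalues 2 and 0.5

v = np.array([1.0, 1.0])         # equal mix of both eigenvectors
for _ in range(30):
    v = B @ v

# After 30 steps the lambda=2 component is 2**30 (about 1.07e9) while the
# lambda=0.5 component is 0.5**30 (about 9.3e-10): one exploded, one vanished.
print(v)
```

Practical power-iteration methods avoid the explosion by renormalizing the vector after every step; only the direction matters.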
The eigenvalues are the solutions of the characteristic equation det(B - lambda I) = 0.
As an example, take the matrix

    B = [  2  -4 ]
        [ -1  -1 ]

    det(B - lambda I) = | 2-lambda      -4     |
                        |    -1     -1-lambda  |
                      = (2-lambda)(-1-lambda) - (-4)(-1)
                      = lambda^2 - lambda - 6
                      = (lambda - 3)(lambda + 2)
This gives eigenvalues 3 and -2. We find an eigenvector v by solving (B - lambda I)v = 0. For lambda = 3, both rows reduce to -v1 - 4v2 = 0, so

    v = [ -4 ]
        [  1 ]

is an eigenvector.
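We can check the worked example numerically with NumPy:

```python
import numpy as np

# Verify the hand calculation: eigenvalues should be 3 and -2,
# and (-4, 1) should be an eigenvector for lambda = 3 (up to scaling).
B = np.array([[ 2., -4.],
              [-1., -1.]])

vals, vecs = np.linalg.eig(B)
print(np.sort(vals))             # eigenvalues -2 and 3

v = np.array([-4., 1.])
print(B @ v)                     # equals 3 * v, confirming Bv = lambda v
```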
If B is symmetric, then there exists a set of n linearly independent eigenvectors of B, denoted v1, v2, ..., vn. This set is not unique, since each eigenvector can be scaled, but the eigenvalue corresponding to each eigenvector is uniquely determined by B.
Eigenvectors are useful because any vector can be decomposed into pieces whose behavior under B is understood: any n-dimensional vector can be expressed as a linear combination of the eigenvectors, and B acts on each eigenvector component separately.
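A short sketch of this decomposition, using an arbitrary symmetric 2x2 matrix chosen for illustration:

```python
import numpy as np

# A symmetric matrix has orthonormal eigenvectors, so any vector x can be
# written in the eigenbasis, and B acts on each coordinate independently.
B = np.array([[2., 1.],
              [1., 2.]])                 # symmetric; eigenvalues 1 and 3

vals, V = np.linalg.eigh(B)              # columns of V: orthonormal eigenvectors
x = np.array([5., -1.])
c = V.T @ x                              # coordinates of x in the eigenbasis

# Applying B is the same as scaling each eigen-coordinate by its eigenvalue:
print(B @ x)
print(V @ (vals * c))                    # same result, computed component-wise
```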
Having understood eigenvectors, finding eigenvector centrality is now straightforward: represent the graph as an adjacency matrix B and solve Bx = lambda x.
Many eigenvectors are possible, but we take the one with the largest eigenvalue. By the Perron-Frobenius theorem, for a connected graph this leading eigenvector is unique up to scaling and can be chosen with all entries positive.
NetworkX implements a method to find this:
>>> G = nx.path_graph(4)
>>> centrality = nx.eigenvector_centrality_numpy(G)
>>> print(['%s %0.2f'%(node,centrality[node]) for node in centrality])
['0 0.37', '1 0.60', '2 0.60', '3 0.37']
It uses SciPy's sparse eigenvalue solver to find the largest eigenvalue/eigenvector pair; the centrality scores are the entries of the normalized leading eigenvector.
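For intuition, the same numbers can be reproduced directly with NumPy (a sketch of the idea, not NetworkX's actual implementation):

```python
import numpy as np

# Eigenvector centrality of the 4-node path graph by hand: take the
# adjacency matrix, find the eigenvector of the largest eigenvalue,
# and normalize it to unit Euclidean length.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])

vals, vecs = np.linalg.eigh(A)          # eigh: A is symmetric
v = vecs[:, np.argmax(vals)]            # leading eigenvector (already unit norm)
v = np.abs(v)                           # fix the arbitrary sign

print(np.round(v, 2))                   # approximately 0.37, 0.60, 0.60, 0.37,
                                        # matching the NetworkX output above
```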
Courtesy: Introduction to Social Network Methods by Hanneman and Riddle; An Introduction to the Conjugate Gradient Method by Shewchuk.
#puzzle
Yesterday's question asked how many ways n elements can be divided into k groups; the answer was (n+k-1)!/(n!(k-1)!).
Today we want the number of k-tuples of positive integers whose sum is n. The answer is the binomial coefficient (n-1) choose (k-1), which is the same as the number of (k-1)-element subsets of a set with n-1 elements.
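A brute-force check of the claim for one small case (n and k chosen arbitrarily):

```python
from itertools import product
from math import comb

# Count k-tuples of positive integers summing to n by enumeration,
# and compare with the closed form C(n-1, k-1).
def count_compositions(n, k):
    return sum(1 for t in product(range(1, n + 1), repeat=k) if sum(t) == n)

n, k = 7, 3
print(count_compositions(n, k), comb(n - 1, k - 1))   # both print 15
```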