Today I will write about a few more pieces of trivia on graph theory. In a simple graph, there is always an even number of vertices of odd degree. A large number of operations can be defined on collections of graphs: for example, graph sums, differences, powers, and even graph eigenvalues can be calculated. We wanted to use these collections in text mining.
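The claim about odd-degree vertices (the handshaking lemma) is easy to check computationally. Below is a minimal sketch with a hypothetical edge list; the graph and the helper function are my own illustration, not from the original post.

```python
def degrees(n, edges):
    """Compute the degree of each vertex in a simple undirected graph
    given as a list of (u, v) edges over vertices 0..n-1."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

# A small example graph on 5 vertices (hypothetical edge list).
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
deg = degrees(5, edges)
odd_vertices = [v for v, d in enumerate(deg) if d % 2 == 1]

# The handshaking lemma guarantees an even count of odd-degree vertices:
# each edge contributes 2 to the total degree, so the sum of degrees is even.
assert len(odd_vertices) % 2 == 0
```

The check passes for any simple graph, since the sum of all degrees equals twice the number of edges.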
We referred to finding the Laplacian and the eigenvalues. The eigenvalues of a graph are the eigenvalues of its adjacency matrix, and the set of eigenvalues of a graph is called the graph spectrum. The eigenvalues are a special set of scalars associated with a matrix equation that decomposes a square matrix into an equivalent set. Each eigenvalue is associated with a so-called eigenvector.
If we have a nonzero vector X and a square matrix A, the eigenvalue equation can be expressed as
AX = λX
or equivalently as
(A − λI)X = 0
and the scalar λ is called an eigenvalue of A for the corresponding right eigenvector X.
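The eigenvalue equation can be verified numerically. Here is a minimal sketch using NumPy on the adjacency matrix of a path graph with three vertices (the choice of graph is my own illustration):

```python
import numpy as np

# Adjacency matrix of the path graph 0 - 1 - 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding right eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify A X = lambda X for each eigenpair.
for i, lam in enumerate(eigenvalues):
    x = eigenvectors[:, i]
    assert np.allclose(A @ x, lam * x)
```

For this path graph the spectrum works out to {−√2, 0, √2}, which is the graph spectrum in the sense defined above.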
Given a diagonalizable square matrix A, it can now be represented as A = PD(P^-1), where
P = the matrix whose columns are the eigenvectors of A
D = the diagonal matrix constructed from the eigenvalues
(P^-1) = the inverse of P.
This form of decomposition is called the eigendecomposition.
What this helps us with is the canonicalization of a system to its simplest form, where we reduce the number of parameters from n×n down to the n entries of the diagonal matrix.
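The decomposition A = PD(P^-1) can also be checked directly by reconstructing A from its factors. A minimal sketch, using the adjacency matrix of a triangle (complete graph K3) as an illustrative choice:

```python
import numpy as np

# Adjacency matrix of the triangle K3: every pair of vertices is adjacent.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

eigvals, P = np.linalg.eig(A)   # columns of P are the eigenvectors
D = np.diag(eigvals)            # diagonal matrix of the eigenvalues

# Reconstruct A from its eigendecomposition A = P D P^-1.
A_reconstructed = P @ D @ np.linalg.inv(P)
assert np.allclose(A_reconstructed, A)
```

For K3 the spectrum is {2, −1, −1}; the n diagonal entries of D carry all the spectral information, which is the parameter reduction described above.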