In this blog post, I talk about matrix operations from the book on Algorithms and Data Structures, continuing the series of earlier posts on that book. The book says operations on matrices are at the heart of scientific computing, and it therefore discusses efficient algorithms for working with them. One important issue that arises in practice is numerical stability: rounding errors introduced during a computation can accumulate and distort the results, and algorithms that let this happen are said to be numerically unstable.
A matrix is a rectangular array of numbers. The element of a matrix in row i and column j is aij. The set of all m x n matrices with real-valued entries is denoted by R^(m x n). The transpose of a matrix A is the matrix A' obtained by exchanging the rows and columns of A.
A vector is a one-dimensional array of numbers. A column vector is an n x 1 matrix, a row vector is a 1 x n matrix, and the transpose of a column vector is a row vector.
A unit vector ei is one whose ith element is 1 and all of whose other elements are 0; its size is usually clear from the context. A zero matrix is a matrix whose every entry is 0.
Square n x n matrices arise frequently. Several special cases of square matrices are of particular interest.
A diagonal matrix has aij = 0 whenever i != j. An upper triangular matrix U is one for which uij = 0 whenever i > j: all entries below the diagonal are zero. A lower triangular matrix L is one for which lij = 0 whenever i < j: all entries above the diagonal are zero.
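To make these definitions concrete, here is a small Python sketch of my own (not code from the book) that represents a matrix as a list of rows and implements the transpose along with checks for the two triangular conditions above.

def transpose(A):
    # Exchange rows and columns: entry (i, j) of A becomes entry (j, i) of A'.
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def is_upper_triangular(A):
    # True if every entry below the main diagonal (i > j) is zero.
    return all(A[i][j] == 0 for i in range(len(A)) for j in range(len(A[0])) if i > j)

def is_lower_triangular(A):
    # True if every entry above the main diagonal (i < j) is zero.
    return all(A[i][j] == 0 for i in range(len(A)) for j in range(len(A[0])) if i < j)

A = [[1, 2, 3],
     [0, 4, 5],
     [0, 0, 6]]
print(transpose(A))           # [[1, 0, 0], [2, 4, 0], [3, 5, 6]]
print(is_upper_triangular(A)) # True
print(is_lower_triangular(A)) # False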
Operations on matrices include matrix addition, where C = A + B is defined entrywise by cij = aij + bij (A, B, and C must all have the same dimensions).
Matrix subtraction is the addition of a negated matrix: A - B = A + (-B).
In matrix multiplication, we start with two matrices A and B that are compatible, in the sense that the number of columns of A equals the number of rows of B. If A is m x n and B is n x p, then C = AB is the m x p matrix whose entries are cik = Sum for j = 1 to n of aij * bjk.
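The definition of the product translates directly into code. Here is a minimal Python sketch of my own (not the book's code) that implements the straightforward method, which performs one multiplication per (i, j, k) triple.

def mat_mul(A, B):
    # A is m x n, B is n x p; the result C = AB is m x p with
    # c_ik = sum over j of a_ij * b_jk.
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "columns of A must equal rows of B"
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul(A, B))  # [[19, 22], [43, 50]]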
Matrix multiplication has several useful algebraic properties.
Identity matrices are identities for matrix multiplication: Im A = A In = A for any m x n matrix A.
Matrix multiplication is associative: A(BC) = (AB)C.
Matrix multiplication also distributes over matrix addition: A(B + C) = AB + AC and (B + C)D = BD + CD.
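These identities are easy to spot-check numerically. The Python sketch below (my own, with its own small helpers so that it stands alone) verifies them on random 3 x 3 integer matrices; integer arithmetic keeps the comparisons exact.

import random

def mul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B))) for k in range(len(B[0]))]
            for i in range(len(A))]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def rand(n):
    return [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]

A, B, C, D = (rand(3) for _ in range(4))
I = [[1 if i == j else 0 for j in range(3)] for i in range(3)]

assert mul(A, I) == mul(I, A) == A                      # identity
assert mul(A, mul(B, C)) == mul(mul(A, B), C)           # associativity
assert mul(A, add(B, C)) == add(mul(A, B), mul(A, C))   # left distributivity
assert mul(add(B, C), D) == add(mul(B, D), mul(C, D))   # right distributivity
print("all identities hold on this example")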
The inverse of an n x n matrix A, if it exists, is the n x n matrix A^(-1) such that A A^(-1) = A^(-1) A = In, where In is the n x n identity matrix.
If a matrix has an inverse, it is called invertible or nonsingular; a matrix with no inverse is called noninvertible or singular.
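As an aside, one way to compute an inverse is Gauss-Jordan elimination on the augmented matrix [A | In]. The Python sketch below is my own illustration (not the book's method) and uses partial pivoting, which helps with the numerical-stability concern mentioned at the start; it raises an error when the matrix is singular.

def inverse(A, eps=1e-12):
    n = len(A)
    # Build the augmented matrix [A | I] with floating-point entries.
    M = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: pick the remaining row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < eps:
            raise ValueError("matrix is singular (no inverse)")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    # The right half of the augmented matrix is now A^(-1).
    return [row[n:] for row in M]

A = [[4.0, 7.0], [2.0, 6.0]]
print(inverse(A))  # approximately [[0.6, -0.7], [-0.2, 0.4]]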
The vectors x1, x2, ..., xn are linearly dependent if there exist coefficients c1, c2, ..., cn, not all of which are zero, such that c1 x1 + c2 x2 + ... + cn xn = 0. For example, (1, 2) and (2, 4) are linearly dependent because 2(1, 2) - (2, 4) = (0, 0). Vectors that are not linearly dependent are linearly independent.
The ijth minor of an n x n matrix A, for n > 1, is the (n-1) x (n-1) matrix A[ij] obtained by deleting the ith row and the jth column of A. The determinant of an n x n matrix A can be defined recursively in terms of its minors by
det(A) = a11, if n = 1
det(A) = Sum for j = 1 to n of (-1)^(1+j) * a1j * det(A[1j]), if n > 1
The term (-1)^(i+j) * det(A[ij]) is known as the cofactor of the element aij.
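The recursive definition can be transcribed almost verbatim into Python. This is my own sketch and it is only for illustration: cofactor expansion does an amount of work proportional to n!, so practical code computes determinants through a factorization instead.

def minor(A, i, j):
    # The (n-1) x (n-1) matrix obtained by deleting row i and column j of A.
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    # Determinant by cofactor expansion along the first row
    # (rows and columns are 0-indexed here, so the sign is (-1)**j).
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det(A))  # -3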
The determinant of a square matrix A has the following properties:
If any row or any column of A is zero, then det(A) = 0.
det(A) is multiplied by lambda if the entries of any one row (or any one column) are all multiplied by lambda.
det(A) is unchanged if the entries in one row (or column) are added to those in another row (or column).
det(A) equals det(A'), the determinant of the transpose.
det(A) is multiplied by -1 if any two rows (or any two columns) are exchanged.
For any two n x n matrices A and B, det(AB) = det(A) * det(B).
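The last property is easy to spot-check numerically. The snippet below assumes NumPy is available (an assumption on my part, not something from the book) and compares det(AB) with det(A) * det(B) on random integer matrices.

import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 4)).astype(float)
B = rng.integers(-3, 4, size=(4, 4)).astype(float)

lhs = np.linalg.det(A @ B)
rhs = np.linalg.det(A) * np.linalg.det(B)
print(np.isclose(lhs, rhs))  # True, up to floating-point rounding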