1. Vector-Vector Products
Given two vectors x, y ∈ R^n, the quantity x.T y, called the inner product or dot product of the vectors, is the real number x.T y = Σ_i x_i y_i.
Note that it is always the case that x.T y = y.T x.
Given x ∈ R^m and y ∈ R^n, the outer product x y.T ∈ R^(m×n) is the matrix whose (i, j) entry is (x y.T)_ij = x_i y_j.
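As a quick numerical check, here is a minimal NumPy sketch of both products (the specific vectors are arbitrary example data):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

inner = x @ y            # x.T y = sum_i x_i * y_i, a scalar
outer = np.outer(x, y)   # x y.T, a 3x3 matrix with entries x_i * y_j

print(inner)                      # 32.0
print(np.allclose(inner, y @ x))  # True: x.T y == y.T x
print(outer.shape)                # (3, 3)
```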
2. Matrix-Vector Products
First interpretation:
In other words, the ith entry of y is equal to the inner product of the ith row of A and x: y_i = a_i.T x, where a_i.T is the ith row of A.
Second interpretation:
In other words, y is a linear combination of the columns of A, where the coefficients of the linear combination are given by the entries of x.
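Both views can be checked numerically; a minimal NumPy sketch with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
x = np.array([10.0, 20.0])

y = A @ x

# View 1: y_i is the inner product of the ith row of A with x.
y_rows = np.array([A[i, :] @ x for i in range(A.shape[0])])

# View 2: y is a linear combination of the columns of A,
# weighted by the entries of x.
y_cols = x[0] * A[:, 0] + x[1] * A[:, 1]

print(np.allclose(y, y_rows), np.allclose(y, y_cols))  # True True
```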
So far we have been multiplying on the right by a column vector, but it is also possible to multiply on the left by a row vector.
First interpretation:
which demonstrates that the ith entry of y.T is equal to the inner product of x and the ith column of A.
Second interpretation:
So we see that y.T is a linear combination of the rows of A, where the coefficients of the linear combination are given by the entries of x.
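A similar sketch for multiplication on the left by a row vector (again with arbitrary example data):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
x = np.array([1.0, 2.0, 3.0])  # plays the role of the row vector x.T

yT = x @ A

# View 1: the ith entry of y.T is the inner product of x with
# the ith column of A.
yT_cols = np.array([x @ A[:, j] for j in range(A.shape[1])])

# View 2: y.T is a linear combination of the rows of A,
# weighted by the entries of x.
yT_rows = x[0] * A[0, :] + x[1] * A[1, :] + x[2] * A[2, :]

print(np.allclose(yT, yT_cols), np.allclose(yT, yT_rows))  # True True
```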
3. Matrix-Matrix Products
First, we can view matrix-matrix multiplication as a set of inner products: the (i, j) entry of C = AB is the inner product of the ith row of A and the jth column of B. Remember that since A is m×n and B is n×p, the rows of A and the columns of B both lie in R^n, so these inner products all make sense.
Put another way, AB is equal to the sum, over all i, of the outer product of the ith column of A and the ith row of B.
Second, we can also view matrix-matrix multiplication as a set of matrix-vector products.
Here the ith column of C is given by the matrix-vector product with the vector on the right: c_i = A b_i, where b_i is the ith column of B.
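All three views can be verified against each other; a minimal NumPy sketch with random example matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

C = A @ B

# View 1: entrywise, C_ij is the inner product of the ith row of A
# and the jth column of B.
C_inner = np.array([[A[i, :] @ B[:, j] for j in range(B.shape[1])]
                    for i in range(A.shape[0])])

# View 2: C is the sum over i of the outer product of the ith column
# of A and the ith row of B.
C_outer = sum(np.outer(A[:, i], B[i, :]) for i in range(A.shape[1]))

# View 3: the ith column of C is the matrix-vector product A b_i.
C_mv = np.column_stack([A @ B[:, i] for i in range(B.shape[1])])

print(np.allclose(C, C_inner), np.allclose(C, C_outer), np.allclose(C, C_mv))
```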
Symmetric Matrices
A square matrix A is symmetric if A = A.T. It is anti-symmetric if A = -A.T.
It is easy to show that for any square matrix A, the matrix A + A.T is symmetric and the matrix A - A.T is anti-symmetric; it follows that any square matrix can be written as the sum of a symmetric matrix and an anti-symmetric matrix, A = (A + A.T)/2 + (A - A.T)/2.
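A short NumPy sketch of this decomposition, using an arbitrary 2×2 example:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

S = (A + A.T) / 2   # symmetric part: S == S.T
K = (A - A.T) / 2   # anti-symmetric part: K == -K.T

print(np.allclose(S, S.T))    # True
print(np.allclose(K, -K.T))   # True
print(np.allclose(S + K, A))  # True: A is the sum of the two parts
```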
Norms:
A norm of a vector is informally a measure of its "length"; the most common example is the Euclidean (ℓ2) norm ‖x‖_2 = sqrt(Σ_i x_i²). Norms can also be defined for matrices, such as the Frobenius norm ‖A‖_F = sqrt(Σ_i Σ_j A_ij²).
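A minimal NumPy check of the Frobenius norm against the built-in matrix norm:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

fro_manual = np.sqrt(np.sum(A ** 2))          # sqrt of sum of squared entries
fro_numpy = np.linalg.norm(A, ord='fro')      # numpy's Frobenius norm

print(np.isclose(fro_manual, fro_numpy))      # True
```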
Orthogonal Matrices:
It follows immediately from the definition of orthogonality and normality that U.T U = I = U U.T.
In other words, the inverse of an orthogonal matrix is its transpose.
Note that if U is not square (n < m) but its columns are still orthonormal, then U.T U = I still holds, but U U.T ≠ I.
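A NumPy sketch of both cases, using QR factorizations of random matrices to produce orthonormal columns:

```python
import numpy as np

rng = np.random.default_rng(0)

# Square case: U.T @ U == I == U @ U.T, and the inverse is the transpose.
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
print(np.allclose(U.T @ U, np.eye(4)))     # True
print(np.allclose(U @ U.T, np.eye(4)))     # True
print(np.allclose(np.linalg.inv(U), U.T))  # True

# Tall non-square case (n < m): U.T @ U == I still holds,
# but U @ U.T is only a projection, not the identity.
V, _ = np.linalg.qr(rng.standard_normal((5, 3)))
print(np.allclose(V.T @ V, np.eye(3)))     # True
print(np.allclose(V @ V.T, np.eye(5)))     # False
```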
Range and Nullspace of a Matrix:
The projection of a vector y ∈ R^m onto the span of {x1, ..., xn} is the vector v ∈ span({x1, ..., xn}) such that v is as close as possible to y, as measured by the Euclidean norm ‖v - y‖_2. We denote the projection as Proj(y; {x1, ..., xn}) and define it formally as the argmin over v ∈ span({x1, ..., xn}) of ‖y - v‖_2.
The range (sometimes also called the columnspace) of a matrix A, denoted R(A), is the span of the columns of A. In other words, R(A) = {v : v = Ax for some x}.
Making a few technical assumptions (namely that A is full rank and that n < m), the projection of a vector y onto the range of A is given by Proj(y; A) = A (A.T A)^(-1) A.T y.
Looking at the definition of the projection, it should not be too hard to convince yourself that this is in fact the same objective that we minimized in our least squares problem, so these problems are naturally very connected.
When A contains only a single column a, this gives the special case for the projection of a vector onto a line: Proj(y; a) = (a a.T / a.T a) y.
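A minimal NumPy sketch tying the projection formula to least squares, with random example data (and assuming A is full rank):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))   # m > n, full rank with probability 1
y = rng.standard_normal(5)

# Proj(y; A) = A (A.T A)^(-1) A.T y
proj = A @ np.linalg.inv(A.T @ A) @ A.T @ y

# The same point is A x*, where x* solves the least squares problem
# min_x ||A x - y||_2.
x_star, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(proj, A @ x_star))  # True

# Single-column special case: projection onto the line spanned by a,
# Proj(y; a) = (a a.T / a.T a) y.
a = A[:, 0]
proj_line = (np.outer(a, a) / (a @ a)) @ y
print(np.allclose(proj_line, a * (a @ y) / (a @ a)))  # True
```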
Quadratic Forms and Positive Semidefinite Matrices
Given a square matrix A and a vector x, the scalar x.T A x is called a quadratic form. A symmetric matrix A is positive semidefinite if x.T A x ≥ 0 for all x, and positive definite if x.T A x > 0 for all nonzero x. There is one type of positive definite matrix that comes up frequently and so deserves some special mention. Given any matrix A of size m × n (not necessarily symmetric or even square), the matrix A.T A (sometimes called a Gram matrix) is always positive semidefinite. Further, if m ≥ n (and we assume for convenience that A is full rank), then A.T A is positive definite.
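A quick numerical check that a Gram matrix is positive semidefinite, by inspecting its eigenvalues (random example data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))   # m > n, full rank with probability 1

G = A.T @ A                        # Gram matrix
eigvals = np.linalg.eigvalsh(G)    # eigvalsh: for symmetric matrices

print(np.all(eigvals >= -1e-10))   # True: positive semidefinite
print(np.all(eigvals > 0))         # True here: positive definite since m > n
```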
Eigenvalues and Eigenvectors of Symmetric Matrices
Two remarkable properties come about when we look at the eigenvalues and eigenvectors of a symmetric matrix A. First, it can be shown that all the eigenvalues of A are real. Second, the eigenvectors of A are orthonormal, i.e., the matrix X of eigenvectors defined above is an orthogonal matrix (for this reason, we denote the matrix of eigenvectors as U in this case).
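Both properties can be seen numerically with numpy.linalg.eigh, which is designed for symmetric matrices (random example data):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                  # symmetrize to get a symmetric matrix

eigvals, U = np.linalg.eigh(A)

print(np.isrealobj(eigvals))                        # True: eigenvalues are real
print(np.allclose(U.T @ U, np.eye(4)))              # True: U is orthogonal
print(np.allclose(U @ np.diag(eigvals) @ U.T, A))   # True: A = U Lambda U.T
```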
An application where eigenvalues and eigenvectors come up frequently is in maximizing some function of a matrix. In particular, for a symmetric matrix A, consider the following maximization problem: maximize x.T A x subject to ‖x‖_2 = 1. That is, we want to find the vector of norm 1 that maximizes the quadratic form. Assuming the eigenvalues are ordered as λ1 ≥ λ2 ≥ ... ≥ λn, the optimal x for this optimization problem is x1, the eigenvector corresponding to λ1.
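A minimal NumPy sketch of this result: the top eigenvector attains the maximum value λ1, and random unit vectors never exceed it:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                       # symmetrize

eigvals, U = np.linalg.eigh(A)          # eigh: eigenvalues in ascending order,
x1 = U[:, -1]                           # so the top eigenvector is last

# x1 attains the maximum value lambda_1 of the quadratic form.
print(np.isclose(x1 @ A @ x1, eigvals[-1]))  # True

# Random unit vectors stay at or below lambda_1.
for _ in range(5):
    v = rng.standard_normal(4)
    v /= np.linalg.norm(v)
    assert v @ A @ v <= eigvals[-1] + 1e-10
print("random unit vectors stay below lambda_1")
```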
1. The inverse of an orthogonal matrix is its transpose.
2. A matrix is symmetric if and only if it is orthogonally diagonalizable.
3. A symmetric matrix is diagonalized by a matrix of its orthonormal eigenvectors.
This article covers the fundamentals of matrix operations, including inner and outer products of vectors and the different ways of viewing matrix-vector and matrix-matrix multiplication. It also discusses symmetric matrices, orthogonal matrices and their properties, and matrix norms, along with the range and nullspace of a matrix. It works through vector projections by example and introduces more advanced topics such as positive definite matrices, eigenvalues, and eigenvectors.
