Basic Notes for Matrices

  • Matrix Multiplication: The meaning of $Ax$ (range of $A$) and $Ax=b$.

Most students, even after finishing a linear algebra course, may still not truly understand matrix multiplication. Here I will show the reader, roughly speaking, the most significant meaning of matrix multiplication.

Denote a matrix $A$ by its columns, $A = \left[ {\begin{array}{*{20}{c}}
{{A_1}}&{{A_2}}& \cdots &{{A_n}}
\end{array}} \right]$, where ${A_i}$ is the $i$th column of $A$. Let $x = {\left[ {\begin{array}{*{20}{c}}
{{x_1}}&{{x_2}}& \cdots &{{x_n}}
\end{array}} \right]^T}$ be a vector, where ${x_i} \in R$. Then \[Ax = \left[ {\begin{array}{*{20}{c}}
{{A_1}}&{{A_2}}& \cdots &{{A_n}}
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{x_1}}\\
{{x_2}}\\
 \vdots \\
{{x_n}}
\end{array}} \right] = {x_1}{A_1} + {x_2}{A_2} +  \cdots  + {x_n}{A_n} = \sum\limits_{i = 1}^n {{x_i}{A_i}} \]

which shows that $Ax$ is a linear combination of the columns of $A$, with coefficients given by the components of the vector $x$. It follows that the range of $A$, $\left\{ {v:v = Ax,x \in {R^n}} \right\} \buildrel \Delta \over = {\rm{range}}\left( A \right)$, is spanned by the columns of $A$. That is, if $v \in {\rm{range}}\left( A \right)$, then $v = {a_1}{A_1} + {a_2}{A_2} +  \cdots  + {a_n}{A_n}$ for some scalars ${a_i}$. This may be the most significant interpretation of matrix multiplication: for a matrix $B = \left[ {\begin{array}{*{20}{c}}
{{B_1}}&{{B_2}}& \cdots &{{B_n}}
\end{array}} \right]$ (where ${B_i}$ is the $i$th column of $B$), we have $AB = A\left[ {\begin{array}{*{20}{c}}
{{B_1}}&{{B_2}}& \cdots &{{B_n}}
\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}
{A{B_1}}&{A{B_2}}& \cdots &{A{B_n}}
\end{array}} \right]$, so multiplying a matrix by a matrix reduces to multiplying the matrix by several vectors.
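This column-by-column view of $AB$ is easy to check numerically. The following is a small sketch (using NumPy; the matrices are illustrative and not part of the original notes):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Compute AB one column at a time: the j-th column of AB is A @ B[:, j]
AB_by_columns = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])

assert np.allclose(AB_by_columns, A @ B)
```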

  1. Simplifying computation.

Que. Suppose $A = \left[ {\begin{array}{*{20}{c}}
1&2\\
{ - 1}&4\\
0&7
\end{array}} \right]$ and $x = \left[ {\begin{array}{*{20}{c}}
2\\
1
\end{array}} \right]$. Find $Ax$.

Ans. With the usual entry-by-entry algorithm, we cannot get the result without six multiplications. With the column view above, however, we may complete the computation within three steps: \[Ax = \left[ {\begin{array}{*{20}{c}}
1&2\\
{ - 1}&4\\
0&7
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
2\\
1
\end{array}} \right] = 2\left[ {\begin{array}{*{20}{c}}
1\\
{ - 1}\\
0
\end{array}} \right] + 1\left[ {\begin{array}{*{20}{c}}
2\\
4\\
7
\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}
2\\
{ - 2}\\
0
\end{array}} \right] + \left[ {\begin{array}{*{20}{c}}
2\\
4\\
7
\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}
4\\
2\\
7
\end{array}} \right].\]
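The same computation can be sketched in NumPy (illustrative only), forming $Ax$ directly as a combination of the columns of $A$:

```python
import numpy as np

A = np.array([[1, 2],
              [-1, 4],
              [0, 7]])
x = np.array([2, 1])

# Ax as a linear combination of the columns of A
Ax = x[0] * A[:, 0] + x[1] * A[:, 1]

assert np.array_equal(Ax, A @ x)   # same result as ordinary multiplication
print(Ax)  # [4 2 7]
```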

  2. Helping to understand some concepts.

The rank of a matrix $A$ is the size of the largest collection of linearly independent columns of $A$ (the column rank) or the size of the largest collection of linearly independent rows of $A$ (the row rank). For every matrix, the column rank is equal to the row rank.

The column space $C(A)$ of a matrix $A$ (sometimes called the range of a matrix) is the set of all possible linear combinations of its column vectors.

Moreover, it's obvious that the column rank of $A$ is the dimension of the column space of $A$.

It's well known that the linear system $Ax = b$ is consistent (has at least one solution) if and only if ${\rm{rank}}\left( A \right) = {\rm{rank}}\left( {A,b} \right)$. We now explain one direction of this proposition using the note above.

Suppose $Ax = b$ is consistent. Then there exists ${x_0} = {\left[ {\begin{array}{*{20}{c}}
{{a_1}}&{{a_2}}& \cdots &{{a_n}}
\end{array}} \right]^T}$ such that \[A{x_0} = \left[ {\begin{array}{*{20}{c}}
{{A_1}}&{{A_2}}& \cdots &{{A_n}}
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{a_1}}\\
{{a_2}}\\
 \vdots \\
{{a_n}}
\end{array}} \right] = {a_1}{A_1} + {a_2}{A_2} +  \cdots  + {a_n}{A_n} = b.\]

This indicates that the vector $b$ on the right-hand side lies in the column space of $A$; that is, it can be represented as a linear combination of the columns of $A$, so $b$ is linearly dependent on the columns of $A$. Thus the size of the largest collection of linearly independent columns of $(A,b)$ equals that of $A$, which immediately gives ${\rm{rank}}\left( {A,b} \right) = {\rm{rank}}\left( A \right)$ by the definition of rank.
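This rank criterion can be checked numerically. Here is a minimal sketch (assuming NumPy) that builds a consistent system by taking $b$ in the column space of $A$:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [-1.0, 4.0],
              [0.0, 7.0]])
b = A @ np.array([2.0, 1.0])   # b is in the column space by construction

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))

assert rank_A == rank_Ab   # hence the system Ax = b is consistent
```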

  • Matrix with rank 1.

A matrix $A$ with rank 1 has exactly one linearly independent (nonzero) column, on which every other column depends. Suppose ${\rm{rank}}\left( A \right) = 1$ and ${A_1} \ne 0$ (here $0$ is the zero vector). Then ${A_i} = {a_i}{A_1}$ for $i = 2, \cdots ,n$, and we set ${a_1} = 1$. Thus \[A = \left[ {\begin{array}{*{20}{c}}
{{A_1}}&{{A_2}}& \cdots &{{A_n}}
\end{array}} \right] = {A_1}\left[ {\begin{array}{*{20}{c}}
1&{{a_2}}& \cdots &{{a_n}}
\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}
{{A_{11}}}\\
{{A_{21}}}\\
 \vdots \\
{{A_{n1}}}
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
1&{{a_2}}& \cdots &{{a_n}}
\end{array}} \right]\]

where ${{A_{i1}}}$ is the $(i,1)$ entry of the matrix $A$. We see that a matrix with rank 1 can be represented as the product of a column vector with a row vector (an outer product).
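This outer-product representation is easy to demonstrate numerically (a sketch assuming NumPy, with arbitrary illustrative vectors):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])   # the column vector
v = np.array([4.0, 5.0, 6.0])   # the row vector
A = np.outer(u, v)              # their outer product is a rank-1 matrix

assert np.linalg.matrix_rank(A) == 1
```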

Now let us discuss the eigenvectors and eigenvalues of these matrices. Multiplying the matrix $A$ on the right by the vector ${\left[ {\begin{array}{*{20}{c}}
{{A_{11}}}&{{A_{21}}}& \cdots &{{A_{n1}}}
\end{array}} \right]^T}$, we have \[\begin{array}{l}
A\left[ {\begin{array}{*{20}{c}}
{{A_{11}}}\\
{{A_{21}}}\\
 \vdots \\
{{A_{n1}}}
\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}
{{A_{11}}}\\
{{A_{21}}}\\
 \vdots \\
{{A_{n1}}}
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{a_1}}&{{a_2}}& \cdots &{{a_n}}
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{A_{11}}}\\
{{A_{21}}}\\
 \vdots \\
{{A_{n1}}}
\end{array}} \right]\\
 \qquad\qquad\qquad = \left[ {\begin{array}{*{20}{c}}
{{A_{11}}}\\
{{A_{21}}}\\
 \vdots \\
{{A_{n1}}}
\end{array}} \right]\left( {\left[ {\begin{array}{*{20}{c}}
{{a_1}}&{{a_2}}& \cdots &{{a_n}}
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{A_{11}}}\\
{{A_{21}}}\\
 \vdots \\
{{A_{n1}}}
\end{array}} \right]} \right) = \underbrace{\left( {\sum\limits_{i = 1}^n {{a_i}{A_{i1}}} } \right)}_{\lambda}\left[ {\begin{array}{*{20}{c}}
{{A_{11}}}\\
{{A_{21}}}\\
 \vdots \\
{{A_{n1}}}
\end{array}} \right]
\end{array}\]

which implies that $\left[ {\begin{array}{*{20}{c}}
{{A_{11}}}\\
{{A_{21}}}\\
 \vdots \\
{{A_{n1}}}
\end{array}} \right]$ is an eigenvector of $A$ with eigenvalue ${\lambda  = \sum\limits_{i = 1}^n {{a_i}{A_{i1}}} }$. Note that $\lambda$ equals the trace of $A$: since ${A_{ii}} = {a_i}{A_{i1}}$, we have ${\rm{trace}}\left( A \right) = \sum\limits_{i = 1}^n {{A_{ii}}}  = \sum\limits_{i = 1}^n {{a_i}{A_{i1}}}  = \lambda$. Since ${\rm{rank}}\left( A \right) = 1$, the other eigenvalues of $A$ are all zero.

Ex. Consider the matrix $A = \left[ {\begin{array}{*{20}{c}}
1&3&2\\
{ - 1}&{ - 3}&{ - 2}\\
2&6&4
\end{array}} \right]$. Then it's easy to see that $A = \left[ {\begin{array}{*{20}{c}}
1\\
{ - 1}\\
2
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
1&3&2
\end{array}} \right]$. The only nonzero eigenvalue of $A$ is $\left[ {\begin{array}{*{20}{c}}
1&3&2
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
1\\
{ - 1}\\
2
\end{array}} \right] = 1 + \left( { - 3} \right) + 4 = 2 = {\rm{trace}}\left( A \right)$. And the corresponding eigenvector is $\left[ {\begin{array}{*{20}{c}}
1\\
{ - 1}\\
2
\end{array}} \right]$.
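The example above can be verified numerically. This sketch (assuming NumPy) rebuilds $A$ as an outer product and checks both the eigenvalue-equals-trace claim and the eigenvector relation $Av = \lambda v$:

```python
import numpy as np

u = np.array([1.0, -1.0, 2.0])   # column vector from the example
v = np.array([1.0, 3.0, 2.0])    # row vector from the example
A = np.outer(u, v)               # the matrix A in the example

lam = v @ u                      # the only nonzero eigenvalue
assert np.isclose(lam, 2.0)
assert np.isclose(lam, np.trace(A))   # eigenvalue equals trace(A)
assert np.allclose(A @ u, lam * u)    # u is the corresponding eigenvector
```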

Reposted from: https://www.cnblogs.com/aujun/p/3895992.html
