Paper writing template
In this section we develop two simple but powerful prediction methods: the linear model fit by least squares and the k-nearest-neighbor (KNN) prediction rule. The linear model makes huge assumptions about structure and yields stable but possibly inaccurate predictions. The method of k-nearest neighbors makes very mild structural assumptions: its predictions are often accurate but can be unstable.
- The usage of the colon.
- Section description.
- Punctuation in equations.
- Context above each equation.
Given a vector of inputs $X^T=(X_1,X_2,\dots,X_p)$, we predict the output $Y$ via the model
$$\hat Y = \hat\beta_0 + \sum_{j=1}^p X_j \hat\beta_j.$$
The term $\hat\beta_0$ is the intercept, also known as the bias in machine learning. Often it is convenient to include the constant variable 1 in $X$, include $\hat\beta_0$ in the vector of coefficients $\hat\beta$, and then write the linear model in vector form as an inner product
$$\hat Y = X^T\hat\beta,$$
where $X^T$ denotes vector or matrix transpose ($X$ being a column vector).
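As an illustration (a minimal numpy sketch, not part of the original text), here is the inner-product form with the constant 1 absorbed into $X$; the specific numbers are made up:

```python
import numpy as np

# Hypothetical example: p = 2 inputs plus the constant 1 absorbed into X.
x = np.array([1.0, 0.5, -1.2])         # [1, X_1, X_2], constant first
beta_hat = np.array([0.3, 2.0, -0.7])  # [beta_0, beta_1, beta_2]

# The linear prediction as an inner product: Y_hat = X^T beta_hat.
y_hat = x @ beta_hat
print(y_hat)  # 0.3 + 2.0*0.5 + (-0.7)*(-1.2) = 2.14
```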
In this approach, we pick the coefficients $\beta$ to minimize the residual sum of squares
$$\mathrm{RSS}(\beta)=\sum_{i=1}^N\left(y_i-x_i^T\beta\right)^2.$$
$\mathrm{RSS}(\beta)$ is a quadratic function of the parameters, and hence its minimum always exists, but may not be unique. The solution is easiest to characterize in matrix notation. We can write
$$\mathrm{RSS}(\beta)=(\mathbf y-\mathbf X\beta)^T(\mathbf y-\mathbf X\beta),$$
where $\mathbf X$ is an $N\times p$ matrix with each row an input vector, and $\mathbf y$ is an $N$-vector of the outputs in the training set. Differentiating w.r.t. $\beta$ we get the normal equations
$$\mathbf X^T(\mathbf y-\mathbf X\beta)=0.$$
If $\mathbf X^T\mathbf X$ is nonsingular, then the unique solution is given by
$$\hat\beta = (\mathbf X^T\mathbf X)^{-1}\mathbf X^T\mathbf y,$$
and the fitted value at the $i$th input $x_i$ is $\hat y_i=\hat y(x_i)=x_i^T\hat\beta$.
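A minimal numpy sketch of this closed-form fit, assuming synthetic training data (everything here is made up for illustration); it solves the normal equations with np.linalg.solve rather than forming $(\mathbf X^T\mathbf X)^{-1}$ explicitly, which is numerically preferable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: N = 100 points, two raw inputs.
N = 100
X_raw = rng.normal(size=(N, 2))
X = np.column_stack([np.ones(N), X_raw])   # prepend the constant 1
true_beta = np.array([0.5, 2.0, -1.0])
y = X @ true_beta + 0.1 * rng.normal(size=N)

# Normal equations X^T (y - X beta) = 0, i.e. (X^T X) beta = X^T y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Fitted values y_hat_i = x_i^T beta_hat, and the achieved RSS.
y_fit = X @ beta_hat
rss = np.sum((y - y_fit) ** 2)
print(beta_hat, rss)
```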
This section discusses two prediction methods: the linear model (fit by least squares) and KNN (k-nearest neighbors). The linear model assumes simple structure in the data; it may be less accurate but is stable. KNN assumes very little structure; its predictions can be accurate but unstable. The linear model determines its parameters by solving a least-squares problem, while KNN relies on the nearest neighbors among the training samples. The text details the mathematical foundations and implementation of both methods, including minimization of the residual sum of squares and KNN's distance computation.
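For completeness, a minimal sketch of the KNN prediction rule mentioned above, assuming Euclidean distance and prediction by averaging the outputs of the k nearest training points; the knn_predict helper and the data are hypothetical:

```python
import numpy as np

def knn_predict(x0, X_train, y_train, k=15):
    """Average the outputs of the k training points nearest to x0.

    A minimal sketch: Euclidean distance, prediction by averaging.
    """
    dists = np.linalg.norm(X_train - x0, axis=1)  # distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    return y_train[nearest].mean()

# Hypothetical usage with synthetic training data.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 2))
y_train = (X_train[:, 0] > 0).astype(float)
print(knn_predict(np.array([0.5, -0.3]), X_train, y_train, k=15))
```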