Paper writing template - the usage of the colon

This article discusses two prediction methods: the linear model (fit by least squares) and KNN (k-nearest neighbors). The linear model assumes the data have a simple structure, so its predictions may be inaccurate but are stable; KNN assumes very little structure, so its predictions are accurate but can be unstable. The linear model determines its parameters by solving a least-squares problem, while KNN relies on the nearest neighbors among the training samples. The article walks through the mathematical foundations and implementation details of both methods, including minimization of the residual sum of squares and the distance computation used by KNN.

Paper writing template

In this section we develop two simple but powerful prediction methods: the linear model fit by least squares and the KNN prediction rule. The linear model makes huge assumptions about structure and yields stable but possibly inaccurate predictions. The method of k-nearest neighbors (KNN) makes very mild structural assumptions: its predictions are often accurate but can be unstable.

  • The usage of the colon.
  • Section description.
  • Punctuation in equations.
  • Context above the equation.
    Given a vector of inputs $X^T = (X_1, X_2, \ldots, X_p)$, we predict the output $Y$ via the model
    $$\hat Y = \hat\beta_0 + \sum_{j=1}^p X_j \hat\beta_j.$$
    The term $\hat\beta_0$ is the intercept, also known as the bias in machine learning. Often it is convenient to include the constant variable 1 in $X$, include $\hat\beta_0$ in the vector of coefficients $\hat\beta$, and then write the linear model in vector form as an inner product
    $$\hat Y = X^T\hat\beta,$$
    where $X^T$ denotes vector or matrix transpose ($X$ being a column vector).
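As a minimal numerical sketch of this inner-product form (the coefficient values and the use of NumPy are illustrative assumptions, not from the text), prepending the constant 1 to the input absorbs the intercept into $\hat\beta$:

```python
import numpy as np

# Hypothetical fitted coefficients: [intercept, beta_1, beta_2].
beta_hat = np.array([0.5, 2.0, -1.0])

# A single input vector x = (x_1, x_2); prepend the constant 1
# so the intercept is folded into the coefficient vector.
x = np.array([3.0, 4.0])
x_aug = np.concatenate(([1.0], x))

# Prediction as an inner product: y_hat = x^T beta_hat.
y_hat = x_aug @ beta_hat
print(y_hat)  # 0.5 + 2.0*3.0 + (-1.0)*4.0 = 2.5
```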

In this approach, we pick the coefficients $\beta$ to minimize the residual sum of squares
$$\mathrm{RSS}(\beta) = \sum_{i=1}^N \bigl(y_i - x_i^T\beta\bigr)^2.$$
$\mathrm{RSS}(\beta)$ is a quadratic function of the parameters, and hence its minimum always exists, but may not be unique. The solution is easiest to characterize in matrix notation. We can write
$$\mathrm{RSS}(\beta) = (\mathbf{y} - \mathbf{X}\beta)^T(\mathbf{y} - \mathbf{X}\beta),$$
where $\mathbf{X}$ is an $N \times p$ matrix with each row an input vector, and $\mathbf{y}$ is an $N$-vector of the outputs in the training set. Differentiating w.r.t. $\beta$ we get the normal equations
$$\mathbf{X}^T(\mathbf{y} - \mathbf{X}\beta) = 0.$$
If $\mathbf{X}^T\mathbf{X}$ is nonsingular, then the unique solution is given by
$$\hat\beta = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y},$$
and the fitted value at the $i$th input $x_i$ is $\hat y_i = \hat y(x_i) = x_i^T\hat\beta$.
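As a short sketch of this fit (assuming NumPy and synthetic data invented purely for illustration), the normal equations can be solved directly rather than forming the inverse explicitly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: N observations, p inputs plus a constant column.
N, p = 100, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, p))])
true_beta = np.array([1.0, 2.0, -1.0, 0.5])
y = X @ true_beta + 0.1 * rng.normal(size=N)

# Normal equations: X^T X beta = X^T y.
# Solving the linear system is numerically preferable to computing (X^T X)^{-1}.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Fitted values and residual sum of squares.
y_hat = X @ beta_hat
rss = np.sum((y - y_hat) ** 2)
print(beta_hat, rss)
```

When $\mathbf{X}^T\mathbf{X}$ is nonsingular this recovers the closed form $\hat\beta = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$; `np.linalg.solve` simply avoids computing the inverse.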

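For contrast with the linear model, here is a minimal sketch of the KNN prediction rule mentioned in the introduction, assuming a regression setting with Euclidean distance; the choice of $k$ and the data are illustrative, not prescribed by the text:

```python
import numpy as np

def knn_predict(x_query, X_train, y_train, k=5):
    # Average the outputs of the k training points nearest to x_query
    # (Euclidean distance); an illustrative sketch, not a tuned implementation.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(dists)[:k]
    return y_train[nearest].mean()

# Example usage with made-up data.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(50, 2))
y_train = X_train @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=50)
print(knn_predict(np.array([0.2, -0.1]), X_train, y_train, k=5))
```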