Regression
Prediction model
1. Model
Linear model:
$$y = b + \sum_i w_i x_i$$
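A minimal sketch of the linear model in plain Python (the weights and feature values below are made-up illustrations, not from the notes):

```python
# Linear model: y = b + sum_i w_i * x_i
def predict(x, w, b):
    """Predict y from features x with weights w and bias b."""
    return b + sum(w_i * x_i for w_i, x_i in zip(w, x))

# Hypothetical weights and bias
w = [0.5, -1.0]
b = 2.0
print(predict([4.0, 1.0], w, b))  # 2.0 + 0.5*4.0 - 1.0*1.0 = 3.0
```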
2. Goodness of Function
Loss function: its output measures how bad a function f is.
$$L(f) = \sum_{n=1}^{N} \left( y^n - (b + w x^n) \right)^2$$
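A sketch of this sum-of-squared-errors loss for the single-feature case, with made-up toy data lying exactly on y = 1 + 2x:

```python
def loss(w, b, xs, ys):
    """Sum of squared errors over the N training pairs (x^n, y^n)."""
    return sum((y - (b + w * x)) ** 2 for x, y in zip(xs, ys))

xs = [0.0, 1.0, 2.0]  # hypothetical toy inputs
ys = [1.0, 3.0, 5.0]  # targets generated by y = 1 + 2x
print(loss(2.0, 1.0, xs, ys))  # 0.0 -- the perfect fit
print(loss(0.0, 0.0, xs, ys))  # 1 + 9 + 25 = 35.0
```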
3. Best Function
$$w^*, b^* = \mathop{\arg\min}_{w,b} \sum_{n=1}^{N} \left( y^n - (b + w x^n) \right)^2$$
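For single-feature linear regression this argmin also has a closed-form least-squares solution; a sketch (toy data is made up):

```python
def fit(xs, ys):
    """Closed-form least-squares solution for 1-D linear regression:
    w* = cov(x, y) / var(x),  b* = mean(y) - w* * mean(x)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    w = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - w * mx
    return w, b

# Toy data generated by y = 1 + 2x, so fit recovers w = 2, b = 1
print(fit([0.0, 1.0, 2.0], [1.0, 3.0, 5.0]))
```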
Gradient Descent (an unconstrained optimization problem)
$$\mathop{\min}_{x \in \mathbb{R}^n} f(x)$$
If f is convex (e.g., linear regression), gradient descent finds the global minimum.
Algorithm
- Pick an initial point $x^{(0)}$ and set $k = 0$.
- Compute the gradient $g_k = g(x^{(k)})$. If $\|g_k\| < \varepsilon$, stop and output $x^* = x^{(k)}$. Otherwise, choose the step size $\lambda_k$ by the exact line search
$$f(x^{(k)} - \lambda_k g_k) = \mathop{\min}_{\lambda \geq 0} f(x^{(k)} - \lambda g_k)$$
- Set $x^{(k+1)} = x^{(k)} - \lambda_k g_k$, let $k = k + 1$, and return to the gradient step.
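The steps above can be sketched for the regression loss $L(w,b)$; this version uses a fixed step size in place of the exact line search, and the toy data is made up:

```python
def gradient_descent(xs, ys, lr=0.01, eps=1e-6, max_iter=100000):
    """Minimize L(w, b) = sum_n (y^n - (b + w x^n))^2 by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(max_iter):
        # Partial derivatives of the squared-error loss
        gw = sum(-2 * x * (y - (b + w * x)) for x, y in zip(xs, ys))
        gb = sum(-2 * (y - (b + w * x)) for x, y in zip(xs, ys))
        if (gw ** 2 + gb ** 2) ** 0.5 < eps:  # stop when ||g_k|| < eps
            break
        w -= lr * gw  # fixed step size lr stands in for the line search
        b -= lr * gb
    return w, b

# Toy data generated by y = 1 + 2x; descent converges near w = 2, b = 1
print(gradient_descent([0.0, 1.0, 2.0], [1.0, 3.0, 5.0]))
```

Because this loss is convex, the fixed-step version still reaches the global minimum as long as the step size is small enough.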