Regression Problems
- linear regression
- $y \in (-\infty, +\infty)$
- logistic regression
- squashes the y value into [0, 1], turning the task into a probability problem (see the sigmoid sketch after this list)
- Code:
- $loss = (WX + b - y)^2$
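A minimal sketch of the squashing step mentioned above, using the standard sigmoid function (the parameter values here are hypothetical, chosen only for illustration):

```python
import math

def sigmoid(z):
    # Squash any real-valued score z into (0, 1) so it can be
    # read as a probability.
    return 1.0 / (1.0 + math.exp(-z))

# A linear score w*x + b squashed into a probability:
w, b, x = 0.8, -0.5, 2.0   # hypothetical parameters
print(sigmoid(w * x + b))  # ~0.75
```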
The loss above is implemented as follows:

```python
def compute_error_for_line_given_points(b, w, points):
    # Mean squared error of the line y = w*x + b over all points.
    totalError = 0
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        totalError += (y - (w * x + b)) ** 2
    return totalError / float(len(points))
```

- $w' = w - lr \cdot \frac{\partial loss}{\partial w}$
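Expanding the derivative of the averaged loss over the $N$ data points gives the gradients that `step_gradient` accumulates below:

$\frac{\partial loss}{\partial w} = -\frac{2}{N} \sum_{i=1}^{N} x_i \left( y_i - (w x_i + b) \right)$

$\frac{\partial loss}{\partial b} = -\frac{2}{N} \sum_{i=1}^{N} \left( y_i - (w x_i + b) \right)$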
```python
def step_gradient(b_current, w_current, points, learningRate):
    # One gradient-descent step: accumulate the gradients of the
    # mean squared error with respect to b and w, then move against them.
    b_gradient = 0
    w_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        b_gradient += -(2 / N) * (y - ((w_current * x) + b_current))
        w_gradient += -(2 / N) * x * (y - ((w_current * x) + b_current))
    new_b = b_current - (learningRate * b_gradient)
    new_m = w_current - (learningRate * w_gradient)
    return [new_b, new_m]
```

- Iterate to optimize:
```python
import numpy as np

def gradient_descent_runner(points, starting_b, starting_m, learning_rate, num_iterations):
    # Run num_iterations steps of gradient descent from the given start.
    b = starting_b
    m = starting_m
    for i in range(num_iterations):
        b, m = step_gradient(b, m, np.array(points), learning_rate)
    return [b, m]
```
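A quick end-to-end run on synthetic data (the data-generating line, noise level, learning rate, and iteration count below are illustrative assumptions, not values from the original post):

```python
# Hypothetical synthetic data: y = 2x + 1 plus Gaussian noise.
rng = np.random.default_rng(0)
xs = rng.uniform(0, 10, size=100)
ys = 2.0 * xs + 1.0 + rng.normal(0, 0.5, size=100)
points = np.column_stack([xs, ys])

b, m = gradient_descent_runner(points, starting_b=0.0, starting_m=0.0,
                               learning_rate=0.001, num_iterations=10000)
print(b, m)  # should approach b = 1, m = 2
print(compute_error_for_line_given_points(b, m, points))
```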
This post walks through the fundamentals of linear regression, including the definition of the loss function and how it is minimized. By optimizing the parameters with gradient descent, it moves from theory to practice, and the code examples show how to compute the error and iteratively update the weights.