An Intuitive Look at Gradient Descent for Regression Problems in Neural Networks

By changing the number of iterations, you can directly visualize how gradient descent searches for the local minimum in a regression problem. The figure below shows the path taken toward the local minimum after 1, 10, and 100,000 iterations.
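For reference, the model being fitted is $y = b + w\,x$, and the surface plotted below is the mean squared error over the $N = 10$ training points:

$$L(w, b) = \frac{1}{N}\sum_{n=1}^{N}\bigl(y_n - (b + w\,x_n)\bigr)^2$$

The gradient-descent code accumulates the partial derivatives of the summed squared error (the constant $1/N$ factor only rescales the step size):

$$\frac{\partial L}{\partial b} \propto -2\sum_{n}(y_n - b - w\,x_n), \qquad \frac{\partial L}{\partial w} \propto -2\sum_{n}(y_n - b - w\,x_n)\,x_n$$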

    import matplotlib.pyplot as plt
    import numpy as np
    # (In a Jupyter notebook, also run %matplotlib inline to show the plot inline.)

    x_data = [338., 333., 328., 207., 226., 25., 179., 60., 208., 606.]
    y_data = [640., 633., 619., 393., 428., 27., 193., 66., 226., 1591.]
   
    x = np.arange(-200, -100, 1)    # candidate bias values b
    y = np.arange(-5, 15, 0.2)      # candidate weight values w
    Z = np.zeros((len(y), len(x)))  # loss surface, indexed Z[j][i] = loss(w=y[j], b=x[i])
    for i in range(len(x)):
        for j in range(len(y)):
            b = x[i]
            w = y[j]
            Z[j][i] = 0
            for n in range(len(x_data)):
                Z[j][i] = Z[j][i] + (y_data[n] - b - w * x_data[n]) ** 2  # squared error at (w, b)
            Z[j][i] = Z[j][i] / len(x_data)  # mean squared error at this grid point
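As an aside, the triple loop above can be vectorized with NumPy broadcasting. A minimal sketch that should produce the same `Z` (the intermediate names `B_grid`, `W_grid`, and `residual` are mine, not from the original):

    # Hypothetical vectorized equivalent of the loss-surface loop above
    B_grid = x[np.newaxis, :, np.newaxis]     # biases,  shape (1, len(x), 1)
    W_grid = y[:, np.newaxis, np.newaxis]     # weights, shape (len(y), 1, 1)
    residual = np.array(y_data) - B_grid - W_grid * np.array(x_data)
    Z = (residual ** 2).mean(axis=-1)         # mean squared error, shape (len(y), len(x))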
   


    # Model: ydata = b + w * xdata
    b = -120           # initial b
    w = -4             # initial w
    lr = 1             # learning rate (large; Adagrad scales it down)
    iteration = 100000

    # Adagrad accumulators: running sums of squared gradients
    b_lr = 0.0
    w_lr = 0.0

    # Store initial values for plotting.
    b_history = [b]
    w_history = [w]
   
    # Gradient-descent iterations
    for i in range(iteration):

        b_grad = 0.0
        w_grad = 0.0
        # Accumulate the partial derivatives of the summed squared error
        for n in range(len(x_data)):
            b_grad = b_grad - 2.0 * (y_data[n] - b - w * x_data[n]) * 1.0
            w_grad = w_grad - 2.0 * (y_data[n] - b - w * x_data[n]) * x_data[n]

        # Adagrad: accumulate the sum of squares of all past gradients
        b_lr = b_lr + b_grad ** 2
        w_lr = w_lr + w_grad ** 2

        # Update parameters with per-parameter adaptive step sizes
        b = b - lr / np.sqrt(b_lr) * b_grad
        w = w - lr / np.sqrt(w_lr) * w_grad

        # Store parameters for plotting
        b_history.append(b)
        w_history.append(w)
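The update above is Adagrad: each parameter's step is divided by the square root of its accumulated squared gradients, so the large initial learning rate ($\eta = 1$) shrinks automatically as training progresses. For comparison, a plain fixed-step update would replace the Adagrad lines inside the loop with the sketch below; since the inputs are in the hundreds, the gradients are huge, and the fixed step has to be tiny (the value `1e-7` is an illustrative guess, not from the original) or the iterates diverge:

    # Hypothetical fixed-step variant of the parameter update (no Adagrad)
    lr_fixed = 1e-7            # assumed value; larger steps overshoot on this data
    b = b - lr_fixed * b_grad
    w = w - lr_fixed * w_grad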

    # Plot the loss surface and the descent path (ms = marker size, lw = line width)
    plt.contourf(x, y, Z, 50, alpha=0.5, cmap=plt.get_cmap('jet'))
    plt.plot([-188.4], [2.67], 'x', ms=19, markeredgewidth=3, color='red')  # optimum
    plt.plot(b_history, w_history, 'o-', ms=3, lw=1.5, color='black')       # descent path
    plt.xlim(-200, -100)
    plt.ylim(-5, 15)
    plt.xlabel(r'$b$', fontsize=16)
    plt.ylabel(r'$w$', fontsize=16)
    plt.show()
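The red `x` at $(b, w) = (-188.4, 2.67)$ marks the least-squares optimum. It can be checked against the closed-form fit, for example with `np.polyfit` (a standard NumPy routine; the printed values are approximate):

    # Verify the optimum with an ordinary least-squares fit of y = w*x + b
    w_opt, b_opt = np.polyfit(x_data, y_data, 1)
    print(b_opt, w_opt)   # roughly -188.4 and 2.67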
