TensorFlow Hands-On Coding, Part 1: Linear Regression

This post walks through a practical example of linear regression with TensorFlow: defining the inputs, initializing the parameters, building the linear model and loss function, and optimizing the parameters with gradient descent. The model fits the training set well, and its generalization is then checked on a test set.

Linear Regression


Reference: link
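
Before diving into the code, here is what it implements, in standard notation. The model is a line, and the cost is the mean squared error with a conventional 1/2 factor:

$$\hat{y} = Wx + b, \qquad \text{cost}(W, b) = \frac{1}{2n}\sum_{i=1}^{n}\left(Wx_i + b - y_i\right)^2$$

Gradient descent then iterates $W \leftarrow W - \eta\,\partial\text{cost}/\partial W$ and $b \leftarrow b - \eta\,\partial\text{cost}/\partial b$, where the learning rate $\eta$ is learn_rate in the code.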

# -*- coding: utf-8 -*-

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
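
# Note: this script uses the TensorFlow 1.x API (tf.placeholder, tf.Session).
# Under TensorFlow 2.x, one common way to run such code is via the compat layer:
#   import tensorflow.compat.v1 as tf
#   tf.disable_v2_behavior()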

# Hyperparameters: learning rate, number of training epochs, logging interval
learn_rate = 0.01
train_epochs = 1000
step = 50

# Training data
train_x = np.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,
                7.042,10.791,5.313,7.997,5.654,9.27,3.1])
train_y = np.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,
                2.827,3.465,1.65,2.904,2.42,2.94,1.3])
# 17 samples
n_samples = train_x.shape[0]
print(n_samples)  # prints 17, matching the first line of the output below


# Take a look at how the data points are distributed
plt.scatter(train_x, train_y)
plt.show()

# Define the graph inputs; values are fed in at run time through feed_dict
X = tf.placeholder(tf.float32, name="X")
Y = tf.placeholder(tf.float32, name="Y")

# Initialize the parameters. np.random.rand() returns random samples drawn
# from a uniform distribution over [0, 1).
W = tf.Variable(np.random.rand(), name="weight")
b = tf.Variable(np.random.rand(), name="bias")

# Linear model: tf.multiply is element-wise multiplication, so pred = X * W + b
pred = tf.add(tf.multiply(X, W), b)

# Loss function: mean squared error (with a conventional factor of 1/2)
cost = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples)
# Gradient descent minimizes cost; learn_rate sets the step size
optimizer = tf.train.GradientDescentOptimizer(learn_rate).minimize(cost)
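
# For intuition: minimize(cost) computes the gradients and applies the update
#   W <- W - learn_rate * d(cost)/dW,   b <- b - learn_rate * d(cost)/db
# A hand-rolled equivalent, sketched with TF 1.x primitives, would be:
#   grad_W, grad_b = tf.gradients(cost, [W, b])
#   optimizer = tf.group(W.assign_sub(learn_rate * grad_W),
#                        b.assign_sub(learn_rate * grad_b))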

# Op that initializes all graph variables
init = tf.global_variables_initializer()

# Start training
with tf.Session() as sess:
    sess.run(init)

    # Loop over the training data each epoch
    for epoch in range(train_epochs):
        for (x, y) in zip(train_x, train_y):
            # One gradient-descent step per sample (stochastic updates)
            sess.run(optimizer, feed_dict={X: x, Y: y})
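
        # Note: training feeds one (x, y) pair per run here, while the cost
        # below is evaluated on the full arrays at once; the placeholders
        # accept both because no shape was fixed when they were defined.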

        # Periodically report the cost and the current W and b
        if (epoch + 1) % step == 0:
            c = sess.run(cost, feed_dict={X: train_x, Y: train_y})
            print("epoch:",'%04d' % (epoch+1),"cost=", "{:.9f}".format(c),
                "W=", sess.run(W), "b=", sess.run(b))


    print("optimizaton finished")
    training_cost = sess.run(cost, feed_dict={X: train_x, Y: train_y})
    print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b), '\n')

    # With W and b trained, we now have the fitted line

    # Plot: 'r' = red, 'o' = filled circle markers; label sets the legend text
    plt.plot(train_x, train_y, 'ro', label='Original data')
    plt.plot(train_x, sess.run(W) * train_x + sess.run(b), label='Fitted line')
    plt.legend()
    plt.show()

    # Test data
    test_x = np.asarray([6.83, 4.668, 8.9, 7.91, 5.7, 8.7, 3.1, 2.1])
    test_y = np.asarray([1.84, 2.273, 3.2, 2.831, 2.92, 3.24, 1.35, 1.03])

    print("均分误差")
    testing_cost = sess.run(
        tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * test_x.shape[0]),
        feed_dict={X: test_x, Y: test_y})  # same loss formula, evaluated on the test set
    print("测试集的损失=", testing_cost)
    print("均方误差的绝对值:", abs(
        training_cost - testing_cost))

    plt.plot(test_x, test_y, 'bo', label='Testing data')
    plt.plot(train_x, sess.run(W) * train_x + sess.run(b), label='Fitted line')
    plt.legend()
    plt.show()
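
As a quick sanity check, the gradient-descent result can be compared against the closed-form least-squares fit, which NumPy computes directly. A minimal standalone sketch:

import numpy as np

train_x = np.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167,
                      7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
train_y = np.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221,
                      2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])

# np.polyfit(x, y, 1) returns [slope, intercept] of the least-squares line
W_opt, b_opt = np.polyfit(train_x, train_y, 1)
print("closed-form W =", W_opt, "b =", b_opt)

In the log below, W is still drifting down and b still climbing at epoch 1000, so the per-sample gradient descent has not yet fully converged to the closed-form optimum.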

Output:
17
epoch: 0050 cost= 0.115379333 W= 0.3592451 b= 0.012641402
epoch: 0100 cost= 0.110939667 W= 0.35272354 b= 0.05955659
epoch: 0150 cost= 0.107012950 W= 0.34658992 b= 0.1036815
epoch: 0200 cost= 0.103539936 W= 0.34082115 b= 0.14518192
epoch: 0250 cost= 0.100468241 W= 0.3353954 b= 0.18421417
epoch: 0300 cost= 0.097751491 W= 0.3302924 b= 0.22092499
epoch: 0350 cost= 0.095348708 W= 0.32549286 b= 0.25545272
epoch: 0400 cost= 0.093223654 W= 0.32097882 b= 0.28792647
epoch: 0450 cost= 0.091344208 W= 0.31673303 b= 0.31846946
epoch: 0500 cost= 0.089682057 W= 0.3127401 b= 0.34719515
epoch: 0550 cost= 0.088212043 W= 0.30898446 b= 0.37421298
epoch: 0600 cost= 0.086911999 W= 0.3054522 b= 0.39962384
epoch: 0650 cost= 0.085762337 W= 0.30213004 b= 0.42352298
epoch: 0700 cost= 0.084745593 W= 0.29900542 b= 0.44600105
epoch: 0750 cost= 0.083846480 W= 0.2960666 b= 0.46714252
epoch: 0800 cost= 0.083051346 W= 0.29330263 b= 0.48702663
epoch: 0850 cost= 0.082348242 W= 0.29070315 b= 0.5057272
epoch: 0900 cost= 0.081726506 W= 0.2882582 b= 0.52331626
epoch: 0950 cost= 0.081176721 W= 0.28595862 b= 0.5398589
epoch: 1000 cost= 0.080690555 W= 0.28379577 b= 0.55541867
Optimization finished
Training cost= 0.080690555 W= 0.28379577 b= 0.55541867

Testing… (Mean square loss Comparison)
Testing cost= 0.07644593
Absolute mean square loss difference: 0.0042446256

Process finished with exit code 0
