Implementing Linear Regression with TensorFlow

# -*- coding: UTF-8 -*-

'''Implementing linear regression with TensorFlow
reference: https://www.cnblogs.com/selenaf/p/9102398.html
Idea: pick a line y = Wx + b, randomly generate data points scattered around it, and let TensorFlow build a regression model that learns which W and b best fit those points.
'''

'''
1) Randomly generate 1000 data points around the line y = 0.1x + 0.3, i.e. with W = 0.1 and b = 0.3, then check whether the trained model recovers these values of W and b.
'''
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
num_points = 1000
vectors_set = []
for i in range(num_points):
    x1 = np.random.normal(0.0, 0.55)                    # x-coordinate: Gaussian with mean 0 and standard deviation 0.55
    y1 = x1 * 0.1 + 0.3 + np.random.normal(0.0, 0.03)   # y-coordinate: fluctuates slightly around y1 = x1*0.1 + 0.3
    vectors_set.append([x1, y1])
x_data = [v[0] for v in vectors_set]    # build the coordinate lists once the loop has finished
y_data = [v[1] for v in vectors_set]
plt.scatter(x_data,y_data,c='r')
plt.show()
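
'''
Sanity baseline (a minimal added check, not part of the referenced walkthrough): NumPy's np.polyfit computes the closed-form least-squares fit directly, so we know in advance roughly what values gradient descent should converge to.
'''
slope, intercept = np.polyfit(x_data, y_data, 1)  # degree-1 fit; returns [slope, intercept]
print("least-squares baseline: W ~", slope, "b ~", intercept)  # expect roughly 0.1 and 0.3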

'''
2) Build the linear regression model and learn which W and b fit the data generated above.
'''
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0), name='W')  # 1-D weight variable, initialized uniformly in [-1, 1]
b = tf.Variable(tf.zeros([1]), name='b')  # 1-D bias variable, initialized to 0
y = W * x_data + b  # predicted values
loss = tf.reduce_mean(tf.square(y - y_data), name='loss')  # loss: mean squared error between predictions y and targets y_data
optimizer = tf.train.GradientDescentOptimizer(0.5)  # optimize the parameters by gradient descent, learning rate 0.5
train = optimizer.minimize(loss, name='train')  # training minimizes the loss
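
'''
For reference, each run of the train op applies one gradient-descent update with learning rate 0.5; written out for this MSE loss:
    W <- W - 0.5 * dloss/dW,  with dloss/dW = 2 * mean((W*x_data + b - y_data) * x_data)
    b <- b - 0.5 * dloss/db,  with dloss/db = 2 * mean(W*x_data + b - y_data)
'''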
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
print ("W =", sess.run(W), "b =", sess.run(b), "loss =", sess.run(loss))  # 初始化的W和b是多少
for step in range(20):   # 执行20次训练
  sess.run(train)
  print ("W =", sess.run(W), "b =", sess.run(b), "loss =", sess.run(loss)) # 输出训练好的W和b

'''
The per-step output is shown below. As training proceeds, W and b move ever closer to 0.1 and 0.3, showing that the regression model really does learn the rule used to generate the data.
The loss starts out large and shrinks steadily, meaning the model fits the data better with each iteration.

W = [-0.68837476] b = [0.] loss = 0.29411852
W = [-0.43151537] b = [0.3049943] loss = 0.09167927
W = [-0.26009744] b = [0.30306205] loss = 0.042414192
W = [-0.14415416] b = [0.30177253] loss = 0.019876134
W = [-0.06573282] b = [0.30090034] loss = 0.0095652975
W = [-0.01269045] b = [0.30031043] loss = 0.004848239
W = [0.02318618] b = [0.2999114] loss = 0.0026902542
W = [0.04745229] b = [0.29964152] loss = 0.0017030073
W = [0.06386533] b = [0.29945898] loss = 0.0012513561
W = [0.07496673] b = [0.2993355] loss = 0.0010447322
W = [0.08247545] b = [0.299252] loss = 0.00095020473
W = [0.08755418] b = [0.29919553] loss = 0.0009069599
W = [0.09098931] b = [0.29915732] loss = 0.000887176
W = [0.09331276] b = [0.29913148] loss = 0.0008781252
W = [0.09488428] b = [0.299114] loss = 0.00087398454
W = [0.09594722] b = [0.2991022] loss = 0.0008720903
W = [0.09666617] b = [0.29909417] loss = 0.0008712237
W = [0.09715245] b = [0.29908878] loss = 0.0008708272
W = [0.09748136] b = [0.2990851] loss = 0.00087064586
W = [0.09770382] b = [0.29908264] loss = 0.0008705628
W = [0.09785429] b = [0.29908097] loss = 0.00087052496
'''
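
'''
Note: the script above uses the TensorFlow 1.x API (tf.Session, tf.train.GradientDescentOptimizer), which was removed in TensorFlow 2.x. A minimal TF 2.x sketch of the same model, assuming the same x_data and y_data and using tf.GradientTape for the gradient step:
'''
W2 = tf.Variable(tf.random.uniform([1], -1.0, 1.0))  # weight, initialized uniformly in [-1, 1]
b2 = tf.Variable(tf.zeros([1]))                      # bias, initialized to 0
sgd = tf.optimizers.SGD(learning_rate=0.5)           # same learning rate as above
x = tf.constant(x_data, dtype=tf.float32)
y_true = tf.constant(y_data, dtype=tf.float32)
for step in range(20):
    with tf.GradientTape() as tape:                  # record operations to differentiate the loss
        loss2 = tf.reduce_mean(tf.square(W2 * x + b2 - y_true))
    grads = tape.gradient(loss2, [W2, b2])           # dloss/dW, dloss/db
    sgd.apply_gradients(zip(grads, [W2, b2]))        # one gradient-descent step
print("W =", W2.numpy(), "b =", b2.numpy(), "loss =", loss2.numpy())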
