Predicting Yogurt Prices with TensorFlow's Mean Squared Error Loss

This post walks through using TensorFlow's mean squared error (MSE) loss function to predict yogurt prices, covering data preparation, model training, and parameter updates.


The mean squared error loss function in TensorFlow

API: tf.keras.losses.MSE

MSE(y, y') = \tfrac{1}{n}\sum_{i=1}^{n}(y_{i} - y_{i}')^{2}

Mean squared error (MSE) is the most common loss function for regression problems. Regression predicts a concrete numeric value, rather than one of a set of predefined classes.

In code: loss_mse = tf.reduce_mean(tf.square(y_ - y))  # y_ is the ground-truth value, y is the predicted value
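As a quick sanity check, the hand-written expression and the built-in `tf.keras.losses.MSE` give the same result. This is a minimal sketch; the tensor values below are made up purely for illustration:

```python
import tensorflow as tf

y_true = tf.constant([[1.0], [2.0], [3.0]])
y_pred = tf.constant([[1.1], [1.9], [3.2]])

# Hand-written MSE: mean of squared differences over all elements
loss_manual = tf.reduce_mean(tf.square(y_true - y_pred))

# Built-in version; MSE reduces over the last axis, so average the per-sample losses
loss_keras = tf.reduce_mean(tf.keras.losses.MSE(y_true, y_pred))

print(float(loss_manual))  # ≈ 0.02
print(float(loss_keras))   # same value
```

Both reduce to (0.1² + 0.1² + 0.2²) / 3 = 0.02, matching the formula above.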

Predicting yogurt prices with the MSE loss

import tensorflow as tf
import numpy as np

seed = 23456 # fixed random seed

rdm = np.random.RandomState(seed) # RNG for floats in [0, 1); fixing the seed makes every run produce the same values

x = rdm.rand(32, 2) # generate the 32x2 input feature matrix x

np.set_printoptions(threshold=6) # abbreviated printing

print("32x2 feature matrix x:", x)  # print the input features (abbreviated)

y_ = [[x1 + x2 + (rdm.rand() / 10.0 - 0.05)] for (x1, x2) in x] # label: x1 + x2 plus uniform noise in [-0.05, +0.05), giving the ground truth y_

x = tf.cast(x, dtype = tf.float32) # cast x to float32

print("x after dtype cast:", x) # print x after the cast

w1 = tf.Variable(tf.random.normal([2, 1], stddev = 1, seed = 1)) # initialize w1 as a trainable 2x1 weight matrix

print("w1:", w1)

epochs = 15000 # number of training iterations

lr = 0.002 # learning rate

for epoch in range(epochs):
    with tf.GradientTape() as tape: # the with context manager records operations for automatic differentiation
        y = tf.matmul(x, w1) # forward pass: compute the prediction y
        loss_mse = tf.reduce_mean(tf.square(y_ - y)) # mean squared error loss loss_mse
# print("y:", y)
# print("loss_mse:", loss_mse)
    grads = tape.gradient(loss_mse, w1) # gradient of the loss with respect to the trainable parameter w1
    w1.assign_sub(lr * grads) # gradient-descent update of w1

    if epoch % 500 == 0: # every 500 iterations
        print("After %d training steps, w1 is " % epoch) # report the current w1
        print(w1.numpy(), "\n") # print the current w1

print("Final w1 is :", w1.numpy()) # print the final w1
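For reference, the gradient that `tape.gradient` returns in this loop has a simple closed form. Writing X for the 32x2 feature matrix and y' for the label vector, the loss, its gradient, and the `assign_sub` update are:

```latex
L(w_1) = \tfrac{1}{n}\lVert X w_1 - y' \rVert^{2},
\qquad
\frac{\partial L}{\partial w_1} = \tfrac{2}{n}\, X^{\top}(X w_1 - y'),
\qquad
w_1 \leftarrow w_1 - lr \cdot \frac{\partial L}{\partial w_1}
```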



The output:

E:\Anaconda3\envs\TF2\python.exe C:/Users/Administrator/PycharmProjects/untitled8/酸奶价格预测.py
32x2 feature matrix x: [[0.32180029 0.32730047]
 [0.92742231 0.31169778]
 [0.16195411 0.36407808]
 ...
 [0.83554249 0.59131388]
 [0.2829476  0.05663651]
 [0.2916721  0.33175172]]
x after dtype cast: tf.Tensor(
[[0.3218003  0.32730046]
 [0.9274223  0.31169778]
 [0.1619541  0.36407807]
 ...
 [0.8355425  0.5913139 ]
 [0.2829476  0.05663651]
 [0.29167208 0.33175173]], shape=(32, 2), dtype=float32)
w1: <tf.Variable 'Variable:0' shape=(2, 1) dtype=float32, numpy=
array([[-0.8113182],
       [ 1.4845988]], dtype=float32)>
After 0 training steps, w1 is 
[[-0.8093924]
 [ 1.4857364]] 

After 500 training steps, w1 is 
[[-0.17545831]
 [ 1.7493846 ]] 

After 1000 training steps, w1 is 
[[0.12224181]
 [1.7290779 ]] 

After 1500 training steps, w1 is 
[[0.30012047]
 [1.639089  ]] 

After 2000 training steps, w1 is 
[[0.42697424]
 [1.5418574 ]] 

After 2500 training steps, w1 is 
[[0.5261725]
 [1.4540646]] 

After 3000 training steps, w1 is 
[[0.6068525]
 [1.378847 ]] 

After 3500 training steps, w1 is 
[[0.6734775]
 [1.3155416]] 

After 4000 training steps, w1 is 
[[0.72881055]
 [1.2626004 ]] 

After 4500 training steps, w1 is 
[[0.77486306]
 [1.2184267 ]] 

After 5000 training steps, w1 is 
[[0.81322  ]
 [1.1816002]] 

After 5500 training steps, w1 is 
[[0.8451773]
 [1.1509084]] 

After 6000 training steps, w1 is 
[[0.87180513]
 [1.1253314 ]] 

After 6500 training steps, w1 is 
[[0.8939932]
 [1.1040187]] 

After 7000 training steps, w1 is 
[[0.9124815]
 [1.0862588]] 

After 7500 training steps, w1 is 
[[0.9278873]
 [1.07146  ]] 

After 8000 training steps, w1 is 
[[0.94072455]
 [1.0591285 ]] 

After 8500 training steps, w1 is 
[[0.9514216]
 [1.0488532]] 

After 9000 training steps, w1 is 
[[0.96033514]
 [1.040291  ]] 

After 9500 training steps, w1 is 
[[0.9677624]
 [1.0331562]] 

After 10000 training steps, w1 is 
[[0.9739516]
 [1.0272108]] 

After 10500 training steps, w1 is 
[[0.97910905]
 [1.0222566 ]] 

After 11000 training steps, w1 is 
[[0.98340666]
 [1.0181286 ]] 

After 11500 training steps, w1 is 
[[0.98698765]
 [1.0146885 ]] 

After 12000 training steps, w1 is 
[[0.98997146]
 [1.0118226 ]] 

After 12500 training steps, w1 is 
[[0.9924576]
 [1.0094334]] 

After 13000 training steps, w1 is 
[[0.99452955]
 [1.0074434 ]] 

After 13500 training steps, w1 is 
[[0.9962558]
 [1.0057855]] 

After 14000 training steps, w1 is 
[[0.997694 ]
 [1.0044047]] 

After 14500 training steps, w1 is 
[[0.9988928]
 [1.0032523]] 

Final w1 is : [[0.99988985]
 [1.0022951 ]]

Process finished with exit code 0
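The weights converge to roughly [1, 1], as expected, since the labels were built as x1 + x2 plus small noise. The same fit can also be expressed with a Keras optimizer instead of the hand-written `assign_sub` update. The sketch below is not the original code: it uses `tf.keras.optimizers.SGD` with a larger learning rate (0.1 instead of 0.002), so far fewer steps are needed:

```python
import numpy as np
import tensorflow as tf

# Same data construction as the article: x in [0, 1), labels x1 + x2 + noise
rdm = np.random.RandomState(23456)
x = tf.constant(rdm.rand(32, 2), dtype=tf.float32)
y_true = tf.constant(
    [[x1 + x2 + (rdm.rand() / 10.0 - 0.05)] for (x1, x2) in x.numpy()],
    dtype=tf.float32)

w1 = tf.Variable(tf.random.normal([2, 1], stddev=1, seed=1))
opt = tf.keras.optimizers.SGD(learning_rate=0.1)  # larger lr than 0.002, so 500 steps suffice

for step in range(500):
    with tf.GradientTape() as tape:
        y_pred = tf.matmul(x, w1)                                   # forward pass
        loss = tf.reduce_mean(tf.keras.losses.MSE(y_true, y_pred))  # built-in MSE
    opt.apply_gradients([(tape.gradient(loss, w1), w1)])            # optimizer does w1 -= lr * grad

print("w1 after training:", w1.numpy().ravel())  # close to [1, 1]
```

The optimizer's `apply_gradients` performs exactly the `w1.assign_sub(lr * grads)` step from the article; the higher-level form becomes more valuable once you switch to optimizers with state, such as momentum or Adam.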