[TensorFlow] Building a Neural Network for Linear Regression

This post uses TensorFlow to build a minimal neural network for a simple example: linear regression.
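The script below uses the TensorFlow 1.x graph API (`placeholder`, `Session`). As a minimal sketch, assuming you are running TensorFlow 2.x instead (an assumption, not stated in the original), the documented compat shim restores that API:

```python
# Minimal sketch, assuming TensorFlow 2.x is installed: restore the
# 1.x graph-mode API (placeholders, sessions) via the compat shim.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
# ...the rest of the script then runs unchanged.
```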

```python
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Target linear relation: y = x0 + x1 + 1, where x0 and x1 are the
# two input features.
input_num = 2
output_num = 1
batch_size = 5
epochs = 200

# 1000 samples with 2 features each; labels follow y = x0 + x1 + 1.
x_data = np.random.normal(size=(1000, 2))
w = np.array([[1], [1]])
y_data = np.dot(x_data, w).reshape((-1, 1)) + 1

X = tf.placeholder(tf.float32, [None, input_num])
y = tf.placeholder(tf.float32, [None, output_num])

# The "network" is a single linear unit: y_ = X @ weights + bias.
weights = tf.Variable(tf.truncated_normal([input_num, output_num]), dtype=tf.float32)
bias = tf.Variable(tf.constant(0.1, shape=[output_num]), dtype=tf.float32)

with tf.name_scope('linear_model'):
    y_ = tf.add(tf.matmul(X, weights), bias)
with tf.name_scope('mse'):
    mse = tf.reduce_mean(tf.square(y_ - y))
with tf.name_scope('optimizer'):
    opt = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(mse)
init = tf.global_variables_initializer()

with tf.Session() as sess:
    # Run the initializer
    sess.run(init)

    # Fit the training data: each "epoch" here consumes one batch of 5
    # samples, so 200 epochs walk through the 1000 samples exactly once.
    mse_list = []
    for epoch in range(epochs):
        x_train = x_data[epoch * batch_size: (epoch + 1) * batch_size, :]
        y_train = y_data[epoch * batch_size: (epoch + 1) * batch_size, :]
        mse_, w_, b_, o_ = sess.run([mse, weights, bias, opt],
                                    feed_dict={X: x_train, y: y_train})

        if epoch % 5 == 0:
            mse_list.append(mse_)
            print('epoch: %d' % epoch, 'mse=', mse_, 'weights=', w_, 'bias=', b_)
    plt.plot(mse_list)
    plt.show()
```
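For comparison, here is a minimal sketch of the same regression written against the Keras API (assuming TensorFlow 2.x; none of this code appears in the original post):

```python
import numpy as np
import tensorflow as tf

# Same synthetic data: y = x0 + x1 + 1.
x_data = np.random.normal(size=(1000, 2)).astype(np.float32)
y_data = x_data.sum(axis=1, keepdims=True) + 1

# A single Dense(1) layer is exactly the linear model above.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(2,))])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss='mse')
model.fit(x_data, y_data, batch_size=5, epochs=5, verbose=0)

w_fit, b_fit = model.layers[0].get_weights()
print(w_fit, b_fit)  # weights -> approximately [[1], [1]], bias -> [1]
```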

Loss curve: (figure omitted; the logged MSE values below decay toward zero)

Per-epoch training output (printed every 5 epochs):

```
epoch: 0 mse= 0.95470846 weights= [[0.37130916]
 [0.5389818 ]] bias= [0.11342223]
epoch: 5 mse= 1.6448042 weights= [[0.42115292]
 [0.5673728 ]] bias= [0.1854951]
epoch: 10 mse= 0.9690776 weights= [[0.47701502]
 [0.63785   ]] bias= [0.27959865]
epoch: 15 mse= 0.5702162 weights= [[0.55095905]
 [0.6883684 ]] bias= [0.36654383]
epoch: 20 mse= 0.38775384 weights= [[0.56151056]
 [0.71580756]] bias= [0.41816548]
epoch: 25 mse= 0.71004486 weights= [[0.60053504]
 [0.71535575]] bias= [0.48022684]
epoch: 30 mse= 0.21135831 weights= [[0.64272714]
 [0.7356484 ]] bias= [0.52237076]
epoch: 35 mse= 0.6288473 weights= [[0.70176554]
 [0.7579718 ]] bias= [0.5681559]
epoch: 40 mse= 0.14831199 weights= [[0.7259164]
 [0.7772914]] bias= [0.6122901]
epoch: 45 mse= 0.17581545 weights= [[0.75020236]
 [0.7950606 ]] bias= [0.6455283]
epoch: 50 mse= 0.31268176 weights= [[0.777244 ]
 [0.8116195]] bias= [0.67804706]
epoch: 55 mse= 0.26954082 weights= [[0.799988 ]
 [0.8274919]] bias= [0.71080756]
epoch: 60 mse= 0.110923626 weights= [[0.81554675]
 [0.84473526]] bias= [0.73851544]
epoch: 65 mse= 0.3364119 weights= [[0.8351821 ]
 [0.86834836]] bias= [0.76178485]
epoch: 70 mse= 0.11775869 weights= [[0.8496617 ]
 [0.88230723]] bias= [0.78296524]
epoch: 75 mse= 0.15410557 weights= [[0.8716818]
 [0.8911777]] bias= [0.80560607]
epoch: 80 mse= 0.06427675 weights= [[0.8916166 ]
 [0.89697725]] bias= [0.82531697]
epoch: 85 mse= 0.10565728 weights= [[0.90494376]
 [0.9080882 ]] bias= [0.84353626]
epoch: 90 mse= 0.010155492 weights= [[0.9199746 ]
 [0.91889423]] bias= [0.8615363]
epoch: 95 mse= 0.01568078 weights= [[0.9244    ]
 [0.92285496]] bias= [0.8725688]
epoch: 100 mse= 0.037834167 weights= [[0.9321953 ]
 [0.93196994]] bias= [0.88768554]
epoch: 105 mse= 0.006278408 weights= [[0.9405269]
 [0.9397645]] bias= [0.89913684]
epoch: 110 mse= 0.016463825 weights= [[0.9462303]
 [0.9466858]] bias= [0.9079533]
epoch: 115 mse= 0.016840978 weights= [[0.9486531]
 [0.9503261]] bias= [0.91491836]
epoch: 120 mse= 0.02027573 weights= [[0.95348746]
 [0.95571154]] bias= [0.92315996]
epoch: 125 mse= 0.005527997 weights= [[0.9594867]
 [0.9587776]] bias= [0.93095905]
epoch: 130 mse= 0.008891637 weights= [[0.96296716]
 [0.9639074 ]] bias= [0.9394498]
epoch: 135 mse= 0.0022668275 weights= [[0.96632165]
 [0.96931326]] bias= [0.94564474]
epoch: 140 mse= 0.0031881095 weights= [[0.9703764 ]
 [0.97241443]] bias= [0.95126355]
epoch: 145 mse= 0.001595168 weights= [[0.9707678]
 [0.9744987]] bias= [0.95516884]
epoch: 150 mse= 0.0019761706 weights= [[0.97468543]
 [0.9750065 ]] bias= [0.9587112]
epoch: 155 mse= 0.0023416674 weights= [[0.97562784]
 [0.9777103 ]] bias= [0.9612137]
epoch: 160 mse= 0.0012770665 weights= [[0.97862244]
 [0.98051095]] bias= [0.96506375]
epoch: 165 mse= 0.0005977473 weights= [[0.9810191]
 [0.9808582]] bias= [0.9682847]
epoch: 170 mse= 0.004160084 weights= [[0.98460144]
 [0.9831914 ]] bias= [0.9717908]
epoch: 175 mse= 0.0006283476 weights= [[0.9864475 ]
 [0.98375654]] bias= [0.9745427]
epoch: 180 mse= 0.0011961826 weights= [[0.9877456 ]
 [0.98501045]] bias= [0.9769379]
epoch: 185 mse= 0.0008161774 weights= [[0.9886735]
 [0.9869667]] bias= [0.9794328]
epoch: 190 mse= 0.00035172925 weights= [[0.9895241]
 [0.9880932]] bias= [0.98158604]
epoch: 195 mse= 0.00052211457 weights= [[0.9904907]
 [0.9897504]] bias= [0.98362017]
```

Both weights and the bias converge to 1, which matches the simulated relation y = 1 * x0 + 1 * x1 + 1.
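As a sanity check (not part of the original post), the same parameters can be recovered in closed form with ordinary least squares on the generated data:

```python
import numpy as np

x_data = np.random.normal(size=(1000, 2))
y_data = x_data @ np.array([[1.0], [1.0]]) + 1

# Append a column of ones so the intercept is fitted along with the weights.
X_aug = np.hstack([x_data, np.ones((len(x_data), 1))])
theta, *_ = np.linalg.lstsq(X_aug, y_data, rcond=None)
print(theta.ravel())  # -> approximately [1. 1. 1.]
```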

### How to implement neural-network linear regression with TensorFlow.NET

TensorFlow.NET is a .NET binding library developed by the SciSharp community for building, training, and deploying machine-learning models from C# and F#[^1]. Through this library, functionality similar to Python TensorFlow is available. Below is a simple linear-regression example based on TensorFlow.NET.

#### Example: building a linear-regression model with TensorFlow.NET

The complete code example (the original snippet did not compile as written; the session handling and the variable reads below are adjusted to the TensorFlow.NET API):

```csharp
using System;
using Tensorflow;
using static Tensorflow.Binding;

class Program
{
    static void Main(string[] args)
    {
        // Define the input X and the target Y as placeholders.
        var X = tf.placeholder(tf.float32);
        var Y = tf.placeholder(tf.float32);

        // Define the trainable weight W and bias b.
        var W = tf.Variable(0.0f, name: "weight");
        var b = tf.Variable(0.0f, name: "bias");

        // Linear model: y_pred = W * X + b.
        var y_pred = tf.add(tf.multiply(W, X), b);

        // Loss function: mean squared error (MSE).
        var loss = tf.reduce_mean(tf.square(y_pred - Y));

        // Adam optimizer with a fixed learning rate.
        float learning_rate = 0.01f;
        var optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss);

        // Training data: y = 2x.
        float[] train_X = { 1, 2, 3, 4 };
        float[] train_Y = { 2, 4, 6, 8 };

        using (var sess = tf.Session())
        {
            // Initialize all variables.
            sess.run(tf.global_variables_initializer());

            // Run the training loop.
            int num_epochs = 1000;
            for (int epoch = 0; epoch < num_epochs; epoch++)
            {
                // One gradient update over the (tiny) training set.
                sess.run(optimizer, new FeedItem(X, train_X), new FeedItem(Y, train_Y));

                // Print the current state every 100 steps.
                if (epoch % 100 == 0)
                {
                    var current_loss = sess.run(loss, new FeedItem(X, train_X), new FeedItem(Y, train_Y));
                    Console.WriteLine($"Epoch {epoch}: Loss={current_loss}, Weight={sess.run(W)}, Bias={sess.run(b)}");
                }
            }

            // Report the final parameters.
            Console.WriteLine("Final Model Parameters:");
            Console.WriteLine($"Weight: {sess.run(W)} | Bias: {sess.run(b)}");

            // Test the model's prediction on a new input.
            float test_input = 5;
            var prediction = sess.run(y_pred, new FeedItem(X, test_input));
            Console.WriteLine($"Prediction for input {test_input} is {prediction}");
        }
    }
}
```

#### Key parts of the code

1. **TensorFlow environment.** TensorFlow.NET builds a computation graph of operations and executes them through a `Session`[^1].
2. **Placeholders and variables.** Placeholders (`placeholder`) represent the incoming data, while variables (`Variable`) are the trainable parameters; here the two trainables are `W` (weight) and `b` (bias).
3. **Linear model.** The model has the form \(y_{\text{pred}} = W \cdot X + b\), where \(X\) is the input feature vector[^3].
4. **Loss function.** Mean squared error is used, i.e. \(L = \frac{1}{N}\sum_i (Y_i - y_{\text{pred},i})^2\)[^3]. A numeric sketch of the gradient step this loss implies follows after this section.
5. **Optimizer and training.** The Adam optimizer minimizes the loss; Adam is a widely used adaptive-learning-rate method that performs well in many settings[^2].
6. **Evaluating the model.** The loss, weight, and bias are logged during training, and the trained model is finally tested on a new sample.

---

#### Notes

- The TensorFlow.NET library must be installed for this code to run; it can be added as a dependency via the NuGet package manager.
- If any errors occur, confirm that the development environment is configured correctly and that package versions are compatible.

---
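For intuition about what the optimizer's `minimize` call is doing under the hood, here is a minimal hand-written gradient-descent sketch in numpy (an illustration, not part of the TensorFlow.NET API; the data mirror the C# example):

```python
import numpy as np

train_X = np.array([1., 2., 3., 4.])
train_Y = np.array([2., 4., 6., 8.])

W, b, lr = 0.0, 0.0, 0.01
for epoch in range(5000):
    y_pred = W * train_X + b
    err = y_pred - train_Y
    # Gradients of L = mean(err**2) with respect to W and b.
    dW = 2 * np.mean(err * train_X)
    db = 2 * np.mean(err)
    # Plain gradient-descent update (Adam additionally rescales each
    # step by running estimates of the gradient moments).
    W -= lr * dW
    b -= lr * db

print(W, b)  # -> approximately 2.0 and 0.0
```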