[TensorFlow] demo1: create 100 random float32 numbers and derive y_data

```python
import numpy as np

# Create 100 random float32 values in [0, 1) as x_data,
# then derive y_data from the linear relation y = 0.1*x + 0.3
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3
print(x_data)
print(y_data)
```
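Since `y_data` is generated from the exact linear relation `y = 0.1*x + 0.3`, those two coefficients can be recovered directly from the data. A minimal sketch (pure NumPy, using `np.polyfit` rather than any TensorFlow API; variable names follow the demo above):

```python
import numpy as np

# Same data generation as the demo above
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

# Fit y = w*x + b by least squares (degree-1 polynomial)
w, b = np.polyfit(x_data, y_data, 1)
print(w, b)  # close to 0.1 and 0.3
```

This is the same slope/intercept a gradient-descent fit in TensorFlow would converge toward; the closed-form fit just makes the target values explicit.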



### Python Deep Learning Training Demo

To show how to train a deep learning model in Python, below is a simple PyTorch example covering the process from data preparation to model training.

#### 1. Import the necessary libraries and set the random seed

```python
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from sklearn.model_selection import train_test_split
import random

seed = 42
torch.manual_seed(seed)
np.random.seed(seed)
random.seed(seed)
```

#### 2. Prepare the dataset

Assume we already have a simple regression dataset with features `X` and labels `y`.

```python
# X is the feature matrix, y is the target vector
X = np.random.rand(1000, 10).astype(np.float32)  # random values as features
y = np.random.rand(1000, 1).astype(np.float32)   # random values as labels

# Split into training and validation sets
x_train, x_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2)

# Convert to PyTorch tensors
x_train, y_train, x_valid, y_valid = map(
    torch.tensor, (x_train, y_train, x_valid, y_valid)
)
```

#### 3. Create the data loaders

```python
train_dataset = TensorDataset(x_train, y_train)
valid_dataset = TensorDataset(x_valid, y_valid)

batch_size = 64
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
valid_loader = DataLoader(valid_dataset, batch_size=batch_size * 2)
```

#### 4. Define the neural network model

```python
class SimpleNN(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

input_dim = 10
hidden_dim = 50
output_dim = 1
model = SimpleNN(input_dim, hidden_dim, output_dim)
```

#### 5. Set the loss function and optimizer

```python
criterion = nn.MSELoss()  # mean squared error loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```

#### 6. Training loop
```python
num_epochs = 10
for epoch in range(num_epochs):
    model.train()
    running_loss = 0.0
    for i, (inputs, labels) in enumerate(train_loader):
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {running_loss/len(train_loader):.4f}')
```

#### 7. Validate model performance

```python
model.eval()
with torch.no_grad():
    valid_loss = 0.0
    for inputs, labels in valid_loader:
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        valid_loss += loss.item()
print(f'Validation Loss: {valid_loss/len(valid_loader):.4f}')
```
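After validation, the trained model can be used for inference on unseen data. A minimal self-contained sketch (it redefines the same `SimpleNN` with the same dimensions as above; here the weights are freshly initialized, whereas in practice you would reuse the trained `model` or load saved weights):

```python
import torch
import torch.nn as nn

class SimpleNN(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(x)))

model = SimpleNN(10, 50, 1)  # in practice: the trained model from above
model.eval()                 # switch to inference mode
with torch.no_grad():        # no gradients needed for prediction
    new_x = torch.rand(5, 10)    # 5 unseen samples, 10 features each
    preds = model(new_x)
print(preds.shape)  # torch.Size([5, 1])
```

`model.eval()` and `torch.no_grad()` matter here: the former disables training-only behavior (e.g. dropout, if present), and the latter skips gradient bookkeeping, which saves memory and time during inference.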