PyTorch gru.flatten_parameters() / lstm.flatten_parameters()

This post looks at a common issue encountered when working with recurrent neural networks (RNNs) in PyTorch: the warning that the RNN module's weights are not part of a single contiguous chunk of memory. It explains why the warning appears and how to resolve it with `flatten_parameters()`, so that the weights are laid out contiguously and training runs more efficiently.
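Before the full example, here is a minimal sketch of where the warning typically comes from and how `flatten_parameters()` silences it. The model name `GRUNet` and the tensor shapes are illustrative assumptions, not code from the original post; only `nn.GRU` and its `flatten_parameters()` method are standard PyTorch API.

```python
import torch
import torch.nn as nn

class GRUNet(nn.Module):
    """Minimal GRU model used only to illustrate the warning and its fix (hypothetical example)."""
    def __init__(self, input_size=8, hidden_size=32):
        super().__init__()
        self.gru = nn.GRU(input_size=input_size, hidden_size=hidden_size, batch_first=True)

    def forward(self, x):
        # After the module is moved between devices, wrapped in DataParallel, or
        # restored from a checkpoint, its weight tensors may no longer sit in one
        # contiguous cuDNN buffer. Re-flattening them here avoids the warning and
        # the per-call compaction overhead it describes.
        self.gru.flatten_parameters()
        out, _ = self.gru(x)
        return out

model = GRUNet()
if torch.cuda.is_available():
    model = model.cuda()  # moving the parameters is what typically fragments the weight buffer
x = torch.randn(4, 10, 8, device=next(model.parameters()).device)  # (batch, seq_len, input_size)
y = model(x)
print(y.shape)  # torch.Size([4, 10, 32])
```

The same call works for `nn.LSTM` and `nn.RNN`; on CPU it is effectively a no-op, since the contiguity requirement only matters for the cuDNN path.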

The following hybrid model combines an LSTM, a Conv1d branch, and a GRU; it is trained on a regression task and then evaluated on a held-out set:

```python
import torch
import torch.nn as nn
import torch.optim as optim
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# time_window, X_train, y_train, X_test, y_test and scaler are assumed to be
# defined by the preceding data-preparation code (not shown here).

class HybridModel(nn.Module):
    def __init__(self):
        super(HybridModel, self).__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=64, batch_first=True)
        self.conv = nn.Conv1d(in_channels=1, out_channels=64, kernel_size=3, padding='same')
        self.pool = nn.MaxPool1d(kernel_size=2)
        self.gru = nn.GRU(input_size=64, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64 + (64 * (time_window // 2)), 1)

    def forward(self, x):
        lstm_out, _ = self.lstm(x)
        conv_out = self.conv(x.transpose(1, 2))
        conv_out = self.pool(conv_out)
        gru_out, _ = self.gru(lstm_out)
        gru_out = gru_out[:, -1, :]
        conv_out = conv_out.flatten(start_dim=1)
        combined_out = torch.cat((gru_out, conv_out), dim=1)
        output = self.fc(combined_out)
        return output

hybrid_model = HybridModel()
criterion = nn.MSELoss()
optimizer = optim.Adam(hybrid_model.parameters(), lr=0.001)

# Train the model
num_epochs = 1500
train_losses = []
for epoch in range(num_epochs):
    outputs = hybrid_model(X_train)
    loss = criterion(outputs, y_train)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    train_losses.append(loss.item())
    if (epoch + 1) % 10 == 0:
        print(f'Epoch [{epoch + 1}/{num_epochs}], Loss: {loss.item():.4f}')

# Evaluate the model
hybrid_model.eval()
with torch.no_grad():
    y_hybrid_pred = hybrid_model(X_test)

# Undo the normalization
y_hybrid_pred = scaler.inverse_transform(y_hybrid_pred.numpy())

# Evaluation metrics
hybrid_mse = mean_squared_error(y_test, y_hybrid_pred)
hybrid_mae = mean_absolute_error(y_test, y_hybrid_pred)
hybrid_r2 = r2_score(y_test, y_hybrid_pred)
print(f'Hybrid Model Mean Squared Error: {hybrid_mse:.4f}')
print(f'Hybrid Model Mean Absolute Error: {hybrid_mae:.4f}')
print(f'Hybrid Model R-squared: {hybrid_r2:.4f}')
```
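The names `time_window`, `X_train`, `y_train`, `X_test`, `y_test` and `scaler` come from data-preparation code outside this excerpt, so they are left undefined above. Because the model mixes an `nn.LSTM` and an `nn.GRU`, it is exactly the kind of module that can emit the non-contiguous-weights warning once it is moved to the GPU, wrapped in `nn.DataParallel`, or restored via `load_state_dict()`. Below is a hedged sketch of a drop-in replacement for `HybridModel.forward` that re-flattens both recurrent layers; adding the calls this way is my suggestion, not something shown in the original code. The shape comments assume input of shape `(batch, time_window, 1)`.

```python
    def forward(self, x):
        # Re-compact the cuDNN weight buffers of both recurrent layers before use;
        # this silences the "weights are not part of a single contiguous chunk of
        # memory" warning after .cuda(), DataParallel, or load_state_dict().
        self.lstm.flatten_parameters()
        self.gru.flatten_parameters()

        lstm_out, _ = self.lstm(x)                           # (batch, time_window, 64)
        conv_out = self.pool(self.conv(x.transpose(1, 2)))   # (batch, 64, time_window // 2)
        gru_out, _ = self.gru(lstm_out)                       # (batch, time_window, 64)

        gru_out = gru_out[:, -1, :]                           # keep only the last time step
        conv_out = conv_out.flatten(start_dim=1)              # (batch, 64 * (time_window // 2))
        combined_out = torch.cat((gru_out, conv_out), dim=1)
        return self.fc(combined_out)
```

The calls are idempotent and effectively free when the weights are already flat, so leaving them at the top of `forward` is a common pattern for recurrent models that are repeatedly moved between devices or trained with `DataParallel`.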