Machine Learning Project - Smart Insole Development Log (personal notes) 10

2025-02-14: My computer's CPU is not powerful enough for training, so I tried renting a cloud server.

1. Rent a server

2. Download Xshell (did not work)

https://blog.youkuaiyun.com/m0_67400972/article/details/125346023

3. In the end, I connected with Windows PowerShell instead

4. Then, following Li Mu's video

https://www.bilibili.com/video/BV18p4y1h7Dr/?spm_id_from=333.1007.top_right_bar_window_history.content.click&vd_source=de5f852470f7a066628325ffdb44fa10

installed Miniconda:

https://blog.youkuaiyun.com/2301_76831056/article/details/143165738

Installation was too slow, so I used a mirror:

pip install torch torchvision -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com

5. Upload the dataset to Jupyter

Install the zip/unzip utilities

Unzip the dataset

6. Install scikit-learn

pip install scikit-learn -i https://pypi.tuna.tsinghua.edu.cn/simple

7. Configure the GPU

import torch
from torch import nn
torch.device('cpu'),torch.cuda.device('cuda'),torch.cuda.device('cuda:0')

Result:

(device(type='cpu'),
 <torch.cuda.device at 0x7f77dd709d00>,
 <torch.cuda.device at 0x7f77dd709820>)

8. Combine the LSTM model code with the cross-validation code to build the model

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
import numpy as np
from sklearn.model_selection import KFold

# 1. Define the LSTM model
class LSTMModel(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(LSTMModel, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers

        # LSTM layer
        self.lstm = nn.LSTM(
            input_size=input_size,
            hidden_size=hidden_size,
            num_layers=num_layers,
            batch_first=True  # input/output tensors have shape (batch, seq_len, feature)
        )

        # Fully connected layer
        self.fc = nn.Linear(hidden_size, num_classes)
    
    def forward(self, x):
        # Initialize the hidden and cell states
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)

        # LSTM forward pass; out has shape (batch_size, seq_length, hidden_size)
        out, _ = self.lstm(x, (h0, c0))

        # Keep only the last time step's output
        out = out[:, -1, :]

        # Fully connected layer
        out = self.fc(out)
        return out

# 2. Hyperparameters
input_size = 21
hidden_size = 128
num_layers = 2
num_classes = 2
batch_size = 64
learning_rate = 0.001
num_epochs = 10
k = 5  # number of folds

# 3. Data preparation (the data is assumed to already be loaded as numpy arrays,
# e.g. X = np.load('X.npy'); y = np.load('y.npy') -- paths are placeholders)
# X.shape = (31549, 600, 21)
# y.shape = (31549,)

# 4. K-fold cross-validation
kf = KFold(n_splits=k, shuffle=True, random_state=42)

for fold, (train_index, val_index) in enumerate(kf.split(X, y)):
    print(f"Fold {fold + 1}:")

    # Split into training and validation sets
    X_train, X_val = X[train_index], X[val_index]
    y_train, y_val = y[train_index], y[val_index]

    # Wrap as PyTorch datasets
    train_dataset = TensorDataset(
        torch.from_numpy(X_train).float(),
        torch.from_numpy(y_train).long()
    )
    val_dataset = TensorDataset(
        torch.from_numpy(X_val).float(),
        torch.from_numpy(y_val).long()
    )

    # Create data loaders
    train_loader = DataLoader(
        dataset=train_dataset,
        batch_size=batch_size,
        shuffle=True,
        num_workers=2
    )
    val_loader = DataLoader(
        dataset=val_dataset,
        batch_size=batch_size,
        shuffle=False,  # no need to shuffle the validation set
        num_workers=2
    )

    # 5. Initialize a fresh model for each fold
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = LSTMModel(input_size, hidden_size, num_layers, num_classes).to(device)

    # 6. Loss function and optimizer
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

    # 7. Training loop
    for epoch in range(num_epochs):
        model.train()
        for i, (sequences, labels) in enumerate(train_loader):
            sequences = sequences.to(device)
            labels = labels.to(device)

            # Forward pass
            outputs = model(sequences)
            loss = criterion(outputs, labels)

            # Backward pass and optimization
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            if (i+1) % 100 == 0:
                print(f'Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{len(train_loader)}], Loss: {loss.item():.4f}')

        # 8. Validation (eval mode disables dropout/batch-norm updates)
        model.eval()
        correct = 0
        total = 0
        with torch.no_grad():
            for sequences, labels in val_loader:
                sequences = sequences.to(device)
                labels = labels.to(device)
                outputs = model(sequences)
                _, predicted = torch.max(outputs.data, 1)
                total += labels.size(0)
                correct += (predicted == labels).sum().item()

        print(f"Epoch [{epoch+1}/{num_epochs}], Validation Accuracy: {100 * correct / total:.2f}%")

    print("-" * 20)

9. Training results:

Fold 1:
Epoch [1/10], Step [100/395], Loss: 0.5017
Epoch [1/10], Step [200/395], Loss: 0.5464
Epoch [1/10], Step [300/395], Loss: 0.7033
Epoch [1/10], Validation Accuracy: 77.08%
Epoch [2/10], Step [100/395], Loss: 0.3463
Epoch [2/10], Step [200/395], Loss: 0.6087
Epoch [2/10], Step [300/395], Loss: 0.7428
Epoch [2/10], Validation Accuracy: 76.20%
Epoch [3/10], Step [100/395], Loss: 0.5385
Epoch [3/10], Step [200/395], Loss: 0.3845
Epoch [3/10], Step [300/395], Loss: 0.4977
Epoch [3/10], Validation Accuracy: 79.67%
Epoch [4/10], Step [100/395], Loss: 0.3336
Epoch [4/10], Step [200/395], Loss: 0.4343
Epoch [4/10], Step [300/395], Loss: 0.3964
Epoch [4/10], Validation Accuracy: 79.37%
Epoch [5/10], Step [100/395], Loss: 0.4464
Epoch [5/10], Step [200/395], Loss: 0.3734
Epoch [5/10], Step [300/395], Loss: 0.2493
Epoch [5/10], Validation Accuracy: 90.54%
Epoch [6/10], Step [100/395], Loss: 0.0937
Epoch [6/10], Step [200/395], Loss: 0.3071
Epoch [6/10], Step [300/395], Loss: 0.1935
Epoch [6/10], Validation Accuracy: 96.85%
Epoch [7/10], Step [100/395], Loss: 0.1393
Epoch [7/10], Step [200/395], Loss: 0.1384
Epoch [7/10], Step [300/395], Loss: 0.0738
Epoch [7/10], Validation Accuracy: 97.50%
Epoch [8/10], Step [100/395], Loss: 0.1021
Epoch [8/10], Step [200/395], Loss: 0.0307
Epoch [8/10], Step [300/395], Loss: 0.1038
Epoch [8/10], Validation Accuracy: 94.90%
Epoch [9/10], Step [100/395], Loss: 0.1032
Epoch [9/10], Step [200/395], Loss: 0.0139
Epoch [9/10], Step [300/395], Loss: 0.0031
Epoch [9/10], Validation Accuracy: 97.75%
Epoch [10/10], Step [100/395], Loss: 0.0714
Epoch [10/10], Step [200/395], Loss: 0.1020
Epoch [10/10], Step [300/395], Loss: 0.0334
Epoch [10/10], Validation Accuracy: 99.02%
--------------------
Fold 2:
Epoch [1/10], Step [100/395], Loss: 0.4848
Epoch [1/10], Step [200/395], Loss: 0.4125
Epoch [1/10], Step [300/395], Loss: 0.4249
Epoch [1/10], Validation Accuracy: 79.46%
Epoch [2/10], Step [100/395], Loss: 0.6724
Epoch [2/10], Step [200/395], Loss: 0.4669
Epoch [2/10], Step [300/395], Loss: 0.5287
Epoch [2/10], Validation Accuracy: 78.13%
Epoch [3/10], Step [100/395], Loss: 0.2836
Epoch [3/10], Step [200/395], Loss: 0.4645
Epoch [3/10], Step [300/395], Loss: 0.3780
Epoch [3/10], Validation Accuracy: 80.40%
Epoch [4/10], Step [100/395], Loss: 0.2461
Epoch [4/10], Step [200/395], Loss: 0.3173
Epoch [4/10], Step [300/395], Loss: 0.1452
Epoch [4/10], Validation Accuracy: 90.98%
Epoch [5/10], Step [100/395], Loss: 0.3913
Epoch [5/10], Step [200/395], Loss: 0.2494
Epoch [5/10], Step [300/395], Loss: 0.3230
Epoch [5/10], Validation Accuracy: 93.58%
Epoch [6/10], Step [100/395], Loss: 0.0816
Epoch [6/10], Step [200/395], Loss: 0.1794
Epoch [6/10], Step [300/395], Loss: 0.2768
Epoch [6/10], Validation Accuracy: 92.49%
Epoch [7/10], Step [100/395], Loss: 0.1752
Epoch [7/10], Step [200/395], Loss: 0.1088
Epoch [7/10], Step [300/395], Loss: 0.2150
Epoch [7/10], Validation Accuracy: 96.48%
Epoch [8/10], Step [100/395], Loss: 0.1164
Epoch [8/10], Step [200/395], Loss: 0.1151
Epoch [8/10], Step [300/395], Loss: 0.1561
Epoch [8/10], Validation Accuracy: 96.01%
Epoch [9/10], Step [100/395], Loss: 0.0974
Epoch [9/10], Step [200/395], Loss: 0.1123
Epoch [9/10], Step [300/395], Loss: 0.1261
Epoch [9/10], Validation Accuracy: 95.83%
Epoch [10/10], Step [100/395], Loss: 0.0982
Epoch [10/10], Step [200/395], Loss: 0.0444
Epoch [10/10], Step [300/395], Loss: 0.0461
Epoch [10/10], Validation Accuracy: 97.89%
--------------------
Fold 3:
Epoch [1/10], Step [100/395], Loss: 0.4977
Epoch [1/10], Step [200/395], Loss: 0.5673
Epoch [1/10], Step [300/395], Loss: 0.4354
Epoch [1/10], Validation Accuracy: 84.33%
Epoch [2/10], Step [100/395], Loss: 0.3990
Epoch [2/10], Step [200/395], Loss: 0.5802
Epoch [2/10], Step [300/395], Loss: 0.3587
Epoch [2/10], Validation Accuracy: 81.89%
Epoch [3/10], Step [100/395], Loss: 0.4083
Epoch [3/10], Step [200/395], Loss: 0.4352
Epoch [3/10], Step [300/395], Loss: 0.3774
Epoch [3/10], Validation Accuracy: 69.29%
Epoch [4/10], Step [100/395], Loss: 0.5158
Epoch [4/10], Step [200/395], Loss: 0.5685
Epoch [4/10], Step [300/395], Loss: 0.4904
Epoch [4/10], Validation Accuracy: 82.47%
Epoch [5/10], Step [100/395], Loss: 0.3287
Epoch [5/10], Step [200/395], Loss: 0.2838
Epoch [5/10], Step [300/395], Loss: 0.2876
Epoch [5/10], Validation Accuracy: 85.36%
Epoch [6/10], Step [100/395], Loss: 0.2592
Epoch [6/10], Step [200/395], Loss: 0.3600
Epoch [6/10], Step [300/395], Loss: 0.2365
Epoch [6/10], Validation Accuracy: 87.70%
Epoch [7/10], Step [100/395], Loss: 0.3201
Epoch [7/10], Step [200/395], Loss: 0.3716
Epoch [7/10], Step [300/395], Loss: 0.2927
Epoch [7/10], Validation Accuracy: 86.96%
Epoch [8/10], Step [100/395], Loss: 0.3415
Epoch [8/10], Step [200/395], Loss: 0.2263
Epoch [8/10], Step [300/395], Loss: 0.1731
Epoch [8/10], Validation Accuracy: 90.10%
Epoch [9/10], Step [100/395], Loss: 0.2893
Epoch [9/10], Step [200/395], Loss: 0.3388
Epoch [9/10], Step [300/395], Loss: 0.3509
Epoch [9/10], Validation Accuracy: 89.52%
Epoch [10/10], Step [100/395], Loss: 0.2174
Epoch [10/10], Step [200/395], Loss: 0.1554
Epoch [10/10], Step [300/395], Loss: 0.1092
Epoch [10/10], Validation Accuracy: 95.82%
--------------------
Fold 4:
Epoch [1/10], Step [100/395], Loss: 0.5843
Epoch [1/10], Step [200/395], Loss: 0.4975
Epoch [1/10], Step [300/395], Loss: 0.3452
Epoch [1/10], Validation Accuracy: 69.56%
Epoch [2/10], Step [100/395], Loss: 0.4956
Epoch [2/10], Step [200/395], Loss: 0.4836
Epoch [2/10], Step [300/395], Loss: 0.4534
Epoch [2/10], Validation Accuracy: 83.30%
Epoch [3/10], Step [100/395], Loss: 0.3182
Epoch [3/10], Step [200/395], Loss: 0.4520
Epoch [3/10], Step [300/395], Loss: 0.2990
Epoch [3/10], Validation Accuracy: 82.71%
Epoch [4/10], Step [100/395], Loss: 0.4792
Epoch [4/10], Step [200/395], Loss: 0.3710
Epoch [4/10], Step [300/395], Loss: 0.5065
Epoch [4/10], Validation Accuracy: 81.43%
Epoch [5/10], Step [100/395], Loss: 0.3549
Epoch [5/10], Step [200/395], Loss: 0.2655
Epoch [5/10], Step [300/395], Loss: 0.5377
Epoch [5/10], Validation Accuracy: 80.57%
Epoch [6/10], Step [100/395], Loss: 0.7786
Epoch [6/10], Step [200/395], Loss: 0.4332
Epoch [6/10], Step [300/395], Loss: 0.4119
Epoch [6/10], Validation Accuracy: 75.50%
Epoch [7/10], Step [100/395], Loss: 0.3652
Epoch [7/10], Step [200/395], Loss: 0.3263
Epoch [7/10], Step [300/395], Loss: 0.2774
Epoch [7/10], Validation Accuracy: 88.70%
Epoch [8/10], Step [100/395], Loss: 0.2723
Epoch [8/10], Step [200/395], Loss: 0.2825
Epoch [8/10], Step [300/395], Loss: 0.3172
Epoch [8/10], Validation Accuracy: 88.83%
Epoch [9/10], Step [100/395], Loss: 0.2246
Epoch [9/10], Step [200/395], Loss: 0.3250
Epoch [9/10], Step [300/395], Loss: 0.2093
Epoch [9/10], Validation Accuracy: 88.97%
Epoch [10/10], Step [100/395], Loss: 0.3092
Epoch [10/10], Step [200/395], Loss: 0.1830
Epoch [10/10], Step [300/395], Loss: 0.2820
Epoch [10/10], Validation Accuracy: 92.54%
--------------------
Fold 5:
Epoch [1/10], Step [100/395], Loss: 0.3261
Epoch [1/10], Step [200/395], Loss: 0.5698
Epoch [1/10], Step [300/395], Loss: 0.4739
Epoch [1/10], Validation Accuracy: 78.92%
Epoch [2/10], Step [100/395], Loss: 0.3079
Epoch [2/10], Step [200/395], Loss: 0.4412
Epoch [2/10], Step [300/395], Loss: 0.2897
Epoch [2/10], Validation Accuracy: 83.44%
Epoch [3/10], Step [100/395], Loss: 0.4870
Epoch [3/10], Step [200/395], Loss: 0.2603
Epoch [3/10], Step [300/395], Loss: 0.4270
Epoch [3/10], Validation Accuracy: 89.92%
Epoch [4/10], Step [100/395], Loss: 0.7423
Epoch [4/10], Step [200/395], Loss: 0.4999
Epoch [4/10], Step [300/395], Loss: 0.4962
Epoch [4/10], Validation Accuracy: 82.37%
Epoch [5/10], Step [100/395], Loss: 0.6103
Epoch [5/10], Step [200/395], Loss: 0.4024
Epoch [5/10], Step [300/395], Loss: 0.4403
Epoch [5/10], Validation Accuracy: 84.83%
Epoch [6/10], Step [100/395], Loss: 0.3716
Epoch [6/10], Step [200/395], Loss: 0.3998
Epoch [6/10], Step [300/395], Loss: 0.2721
Epoch [6/10], Validation Accuracy: 87.62%
Epoch [7/10], Step [100/395], Loss: 0.4655
Epoch [7/10], Step [200/395], Loss: 0.4612
Epoch [7/10], Step [300/395], Loss: 0.4656
Epoch [7/10], Validation Accuracy: 74.27%
Epoch [8/10], Step [100/395], Loss: 0.3697
Epoch [8/10], Step [200/395], Loss: 0.3434
Epoch [8/10], Step [300/395], Loss: 0.3524
Epoch [8/10], Validation Accuracy: 87.26%
Epoch [9/10], Step [100/395], Loss: 0.4635
Epoch [9/10], Step [200/395], Loss: 0.1440
Epoch [9/10], Step [300/395], Loss: 0.1623
Epoch [9/10], Validation Accuracy: 93.36%
Epoch [10/10], Step [100/395], Loss: 0.2480
Epoch [10/10], Step [200/395], Loss: 0.0923
Epoch [10/10], Step [300/395], Loss: 0.2037
Epoch [10/10], Validation Accuracy: 95.88%
--------------------

Interpreting the results:

This output is the training log of the LSTM model during 5-fold cross-validation. It records the training loss at each logged step and the validation accuracy after each epoch. Breakdown of the output:

Overall structure:

  • Fold 1, Fold 2, ..., Fold 5: the five folds of cross-validation; the model is trained and validated once per fold.

  • Epoch [X/10]: the current epoch out of 10 in total. One epoch is one complete pass over the training set.

  • Step [X/395]: the current step out of 395 per epoch; each step trains on one batch. With roughly 25,239 training samples per fold (4/5 of 31,549) and a batch size of 64, each epoch takes ceil(25239/64) = 395 steps.

  • Loss: X.XXXX: the training loss for the current batch; lower values generally mean a better fit on that batch.

  • Validation Accuracy: X.XX%: the model's accuracy on the validation set at the end of the epoch; higher means better validation performance.

  • --------------------: separates the results of the different folds.
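The 395 steps per epoch quoted above can be checked with a quick calculation (assuming 31,549 samples, 5 folds, batch size 64, and DataLoader's default drop_last=False):

```python
import math

n_samples = 31549
n_folds = 5
batch_size = 64

# KFold puts roughly n_samples / n_folds examples in the validation fold,
# so about 4/5 of the data is used for training in each fold.
val_size = n_samples // n_folds + 1      # 6310 (n_samples % n_folds = 4 folds get one extra)
train_size = n_samples - val_size        # 25239

steps_per_epoch = math.ceil(train_size / batch_size)
print(steps_per_epoch)  # 395
```

The smaller validation fold (6,309 samples) gives a training set of 25,240, which still rounds up to 395 steps.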

Details (using Fold 1 as an example):

  • Fold 1:: the first fold of cross-validation.

  • Epoch [1/10], Step [100/395], Loss: 0.5017: in the first epoch of the first fold, the training loss at step 100 is 0.5017.

  • Epoch [1/10], Validation Accuracy: 77.08%: after the first epoch of the first fold, validation accuracy is 77.08%.

  • Epoch [2/10], Step [200/395], Loss: 0.6087: in the second epoch of the first fold, the training loss at step 200 is 0.6087.

  • Epoch [2/10], Validation Accuracy: 76.20%: after the second epoch of the first fold, validation accuracy is 76.20%.

  • And so on, through the end of Fold 5.

Analysis:

  • Training loss: the loss generally trends downward within each fold as the model learns, although individual batch losses fluctuate.

  • Validation accuracy: the key measure of generalization. Ideally it rises with each epoch, but watch for overfitting: a model that keeps improving on the training set while validation accuracy stalls or drops.

  • Cross-validation: 5-fold cross-validation evaluates the model on five different train/validation splits, giving a more reliable performance estimate than a single split.
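As a sketch of what KFold does under the hood, here is a minimal pure-Python version of the index split (without shuffling; scikit-learn's real implementation also supports shuffle and random_state):

```python
def kfold_indices(n_samples, k):
    """Yield (train_indices, val_indices) pairs, mimicking sklearn's KFold.

    The first n_samples % k folds get one extra validation sample,
    matching scikit-learn's fold-size rule.
    """
    indices = list(range(n_samples))
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        stop = start + size
        val = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, val
        start = stop

# For 10 samples and 3 folds, the validation folds are [0..3], [4..6], [7..9]
```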

Summary

This log documents the LSTM model's training across 5-fold cross-validation. Tracking the loss and validation accuracy shows how well training is progressing and where hyperparameters or the training strategy may need adjusting. For the final evaluation, report the mean and standard deviation of the five folds' validation accuracies.
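For example, taking the final-epoch validation accuracies from the five folds above (99.02, 97.89, 95.82, 92.54, 95.88), the aggregate result can be computed with the standard library:

```python
from statistics import mean, pstdev

# Final-epoch validation accuracy per fold, copied from the log above
fold_accuracies = [99.02, 97.89, 95.82, 92.54, 95.88]

avg = mean(fold_accuracies)
spread = pstdev(fold_accuracies)  # population std dev over the 5 folds
print(f"Accuracy: {avg:.2f}% +/- {spread:.2f}%")  # Accuracy: 96.23% +/- 2.21%
```

Note that this uses the last epoch's accuracy for each fold; picking the best epoch per fold (e.g. with early stopping) would give a slightly different summary.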
